" /> net.wars: August 2019 Archives


August 30, 2019

The Fregoli delusion

In biology, a monoculture is a bad thing. If there's only one type of banana, a fungus can wipe out the entire species instead of, as now, just the most popular one. If every restaurant depends on Yelp to find its customers, Yelp's decision to replace a restaurant's phone number with one under its own control is a serious threat. And if, as we wrote here some years ago, everyone buys everything from Amazon, gets all their entertainment from Netflix, and gets all their mapping, email, and web browsing from Google, what difference does it make that you're iconoclastically running Ubuntu underneath?

The same should be true in the culture of software development. It ought to be obvious that a monoculture is as dangerous there as on a farm: new ideas, robustness, and innovation all come from mixing. Plenty of business books say as much. It's why research divisions create public spaces, so people from different disciplines will cross-fertilize, and why people and large businesses cluster in cities.

And yet, as the journalist Emily Chang documents in her 2018 book Brotopia: Breaking Up the Boys' Club of Silicon Valley, Silicon Valley technology companies have deliberately spent the last couple of decades progressively narrowing their culture. To a large extent, she blames the spreading influence of the Paypal Mafia. At Paypal's founding, she writes, this group, which includes Palantir founder Peter Thiel, LinkedIn founder Reid Hoffman, and Tesla supremo Elon Musk, adopted the basic principle that to make a startup lean, fast-moving, and efficient you needed a team who thought alike. Paypal's success and the diaspora of its early alumni disseminated a culture in which hiring people like you was a *strategy*. This is what #MeToo and fights for equality are up against.

Businesses are as prone to believing superstitions as any other group of people, and unicorn successes are unpredictable enough to fuel weird beliefs, especially in an already-insular place like Silicon Valley. Yet Chang finds much earlier roots. In the mid-1960s, System Development Corporation hired psychologists William Cannon and Dallis Perry to create a profile to help it identify recruits who would enjoy the new profession of computer programming. They interviewed 1,378 mostly male programmers and found this common factor: "They don't like people." And so the idea that "antisocial" was a qualification was born, spreading outward through increasingly popular "personality tests" and, because of the cultural differences in the way girls and boys are socialized, gradually and systematically excluding women.

Chang's focus is broad, surveying the landscape of companies and practices. For a personal inside account, you might try Ellen Pao's Reset: My Fight for Inclusion and Lasting Change, which documents the experiences at Kleiner Perkins that led her to bring a lawsuit, and her time at Reddit, where she was pilloried for trying to reduce some of the system's toxicity. Or, for a broader range, try Lean Out, a collection of personal stories edited by Elissa Shevinsky.

Chang finds that even Google, which began with an aggressive policy of hiring female engineers - netting it technology leaders such as Susan Wojcicki, now CEO of YouTube, Marissa Mayer, who went on to try to rescue Yahoo, and Sheryl Sandberg, now COO of Facebook - failed in the long term. Today its male-female ratio is average for Silicon Valley. She cites Slack as a notable exception; founder Stewart Butterfield set out to build a different kind of workplace.

In that sense, Slack may be the opposite of Facebook. In Zucked: Waking Up to the Facebook Catastrophe, Roger McNamee tells the mea culpa story of his early mentorship of Mark Zuckerberg and the company's slow pivot into posing problems he believes are truly dangerous. What's interesting to read in tandem with Chang's book is his account of the way Silicon Valley hiring changed. Until around 2000, hiring rewarded skill and experience; the limitations on memory, storage, and processing power meant companies needed trained and experienced engineers. Facebook, however, came along at the moment when those limitations had vanished and the dot-com bust had finished playing out. Suddenly, products could be built and scaled up much faster; open source libraries and the arrival of cloud suppliers meant they could be developed by less experienced, less skilled, *younger*, much *cheaper* people; and products could be free, paid for by advertising. Couple this with 20 years of Reagan-style deregulation and the influence, which he also cites, of the Paypal Mafia, and you have the recipe for today's discontents. McNamee writes that he is unsure what the solution is; his best effort at the moment appears to be advising the Center for Humane Technology, led by former Google design ethicist Tristan Harris.

These books go a long way toward explaining the world Caroline Criado-Perez describes in 2019's Invisible Women: Data Bias in a World Designed for Men. Her discussion is not limited to Silicon Valley - crash test dummies, medical drugs and practices, and workplace design all appear - but her main point applies. If you think of one type of human as "default normal", you wind up with a world that's dangerous for everyone else.

You end up, as she doesn't say, with a monoculture as destructive to the world of ideas as those fungi are to Cavendish bananas. What Zucked and Brotopia explain is how we got there.


Illustrations: Still from Anomalisa (2015).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 23, 2019

Antepenultimate

For many reasons, I've never wanted to use my mobile phone for banking. For one thing, I have a desktop machine with three 24-inch monitors and a full-size functioning keyboard; why would I want to poke at a small screen with one finger?

Even if I did, the corollary is that mobile phones suck for typing passwords. For banking, you typically want the longest and most random password you can generate. For mobile phone use, you want something short, easy to remember and type. There is no obvious way to resolve this conflict, particularly in UK banking, where you're typically asked to type in three characters chosen from your password. It is amazingly easy to make mistakes counting when you're asked to type in letter 18 of a 25-character random string. (Although: I do admire the literacy optimism one UK bank displays when it asks for the "antepenultimate" character in your password. It's hard to imagine an American bank using this term.)
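
To see why the counting goes wrong, here is a toy sketch of the partial-password prompt - not any bank's real code, and the 25-character password is made up. Positions are 1-based, as the prompts phrase them, and the "antepenultimate" character is simply the third from the end:

```python
# Toy illustration of a UK bank's partial-password prompt.
# Not any bank's actual code; password and positions are invented.
def requested_characters(password, positions):
    """Return the characters the form asks for, using 1-based positions."""
    return "".join(password[p - 1] for p in positions)

password = "q7Rv2kLx9mT4sWp8zJc6nB3fY"  # a made-up 25-character random string
print(requested_characters(password, [3, 11, 18]))  # -> "RTJ"
print(password[-3])  # the "antepenultimate" character -> "3"
```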

Beyond that, mobile phones scare me for sensitive applications in general; they seem far too vulnerable to hacking, built-in vulnerabilities, SIM swapping, and, in the course of wandering the streets of London, loss, breakage, or theft. So mine is not apped up for social media, ecommerce, or anything financial. I accept that two-factor authentication is a huge step forward in terms of security, but does it have to be on my phone? In this, I am, of course, vastly out of step with the bulk of the population, who are saying instead: "Can't it be on my phone?" What I want, however, is a 2FA device I can turn off and stash out of harm's way in a drawer at home. That approach would also mean not having to give my phone number to an entity that might, as Facebook has done in the past, coopt it into its marketing plans.

So, it is with great unhappiness that I discover that the incoming Payment Services Directive 2 and the long-standing effort to get rid of cheques are combining to force me to install a mobile banking app.

PSD2 may perhaps be the antepenultimate gift from the EU28. At Wired, Laurie Clark explains the result of the directive's implementation: ecommerce sites, as well as banks, must implement two-factor authentication (2FA) by September 14. Under this new regime, transactions above £30 (about $36.50, but shrinking by the Brexit-approaching day) will require customers to prove at least two of the traditional three security factors: something they have (a gadget such as a smart phone, a specific browser on a specific machine, or a secure key), something they know (passwords and the answers to secondary questions), and something they are (biometrics, such as fingerprints or facial recognition). As Clark says, retailers are not going to love this, because anything that adds friction costs them sales and customers.
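
For the technically curious, the "something they have" factor usually boils down to a device holding a shared secret that proves itself by generating one-time codes. A minimal sketch of the standard time-based one-time password algorithm (TOTP, RFC 6238), hand-rolled purely for illustration - real systems use vetted libraries, not home-made crypto:

```python
# Minimal TOTP (RFC 6238) sketch: the code an authenticator app or
# banking app displays. Illustrative only.
import hmac, hashlib, struct, time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both sides share the secret at enrollment, so both can compute the code.
print(totp(b"secret-provisioned-at-enrollment"))
```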

My guess is that these new requirements will benefit larger retailers and centralized services at the expense of smaller ones. Paypal, Amazon, and eBay already have plenty of knowledge about their customers that they can exploit to be confident of a customer's identity. Requiring 2FA will similarly privilege existing relationships over new ones.

So far, retail sites don't seem to be discussing their plans. UK banking sites, however, began adopting 2FA some years ago, mostly in the form of secure keys that they issued and replaced as needed - credit card-sized electronic one-time pads. Those sites are now simply dropping the option of logging on without the key to a limited-functionality session. These keys have their problems - especially non-inclusive design with small, fiddly keys and hard-to-read LCD screens - but I liked the option.

Ideally, this would be a market defined by standards, so people could choose among different options, such as the Yubikey. Where the banks all want to go, though, is to individual mobile phone apps that they can also use for marketing and upselling. Because of the broader context outlined above, I do not want this.

One bank I use is not interested in my broader context, only its own. It has ruled: must download app. My first thought was to load the app onto my backup, second-to-last phone, figuring that its unpatched vulnerabilities would be mitigated by its being turned off, stuck in a drawer, and used for nothing else. Not an option: its version of Android is two decimal places too old. No app for *you*!

At Bentham's Gaze, Steven Murdoch highlights a recent Which? study that found that those who can't afford, can't use, or don't want smartphones or who live with patchy network coverage will be shut out of financial services.

Murdoch, an expert on cryptography and banking security, argues that by relying on mobile apps banks are outsourcing their security to customers and telephone networks, which he predicts will fail to protect against criminals who infiltrate the phone companies, among other threats. An additional crucial anti-consumer aspect is the refusal of phone manufacturers to support ongoing upgrades, forcing obsolescence on a captive audience, as we've complained before. This can only get worse as smartphones are replaced less frequently while being pressed into use for increasingly sensitive functions.

In the meantime, this move has had so little press that many people are being caught by surprise. There may be trouble ahead...

Illustrations: GCHQ secure phone.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 16, 2019

The law of the camera

As if cued by the end of last week's installment, this week the Financial Times (paywalled), followed by many others, broke the news that Argent LLP, the lead developer in regenerating the Kings Cross district of London in the mid-2000s, is using facial recognition to surveil the entire area. The 67-acre site includes two mainline railway stations, a major Underground interchange station, schools, retailers, the Eurostar terminal, a local college, ten public parks, 20 streets, 50 buildings, 1,900 homes...and, because it happens to be there, Google's UK headquarters. (OK, Google: how do you like it when you're on the receiving end instead of dishing it out?)

So, to be clear: this system has been installed - doubtless "for your safety" - even though these automated facial recognition systems are shown, over and over, to be almost laughably inaccurate: in London, Big Brother Watch found a 95% inaccuracy rate (PDF); in California, the ACLU found that the software incorrectly matched one in five lawmakers to criminals' mugshots. US cities - San Francisco, Oakland, and Somerville, Massachusetts - are legislating bans as a result. In London, however, Canary Wharf, a large development area in the east of the city, told the BBC and the Financial Times that it is considering following Kings Cross's lead.
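
The arithmetic behind figures like that 95% is worth a moment: when genuine targets are rare, even a seemingly capable system produces mostly false alarms. A back-of-the-envelope sketch with invented numbers (these are not Big Brother Watch's data):

```python
# Base-rate sketch: why most facial-recognition "matches" are wrong.
# All numbers below are invented for illustration.
faces_scanned = 100_000      # crowd passing the cameras
watchlist_present = 50       # genuine watchlist members in that crowd
true_positive_rate = 0.80    # system spots 80% of real targets
false_positive_rate = 0.001  # 1 in 1,000 innocents wrongly flagged

true_matches = watchlist_present * true_positive_rate                    # 40
false_matches = (faces_scanned - watchlist_present) * false_positive_rate  # ~100
share_wrong = false_matches / (true_matches + false_matches)
print(f"{share_wrong:.0%} of alerts are false")  # ~71% even at these rates
```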

Inaccuracy is only part of the problem with the Kings Cross situation - and the deeper problem will persist even if and when the systems become accurate enough for prime time (which will open a whole new can of worms). The deeper problem is the effective privatization of public space: here, a private entity has installed a facial recognition system with no notice to any of the people being surveilled, with no public debate, and, according to the BBC, no notice to either local or central government.

To place this in context, it's worth revisiting the history of the growth of CCTV cameras in the UK, the world leader (if that's the word you want) in this area. As Simon Davies recounts in his recently-published memoir about his 30 years of privacy campaigning (and as I also remember), the UK began embracing CCTV in the mid-1990s (PDF), fueled in part by the emotive role it played in catching the murderers in the 1993 Jamie Bulger case. Central government began offering local councils funding to install cameras. Deployment accelerated after 9/11, but the trend had already been set.

By 2012, when the Protection of Freedoms Act was passed to create the surveillance camera commissioner's office, public resistance had largely vanished. At the first Surveillance Camera Conference, in 2013, representatives from several local councils said they frequently received letters from local residents requesting additional cameras. They were not universally happy about this; around that time the responsibility for paying for the cameras and the systems to run them was being shifted to the councils themselves, and many seemed to be reconsidering their value. There has never been much research assessing whether the cameras cut crime; what there is suggests CCTV diverts it rather than stops it. A 2013 briefing paper by the College of Policing (PDF) says CCTV provides a "small, but statistically significant, reduction in crime", though it notes that effectiveness depends on the type of crime and the setting. "It has no impact on levels of violent crime," the paper concludes. A 2014 summary of research to date notes the need to balance privacy concerns and assess cost-effectiveness. Adding on highly unreliable facial recognition won't change that - but it will entrench unnecessary harassment.

The issue we're more concerned about here is the role of private operators. At the 2013 conference, public operators complained that their private counterparts, operating at least ten times as many cameras, were not required to follow the same rules as public bodies (although many did). Reliable statistics are hard to find. A recent estimate claims London hosts 627,707 CCTV cameras, but it's fairer to say that not even the Surveillance Camera Commissioner really knows. It is clear, however, that the vast majority of cameras are privately owned and operated.

Twenty years ago, Davies correctly foresaw that networking the cameras would enable tracking people across the city. Neither he nor the rest of us saw that (deeply flawed) facial recognition would arrive this soon, if only because it's the result of millions of independent individual decisions to publicly post billions of facial photographs. This is what created the necessary mass of training data that, as Olivia Solon has documented, researchers have appropriated.

For an area the size and public importance of Kings Cross to be monitored via privately-owned facial recognition systems that have attracted enormous controversy in the public sector is profoundly disturbing. You can sort of see the logic: Kings Cross station is now a large shopping mall surrounding a major train station, so what's the difference between that and a shopping mall without one? But in setting the rules of engagement for part of our city that no one voted to privatize, Argent is effectively making law, a job no one voted to give it. A London - or any other major city - carved up into corporately sanitized districts connected by lawless streets is not where any of us asked to live.


Illustrations: The new Kings Cross Western Concourse (via Colin on Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 9, 2019

Collision course

The walk from my house to the tube station has changed very little in 30 years. The houses and their front gardens look more or less the same, although at least two have been massively remodeled on the inside. More change is visible around the tube station, where shops have changed hands as their owners retired. The old fruit and vegetable shop now sells wine; the weird old shop that sold crystals and carved stones is now a chain drug store. One of the hardware stores is a (very good) restaurant and the other was subsumed into the locally-owned health food store. And so on.

In the tube station itself, the open platforms have been enclosed with ticket barriers, and the second generation of machines has closed down the ticket office. It's imaginable that, had the ID card proposed in the early 2000s made it through to adoption, the experience of buying a ticket and getting on the tube could be quite different. Perhaps instead of an Oyster card or credit card tap, we'd be tapping in and out using a plastic ID smart card that would both ensure that only I could use my free tube pass and ensure that all my local travel could be tracked and tied to me. For our safety, of course - as we would doubtless be reminded via repetitive public announcements like the propaganda we hear every day about the watching eye of CCTV.

Of course, tracking still goes on via Oyster cards, credit cards, and, now, wifi, although I do believe Transport for London when it says its goal is to better understand traffic flows through stations in order to improve service. However, what new, more intrusive functions TfL may choose - or be forced - to add later will likely be invisible to us until an expert outsider closely studies the system.

In his recently published memoir, the veteran campaigner and Privacy International founder Simon Davies tells the stories of the ID cards he helped to kill: in Australia, in New Zealand, in Thailand, and, of course, in the UK. What strikes me now, though, is that what seemed like a win nine years ago, when the incoming Conservative-Liberal Democrat coalition killed the ID card, is gradually losing its force. (This is very similar to the early 1990s First Crypto Wars "win" against key escrow; the people who wanted it have simply found ways to bypass public and expert objections.)

As we wrote at the time, the ID card itself was always a brightly colored decoy. To be sure, those pushing the ID card played on British wartime associations to swear blind that no one would ever be required to carry it or be forced to produce it. This was an important gambit because, to much of the population at the time, being forced to carry and show ID was the end of the freedoms two world wars were fought to protect. But it was always obvious to those who were watching technological development that what mattered was the database, because identity checks would be carried out online, on the spot, via wireless connections and handheld computers. All that was needed was a way of capturing a biometric that could be sent into the cloud to be checked. Facial recognition fits perfectly into that gap: no one has to ask you for papers - or a fingerprint, iris scan, or DNA sample. So even without the ID card we *are* now moving stealthily into the exact situation that would have prevailed if we had adopted it. Increasing numbers of police departments - South Wales, London, LA, India, and, notoriously, China - are deploying it, as Big Brother Watch has been documenting for the UK. There are many more remotely observable behaviors to be pressed into service, enhanced by AI, as the ACLU's Jay Stanley warns.
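
Mechanically, that checking step is not much more than this: the captured face is reduced to a vector of numbers and compared against a stored database. A simplified sketch - the names, the 128-dimension embeddings, and the threshold are all illustrative, not any vendor's actual system:

```python
# Sketch of a cloud-side biometric check: nearest-match by cosine
# similarity over face "embeddings". Entirely illustrative.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe, database, threshold=0.6):
    """Return the enrolled identity closest to the probe, if close enough."""
    name, score = max(((n, cosine(probe, v)) for n, v in database.items()),
                      key=lambda pair: pair[1])
    return name if score >= threshold else None

rng = np.random.default_rng(0)
db = {"person_a": rng.normal(size=128),   # enrolled "watchlist" embeddings
      "person_b": rng.normal(size=128)}
probe = db["person_a"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(best_match(probe, db))  # -> person_a
```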

The threat now of these systems is that they are wildly inaccurate and discriminatory. The future threat of these systems is that they will become accurate and discriminatory, allowing much more precise targeting that may even come to seem reasonable *because* it only affects the bad people.

This train of thought occurred to me because this week Statewatch released a leaked document indicating that most of the EU would like to expand airline-style passenger data collection to trains and even roads. As Daniel Boffey explains at the Guardian (and as Edward Hasbrouck has long documented), the passenger name records (PNRs) airlines create for every journey include as many as 42 pieces of information: name, address, payment card details, itinerary, fellow travelers... This is information that gets mined in order to decide whether you're allowed to fly. So what this document suggests is that many EU countries would like to turn *all* international travel into a permission-based system.

What is astonishing about all of this is the timing. One of the key privacy-related objections to building mass surveillance systems is that you do not know who may be in a position to operate them in the future or what their motivations will be. So at the very moment that many democratic countries are fretting about the rise of populism and the spread of extremism, those same democratic countries are proposing to put in place a system that extremists who get into power can operate in anti-democratic ways. How can they possibly not see this as a serious systemic risk?


Illustrations: The light of the oncoming train (via Andrew Gray at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 2, 2019

Unfortunately recurring phenomena

It's summer, and the current comprehensively bad news is all stuff we can do nothing about. So we're sweating the smaller stuff.

It's hard to know how seriously to take it, but US Senator Josh Hawley (R-MO) has introduced the Social Media Addiction Reduction Technology (SMART) Act, intended as a disruptor to the addictive aspects of social media design. *Deceptive* design - which figured in last week's widely criticized $5 billion FTC settlement with Facebook - is definitely wrong, and the dark patterns site has long provided a helpful guide to those practices. But the bill is too feature-specific (ban infinite scroll and autoplay) and fails to recognize that one size of addiction disruption cannot possibly fit all. Spending more than 30 minutes at a stretch reading Twitter may be a dangerous pastime for some but a business necessity for journalists, PR people - and Congressional aides.

A better approach might be to require sites to replay the first video someone chooses at regular intervals until they get sick of it and turn off the feed. That is about how I feel about the latest regular reiteration of the demand for back doors in encrypted messaging. The fact that every new home secretary - in this case, Priti Patel - calls for this suggests there's an ancient infestation in their office walls that needs to be found and doused with mathematics. Don't Patel and the rest of the Five Eyes realize the security services already have bulk device hacking?

Ever since Microsoft announced it was acquiring the software repository Github, it should have been obvious the community would soon be forced to change. And here it is: Microsoft is blocking developers in countries subject to US trade sanctions. The formerly seamless site supporting global collaboration and open source software is being fractured at the expense of individual PhD students, open source developers, and others who trusted it, and everyone who relies on the software they produce.

It's probably wrong to solely blame Microsoft; save some for the present US administration. Still, throughout Internet history, communities bought by corporate owners have wound up destroyed: CompuServe, Geocities, Television without Pity, and endless others. More recently, Verizon, which bought Yahoo and AOL for its Oath subsidiary (now Verizon Media), de-porned Tumblr. People! Whenever the online community you call home gets sold to a large company, it is time *right then* to begin building your own replacement. Large companies do not care about the community you built, and this is never gonna change.

Also never gonna change: software is forever, as I wrote in 2014, when Microsoft turned off life support for Windows XP. The future is living with old software installations that can't, or won't, be replaced. The truth of this resurfaced recently, when a survey by Spiceworks (PDF) found that a third of all businesses' networks include at least one computer running XP, and 79% of all businesses are still running Windows 7, which dies in January. In the 1990s the installed base updated regularly because hardware was upgraded so rapidly. Now, a computer's lifespan exceeds the length of a software generation, and the accretion of applications and customization makes updating hazardous. If Microsoft refuses to support its old software, it should at least open it to third parties. Now, *there* would be a law we could use.

The last few years have seen repeated news about the many ways that machine learning and AI discriminate against those with non-white skin, typically because of the biased datasets they rely on. The latest such story is startling: Wearables are less reliable in detecting the heart rate of people with darker skin. This is a "huh?" until you read that the devices use colored light and optical sensors to measure the volume of your blood in the vessels at your wrist. Hospital-grade monitors use infrared. Cheaper devices use green light, which melanin tends to absorb. I know it's not easy for people to keep up with everything, but the research on this dates to 1985. Can we stop doing the default white thing now?

Meanwhile, at the Barbican exhibit AI: More than Human... In a video, a small, medium-brown poodle turns his head toward the camera with a - you should excuse the anthropomorphism - distinct expression of "What the hell is this?" Then he turns back to the immediate provocation and tries again. This time, the Sony Aibo he's trying to interact with wags its tail, and the dog jumps back. The dog clearly knows the Aibo is not a real dog: it has no dog smell, and although it attempts a play bow and moves its head in vaguely canine fashion, it makes no attempt to smell his butt. The researcher begins gently stroking the Aibo's back. The dog jumps in the way. Even without a thought bubble you can see the injustice forming: "Hey! Real dog here! Pet *me*!"

In these two short minutes the dog perfectly models the human reaction to AI development: 1) what is that?; 2) will it play with me?; 3) this thing doesn't behave right; 4) it's taking my job!

Later, I see the Aibo slumped, apparently catatonic. Soon, a staffer strides through the crowd clutching a woke replacement.

If the dog could talk, it would be saying "#Fail".


Illustrations: Sunrise from the 30th floor.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.