
December 23, 2022

An inherently adverse environment

Earlier this year, I wrote a short story/provocation for the recent book 22 Ideas About the Future. My story imagined a future in which the British central government had undermined local authorities by allowing local communities to opt out and contract for their own services. One of the consequences was to carve London up into tiny neighborhoods, each with its own rules and sponsorships, making it difficult to plot a joined-up route across town. Like an idiot, I entirely overlooked the role facial recognition would play in such a scenario. Community blocs like these, some openly set up to exclude unwanted diversity, would absolutely grab at facial recognition to repel - or charge - unwelcome outsiders.

Most discussion of facial recognition to date has focused on privacy: that it becomes impossible to move around public spaces without being identified and tracked. We haven't thought enough about the potential use of facial recognition to underpin a broad permission-based society in which our presence in any space can be detected and terminated at any time. In such a society, we are all migrants.

That particular unwanted dystopian future is upon us. This week, we learned that a New Jersey lawyer was blocked from attending the Radio City Music Hall Christmas show with her daughter because the venue's facial recognition system identified her as a member of a law firm involved in litigation against Radio City's owner, MSG Entertainment. Security denied her entry, despite her protests that she was not involved in the litigation. Whether she was or wasn't shouldn't really matter; she had committed no crime, she was causing no disturbance, she was granted no due process, and she had no opportunity for redress.

Soon after she told her story, a second instance emerged: a male lawyer who was blocked from attending a New York Knicks basketball game at Madison Square Garden. Then, quickly, a third: a woman and her husband were removed from their seats at a Brandi Carlile concert, also at Madison Square Garden.

MSG later explained that litigation creates "an inherently adverse environment". I read that this way: the company has chosen to use developing technology in an abusive display of power. In other words, MSG is treating its venues as if they were the new-style airports Edward Hasbrouck has detailed, also covered here a few weeks back. In its original context, airport thinking is bad enough; expanded to the world's many privately-owned public venues, the potential is terrifying.

Early adopters of data sharing to exclude "bad people" talked about barring known shoplifters from chains of pubs or supermarkets, or catching and punishing criminals much more quickly. The MSG story means the mission has crept from "terrorist" to "don't like their employer" at unprecedented speed.

The right to navigate the world without interference is one that privileged folks have taken for granted. With some exceptions: in England, the right to ramble all parts of the countryside took more than a century to codify into law. To an American, exclusion from a public venue *feels* like it should be a Constitutional issue - but of course it's not, since the affected venues are owned by a private company. In the reactions I've seen to the MSG stories, people have called for a ban on live facial recognition. By itself that's probably not going to be enough, now that this compost heap of worms has been opened; we are going to need legislation to underpin the right to assemble in privately-owned public spaces. Such a right sort of exists already in the conditions baked into many relevant local licensing laws, which require venue operators to be the real-world equivalent of common carriers in telecommunications, who are not allowed to pick and choose whose data they will carry.

In a fourth MSG incident, a lawyer who is suing Madison Square Garden for barring him from entering tricked the cameras at the MSG-owned Beacon Theater by disguising himself with a beard and a baseball cap. He didn't exactly need to, as his company had won a restraining order requiring MSG to let its lawyers into its venues (the case continues).

In that case, MSG's lawyer told the court that barring opposition lawyers was essential to protect the company: "It's not feasible for any entertainment venue to operate any other way."

Since when? At the New York Times, Kashmir Hill explains that the company adopted this policy last summer and feeds the photos displayed on law firms' websites into its facial recognition system to look for matches. But really the answer can only be: since the technology became available to enforce such a ban. It is a clear case where the availability of a technology leads to worse behavior on the part of its owner.
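What Hill describes is standard gallery matching: collect reference photos, compute a numeric encoding for each face, then compare every face the cameras capture against that gallery. MSG hasn't published its implementation, but as a sense of how little engineering such a watchlist requires, here is a minimal sketch using the open-source Python face_recognition library - the filenames and the 0.6 threshold are illustrative assumptions, not details of MSG's system:

    # Minimal sketch of gallery-based face matching using the open-source
    # Python face_recognition library. Filenames and threshold are
    # illustrative only -- this is not MSG's actual system.
    import face_recognition

    # Build a "watchlist" of encodings from photos scraped from, say,
    # law-firm websites (hypothetical files).
    watchlist_files = ["firm_attorney_1.jpg", "firm_attorney_2.jpg"]
    watchlist = []
    for path in watchlist_files:
        image = face_recognition.load_image_file(path)
        encodings = face_recognition.face_encodings(image)
        if encodings:  # skip photos where no face was found
            watchlist.append(encodings[0])

    # Compare a face captured at the door against the watchlist.
    probe = face_recognition.load_image_file("camera_frame.jpg")
    for encoding in face_recognition.face_encodings(probe):
        # Lower distance = more similar; 0.6 is the library's default cutoff.
        distances = face_recognition.face_distance(watchlist, encoding)
        if min(distances, default=1.0) < 0.6:
            print("Match against watchlist -- entry denied")

A few dozen lines, a webcam, and a folder of scraped headshots: the barrier to running a private ban list is effectively zero.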

In 1996, the software engineer turned essayist and novelist Ellen Ullman wrote about exactly this with respect to databases: they infect their owners with the desire to use their new capabilities. In one of her examples, a man suddenly realized he could monitor what his long-trusted secretary did all day. In another, a system to help ensure AIDS patients were getting all the benefits they were entitled to slowly morphed into a system for checking entitlement. In the case of facial recognition, its availability infinitely extends the British Tories' concept of the hostile environment.


Illustrations: The Rockettes performing in 2008 (via skividal at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 25, 2022

Assume a spherical cow

The early months of 2020 were a time of radical uncertainty - that is, decisions had to be made that affected the lives of whole populations where little guidance was available. As Leonard Smith and David Tuckett explained at their 2018 conference on the subject (and a recent Royal Society scientific meeting), decisions under radical uncertainty are often one-offs whose lessons can't inform the future. Tuckett and Smith's goal was to understand the decision-making process itself in the hope that this part of the equation at least could be reused and improved.

Inevitably, the discussion landed on mathematical models, which attempt to provide tools to answer the question, "What if?" This question is the bedrock of science fiction, but science fiction writers' helpfulness has limits: they don't have to face bereaved people if they get it wrong; they can change reality to serve their sense of fictional truth; and they optimize for the best stories, rather than the best outcomes. Beware.

In the case of covid, humanity had experience in combating pandemics, but not covid, which turned out to be unlike the first known virus family people grabbed for: flu. Imperial College epidemiologist Neil Ferguson became a national figure when it became known that his 2006 influenza model, suggesting that inaction could lead to 500,000 deaths, had influenced the UK government's delayed decision to impose a national lockdown. Ferguson remains controversial; Scotland's The Ferret offers a fact check that suggests that many critics failed to understand the difference between projection and prediction and the importance of the caveat "if nothing is done". Models offer possible futures, but not immutable ones.

As Erica Thompson writes in her new book, Escape From Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It, models also have limits that we ignore at our peril. Chief among them is the fact that the model is always an abstracted version of reality. If it weren't, our computers couldn't calculate them any more than they can calculate all the real world's variables. Thompson therefore asks: how can we use models effectively in decision making without becoming trapped inside the models' internal worlds, where their simplified assumptions are always true? More important, how can we use models to improve our decision making with respect to the many problems we face that are filled with uncertainties?

The science of covid - or of climate change - is only a small part of the factors a government must weigh in deciding how to respond; what science tells us must be balanced against the economic and social impacts of different approaches. In June 2020, Ferguson estimated that locking down a week earlier would have saved 20,000 lives. At the time, many people had already begun withdrawing from public life. And yet one reason the government delayed was the belief that the population would quickly give in to lockdown fatigue and resist restrictions, rendering an important tool unusable later, when it might be needed even more. This assumption turned out to be largely wrong, as was the assumption in Ferguson's 2006 model that 50% of the population would refuse to comply with voluntary quarantine. Thompson calls this misunderstanding of public reaction a "gigantic failure of the model".
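The sensitivity to behavioral assumptions is easy to demonstrate. Here is a toy SIR-type model in Python - every parameter is invented for illustration, and this is emphatically not Ferguson's model - showing how changing the single assumption about quarantine compliance dramatically swings the projected death toll:

    # Toy SIR projection: all parameters are invented for illustration.
    # This sketches the *kind* of sensitivity Thompson describes; it is
    # not Ferguson's model or any real epidemic forecast.

    def projected_deaths(compliance, days=1000, population=67_000_000):
        """Crude daily-step SIR run; quarantine compliance scales contacts."""
        beta0, gamma, ifr = 0.3, 0.1, 0.01     # contact rate, recovery rate, IFR
        beta = beta0 * (1 - 0.6 * compliance)  # compliant people cut contacts 60%
        s, i, r = population - 100.0, 100.0, 0.0
        for _ in range(days):
            new_infections = beta * s * i / population
            s -= new_infections
            r += gamma * i
            i += new_infections - gamma * i
        return ifr * (population - s)          # deaths as a share of ever-infected

    # "50% refuse to comply" versus "10% refuse": same model, very different future.
    for compliance in (0.5, 0.9):
        print(f"compliance {compliance:.0%}: ~{projected_deaths(compliance):,.0f} deaths")

The point is not the numbers, which are meaningless, but that a single assumption about human behavior - exactly the kind Thompson flags - dominates the output.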

What else is missing? she asks. Ferguson had to resign when he himself was caught breaking the lockdown rules. Would his misplaced belief that the population wouldn't comply have been corrected by a more diverse team?

Thompson began her career with a PhD in physics that led her to examine many models of North Atlantic storms. The work taught her more about the inferences we make from models than about storms, and it opened for her the question of how to use the information models provide without falling into the trap of failing to recognize the difference between the real world and Model Land - that is, the assumption-enclosed internal world of the models.

From that beginning, Thompson works through different aspects of how models work and where their flaws can be found. Like Cathy O'Neil's Weapons of Math Destruction, which illuminated the abuse of automated scoring systems, this is a clearly-written and well thought-out book that makes a complex mathematical subject accessible to a general audience. Thompson's final chapter, which offers approaches to evaluating models and lists of questions to ask modelers, should be read by everyone in government.

Thompson's focus on the dangers of failing to appreciate the important factors models omit leads her to skepticism about today's "AI", which of course is trained on such models: "It seems to me that rather than AI developing towards the level of human intelligence, we are instead in danger of human intelligence descending to the level of AI by concreting inflexible decision criteria into institutional structures, leaving no room for the human strengths of empathy, compassion, a sense of fairness and so on." Later, she adds, "AI is fragile: it can work wonderfully in Model Land but, by definition, it does not have a relationship with the real world other than one mediated by the models that we endow it with."

In other words, AI works great if you can assume a spherical cow.


Illustrations: The spherical cow that mocks unrealistic scientific models, drawn jumping over the moon by Ingrid Kallick for the 1996 meeting of the American Astronomical Society (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 28, 2022

MAGICAL, Part 1

"What's that for?" I asked. The question referred to a large screen in front of me, with my newly-captured photograph in the bottom corner. Where was the camera? In the picture, I was trying to spot it.

The British Airways gate attendant at Chicago's O'Hare airport tapped the screen and a big green checkmark appeared.

"Customs." That was all the explanation she offered. It had all happened so fast there was no opportunity to object.

Behind me was an unforgiving line of people waiting to board. Was this a good time to stop to ask:

- What is the specific purpose of collecting my image?

- What legal basis do you have for collecting it?

- Who will be storing the data?

- How long will they keep it?

- Who will they share it with?

- Who is the vendor that makes this system and what are its capabilities?

It was not.

I boarded, tamely, rather than argue with a gate attendant who certainly didn't make the decision to install the system and was unlikely to know much about its details. Plus, we were in the US, where the principles of the data protection law don't really apply - and even if they did, they wouldn't apply at the border - even, it appears, in Illinois, the only US state to have a biometric privacy law.

I *did* know that US Customs and Border Protection had been trialing facial recognition at selected airports since 2017. Long-time readers may remember a net.wars report from the 2013 Biometrics Conference about the MAGICAL [sic] airport, circa 2020, through which passengers flow unimpeded because their face unlocks all. Unless, of course, they're "bad people" who need to be kept out.

I think I even knew - because of Edward Hasbrouck's indefatigable reporting on travel privacy - that at various airports airlines are experimenting with biometric boarding. This process does away entirely with boarding cards; the airline captures biometrics at check-in and uses them to automate the "boarding process" (a bit of airline-speak the late comedian George Carlin liked to mock). The linked explanation claims this will be faster because you can have four! automated lanes instead of one human-operated lane. (Presumably then the four lanes merge into a giant pile-up in the single-lane jetway.)

It was nonetheless startling to be confronted with it in person - and with no warning. CBP proposed taking non-US citizens' images in 2020, when none of us were flying, and Hasbrouck wrote earlier this year about the system's use in Seattle. There was, he complained, no signage to explain the system despite the legal requirement to do so, and the airport's website incorrectly claimed that Congress mandated capturing biometrics to identify all arriving and departing international travelers.

According to Biometric Update, as of last February, 32 airports were using facial recognition on departure, and 199 airports were using facial recognition on arrival. In total, 48 million people had their biometrics taken and processed in this way in fiscal 2021. Since the program began in 2018, the number of alleged impostors caught: 46. Set against tens of millions of travelers a year, that works out to a hit rate on the order of one in a million.

"Protecting our nation, one face at a time," CBP calls it.

On its website, British Airways says passengers always have the ability to opt out except where biometrics are required by law. As noted, it all happened too fast. I saw no indication on the ground that opting out was possible, even though notice is required under the Paperwork Reduction Act (1980).

As Hasbrouck says, though, travelers, especially international travelers and even more so international travelers outside their home countries, go through so many procedures at airports that they have little way to know which are required by law and which are optional, and arguing may get you grounded.

He also warns that the system I encountered is only the beginning. "There is an explicit intention worldwide that's already decided that this is the new normal. All new airports will be designed and built with facial recognition built into them for all airlines. It means that those who opt out will find it more and more difficult and more and more delaying."

Hasbrouck, who is probably the world's leading expert on travel privacy, sees this development as dangerous. Largely, he says, it's happening unopposed because the government's desire for increased surveillance serves the airlines' own desire to cut costs through automating their business processes - which include herding travelers onto planes.

"The integration of government and business is the under-noticed aspect of this. US airports are public entities but operate with the thinking of for-profit entities - state power merged with the profit motive. State *monopoly* power merged with the profit motive. Automation is the really problematic piece of this. Once the infrastructure is built it's hard for airline to decide to do the right thing." That would be the "right thing" in the sense of resisting the trend toward "pre-crime" prediction.

"The airline has an interest in implying to you that it's required by government because it pressures people into a business process automation that the airline wants to save them money and implicitly put the blame on the government for that," he says. "They don't want to say 'we're forcing you into this privacy-invasive surveillance technology'."


Illustrations: Edward Hasbrouck in 2017.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 20, 2022

The laws they left behind

In the spring of 2020, as country after country instituted lockdowns, mandated contact tracing, and banned foreign travelers, many, including Britain, hastily passed laws enabling the state to take such actions. Even in the strange airlessness of the time, it was obvious that someday there would have to be a reckoning and a reevaluation of all that new legislation. Emergency powers should not be allowed to outlive the emergency. I spent many of those months helping Privacy International track those new laws across the world.

Here in 2022, although Western countries believe the acute emergency phase of the pandemic is past, the reality is that covid is still killing thousands of people a week across the world, and there is no guarantee we're safe from new variants with vaccine escape. Nonetheless, the UK and US at least appear to accept this situation as if it were the same old "normal". Except: there's a European war, inflation, strikes, a cost of living crisis, energy shortages, and a load of workplace monitoring and other privacy invasions that would have been heavily resisted in previous times. (And, in the UK, a government that has lost its collective mind; as I type no one dares move the news cameras away from the doors of Number 10 Downing Street in case the lettuce wins.)

Laws last longer than pandemics, as the human rights lawyer Adam Wagner writes in his new book, Emergency State: How We Lost Our Freedoms in the Pandemic and Why It Matters. For the last couple of years, Wagner has been a constant presence in my Twitter feed, alongside numerous scientists and health experts posting and examining the latest new research. Wagner studies a different pathology: the gaps between what the laws actually said and what was merely guidance, and between overactive police enforcement and people's reasonable beliefs about what the laws should be.

In Emergency State, Wagner begins by outlining six characteristics of the emergency-empowered state's power: it is mighty, concentrated, ignorant, corrupt, self-reinforcing, and, crucially, we want it to happen. As a comparison, Wagner notes the surveillance laws and technologies rapidly adopted after 9/11. Much of the rest of the book investigates a seventh characteristic: these emergency-expanded states are hard to reverse. In an example that's frequently come up here, see Britain's World War II ID card, which took until 1952 to remove - and even then only after Harry Willcock refused to show his papers on demand and won in court.

Most of us remember the shock and sudden silence of the first lockdown. Wagner remembers something most of us either didn't know or forgot: when Boris Johnson announced the lockdown and listed the few exceptional circumstances under which we were allowed to leave home, there was as yet no law in place on which law enforcement could rely. That only came days later. The emergency to justify this was genuine: dying people were filling NHS hospital beds. And yet: the government response overturned the basis of Britain's laws, which traditionally presume that everything is permitted unless it's specifically forbidden. Suddenly, the opposite - everything is forbidden unless explicitly permitted - was the foundation of daily life. And it happened with no debate.

Wagner then works methodically through Britain's Emergency State, beginning by noting that the ethos of Boris Johnson's government, continuing the Conservatives' direction of travel, was already disdainful of Parliamentary scrutiny (see also: the prorogation of Parliament) and ready to weaken both the Human Rights Act and the judiciary. As the pandemic wore on, Parliamentary attention to successive waves of incoming laws did not improve; sometimes, the laws had already changed by the time they reached the chamber. In two years, Parliament failed to amend any of them. Meanwhile, Wagner notes, behind closed doors government members ignored the laws they made.

The press dubbed March 18, 2022 Freedom Day, to signify the withdrawal of all restrictions. And yet: if scientists' worst fears come true, we may need them again. Many covid interventions - masks, ventilation, social distancing, contact tracing - are centuries old, because they work. The novelty here was the comprehensive lockdowns and widespread business closures, which Wagner suggests may have come about because the first country to suffer and therefore to react was China, where this approach was more acceptable to its authoritarian government. Would things have gone differently had the virus surfaced in a democratic country? We will never know. Either way, the effects of the cruelest restrictions - the separation of families and friends, the isolation imposed on the elderly and dying - cannot be undone.

In Britain's case, Wagner points to flaws in the Public Health Act (1984) that made it too easy for a months-old prime minister with a distaste for formalities to bypass democratic scrutiny. He suggests four remedies: urgently amend the act to include safeguards; review all prosecutions and fines under the various covid laws; codify stronger human rights, either in a written constitution or a bill of rights; and place human rights at the heart of emergency decision making. I'd add: elect leaders who will transparently explain which scientific advice they have and haven't followed and why, and who will plan ahead. The Emergency State may be in abeyance, but current UK legislation in progress seeks to undermine our rights regardless.


Illustrations: The Daily Star's QE2 lettuce declaring victory as 44-day prime minister Liz Truss resigns.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 7, 2022

Recycle

Bad ideas never die.

In particular, bad ideas in Internet policy never die. Partly, it's a newcomer problem. In the 1990s, one manifestation of this was that every newly-connected media outlet would soon run the story warning readers not to open an email with a particular subject line - for example, Join the Crew - because it would instantly infect your computer. These were virus hoaxes. At the time, emails were all plain text, and infection on opening an email was a technical impossibility. (Would that it still were.) This did end because the technology changed.

Still with us, though, are repeated calls to end online anonymity. It doesn't matter who it was this week, but there was a professorial tweet: social media should require proof of identity. This despite decades of experience and research that show that often the worst online behavior comes from people operating under their own well-known, real-world identity, and that many people who use anonymity really need it. And I do mean decades: it's 30 years since Lee Sproull and Sara Kiesler published their study of human behavior on corporate mailing lists.

This week, Konstantinos Komaitis, a senior director at the Internet Society, and 28 other Internet experts and academics sent a letter to the European Commission urging it to abandon possibly imminent proposals to require content providers such as Google and Facebook to pay "infrastructure fees" to telecommunications companies. The letter warns, as you'd expect, that bringing in such fees upends the network neutrality rules in place in many parts of the world, including the EU, where they became law in the 2015 Open Internet Regulation.

Among prior attempts, Komaitis highlights similar proposals from 2012, but he could have as easily pointed to 2005, when the then CEO of AT&T, Ed Whitacre, said he was tired of big Internet sites using "my pipes" "for free". At the time, network neutrality was being hotly disputed.

The Internet community has long distrusted telcos: first, because the pioneers still remember their hostility to the nascent Internet, and second, as they will remind you at any mention of the International Telecommunication Union, because the telcos' decades of monopoly were also decades of stagnation. A small sample of the workarounds and rule-breaking Internet founders had to adopt in Britain alone was presented at an event in 2013 that featured notable contributors Peter Kirstein, Roger Scantlebury, and Vint Cerf.

Of course, we all know what's happened since then: scrappy little Internet startups became Big Tech, and now everyone wants a piece of their wealth - governments, through taxation and telcos through changing the entire business model.

Until the EU's proposals surfaced last year, it was possible to think that this particular bad idea had finally died of old age. AT&T has changed CEOs a couple of times, and for a while in there it was owner of Time-Warner, which has its own streaming products. The fundamental issue is that the Internet infrastructure has grown up as a sort-of cooperative, in which everyone pays for their own connections and freely exchanges data with peers. In the world the telcos - and the postal services - live in, senders pay for carriage and intermediate carriers get a slice ("settlement"). Small wonder the telcos want to see that world return. (They shouldn't have been so dismissive at the beginning.)

EU telcos have been tilting at this particular wind turbine for a long time; in 2012, the European Telecommunications Network Operators Association (ETNO) called for settlement as part of a larger proposal to turn Internet governance over to the International Telecommunication Union. A contemporaneous presentation by analyst Falk von Bornstaedt argued that "sending party network pays" is the necessary future in order to provide quality-of-service guarantees.

The current EU call for this change is backed by Deutsche Telekom, Orange, Telefonica, and 13 other telcos. They have a new excuse: the energy crisis and plans for combating climate change mean they need Big Tech to share the costs of rolling out 5G and fiber optic cabling. More than half of global network traffic, they argue, is attributable to just six companies: Google, Facebook/Meta, Netflix, Apple, Amazon, and Microsoft.

It is certainly true that the all-you-can-eat model of Internet connection encourages some wastefulness such as ubiquitous Facebook trackers or constantly-connected subscription office software. Moving to "the metaverse", as Meta has $70 billion worth of hope that you will, will make this exponentially worse.

On the other hand, consider the truly undesirable consequences of changing the business model. The companies paying the telcos extra for carriage will expect in return to have their traffic prioritized. That in turn will disadvantage their competitors who don't have either that financial burden or that privileged access. Soon, what's left of the open Internet would be even more of an oligopoly, particularly with respect to high-bandwidth applications like video or virtual worlds, where network lag is the enemy of tolerable quality.

A column (PDF) lays out the issues quite clearly and warns: 1) we may not have the tools to understand the consequences of such a change; and 2) we might not be able to unwind it if we regret it later, particularly if these companies continue to merge into even bigger and more predatory giants.

Tl;dr: Please don't do this.

Illustrations: Recycling symbol.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 26, 2022

Good-enough

A couple of months on from Amazon's synthesized personal voices, it was intriguing to read this week, in the Financial Times ($) (thanks to Charles Arthur's The Overspill), that several AI startups are threatening voice actors' employment prospects. Actors Equity is campaigning to extend legal protection to the material computers synthesize from actors' voices and likenesses that, as Equity puts it, "reproduces performances without generating a 'recording' or a 'copy'." The union's survey found that 65% of performance artists and 93% of audio artists thought AI voices pose a threat to their livelihood.

Voices gives a breakdown of their assignments. Fortunately, most jobs seek "real person" acting - exactly where voice synthesizers fail. For many situations, though - railway announcements, customer service, marketing campaigns - "real person" is overkill. Plus, AI voices, the FT notes, "can be made to say anything at the push of a button". No moral qualms need apply.

We have seen this movie before. This is a more personalized version of appropriating our data in order to develop automated systems - think Google's language translation, developed from billions of human-translated web pages, or the co-option of images posted on Flickr to build facial recognition systems later used to identify deportees. More immediately pertinent are the stories of Susan Bennett, the actress whose voice was Siri in 2011, and Jen Taylor, the voice of Microsoft's Cortana. Bennett reportedly had no idea that the phrases and sentences she'd spent so many hours recording were in use until a friend emailed. Shouldn't she have the right to object - or to royalties?

Freelance writers have been here: the 1990s saw an industry-wide shift from first-rights contracts under which we controlled our work and licensed one-time use to all-rights contracts that awarded ownership in perpetuity to a shrinking number of conglomerating publishers. Photographers have been here, watching as the ecosystem of small, dedicated agencies that cared about them got merged into Corbis and Getty while their work opportunities shrank under the confluence of digital cameras, smartphones, and social media. Translators, especially, have been here: while the most complex jobs require humans, for many uses machine translation is good enough. It's actors' "good-enough" ground that is threatened.

Like so many technologies, personalized voice synthesis started with noble intentions - to help people who'd lost their own voices to injury or illness. The new crop of companies the FT identifies are profit-focused; as so often, it's not the technology itself, but the rapidly decreasing cost that's making trouble.

First historical anecdote: Steve Williams, animation director for the 1991 film Terminator 2, warned the London Film Festival that it would soon be impossible to distinguish virtual reality from physical reality. Dead presidents would appear live on the news and Cary Grant would make new movies. Obvious result: just as musicians compete against the entire back catalogue of recorded music, might actors now be up against long-dead stars when auditioning for a role?

Second historical anecdote: in 1993, Silicon Graphics, then leading the field of computer graphics, in collaboration with sensor specialist SimGraphics, presented VActor, a system that captured measurements of body movements from live actors and turned them into computer simulations. Creating a few minutes of the liquid metal man (Robert Patrick) in Terminator 2, although a similar process, took 50 animators a year. VActor was faster and much cheaper at producing a reusable library of "good-enough" expressions and body movements. At the time, the company envisioned the system's use for presentations at exhibitions and trade shows and even talk shows. Prior art: Max Headroom, 1987-1988. In 2022, SimGraphics is still offering "real-time interactive characters" - these days, for the metaverse. Its website says VActor, now "AI-VActor", is successfully animating Mario.

Third historical anecdote: in 1997, Fred Astaire, despite being dead at the time, appeared in ads performing some of his most memorable dance moves with a Dirt Devil vacuum cleaner. The ad used CGI to replace two of his dance partners - a mop, a hat rack. If old Cary Grant did have career prospects, they were now lost: the public *hated* the ad. Among the objectors was Astaire's daughter, who returned one of the company's vacuum cleaners with a letter that said, in part, "Yes, he did dance with a mop but he wasn't selling that mop and it was his own idea." The public at large agreed: Astaire's extraordinary artistry deserved better than an afterlife as a shill.

Today, voice actors really could find themselves competing for work against synthesized versions of themselves. Equity's approach seems to be to push to extend copyright so that performers will get royalties for future reuse. Actors might, however, be better served by personality rights as granted in some jurisdictions (not the UK). This is the right that helped Cheers actors George Wendt and John Ratzenberger win their suit against a company that created robots that looked like them, and the one Bette Midler used when a soundalike singer in an ad fooled people into thinking she herself was singing.

The bottom line: a tough profession looks like getting even tougher. As Michael (Dustin Hoffman) says in Tootsie (written by Murray Schisgal and Larry Gelbart), "I don't believe in Hell. I believe in unemployment, but I don't believe in Hell."


Illustrations: The Big Bang Theory's Rajesh (Kunal Nayyar) tries to date Siri (Becky O'Donahue).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 22, 2022

Parting gifts

All national constitutions are written to a threat model that is clearly visible if you compare what they say to how they are put into practice. Ireland, for example, has the same right to freedom of religion embedded in its constitution as the US Bill of Rights does. Both were reactions to English abuse, yet they chose different remedies. The nascent US's threat model was a power-abusing king, and that focus coupled freedom of religion with a bar on the establishment of a state religion. Although the Founding Fathers were themselves Protestants and likely imagined a US filled with people in their likeness, their threat model was not other beliefs or non-belief but the creation of a supreme superpower derived from merging state and church. In Ireland, for decades, "freedom of religion" meant "freedom to be Catholic". Campaigners for the separation of church and state in 1980s Ireland, when I lived there, advocated fortifying the constitutional guarantee with laws that would make it true in practice for everyone from atheists to evangelical Christians.

England, famously, has no written constitution to scrutinize for such basic principles. Instead, its present Parliamentary system has survived for centuries under a "gentlemen's agreement" - a term of trust that in our modern era translates to "the good chaps rule of government". Many feel Boris Johnson has exposed the limitations of this approach. Yet it's not clear that a written constitution would have prevented this: a significant lesson of Donald Trump's US presidency is how many of the systems protecting American democracy rely on "unwritten norms" - the "gentlemen's agreement" under yet another name.

It turns out that tinkering with even an unwritten constitution is tricky. One such attempt took place in 2011, with the passage of the Fixed-term Parliaments Act. Without the act, a general election must be held at least once every five years, but may be called earlier if the prime minister advises the monarch to do so; elections may also be called at any time following a vote of no confidence in the government. Because past prime ministers were felt to have abused their prerogative by timing elections for their political benefit, the act removed it in favor of a set five-year interval, unless two-thirds of MPs voted for an early election or the government lost a confidence vote with no replacement formed within two weeks. There were general elections in 2010 and 2015 (the first under the act). The next should have been in 2020. Instead...

No one counted on the 2016 vote to leave the EU or David Cameron's next-day resignation. In 2017, Theresa May, trying to negotiate a deal with an increasingly divided Parliament and thinking an election would win her a more workable majority and a mandate, got the necessary super-majority to call a snap election. Her reward was a hung Parliament; she spent the rest of her time in office hamstrung by having to depend on the good will of Northern Ireland's Democratic Unionist Party to get anything done. Under the act, the next election should have been 2022. Instead...

In 2019, a Conservative party leadership contest replaced May with Boris Johnson, who, after several failed attempts blocked by opposition MPs determined to stop the most reckless Brexit possibilities, won the necessary two-thirds majority and called a snap election, winning a majority of 80 seats. The next election should be in 2024. Instead...

They repealed the act in March 2022. As we were. Now, Johnson is going, leaving both party and country in disarray. An election in 2023 would be no surprise.

Watching the FTPA in action led me to this conclusion: British democracy is like a live frog. When you pin down one bit of it, as the FTPA did, it throws the rest into distortion and dysfunction. The obvious corollary is that American democracy is a *dead* frog that is being constantly dissected to understand how it works. The disadvantage to a written constitution is that some parts will always age badly. The advantage is clarity of expectations. Yet both systems have enabled someone who does not care about norms to leave behind a generation's worth of continuing damage.

All this is a long preamble to saying that last year's concerns about the direction of the UK's computers-freedom-privacy travel have not abated. In this last week before Parliament rose for the summer, while the leadership contest and the heat saturated the news, Johnson's government introduced the Data Protection and Digital Information bill, which will undermine the rights granted by 25 years of data protection law. The widely disliked Online Safety bill was postponed until September. The final two leadership candidates are, to varying degrees, determined to expunge EU law, revamp the Human Rights Act, and withdraw from the European Convention on Human Rights. In addition, lawyer Gina Miller warns, the Northern Ireland Protocol bill expands executive power by giving ministers Henry VIII powers to make changes without Parliamentary consent: "This government of Brexiteers are eroding our sovereignty, our constitution, and our ability to hold the government to account."

The British convention is that "government" is collective: the government *are*. Trump wanted to be a king; Johnson wishes to be a president. The coming months will require us to ensure that his replacement knows their place.


Illustrations: Final leadership candidates Rishi Sunak and Liz Truss in debate on ITV.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 27, 2022

Well may the bogeyman come

It's only an accident of covid that this year's Computers, Privacy, and Data Protection conference - delayed from late January - coincided with the fourth anniversary of the EU's General Data Protection Regulation. Yet its failures and frustrations were on everyone's mind as they considered new legislation forthcoming from the EU: the Digital Services Act, the Digital Markets Act, and, especially, the AI Act.

Two main frustrations: despite GDPR, privacy invasions continue to expand, and, related, enforcement has been extremely limited. The first is obvious to everyone here. For the second...as Max Schrems explained in a panel on GDPR enforcement, none of the cross-border cases his NGO, noyb, filed on May 25, 2018, the day GDPR came into force, have been decided, and even decisions on simpler cases have failed to deal with broader questions.

In one of his examples, Spain rejected a complaint because it wasn't doing historic cases and Austria claimed the case was solved because the organization involved had changed its procedures. "But my rights were violated then." There was no redress.

Schrems is the data protection bogeyman; because legal actions he has brought have twice struck down US-EU agreements to enable data flows, the possibility of "Schrems III" if the next version gets it wrong is frequently mentioned. This particular panel highlighted numerous barriers that block effective action.

Other speakers highlighted numerous gaps between countries that impede cross-border complaints: some authorities have tight deadlines that expire while other authorities are working to more leisurely schedules; there are many conflicts between national procedural laws; each data protection authority has its own approach and requirements; and every cross-border complaint must be time-consumingly translated into English, even when both relevant authorities speak, say, German. "Getting an answer to a two-minute question takes four months," Nina Herbort said, highlighting the common underlying problem: underresourcing.

"Weren't they designed to fail?" Finn Myrstad asked.

Even successful enforcement has largely been limited to levying fines - and despite some of the eye-watering numbers, they're still just a cost of doing business to major technology platforms.

"We have the tools for structural sanctions," Johnny Ryan said in a discussion on judicial actions. Some of that is beginning to happen. A day earlier, the UK'a Information Commissioner's Office fined Clearview AI £7.5 million and ordered it to delete the images it holds of UK residents. In February, Canada issued a similar order; a few weeks ago, Illinois permanently banned the company from selling its database to most private actors and businesses nationwide, and barred it from selling its service to any entity within Illinois for five years. Sanctions like these hurt more than fines as does requiring companies to delete the algorithms they've based on illegally acquired data.

Other suggestions included building sovereignty by ensuring that public procurement does not default to off-the-shelf products from a few foreign companies but is built on local expertise, advocated by Jan-Philipp Albrecht, the former MEP, who told a panel on the impact of Schrems II that he is now building up cloud providers using locally-built hardware and open source software for the province of Schleswig-Holstein. Quang-Minh Lepescheux suggested requiring transparency in how people are trained to use automated decision-making systems and forcing technology providers to accept third-party testing. Cristina Caffarra, probably the only antitrust lawyer in sight, wants privacy advocates and antitrust lawyers to work together; the economists inside competition authorities insist that more data means better products so it's good for consumers. Rebecca Slaughter wants to give companies the clarity they say they want (until they get it): clear, regularly updated rules banning a list of practices, with a catchall. Ryan also noted that some sanctions can vastly improve enforcement efficiency: there's nothing to investigate after banning a company from making acquisitions. Enforcing purpose limitation and banning the single "OK to everything" is more complicated but, "Purpose limitation is Kryptonite to Big Tech when it's misusing data."

Any and all of these are valuable. But new kinds of thinking are also needed. The more complex issue, and another major theme, was the limitations of focusing on personal data and individual rights. This was long predicted as a particular problem for genetic data - the former science journalist Tom Wilkie was first to point out the implications, sounding a warning in his book Perilous Knowledge, published in 1994, at the beginning of the Human Genome Project. Singling out individuals who have been harmed can easily obfuscate collective damage. The obvious example is Cambridge Analytica and Facebook: the damage to national elections can't be captured one Friends list at a time; controls on the increasing use of aggregated data require protection at scale; and, perversely, monitoring for bias and discrimination requires data collection.

In response to a panel on harmful patterns in recent privacy proposals, an audience member suggested the African philosophy of ubuntu as a useful source of ideas for thinking about collective and, even more important, *interdependent* data. This is where we need to go. Many forms of data - including both genetic data and financial data - cannot be thought of any other way.


Illustrations: The Norwegian Consumer Council receives EPIC's International Privacy Champion award at CPDP 2022.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 20, 2022

Mona Lisa smile

A few weeks ago, Zoom announced that it intends to add emotion detection technology to its platform. According to Mark DeGeurin at Gizmodo, in response, 27 human rights groups from across the world, led by Fight for the Future, have sent an open letter demanding that the company abandon this little plan, calling the software "invasive" and "inherently biased". On Twitter, I've seen it called "modern phrenology" - a deep insult for those who remember the pseudoscience of studying the bumps on people's heads to predict their personalities.

It's an insult, but it's not really wrong. In 2019, Angela Chen at MIT Technology Review highlighted a study showing that facial expressions on their own are a poor guide to what someone is feeling. Culture, context, and personal style all affect how we present ourselves, and the posed faces AI developers use as part of their training of machine learning systems are even worse indicators, since few of us really know how our faces look under the influence of different emotions. In 2021, Kate Crawford, author of Atlas of AI, used the same study to argue in The Atlantic that the evidence that these systems work at all is "shaky".

Nonetheless, Crawford goes on to report, this technology is being deployed in hiring systems and added into facial recognition. A few weeks ago, Kate Kaye reported at Protocol that Intel and virtual school software provider Classroom Technologies are teaming up to offer a version that runs on top of Zoom.

Cue for a bit of nostalgia: I remember the first time I heard of someone proposing computer emotion detection over the Internet. It was the late 1990s, and the source - or the perpetrator, depending on your point of view - was Rosalind Picard at the MIT Media Lab. Her book on the subject, Affective Computing, came out in 1997.

Picard's main idea was that to be truly intelligent - or at least, seem that way to us - computers would have to learn to recognize emotions and produce appropriate responses. One of the potential applications I remember hearing about was online classrooms, where the software could monitor students' expressions for signs of boredom, confusion, or distress and alert the teacher - exactly what Intel and Classroom Technologies want to do now. I remember being dubious: shouldn't teachers be dialed in on that sort of thing? Shouldn't they know their students well enough to notice? OK, remote, over a screen, maybe dozens or hundreds of students at a time...not so easy.... (Of course, the expensive schools offer mass online education schemes to exploit their "brands", but they still keep the small, in-person classes that create those "brands" by churning out prime ministers and Silicon Valley dropouts.)

That wasn't Picard's main point, of course. In a recent podcast interview, she explains her original groundbreaking insight: that computers need to have emotional intelligence in order to make them less frustrating for us to deal with. If computers can capture the facial expressions we choose to show, the changes in our vocal tones, our gestures and muscle tension, perhaps they can respond more appropriately - or help humans to do so. Twenty-five years later, the ideas in Picard's work are now in use in media companies, ad agencies, and call centers - places where computer-human communication happens.

It seems a doubtful proposition. Humans learn from birth to read faces, and even we have argued for centuries over the meaning of the expression on the face of the Mona Lisa.

In 1997, Picard did not foresee the creepiness and giant technology exploiters. It's hard to know whether to be more alarmed about the technology's inaccuracy or its potential improvement. While it's inaccurate and biased, the dangers are the consequences of mistakes in interpretation; a student marked "inattentive", for example, may be penalized in their grade. But improving and debiasing the technology opens the way for fine-tuned manipulation and far more pervasive and intimate surveillance as it becomes embedded in every company, every conference, every government agency, every doctor's office, all of law enforcement. Meanwhile, the technological imperative of improving the system will require the collection of more and more data: body movements, heart rates, muscle tension, posture, gestures, surroundings.

I'd like to think that by this time we are smarter about how technology can be abused. I'm sure many of Zoom's corporate clients want emotion recognition technology; as in so many other cases, we are pawns because we're largely not the ones paying the bills or making the choice of platform. There's an analogy here to Elon Musk's negotiations with Twitter shareholders; the millions who use the service every day and find it valuable have no say in what will happen to it. If Zoom adopts emotion recognition, how long before law enforcement starts asking for user data in order to feed it into predictive policing systems? One of this week's more startling revelations was Aaron Gordon's report at Vice that San Francisco police are using driverless cars as mobile surveillance cameras, taking advantage of the fact that they are continuously recording their surroundings.

Sometimes the only way to block abuse of technology is to retire the idea entirely. If you really want to know what I'm thinking and feeling, just ask. I promise I'll tell you.


Illustrations: The emotional enigma that is the Mona Lisa.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 8, 2022

The price of "free"

"This isn't over," we predicted in April 2021 when Amazon warehouse workers in Bessemer, Alabama voted against unionizing. And so it has proved: on April 1 workers at its Staten Island warehouse voted to join the Amazon Labor Union.

There will be more of this, and there needs to be. As much as people complain - often justifiably - about unions, no one individual can defend themselves and their rights in the face of the power of a giant company. Worse, as the largest companies continue to get bigger and the number of available employers shrinks, that power imbalance is still growing. Antitrust law can only help reopen the market to competition with smaller and newer businesses; organized labor and labor law are required to ensure fair treatment for workers (see also Amazon's warehouse injury rate, which is about double the industry average). Even the top class of Silicon Valley engineers have lost out; in 2015 Apple, Google, Adobe, and Intel paid $415 million to settle claims that they had operated a "no-poaching" cartel; Lucasfilm, Pixar, and Intuit settled earlier for a joint $20 million.

One lesson to take from this is that instead of treating multi-billionaires as symbols of success we should take the emergence of that level of wealth disparity as a bad sign.

In 1914, Henry Ford famously doubled wages for the factory workers building his cars. At Michigan Radio, Sarah Cwiek explains that it was a gamble intended to produce a better, more stable workforce. Cwiek cites University of California-Berkeley labor economist Harley Shaiken to knock on the head the notion that it was solely in order to expand the range of people who could afford to buy the cars - but that also was one of the benefits to his business.

The purveyors of "pay-with-data-and-watching-ads" services can't look forward to that sort of benefit. For one thing, as multi-sided markets their primary customers aren't us but advertisers who don't sell directly to the masses. For another, a company like Google or Facebook doesn't benefit directly from the increasing wealth of its users; it can collect their data either way. Even the companies like Amazon and Uber, that actually sell people things or services, see faster returns from squeezing both their customers and their third-party suppliers - which they can do because of their dominant positions.

On Twitter, Cory Doctorow has a long thread arguing that antitrust law also has a role to play in securing workers' rights against the hundreds of millions companies like Uber and DoorDash are pouring into lobbying for legislation that keeps their gig workers classed as "independent contractors" instead of employees with rights such as paid sick leave, health insurance, and workmen's compensation.

Doctorow's thread is based on analyzing two articles: a legal analysis by Marshall Steinbaum laying out the antitrust case against the gig economy platforms, which fail to deliver their promises of independence and control to workers. Steinbaum highlights the value of antitrust law to the self-employed, who rely on being able to work for many outlets. In what the law calls "vertical restraint", the platforms dictate prices to customers and require exclusivity - both the opposite of the benefits self-employment is supposed to deliver. Any freelance in any business knows that too-great dependence on one or two employers is dangerous; a single shift in personnel or company policy can threaten your ability to make rent. It is the joint operation of antitrust law and labor regulation that is necessary, Steinbaum writes: "...taking away their ability to exercise control in the absence of an employment relationship is a necessary condition for the success of any effort to curtail the gig economy and the threat it poses to worker power and to workers' welfare."

Doctorow goes on to add that using antitrust law in this way would open the way to requiring interoperability among platform apps, so that a driver could assess which platform would pay them the best and direct customers to that one. It's an idea with potential - but unfortunately it reminds me of Mark Huntley-James' story "Togetherness", which formed part of Tales of the Cybersalon - A New High Street. In it, a hapless customer trying to get a parcel delivery is shunted from app to app as the pickup shop keeps shifting to get a better deal. (The story, along with the rest of the Tales of the Cybersalon, will be published later this year.) I'm not sure that the urgent-lift-seeking customer experience will be enhanced by, "Sorry, luv, I can't take you unless you sign up for NewApp." However, Doctorow's main point stands.

All of this is yet another way that the big technology companies benefit from negative externalities - that is, the costs they impose on society at large. The content moderators who work for Facebook, Uber's and Lyft's drivers, the behind-the-scenes ghost-worker intermediaries that pass for "AI", Amazon's time-crunched warehouse workers...together they add up to a large economy of underpaid, stressed workers deliberately kept outside of standard employment contracts and workers' rights. Such a situation cannot be sustainable for a society.


Illustrations: Amazon warehouse workers protesting in Minnesota in 2018 (by Czar at Wikimedia, cc-by-2.0).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 1, 2022

Grounded

Boeing-737-MAX.png"The airline probably needed to do a better job to make sure its pilots understood exactly what to do in case the aircraft was performing in a unique, unusual way, and how to get out of the problem," former National Transportation Safety Board chair Mark Rosenker tells CBS News in the recent documentary Downfall: The Case Against Boeing (directed by Rory Kennedy, written by Mark Bailey and Keven McAlester, and streaming on Netflix). He then downplays the risk to passengers: "Certainly in the United States they understand how to operate this aircraft."

Rosenker was speaking soon after the 2018 Lion Air crash.

Three oh-my-god wrong things here: the smug assumption that *of course* American personnel are more competent than their Indonesian counterparts (see also contemporaneous articles dissing Indonesia's airline safety record); the presumption that a Boeing aircraft is safe and the crash a non-recurring phenomenon; and the logical sequitur that it must be the pilot's fault. All that went largely unchallenged until the Ethiopian Airlines crash, 19 weeks later. Even then, numerous countries grounded the plane before the US finally followed suit - and even *then* it was ordered by the president, not the Federal Aviation Administration. The FAA's regulatory failure needs its own movie.

As we all now know, a faulty angle-of-attack sensor sent bad data to the aircraft's Maneuvering Characteristics Augmentation System (MCAS), software intended to stabilize the plane. The pilot did his best in an impossible situation. Even after that became clear, Boeing still blamed the crew for not turning off MCAS. The reason they didn't: Boeing hadn't told them it was there. In Congressional testimony, the hero of the Hudson, Captain Sully Sullenberger, summed it up thusly: "We shouldn't expect pilots to have to compensate for flawed designs."

This blame game was a betrayal. One reason aviation is so safe is that all sides have understood that every crash damages everyone. The industry therefore embraced extensive cross-collaboration in which everyone is open about the causes of failures and shares solutions. Blame destroys that culture.

All of this could be a worked example in Jessie Singer's recent book There Are No Accidents: The Deadly Rise of Injury and Disaster - Who Profits and Who Pays the Price. Of course unintended injuries happen, but calling them "accidents" removes culpability and stops us from thinking too much about larger causes. "Accident" means: "nothing to see here".

With the 737 MAX, as press articles suggested at the time and the documentary shows, that larger cause was the demise of Boeing's pride-of-America safety-first engineering culture, which rewarded employees for reporting problems. The rot began in 1997, when a merger brought in new bosses from McDonnell Douglas, and, former quality manager John Barnett tells the camera, "Everything you've learned for 30 years is now wrong." Value for shareholders replaced safety-first. The workforce was thinned. Planes were made of cheaper materials. Headquarters left Seattle, where engineering was based, for Chicago. The culture of safety gave way to a culture of concealment.

Aviation learned early the importance of ergonomic design to avoid pilot error. This is where the documentary is damning: Boeing's own emails show the company knew pilots needed training for MCAS and never provided it, even when directly asked - by Lion Air itself, in 2017. Boeing executives mocked them for asking, even though its own risk assessments predicted a 737 MAX crash every fifteen years. Boeing bet it could fix, test, and implement MCAS before it caused more trouble. It was wrong.

A fully-loaded plane crash makes headlines and sparks protests and Congressional investigations. Most of the "accidents" Singer writes about, however - traffic crashes, house fires, falls, drownings, and the nearly 840,000 opioid deaths classed as "unintentional injury by drug poisoning" since 1999 (see also Alex Gibney's Crime of the Century) - near-invisibly kill in a statistical trickle. One such was her best friend, killed when a car hit his bike. All these are "accidents" caused by human error. But even with undercounts of everything from shootings to medical errors, "accidents" were the third leading cause of death in the US in 2019, behind heart disease and "malignant neoplasms" (cancer), ahead of cerebrovascular disease, chronic lower respiratory disease, Alzheimer's, and diabetes. We research all those *and* covid-19, which was number three in 2020. Why not "accidents"? (Note: this all skews American; other wealthy countries are safer.)

Singer's argument resonates because during my ten years as the in-house writer for RISCS, I heard then-director Angela Sasse argue repeatedly that users will do the secure thing if it's the easiest path to follow, and that "user errors" are often failed security policies. Sometimes the fixes seem tangential - lessening worker stress by hiring more staff, updating computer systems, or ensuring better work-life balance - but they may improve security because tired, stressed workers make more mistakes.

Singer argues that the human errors that cause "accidents" are predictable and preventable, and that surviving them is a "marker of privilege". Across the US, she finds poverty correlated with "accidental" death and wealth with safety. The pandemic made this explicit. But Singer reminds us that the same forces frame people crossing the street as "jaywalkers" and blame workers killed on factory lines for not following posted rules. Each time, the less powerful are framed as the cause of their own demise. And so it required that second 737 MAX crash and 157 more deaths to ground that plane.


Illustrations: The Boeing 737 MAX (Boeing).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 25, 2022

Dangerous corner

War_damages_in_Mariupol,_12_March_2022_(01).jpgIf there is one thing the Western world has near-universally agreed in the last month, it's that in the Russian invasion of Ukraine, the Ukrainians are the injured party. The good guys.

If there's one thing that privacy advocates and much of the public agree on, it's that Clearview AI, which has amassed a database of (it claims) 10 billion facial images by scraping publicly accessible social media without the subjects' consent and sells access to it to myriad law enforcement organizations, is one of the world's creepiest companies. That assessment is reinforced by the fact that the company and its CEO refuse to see anything wrong with their unconsented repurposing of other people's photos; it's out there for the scraping, innit?

Last week, Reuters reported that Clearview AI was offering Ukraine free access to its technology. Clearview's suggested uses: vetting people at checkpoints; debunking misinformation on social media; reuniting separated family members; and identifying the dead. Clearview's CEO, Hoan Ton-That, told Reuters that the company has 2 billion images of Russians scraped from the Russian Facebook clone VKontakte.

This week, it's widely reported that Ukraine is accepting the offer. At Forbes, Tom Brewster reports that Ukraine is using the technology to identify the dead.

Clearview AI has been controversial ever since January 2020, when Kashmir Hill reported its existence in the New York Times, calling it "the secretive company that might end privacy as we know it". Social media sites LinkedIn, Twitter, and YouTube all promptly sent cease-and-desist notices. A month later, Kim Lyons reported at The Verge that its 2,200 customers included the FBI, Interpol, the US Department of Justice, Immigration and Customs Enforcement, a UAE sovereign wealth fund, the Royal Canadian Mounted Police, and college campus police departments.

In May 2021, Privacy International filed complaints in five countries. In response, Canada, Australia, the UK, France, and Italy have all found Clearview to be in breach of data protection laws and ordered it to delete all the photos of people that it has collected in their territories. Sweden, Belgium, and Canada have declared law enforcement use of Clearview's technology to be illegal.

This is the technology's first known use in a war zone. In a scathing blog posting, Privacy International says, "...the use of Clearview's database by authorities is a considerable expansion of the realm of surveillance, with very real potential for abuse."

Brewster cites critics, who lay out familiar privacy issues. Misidentification in a war zone could lead to death if a live soldier's nationality is wrongly assessed (misidentification is especially common when the person is non-white) and to unnecessary heartbreak for dead soldiers' families. Facial recognition can't distinguish civilians from combatants. In addition, the use of facial recognition by the "good guys" in a war zone might legitimize the technology. This last seems to me unlikely; we all recognize the difference between what's acceptable in peacetime and in an extreme context. The issue here is the *company*, not the technology, as PI accurately pinpoints: "...it seems no human tragedy is off-limits to surveillance companies looking to sanitize their image."

Jack McDonald, a senior lecturer in war studies at Kings College London who researches the relationship between ethics, law, technology, and war, sees the situation differently.

Some of the fears Brewster cites, for example, are far-fetched. "They're probably not going to be executing people at checkpoints." If facial recognition finds a match in those situations, they'll more likely make an arrest and do a search. "If that helps them to do this, there's a very good case for it, because Russia does appear to be flooding the country with saboteurs." Cases of misidentification will be important, he agrees, but consider the scale of harm in the conflict itself.

McDonald notes, however, that the use of biometrics to identify refugees is an entirely different matter and poses huge problems. "They're two different contexts, even though they're happening in the same space."

That leaves the use Ukraine appears to be most interested in: identifying dead bodies. This, McDonald explains, represents a profound change from the established norms, which are embedded in social and institutional structures and have typically been closely guarded. Even though its standard of certainty is much lower, facial recognition offers the possibility of doing identification at scale. Either way, the people making the identification typically have to rely on photographs taken elsewhere in other contexts, along with dental records and, if all else fails, public postings.

The reality of social media is already changing the norms. In this first month of the war, Twitter users posting pictures of captured Russian soldiers are typically reminded that it is technically against the Geneva Convention to do so. The extensive documentation - video clips, images, first-person reports - that is being posted from the conflict zones on services like TikTok and Twitter is a second front in its own right. In the information war, using facial recognition to identify the dead is strategic.

This is particularly true because of censorship in Russia, where independent media have almost entirely shut down and citizens have only very limited access to foreign news. Dead bodies are among the only incontrovertible sources of information that can break through the official denials. The risk that inaccurate identification could fuel Russian propaganda remains, however.

Clearview remains an awful idea. But if I thought it would help save my country from being destroyed, would I care?


Illustrations: War damage in Mariupol, Ukraine (Ministry of Internal Affairs of Ukraine, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 4, 2022

Sovereign stack

UA-sunflowers.jpgBut first, a note: Cliff Stanford, who in 1993 founded the first Internet Service Provider to offer access to consumers in the UK, died this week; RIP. Stanford's Demon Internet was my first ISP, and I well remember having to visit their office so they could personally debug my connection, which required users to precisely configure a bit of code designed for packet radio (imagine getting that sort of service now!). Simon Rockman has a far better-informed obit than I could ever write.

***

On Monday, four days after Russia invaded Ukraine, the Ukrainian minister for digital transformation, Mykhailo Fedorov, sent a letter (PDF) to the Internet Corporation for Assigned Names and Numbers and asked it to shut down Russian country code domains such as .ru, .рф, and .su. Quick background: ICANN manages the Internet's domain name system (DNS), the infrastructure that translates the human-readable names you type - for websites or email addresses - into the routing numbers computers actually use to get your communications where you want them to go. Fedorov also asked ICANN to shut down the DNS root servers located in Russia, and plans a separate letter requesting the revocation of all numbered Internet addresses in use by Russian members of RIPE NCC, the registry that allocates Internet numbers in Europe and West Asia.
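To make that concrete, here's a minimal sketch of the name-to-number translation the DNS performs (the domain queried is a generic example, not one of the domains at issue):

```python
import socket

# Ask the DNS to translate a human-readable name into the numeric
# addresses Internet routing actually uses. "example.com" is purely
# illustrative.
for family, _, _, _, sockaddr in socket.getaddrinfo(
        "example.com", 443, proto=socket.IPPROTO_TCP):
    print(socket.AddressFamily(family).name, sockaddr[0])
# Prints one line per address found, typically an AF_INET (IPv4)
# entry and an AF_INET6 (IPv6) entry.
```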

Shorn of the alphabet soup, what Fedorov is asking ICANN to do is sanction Russia by using technical means to block both incoming (we can't get to their domains) and outgoing (they can't get to ours) Internet access, on the basis that Russia uses the Internet to spread propaganda, disinformation, hate speech and the promotion of violence.

ICANN's refusal (PDF) came quickly. For numerous reasons, ICANN is right to refuse, as the Internet Society, Access Now, and others have all said.

Internet old-timers would say that ICANN's job is management, not governance. This is a long-running argument going all the way back to 1998, when ICANN was created to take over from the previous management, the University of Southern California computer scientist Jon Postel. Among other things, Postel set up much of the domain name system, selecting among submitted proposals to run registries for both generic top-level domains (.com and .net, for example) and country code domains (such as .uk and .ru). Especially in its early years, digital rights groups watched ICANN with distrust, concerned that it would stray into censorship at the behest of one or another government instead of focusing on its actual job, ensuring the stability and security of the network's operation.

For much of its history ICANN was accountable to the US National Telecommunications and Information Administration, part of the Department of Commerce. It became formally independent as a multistakeholder organization in 2016, after much wrangling over how to construct the new model.

This history matters because the alternative to ICANN was transitioning its functions to the International Telecommunications Union, an agency of the United Nations, a solution the Internet community generally opposed, then and now. Just a couple of weeks ago, Russia and China began a joint push towards greater state control, which they intended to present this week to the ITU's World Telecommunication Standardization Assembly. Their goal is to redesign the Internet to make it more amenable to government control, exactly the outcome everyone from Internet pioneers to modern human rights activists seeks to avoid.

So, now. Shutting down the DNS at the request of one country would put ICANN exactly where it shouldn't be: making value judgments about who should have access.

More to the point in this specific situation, shutting off Russian access would be counterproductive. The state shut down the last remaining opposition TV outlet on Thursday, along with the last independent radio station. Many of the remaining independent journalists are leaving the country. Recognizing this, the BBC is turning its short-wave radio service back on. But other than that, the Internet is the only remaining possibility most Russians have of accessing independent news sources - and Russia's censorship bureau is already threatening to block Wikipedia if it doesn't cover the Ukraine invasion to its satisfaction.

In fact, Russia has long been working towards a totally-controlled national network that can function independently of the rest of the Internet, like the one China already has. As The Economist writes, China is way ahead; it has 25 years of investment in its Great Firewall, and owns its entire national "stack". That is, it has domestic companies that make chips, write software, and provide services. Russia is far more dependent on foreign companies to provide many of the pieces necessary to fill out the "sovereign stack" it mandated in 2019 legislation. In July 2021, Russia tested disconnecting its nascent "Runet" from the Internet, though little is known about the results.

There are other, more appropriate channels for achieving Fedorov's goal. The most obvious are the usual social media suspects and their ability to delete fake accounts and bots and label or remove misinformation. Facebook, Google, and Twitter all moved quickly to block Russian state media from running ads on their platforms or, in Facebook's case, monetizing content. Since then, Google has paused all ad sales in Russia. The economic sanctions enacted by many countries and the crash in the ruble should shut down Russians' access to most Western ecommerce. Many countries are kicking Russia's state-media channels off the air.

This war is a week old. It will end - sometime. It will not pay in the long term (assuming we have one) to lock Russian citizens, many of whom oppose the war, into a state media-controlled echo chamber. Our best hope is to stay connected and find ways to remediate the damage, as painful as that is.


Illustrations: Sunflowers under a blue sky (by Inna Radetskaya at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 11, 2022

Freedom fries

"Someone ratted me out," a friend complained recently. They meant: after a group dinner, one of the participants had notified everyone to say they'd tested positive for covid a day later, and a third person had informed the test and trace authorities and now my friend was getting repeated texts along the lines of "isolate and get tested". Which they found invasive and offensive, and...well, just plain *unreasonable*.

Last night, Boris Johnson casually said in Parliament that he thought we could end all covid-related restrictions in a couple of weeks. Today there's a rumor that the infection survey that has produced the most reliable data on the prevalence and location of covid infections may be discontinued soon. There have been rumors, too, of charging for covid tests.

Fifteen hundred people died of covid in this country in the past week. Officially, there were more than 66,000 new infections yesterday - and that doesn't include all the people who felt like crap and didn't do a test, or did do a test and didn't bother to report the results (because the government's reporting web form demands a lot of information each time that it only needs if you tested positive), or didn't know they were infected. If he follows through, Johnson's announcement would mean that if said dinner happened a month from now, my friend wouldn't be told to isolate. They could get exposed and perhaps infected and mingle as normal in complete ignorance. The tradeoff is the risk for everyone else: how do we decide when it's safe enough to meet? Is the plan to normalize high levels of fatalities?

Brief digression: no one thinks Johnson's announcement is a thought-out policy. Instead, given the daily emergence of new stories about rule-breaking parties at 10 Downing Street during lockdown, his comment is widely seen as an attempt to distract us and quiet fellow Conservatives who might vote to force him out of office. Ironically, a key element in making the party stories so compelling is the hundreds of pictures from CCTV, camera phones, social media, Johnson's official photographer... Teenagers have known for a decade to agree to put away cameras at parties, but British government officials are apparently less afraid that anything bad will happen to them if they're caught.

At the beginning of the pandemic, we wrote about the inevitable clash between privacy and the needs of public health and epidemiology. Privacy was indeed much discussed then, at the design stage for contact tracing apps, test and trace, and other measures. Democratic countries had to find a balance between the needs of public health and human rights. In the end, Google and Apple wound up largely dictating the terms on which contact tracing apps could operate on their platforms.

To the chagrin of privacy activists, "privacy" has rarely been a good motivator for activism. The arguments are too complicated, though you can get some people excited over "state surveillance". In this pandemic, the big rallying cry has been "freedom", from the media-friendly Freedom Day, July 19, 2021, when Johnson removed that round of covid restrictions, to anti-mask and anti-vaccination protesters, such as the "Freedom Convoy" currently blocking up normally bland, government-filled downtown Ottawa, Ontario, and an increasing number of other locations around the world. Understanding what's going on there is beyond the scope of net.wars.

More pertinent is the diverging meaning of "freedom". As the number of covid prevention measures shrinks, the freedom available to vulnerable people shrinks in tandem. I'm not talking about restrictions like how many people may meet in a bar, but simple measures like masking on public transport, or getting restaurants and bars to publish information about their ventilation that would make assessing risk easier.

Elsewise, we have many people who seem to define "freedom" to mean "It's my right to pretend the pandemic doesn't exist". Masks, even on other people, then become intolerable reminders that there is a virus out there making trouble. In that scenario, however, self-protection, even for reasonably healthy people who just don't want to get sick, becomes near-impossible. The "personal responsibility" approach doesn't work in a situation where what's most needed is social collaboration.

The people landed with the most risk can do the least about it. As the aftermath of Hurricane Sandy highlighted, the advent of the Internet has opened up a huge divide between the people who have to go to work and the people who can work anywhere. I can Zoom into my friend's group dinner rather than attend in person, but the caterers and waitstaff can't. If "your freedom ends where my nose begins" (Zechariah Chafee, Jr., it says here) applies to physical violence, shouldn't it include infection by virus?

Many human rights activists warned against creating second-class citizens via vaccination passports. The idea was right, but privacy was the wrong lens, because we still view it predominantly as a right for the individual. You want freedom? Then, as health psychologist Susan Michie has been advocating for months, instead of placing the burden on each of us, make the *places* safer - set ventilation standards, have venues publish their protocols, display CO2 readings, install HEPA air purifiers. Less risk, greater freedom, and you'd get some privacy, too - and maybe fewer of us would be set against each other in standoffs no one knows how to fix.


Illustrations: Trucks protesting in Ottawa, February 2022 (via ΙΣΧΣΝΙΚΑ-888 at Wikimedia, CC-BY-SA-4.0).


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 21, 2022

Power plays

thames-kew-2022-01-17.jpegWe are still catching up on updates and trends.

Two days before the self-imposed deadline, someone blinked in the game of financial chicken between Amazon UK and Visa. We don't know which one it was, but on January 17 Amazon said it wouldn't stop accepting Visa credit cards after all. Negotiations are reportedly ongoing.

Ostensibly, the dispute was about the size of Visa's transaction fees. At Quartz, Ananya Bhattacharya quotes Banked.com's Ben Goodall's alternative explanation: the dispute allowed Amazon to suck up a load of new data that will help it build "the super checkout for the future". For Visa, she concludes, resolving the dispute has relatively little value beyond PR: Amazon accounts for only 1% of its UK credit card volume. For the rest of us, it remains disturbing that our interests matter so little. If you want proof of market dominance, look no further.

In June 2021, the Federal Trade Commission tried to bring an antitrust suit against Facebook, and failed when the court ruled that in its complaint the FTC had failed to prove its most basic assumption: that Facebook had a dominant market position. Facebook was awarded the dismissal it requested. This week, however, the same judge ruled that the FTC's amended complaint, which was filed in August, will be allowed to go ahead, though he suggests in his opinion that the FTC will struggle to substantiate some of its claims. Essentially, the FTC accuses Facebook of a "buy or bury" policy when faced with a new and innovative competitor and says it needed to make up for its own inability to adapt to the mobile world.

We will know if Facebook (or its newly-renamed holding company owner, Meta) is worried if it starts claiming that damaging the company is bad for America. This approach began as satire, Robert Heller explained in his 1994 book The Fate of IBM. Heller cites a 1990 PC Magazine column by William E. Zachmann, who used it as the last step in an escalating list of how the "IBMpire" would respond to antitrust allegations.

This week, Google came close to a real-life copy in a blog posting opposing an amendment to the antitrust bill currently going through the US Congress. The goal behind the bill is to make it easier for smaller companies to compete by prohibiting the major platforms from advantaging their own products and services. Google argues, however, that if the bill goes through Americans might get worse service from Google's products, American technology companies could be placed at a competitive disadvantage, and America's national security could be threatened. Instead of suggesting ways to improve the bill, Google concludes with the advice that Congress should delay the whole thing.

To be fair, Google isn't the only one that dislikes the bill. Apple argues its provisions might make it harder for users to opt out of unwanted monitoring. Free Press Action argues that it will make it harder to combat online misinformation and hate speech by banning the platforms from "discriminating" against "similarly situated businesses" (the bill's language), competitor or not. EFF, on the other hand, thinks copyright is a bigger competition issue. All better points than Google's.

A secondary concern is the fact that these US actions are likely to leave the technology companies untouched in the rest of the world. In Africa, Nesrine Malik writes at the Guardian, Facebook is indispensable and the only Internet most people know because its zero-rating allows its free use outside of (expensive) data plans. Most African Internet users are mobile-only, and most data users are on pay-as-you-go plans. So while Westerners deleting their accounts is a real threat to the company's future - not least because, as Frances Haugen testified, they produce the most revenue - the company owns the market in Africa. There, it is literally the only game in town for both businesses and individuals. Twenty-five years ago, we thought the Internet would be a vehicle for exporting the First Amendment. Instead...

Much of the discussion about online misinformation focuses on content moderation. In a new report, the Royal Society asks how to create a better information environment. Despite its harm, the report comes down against simply removing scientific misinformation. Like Charles Arthur in his 2021 book Social Warming, the report's authors argue for slowing the spread by various methods - adding friction to social media sharing, reconfiguring algorithms, in a few cases de-platforming superspreaders. I like the scientists' conclusion that simple removal doesn't work; in science you must show your work, and deletion fuels conspiracy theories. During this pandemic, Twitter has been spectacular at making it possible to watch scientists grapple with uncertainty in real time.

The report also disputes some of our longstanding ideas about how online interaction works. A literature review finds that the filter bubbles and echo chambers Eli Pariser posited in 2011 are less important than we generally think. Instead most people have "relatively diverse media diets" and the minority who "inhabit politically partisan online news echo chambers" is about 6% to 8% of users.

Keeping it that way, however, depends on having choices, which leads back to these antitrust cases. The bigger and more powerful the platforms are, the less we - as both individuals and societies - matter to them.


Illustrations: The Thames at an unusually quiet moment, in January 2022.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 24, 2021

Scale

hockey-stick.jpgWeb3, the push to decentralize the net, got a lot more attention this week after the venture capital firm Andreessen Horowitz published guidance for policy makers - while British software engineer Stephen Diehl blogged calling web3 "bullshit", a "vapid marketing campaign", and a "rhetorical trick" (thanks to Mike Nelson for the pointer).

Here, a month ago, we tried to tease out some of the hard problems web3 is up against. Diehl attacks the technical basis, citing the costs of the computation and bandwidth necessary to run a censorship-proof blockchain network, plus the difficulty of storage, as in "who owns the data?". In other words, web3, as he understands it, won't scale.

Meanwhile, on Twitter, commenters have highlighted Andreessen Horowitz's introductory words, "We are radically optimistic about the potential of web3 to restore trust in institutions and expand access to opportunity." If, the argument goes, venture capitalists are excited about web3 that's a clear indicator that they expect to reap the spoils. Which implies an eventual outcome favoring giant corporate interests.

The thing that modern venture capitalists always seek with (due) diligence is scale. Scale means you can make more of something without incurring (much) additional cost. Scale meant Instagram could build a business Facebook would buy for $1 billion with only 13 employees. Venture capitalists want the hockey stick.

Unsurprisingly, given the venture capital appeal, the Internet is full of things that scale - social media sites, streaming services, software, other forms of digital content distribution, and so on. Yet many of the hard problems we struggle to solve are conflicts between scale and all the things on the Internet that *don't* scale. Easy non-Internet example: viruses scale, nurses don't. Or, more nettishly, facial recognition scales; makeup artists don't. And so on.

An obvious and contentious Internet example: content moderation. Even after AI has automatically removed the obvious abuses, edge cases rapidly escalate beyond the resources most companies are willing to throw at them. In his book Social Warming, Charles Arthur suggests capping the size of social networks, an idea echoed recently by Lawfare editor Ben Wittes in an episode of In Lieu of Fun, who commented that sites shouldn't be allowed to grow larger than they can "moderate well". It's hard to think of a social media site that hasn't grown past that point. It's also hard to understand how such a cap would work without frustrating everyone. If you're user number cap+1, do you have to persuade all your friends to join a less-populated network so you can be together?

More broadly - a recurrent theme - community on the Internet does not scale. In every form of online community back to bulletin board systems and Usenet, increasing size always brings abuse. In addition, over and over online forums show the power law distribution of posters: a small handful do most of the talking, followed by a long tail of occasional contributors and a vast majority of lurkers. The loudest and most persistent voices set the tone, get the attention, and reap the profits, if there are any to be had.
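Because the skew is hard to picture, here's a toy simulation of that power law (a sketch only; every number in it is an invented assumption, not data from any real forum):

```python
import random
from collections import Counter

# Hypothetical forum of 10,000 members whose propensity to post falls
# off as 1/rank - a Zipf-style power law.
random.seed(42)
users = range(1, 10_001)
weights = [1 / rank for rank in users]
posts = Counter(random.choices(users, weights=weights, k=100_000))

# Share of all posts written by the 100 most active users (the top 1%).
top = sum(count for _, count in posts.most_common(100))
print(f"Top 1% of users wrote {top / 100_000:.0%} of the posts")
```

With these made-up parameters, the hundred loudest users turn out to have written roughly half of everything: a few dominant voices, a long tail, and a sea of lurkers.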

The problem of scaling content moderation applies more generally to online governance. As societies grow, become more complex, and struggle with abuse, turning governance over to paid professionals seems to be the near-universal solution.

Another thing that doesn't scale: discovery, as Benedict Evans recently pointed out in a discussion of email newsletters and Substack.

One of the marvels of 2021 has been the reinvention of emailed newsletters as a paying proposition. Of course, plenty of people were making *some* money from such things way back even before email. But this year has taken it to a new level. People are signing six-figure deals with Substack and giving up ordinary journalism gigs and book deals to do it.

Evans points out that in newsletters, as in previous Internet phenomena - podcasts, web pages (hence search engines), and ecommerce (hence aggregation) - the first people who show up in an empty space with good stuff people want do really well. We don't hear so much any more about first-mover advantage, but it often still applies.

Non-fungible tokens (NFTs) may be the latest example. A few very big paydays are drawing all sorts of people into the field. Some will profit, but many more will not. Meanwhile, scams and copyright and other issues are proliferating. Even if regulation eventually makes participation safer, the problem will remain: people have limited resources to spend on such things, and the field will be increasingly crowded.

So, too, Substacks and newsletters: there are not only limits to how many subscriptions people can afford, but also to how many things they have time to read. In a crowded field, discovery is everything.

Individuals' attention spans and financial resources do not scale. The latter is one reason the pay-with-data model has been so successful on the web; the former is part of why people will sacrifice privacy and participatory governance in favor of convenience.

So, our partial list of things that do not scale: content moderation, community, discovery, governance. Maybe also security, to some extent. In general: anything that requires human labor to be added in proportion to its expansion. Solving the problems of scale will matter if we're going to have a different outcome from web3 than from previous iterations.


Illustrations: A hockey stick.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 3, 2021

Trust and antitrust

coyote-roadrunner-cliff.pngFour years ago, 2021's new Federal Trade Commission chair, Lina Khan, made her name by writing an antitrust analysis of Amazon that made three main points: 1) Amazon is far more dangerously dominant than people realize; 2) antitrust law, which for the last 30 years has used consumer prices as its main criterion, needs reform; and 3) two inventors in a garage can no longer upend dominant companies because they'll either be bought or crushed. She also accused Amazon of leveraging the Marketplace sellers data it collects to develop and promote competing products.

For context, that was the year Amazon bought Whole Foods.

What made Khan's work so startling is that throughout its existence Amazon has been easy to love: unlike Microsoft (system crashes and privacy), Google (search spam and privacy), or Facebook (so many issues), Amazon sends us things we want when we want them. Amazon is the second-most trusted institution in America after the military, according to a 2018 study by Georgetown University and NYU. Rounding out the top five: Google, local police, and colleges and universities. The survey may need some updating.

And yet: recent stories suggest our trust is out of date.

This week, a study by the Institute for Local Self-Reliance claims that Amazon's 20-year-old Marketplace takes even higher commissions - 34% - than the 30% Apple and Google are being investigated for taking from their app stores. The study estimates that Amazon will earn $121 billion from these fees in 2021, double its 2019 takings, and that Amazon's 2020 operating profits from Marketplace reached $24 billion. The company responded to TechCrunch that some of those fees are optional add-ons, while report author Stacy Mitchell counters that "add-ons" such as better keyword search placement and using Amazon's shipping and warehousing have become essential because of the way the company disadvantages sellers who don't "opt" for them. In August, Amazon passed Walmart as the world's largest retailer outside of China. It is the only source of income for 22% of its sellers and the single biggest sales channel for many more; 56% of items sold on Amazon are from third-party sellers.

I started buying from Amazon so long ago that I have an insulated mug they sent every customer as a Christmas gift. Sometime in the last year, I started noticing the frequency of unfamiliar brand names in search results for things like cables, USB sticks, or socks. Smartwool I recognize, but Yuedge, KOOOGEAR, and coskefy? I suddenly note a small - new? - tickbox on the left: "our brands". And now I see: "our brands" this time are ouhos, srclo, SuMade, and Sunew. Is it me, or are these names just plain weird?

Of course I knew Amazon owned Zappos, IMDB, Goodreads, and Abe Books, but this is different. Amazon now has hundreds of house brands, according to a study The Markup published in October. The main finding: Amazon promotes its own brands at others' expense, and being an Amazon brand or Amazon-exclusive is more important to your product's prominence than its star ratings or reviews. Amazon denies doing this. It's a classic antitrust conflict of interest: shoppers rarely look beyond the first five listed products, and the platform owner has full control over the order. The Markup used public records to identify more than 150 Amazon brands and developed a browser add-on that highlights them for you. Personally, I'm more inclined to just shop elsewhere.

Also often overlooked is Amazon's growing advertising business. Insider Intelligence estimates its digital ad revenues in 2021 at $24.47 billion - 55.5% higher than 2020, and representing 11.6% (and rising) of the US digital advertising market. In July, noting its rise, CNBC surmised that Amazon's first-party relationship with its customers relieves it of common technology-company privacy issues. This claim - perhaps again based on the unreasonable trust so many of us place in the company - has to be wrong. Amazon collects vast arrays of personal data from search and purchase records, Alexa recordings, home camera videos, and health data from fitness trackers. We provide it voluntarily, but we don't sign blank checks for its use. Based on confidential documents, Reuters reports that Amazon's extensive lobbying operation has "killed or undermined" more than three dozen privacy bills in 25 US states. (The company denies the story and says it has merely opposed poorly crafted privacy bills.)

Privacy may be the thing that really comes back to bite the company. A couple of weeks ago, Will Evans reported at Reveal News, based on a lengthy study of leaked internal documents, that Amazon's retail operation has so much personal data that it has no idea what it has, where it's stored, or how many copies are scattered across its IT estate: "sprawling, fragmented, and promiscuously shared". The long version of the story is that prioritizing speed of customer service has its downside: the company became extraordinarily vulnerable to insider threats such as abuse of access.

Organizations inevitably change over time, particularly when they're as ambitious as this one. The systems and culture that are temporary in startup mode become entrenched and patched, but never fixed. If trust is the land mass we're running on, what happens is we run off the edge of a cliff like Wile E. Coyote without noticing that the ground we trust isn't there any more. Don't look down.


Illustrations: Wile E. Coyote runs off a cliff, while the roadrunner watches.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 25, 2021

Lawful interception

NSOGroup-database.pngFor at least five years the stories have been coming about the Israeli company NSO Group. For most people, NSO is not a direct threat. For human rights activists, dissidents, lawyers, politicians, journalists, and others targeted by hostile authoritarian states, however, its elite hackers are dangerous. NSO itself says it supplies lawful interception, and only to governments to help catch terrorists.

Now, finally, someone is taking action. Not, as you might reasonably expect, a democratic government defending human rights, but Apple, which is suing the company on the basis that NSO's exploits cost it resources and technical support. Apple has also alerted targets in Thailand, El Salvador, and Uganda.

On Twitter, intelligence analyst Eric Garland picks over the complaint. Among his more scathing quotes: "Defendants are notorious hackers - amoral 21st century mercenaries who have created highly sophisticated cyber-surveillance machinery that invites routine and flagrant abuse", "[its] practices threaten the rules-based international order", and "NSO's products...permit attacks, including from sovereign governments that pay hundreds of millions of dollars to target and attack a tiny fraction of users with information of particular interest to NSO's customers".

The hidden hero in this story is the Canadian research group Citizen Lab, which calls NSO's work "despotism as a service".

Citizen Lab began highlighting NSO's "lawful intercept" software in 2016, when analysis it conducted with Lookout Security showed that a suspicious SMS message forwarded by UAE-based Ahmed Mansoor contained links belonging to NSO Group's infrastructure. The links would have led Mansoor to a chain of zero-day exploits that would have turned his iPhone 6 into a comprehensive, remotely operated spying device. As Citizen Lab wrote, "Some governments cannot resist the temptation to use such tools against political opponents, journalists, and human rights defenders." It went on to note the absence of human rights policies and due diligence at spyware companies; the economic incentives all align the wrong way. An Android version was found shortly afterwards.

Among the targets Citizen Lab found in 2017: Mexican scientists working on obesity and soda consumption, and Amnesty International researchers. In 2018, Citizen Lab reported that Internet scans found 45 countries where Pegasus appeared to be in operation, at least ten of them working cross-border. The same year, Citizen Lab found Pegasus on the phone of Canadian resident Omar Abdulaziz, a Saudi dissident linked to murdered journalist Jamal Khashoggi. In September 2021, Citizen Lab discovered NSO was using a zero-click, zero-day vulnerability in the image rendering library used in Apple's iMessage to take over targets' iOS, WatchOS, and MacOS devices. Apple patched 1.65 billion products.

Both Privacy International and the Pegasus project, a joint investigation into the company by media outlets including the Guardian, coordinated by Forbidden Stories, have found dozens more examples.

In July 2021, a leaked database of 50,000 phone numbers believed to belong to people of interest to NSO clients since 2016 included human rights activists, business executives, religious figures, academics, journalists, lawyers, and union and government officials around the world. It was not clear if their devices had been hacked. Shortly afterwards, Rappler reported that NSO spyware can successfully infect even the latest, most secure iPhones.

Citizen Lab began tracking litigation and formal complaints against spyware companies in 2018. In a complaint filed in 2019, WhatsApp and Facebook are arguing that NSO and Q Cyber used their servers to distribute malware; on November 8 the US Ninth Circuit Court of Appeals rejected NSO's claim of sovereign immunity, opening the way to discovery. Privacy International promptly urged the British government to send a clear message, given that NSO's target was a UK-based lawyer challenging the company over human rights violations in Mexico and Saudi Arabia.

Some further background is to be found at Lawfare, where shortly *before* the suit was announced, security expert Stephanie Pell and law professor David Kaye discuss how to regulate spyware. In 2019, Kaye wrote a report calling for a moratorium on the sale and transfer of spyware, noting that its makers "are not subject to any effective global or national control". Kaye proposes adding human rights-based export rules to the Wassenaar Arrangement export controls for conventional arms and dual-use technologies. Using Wassenaar, on November 3 the US Commerce Department blacklisted NSO along with fellow Israeli company Candiru, Russian company Positive Technologies, and Singapore-based Computer Security Initiative Consultancy as national security threats. And there are still more, such as the surveillance system sold to Egypt by France-based Thales subsidiary Dassault and Nexa Technologies.

The story proves the point many have made throughout 30 years of fighting for the right to use strong encryption: while governments and their law enforcement agencies insist they need access to keep us safe, there is no magic hole that only "good guys" can use, and any system created to give special access will always end up being abused. We can't rely on the technology companies to defend human rights; that's not in their business model. Governments need to accept and act on the reality that exceptional access for anyone makes everyone everywhere less safe.

Illustrations: Citizen Lab's 2021 map of the distribution of suspected NSO infections (via Democracy Now).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 19, 2021

Digital god squabble

Fighting_cocks -shree650.jpgOn Wednesday, Amazon customers in the UK woke up to an (in some cases, weirdly empty) email whose news was in the subject: Amazon will cease accepting Visa credit cards (but not debit cards) for payment as of January 19, 2022.

If your first reaction is, "What's the punchline?" I'm with you. What the hell kind of crazy business decision is that?

As Hilary Osborne reports at the Guardian, the email went on to explain that the decision is "due to the high fees Visa charges for processing credit card transactions."

Huh? Like most people, I remained under the impression that it's American Express, not Visa, that charges merchants the highest commissions. On Twitter, Drew Graham offers a more interesting explanation: taxes. It's a *Brexit* thing. The UK's departure from the EU means that the payments Amazon accepts via its no-tax Luxembourg subsidiary are now cross-border transactions subject to interchange fees. Both Visa and Mastercard raised these earlier this year, now that the EU regulation capping such fees no longer applies. Amazon *could* move its financial arrangements to the UK - but then (the theory continues) it would be hit with taxes. What's one of the biggest, most highly market-capped companies in the world supposed to do when mean, old Visa and national governments want to be paid?
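Some back-of-the-envelope arithmetic suggests what's at stake per transaction (a sketch; the 0.3% and 1.5% rates are the widely reported EU cap and post-Brexit cross-border credit card interchange figures, used here purely as assumptions):

```python
# Illustrative only: interchange on a hypothetical 100-pound basket at
# the old EU-capped rate versus the raised cross-border rate.
OLD_RATE = 0.003   # 0.3%: the EU cap on credit card interchange
NEW_RATE = 0.015   # 1.5%: the reported post-Brexit cross-border rate

basket = 100.00
print(f"fee before: {basket * OLD_RATE:.2f}; after: {basket * NEW_RATE:.2f}")
# fee before: 0.30; after: 1.50 - a fivefold rise, small per purchase,
# large at Amazon's volume.
```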

Why Visa but not Mastercard? As several others pointed out, Amazon promotes a branded Mastercard in the UK and also has a deal with American Express. And so, only Visa credit cards take the hit. I find it all supremely weird: Amazon, which has made its name by espousing customer service to the max, is now going to make it less convenient for its UK customers to shop there? Does Amazon think that anyone who pays it with a Visa card probably *also* has a Mastercard? Is it hoping that its customers will rise up in anger and demand that Visa cut it a deal? Or rise up in protest against government taxation that pays for our schools, hospitals, and government corruption? Is it hoping that Visa will be persuaded by the share price drop the announcement occasioned (the day of the announcement, Visa dropped 6.7%)? Or is it, as seems more likely, that we don't matter *at all*, and this is one of those no-you're-the-chicken contests in which two bullies pretend they won't budge, leaving their customers to wait it out, annoyed, until they finally settle because less of something is better than all of nothing?

This is not a good look for a company trying to argue it's not a monopoly, nor a good look for a company that makes its money through usury.

The question being asked here is perennial, and more commonly found in the broadcasting and telecommunications industries: who owns the audience? This is part of what network neutrality is about. Periodically, TV channels disappear from US cable TV packages because of fights over who should pay more or less to access the audience (and who brings that audience). So here: do you buy from Amazon because you can pay with your Visa card, or do you have a Visa card because it lets you buy from Amazon (and thousands of other retailers)?

In past cases, technology giants have often pressed their users into service - see, for example, Uber vs Transport for London. In this case, though, many users have alternatives available, either other credit cards (Mastercard, American Express, and so on) or debit cards (don't; in the UK, you're better protected against online fraud with a credit card). We also still have other suppliers, though they take time to locate, and effort to set up new accounts.

According to Business Insider, the UK is Amazon's third-largest market, and represents one-tenth the sales of the US. At the Washington Post, Bloomberg opinion writer Paul J. Davies says industry data suggests that Visa credit cards represent only 7% of all card-based purchases in the UK. Extrapolated to Amazon's $26.5 billion 2020 UK net sales, that's a mere snip of $1.8 billion in sales. It's a reasonable bet that most people will simply choose an alternative method of payment - and, as Davies points out, new technology is offering consumers more and more alternatives that are faster and cheaper than Mastercard's and Visa's legacy networks. Calling Amazon's move "passive-aggressive", Davies adds that although Britain is hogging the headlines, users in Australia and Singapore are facing a 0.5% surcharge for using Visa cards there.

The whole thing is so many kinds of wrong. For the last several years, Amazon has been accused of using its data access to squeeze the small merchants that use its Marketplace platform. Now, both Amazon and Visa are so big that each thinks it can squeeze the other. What do we do if either turns out to be right?

At Telecoms.com, Scott Bicheno correctly calls hogwash on Visa's plaint that it hates to see restrictions on consumer choice. "What we have here is an e-commerce near monopolist locking horns with a payment processing near-monopolist....we can but watch impotently as the digital gods squabble in the heavens over our hard-earned cash."

Unless we start reining in some of these companies, this is our future: fewer and fewer bigger and bigger companies fighting over an increasingly helpless us.

Illustrations: Cocks fighting (via shree650 at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 10, 2021

Globalizing Britain

Chatsworth_Cascade_and_House_-_geograph.org.uk_-_2191570.jpgBrexit really starts now. It was easy to forget, during the dramas that accompanied the passage of the Withdrawal Agreement and the disruption of the pandemic, that the really serious question had still not been answered: given full control, what would Britain do with it? What is a reshaped "independent global Britain" going to be when it grows up? Now is when we find out, as this government, which has a large enough majority to do almost anything it wants, pursues the policies it announced in the Queen's Speech last May.

Some of the agenda is depressingly cribbed from the current US Republican playbook. First and most obvious in this group is the Elections bill. The most contentious change is requiring voter ID at polling stations (even though there was a total of one conviction for voter fraud in 2019, the year of the last general election). What those in other countries may not realize is how many eligible voters in Britain lack any form of photo ID. The Guardian reports that 11 million people - a fifth of eligible voters - have neither driver's license nor passport. Naturally they are disproportionately from black and Asian backgrounds, older and disabled, and/or poor. The expected general effect, especially coupled with the additional proposal to remove the 15-year cap on expatriate voting, is to put the thumb on the electoral scale in favor of the Conservatives.

More nettishly, the government is gearing up for another attack on encryption, pulling out all the same old arguments mixed with some rhetoric copied from the FBI's "going dark" campaign. As Gareth Corfield explains at The Register, the current target is Facebook, which intends to roll out end-to-end encryption for messaging and other services.

This is also the moment when the Online Safety bill (previously online harms) arrives. The push against encryption, which includes funding technical development, is part of that, because the bill makes service providers responsible for illegal content users post - and also, as Heather Burns points out at the Open Rights Group, legal but harmful content. Burns also details the extensive scope of the bill's age verification plans.

These moves are not new or unexpected. Slightly more surprising was the announcement that the UK will review data protection law with an eye to diverging from the EU; it opened the consultation today. This is, as many have pointed out, dangerous for UK businesses that rely on data transfers to the EU for survival. The EU's decision a few months ago to grant the UK an adequacy decision - that is, the EU's acceptance of the UK's data protection laws as providing equivalent protection - will last for four years. It seems unlikely the EU will revisit it before then, but even before divergence Ian Brown and Douwe Korff have argued that the UK's data protection framework should be ruled inadequate. It *sounds* great when they say it will mean getting rid of the incessant cookie pop-ups, but at risk are privacy protections that have taken years to build. The consultation document wants to promise everything: "even better data protection regime" and "unlocking the power of data" appear in the same paragraph, and the new regime will also both be "pro-growth and innovation-friendly" and "maintain high data protection standards".

Recent moves have not made it easier to trust this government with respect to personal data - first the postponed-for-now medical data fiasco, and second this week's revelation that the government is increasingly using our data and hiring third-party marketing firms to target ads and develop personalized campaigns to manipulate the country's behavior. This "influence government" is the work of the ten-year-old Behavioural Insights Team - the "nudge unit" - whose thinking is summed up in its behavioral economy report.

Then there's the Police, Crime, Sentencing, and Courts bill currently making its way through Parliament. This one has been the subject of street protests across the UK because of provisions that permit police and Home Secretary Priti Patel to impose various limits on protests.

Patel's Home Office also features in another area of contention, the Nationality and Borders bill, which would make criminal offenses out of arriving in the UK without permission and of helping an asylum seeker enter the UK. The latter raises many questions, and the Law Society lists many legal issues that need clarification. Accompanying this is this week's proposal to turn back migrant boats, which breaks maritime law.

A few more entertainments await; for one, the review of network neutrality announced by Ofcom, the communications regulator. At this stage, it's unclear what dangers lurk, but it's another thing to watch, along with the ongoing consultation on digital identity.

More expected, but no less alarming: this government also has an ongoing independent review of the 1998 Human Rights Act, which Conservatives such as former prime minister Theresa May have long wanted to scrap.

Human rights activists in this country aren't going to get much rest between now and (probably) 2024, when the next general election is due. Or maybe ever, looking at this list. This is the latest step in a long march, and it reminds us that underneath Britain's democracy lies its ancient feudalism.


Illustrations: Derbyshire stately home Chatsworth (via Trevor Rickards at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 13, 2021

Legacy

The first months of the pandemic saw a burst of energetic discussion about how to make it an opportunity to invest in redressing inequalities and rebuilding decaying systems - public health, education, workers' rights. This always reminded me of the great French film director François Truffaut, who, in his role as the director of the movie-within-the-movie in Day for Night, said, "Before starting to shoot, I hope to make a fine film. After the problems begin, I lower my ambition and just hope to finish it." It seemed more likely that if the pandemic went on long enough - back then the journalist Laurie Garrett was predicting a best case of three years - early enthusiasm for profound change would drain away to leave most people just wishing for something they could recognize as "normal". Drinks at the pub!

We forget what "normal" was like. London today seems busy. But with still no tourists, it's probably a tenth as crowded as in August 2019.

Eighteen months (so far) has been long enough to make new habits driven by pandemic-related fears, if not necessity, begin to stick. As it turns out, the pandemic's new normal is really not the abrupt but temporary severance of lockdown, which brought with it fears of top-down government-driven damage to social equity and privacy: covid legislation, immunity passports, and access to vaccines. Instead, the dangerous "new normal" is the new habits building up from the bottom. If Garrett was right, and we are at best halfway through this, these are likely to become entrenched. Some are healthy: a friend has abruptly realized that his grandmother's fanaticism about opening windows stemmed from living through the 1918 Spanish flu pandemic. Others...not so much.

One of the first non-human casualties of the pandemic has been cash, though the loss is unevenly spread. This week, a friend needed more than five minutes to painfully single-finger-type masses of detail into a pub's app, the only available option for ordering and paying for a drink. I see the convenience for the pub's owner, who can eliminate the costs of cash (while assuming the costs of credit cards and technological intermediation) and maybe thin the staff, but it's no benefit to a customer who'd rather enjoy the unaccustomed sunshine and chat with a friend. "They're all like this now," my friend said gloomily. Not where I live, fortunately.

Anti-cash campaigners have long insisted that cash is dirty and spreads disease; but, as we've known for a year, covid rarely spreads through surfaces, and (as Dave Birch has been generous enough to note) a recent paper finds that cash is sometimes cleaner. But still: try to dislodge the apps.

A couple of weeks ago, Erin Woo at the New York Times highlighted cash-free moves. In New York City, QR codes have taken over in restaurants and stores as contact-free menus and ordering systems. In the UK, QR codes mostly appear as part of the Test and Trace contact tracing app; the idea is you check in when you enter any space, be it restaurant, cinema, or (ludicrously) botanic garden, and you'll be notified if it turns out it was filled with covid-infected people when you were there.

Whatever the purpose, the result is tight links between offline and online behavior. Pre-pandemic, these were growing slowly and insidiously; now they're growing like an invasive weed at a time when few of us can object. The UK ones may fall into disuse alongside the app itself. But Woo cites Bloomberg: half of all US full-service restaurant operators have adopted QR-code menus since the pandemic began.

The pandemic has also helped entrench workplace monitoring. By September 2020, Alex Hern was reporting at the Guardian that companies were ramping up their surveillance of workers in their homes, using daily mandatory videoconferences, digital timecards in the form of cloud logins, and forced participation on Slack and other channels.

Meanwhile at NBC News, Olivia Solon reports that Teleperformance, one of the world's largest call center companies, to which companies like Uber, Apple, and Amazon outsource customer service, has inserted clauses in its employment contracts requiring workers to accept in-home cameras that surveil them, their surroundings, and family members under 18. Solon reports that the anger over this is enough to get these workers thinking about unionizing. Teleperformance is global; it's trying this same gambit in other countries.

Nearer to home, there's been a lot of speculation all along about whether anyone would ever again accept commuting daily. This week, the Guardian reports that only 18% of workers have gone back to their offices since UK prime minister Boris Johnson ended all official restrictions on July 19. Granted, it won't be clear for some time whether this is new habit or simply caution in the face of the fact that Britain's daily covid case numbers are still 25 times what they were a year ago. In the US, Google is suggesting it will cut pay for staff who resist returning to the office, on the basis that their cost of living is less. Without knowing the full financial position, doesn't it sound like Google is saving money twice?

All these examples suggest that what were temporary accommodations are hardening into "the way things are". Undoing them is a whole new set of items for last year's post-pandemic to-do list.


Illustrations: Graphic showing the structure of QR codes (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 6, 2021

Privacy-preserving mass surveillance

Every time it seems like digital rights activists need to stop quoting George Orwell so much, stuff like this happens.

In an abrupt turnaround, on Thursday Apple announced the next stage in the decades-long battle over strong cryptography: after years of resisting law enforcement demands, the company is U-turning to backdoor its cryptography to scan personal devices and cloud stores for child abuse images. EFF sums up the problem nicely: "even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor". Or, more simply, a hole is a hole. Most Orwellian moment: Nicholas Weaver framing it on Lawfare as "privacy-sensitive mass surveillance".

Smartphones, particularly Apple phones, have never really been *our* devices in the way that early personal computers were, because the supplying company has always been able to change the phone's software from afar without permission. Apple's move makes this reality explicit.

The bigger question is: why? Apple hasn't said. But the pressure has been mounting on all the technology companies in the last few years, as an increasing number of governments have been demanding the right of access to encrypted material. As Amie Stepanovich notes on Twitter, another factor may be the "online harms" agenda that began in the UK but has since spread to New Zealand, Canada, and others. The UK's Online Safety bill is already (controversially) in progress, as Ross Anderson predicted in 2018. Child exploitation is a terrible thing; this is still a dangerous policy.

Meanwhile, 2021 is seeing some of the AI hype of the last ten years crash into reality. Two examples: health and autonomous vehicles. At MIT Technology Review, Will Douglas Heaven notes the general failure of AI tools in the pandemic. Several research studies - in the British Medical Journal, Nature, and from the Turing Institute (PDF) - find that none of the hundreds of algorithms were of any clinical use and some were actively harmful. The biggest problem appears to have been poor-quality training datasets, leading the AI to identify the wrong thing, miss important features, or appear deceptively accurate. Finally, even IBM is admitting that Watson, its Jeopardy! champion, has not become a successful AI medical diagnostician. Medicine is art as well as science; who knew? (Doctors and nurses, obviously.)

As for autonomous vehicles, at Wired, Andrew Kersley reports that Amazon is abandoning its drone delivery business. The last year has seen considerable consolidation among entrants in the market for self-driving cars, as the time and resources it will take to achieve them continue to expand. Google's Waymo is nonetheless arguing that the UK should not cap the number of self-driving cars on public roads, and the UK-grown Oxbotica is proposing a code of practice for deployment. However, as Christian Wolmar predicted in 2018, the cars are not here. Even some Tesla insiders admit that.

The AI that has "succeeded" - in the narrow sense of being deployed, not in any broader sense - has been the (Orwellian) surveillance and control side of AI - the robots that screen job applications, the automated facial recognition, the AI-driven border controls. The EU, which invests in this stuff, is now proposing AI regulations; if drafted to respect human rights, they could be globally significant.

However, we will also have to ensure the rules aren't abused against us. Also this week, Facebook blocked the tool a group of New York University social scientists were using to study the company's ad targeting, along with the researchers' personal accounts. The "user privacy" excuse: Cambridge Analytica. The 2015 scandal around CA's scraping a bunch of personal data via an app users voluntarily downloaded eventually cost Facebook $5 billion in its 2019 settlement with the US Federal Trade Commission, which also required it to ensure this sort of thing didn't happen again. The NYU researchers' Ad Observatory was collecting advertising data via a browser extension users opted to install. They were, Facebook says, scraping data. Potato, potahto!

People who aren't Facebook's lawyers see the two situations as entirely different. CA was building voter profiles to study how to manipulate them. The Ad Observatory was deliberately avoiding collecting personal data; instead, they were collecting displayed ads in order to study their political impact and identify who pays for them. Potato, *tomahto*.

One reason for the universal skepticism is that this move has companions - Facebook has also limited journalist access to CrowdTangle, a data tool that helped establish that far-right news content generates higher numbers of interactions than other types and suffers no penalty for being full of misinformation. In addition, at the Guardian, Chris McGreal reports InfluenceMap's finding that fossil fuel companies are using Facebook ads to promote oil and gas use as part of remediating climate change (have some clean coal).

Facebook's response has been to claim it's committed to transparency and blame the FTC. The FTC was not amused: "Had you honored your commitment to contact us in advance, we would have pointed out that the consent decree does not bar Facebook from creating exceptions for good-faith research in the public interest." The FTC knows Orwellian fiction when it sees it.


Illustrations: Orwell's house on Portobello Road, complete with CCTV camera.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 9, 2021

The border-industrial complex*

Most people do not realize how few rights they have at the border of any country.

I thought I did know: not much. EFF has campaigned for years against unwarranted US border searches of mobile phones, where "border" legally extends 100 miles into the country. If you think, well, it's a big country, it turns out that two-thirds of the US population lives within that 100 miles.

No one ever knows what the border of their own country is like for non-citizens. This is one reason it's easy for countries to make their borders hostile: non-citizens have no vote and the people who do have a vote assume hostile immigration guards only exist in the countries they visit. British people have no idea what it's like to grapple with the Home Office, just as most Americans have no experience of ICE. Datafication, however, seems likely to eventually make the surveillance aspect of modern border passage universal. At Papers, Please, Edward Hasbrouck charts the transformation of travel from right to privilege.

In the UK, the Open Rights Group and the3million have jointly taken the government to court over provisions in the post-Brexit GDPR-enacting Data Protection Act (2018) that exempted the Home Office from subject access rights. The Home Office invoked the exemption in more than 70% of the 19,305 data access requests made to its office in 2020, while losing 75% of the appeals against its rulings. In May, ORG and the3million won on appeal.

This week's announced Nationality and Borders Bill proposes to make it harder for refugees to enter the country and, according to analyses by the Refugee Council and Statewatch, make many of them - and anyone who assists them - into criminals.

Refugees have long had to verify their identity in the UK by providing biometrics. On top of that, the cash support they're given comes in the form of prepaid "Aspen" cards, which means the Home Office can closely monitor both their spending and their location, and cut off assistance at will, as Privacy International finds. Scotland-based Positive Action calls the results "bureaucratic slow violence".

That's the stuff I knew. I learned a lot more at this week's workshop run by Security Flows, which studies how datafication is transforming borders. The short version: refugees are extensively dataveilled by both the national authorities making life-changing decisions about them and the aid agencies supposed to be helping them, like the UN High Commissioner for Refugees (UNHCR). Recently, Human Rights Watch reported that UNHCR had broken its own policy guidelines by passing data to Myanmar that had been submitted by more than 830,000 ethnic Rohingya refugees who registered in Bangladeshi camps for the "smart" ID cards necessary to access aid and essential services.

In a 2020 study of the flow of iris scans submitted by Syrian refugees in Jordan, Aalborg associate professor Martin Lemberg-Pedersen found that private companies are increasingly involved in providing humanitarian agencies with expertise, funding, and new ideas - but that those partnerships risk turning their work into an experimental lab. He also found that UN agencies' legal immunity, coupled with the absence of common standards for data protection among NGOs and states in the global South, leaves gaps he dubs "loopholes of externalization" that allow the technology companies to evade accountability.

At the 2020 Computers, Privacy, and Data Protection conference, a small group huddled to brainstorm about researching the "creepy" AI-related technologies the EU was funding. Border security represents a rare opportunity for deploying them, invisible to most people and justified by "national security". Home Secretary Priti Patel's proposal to penalize the use of illegal routes to the UK is an example, making desperate people into criminals. People like many of the parents I knew growing up in 1960s New York.

The EU's immigration agencies are particularly obscure. I had encountered Warsaw-based Frontex, the European Border and Coast Guard Agency, which manages operational control of the Schengen Area, but not EU-LISA, which since 2012 has managed the relevant large-scale IT systems: SIS II, VIS, EURODAC, and ETIAS (like the US's ESTA). Unappetizing alphabet soup whose errors few know how to challenge.

The behind-the-scenes picture the workshop described sees the largest suppliers of ICT, biometrics, aerospace, and defense provide consultants who help define work plans and formulate the calls to which their companies then respond. Javier Sánchez-Monedero's 2018 paper for the Data Justice Lab begins to trace those vendors, a mix of well-known and unknown. A forthcoming follow-up focuses on the economics and lobbying behind all these databases.

In a recent paper on financing border wars, Mark Akkerman analyzes the economic interests behind border security expansion and observes, "Migration will be one of the defining human rights issues of the 21st century." We know it will increase, increasingly driven by climate change; the fires that engulfed the Canadian village of Lytton, BC on July 1 made 1,000 people homeless, and that's just the beginning.

It's easy to ignore the surveillance and control directed at refugees in the belief that they are not us. But take the UK's push to create a hostile environment by pushing border checks into schools, workplaces, and health services as your guide, and it's obvious: their surveillance will be your surveillance.

*Credit the phrase "border-industrial complex" to Luisa Izuzquiza.

Illustrations: Rohingya refugee camp in Bangladesh, 2020 (by Rocky Masum, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 2, 2021

This land

An aging van drives off down a highway into a fantastical landscape of southwestern mountains and mesquite. In 1977, that could have been me, or any of my folksinging friends as we toured the US, working our way into debt (TM Andy Cohen). In 2020, however, the van is occupied by Fern (Frances McDormand), one of the few fictional characters in the film Nomadland, directed by Chloé Zhao, and based on the book by Jessica Bruder, which itself grew out of her 2014 article for Harper's magazine.

Nomadland captures two competing aspects of American life. First, the middle-class dream of the nice house with the car in the driveway, a chicken in a pot inside, and secure finances. Anyone who rejects this dream must be dangerous. But deep within also lurks the other American dream, of freedom and independence, which in the course of the 20th century moved from hopping freight trains to motor vehicles and hitting the open road.

For many of Nomadland's characters, living on the road begins as a necessary accommodation to calamity but becomes a choice. They are "retirees" who can't afford to retire, who balk at depending on the kindness of relatives, and have carved out a circuit of seasonal jobs. Echoing many of the vandwellers Bruder profiles, Fern tells a teen she used to tutor, "I'm not homeless - just houseless."

Linda May, for example, began working at the age of 12, but discovered at 62 that her social security benefits amounted to $550 a month (the fate that perhaps awaits the people Barbara Ehrenreich profiles in Nickel and Dimed). Others lost their homes in the 2008 crisis. Fern, whose story frames the movie, lost job and home in Empire, Nevada when the gypsum factory abruptly shut down, another casualty of the 2008 financial crisis. Six months later, the zipcode was scrubbed. This history appears as a title at the beginning of the movie. We watch Fern select items and lock a storage unit. It's go time.

Fern's first stop is the giant Amazon warehouse in Fernley, Nevada, where the money is good and a full-service parking space is included. Like thousands of other workampers, she picks stock and packs boxes for the Christmas rush until, come January, it's time to gracefully accept banishment. People advise her: go south, it's warmer. Shivering and scraping snow off the van, Fern soon accepts the inevitable. I don't know how cold she is, but it brought flashbacks to a few of those 1977 nights in my pickup-truck-with-camper-top when I slept in a full set of clothes and a hat while the shampoo solidified. I was 40 years younger than Fern, and it was never going to be my permanent life. On the other hand: no smartphone.

At the Rubber Tramp Rendezvous near Quartzsite, Arizona, Fern finds her tribe: Swankie, Bob Wells, and the other significant fictional character, Dave (David Strathairn). She traces the annual job circuit: Amazon, camp hosting, beet harvesting in Nebraska, Wall Drug in South Dakota. Old hands teach her skills she needs: changing tires, inventing and building things out of scrap, remodeling her van, keeping on top of rust. She learns what size bucket to buy and that you must be ready to solve your own emergencies. Finally, she learns to say "See you down the road" instead of "Goodbye".

Earlier this year, at Silicon Flatiron's Privacy at the Margins, Tristia Bauman, executive director of the National Homelessness Law Center, explained that many cities have broadly-written camping bans that make even the most minimal outdoor home impossible. Worse, those policies often allow law enforcement to seize property. It may be stored, but often people still don't get it back; the fees for retrieving a towed-away home (that is, van) can easily be out of reach. This was in my mind when Bob talked about fearing the knock on the van that indicates someone in authority wants you gone.

"I've heard it's depressing," a friend said, when I recommended the movie. Viewed one way, absolutely. These aging Baby Boomers never imagined doing the hardest work of their lives in their "golden years", with no health insurance, no fixed abodes, and no prospects. It's not that they failed to achieve the American Dream. It's that they believed in the American Dream and then it broke up with them.

And yet "depressing" is not how I or my companion saw it, because of that *other* American Dream. There's a sense of ownership of both the land and your own life that comes with living on the road in such a spacious and varied country, as Woody Guthrie knew. Both Guthrie in the 1940s and Zhao now unsparingly document the poverty and struggles of the people they found in those wide-open spaces - but they also understand that here a person can breathe and find the time to appreciate the land's strange, secret wonders. Secret, because most of us never have the time to find them. This group does, because when you live nowhere you live everywhere. We get to follow them to some of these places, share their sense of belonging, and admire their astoundingly adaptable spirit. Despite the hardships they unquestionably face, they also find their way to extraordinary moments of joy.

See you down the road.

Illustrations: Fern's van, heading down the road.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 25, 2021

Them

"It." "It." "It."

In the first two minutes of a recent episode of the BBC program Panorama, "Are You Scared Yet, Human?", the word that kept popping out was "it". The program is largely about the AI race between the US and China, an obviously important topic - see Amy Webb's recent book, The Big Nine. But what I wanted to scream at the show's producers was: "AI is not *it*. AI is *they*." The program itself proved this point by seguing from commercial products to public surveillance systems to military dreams of accurate targeting and ensuring an edge over the other country.

The original rantish complaint I thought I was going to write was about gendering AI-powered voice assistants and, especially, robots. Even though Siri has a female voice it's not a "she". Even if Alexa has a male voice it's not a "he". Yes, there's a long tradition of dubbing ships, countries, and even fiddles "she", but that bothers me less than applying the term to a compliant machine. Yolande Strengers and Jenny Kennedy made this point quite well in their book The Smart Wife, in which they trace much of today's thinking about domestic robots to the role model of Rosie, in the 1960s outer space animated TV sitcom The Jetsons. Strengers and Kennedy want to "queer" domestic robots so they no longer perpetuate heteronormative gender stereotypes.

The it-it-it of Panorama raised a new annoyance. Calling AI "it" - especially when the speaker is, as here, Jeff Bezos or Elon Musk - makes it sound like a monolithic force of technology that can't be stopped or altered, rather than what it is: an umbrella term for a bunch of technologies, many of them experimental and unfinished, and all of which are being developed and/or exploited by large companies and military agencies for their own purposes, not ours. "It" hides the unrepresentative workforce defining AI's present manifestation, machine learning. *This* AI is "systems", not a *thing*, and their impact varies depending on the application.

Last week, Pew Research released the results of a survey it conducted in 2020, in which two-thirds of the experts it consulted predicted that ethics would not be embedded in AI by 2030. Many pointed out that societies and contexts differ, that who gets to define "ethics" is crucial, and that there will always be bad actors who ignore whatever values the rest of us agree on. The report quotes me saying it's not AI that needs ethics, it's the *owners*.

I made a stab at trying to categorize the AI systems we encounter every day. The first that spring to mind are scoring applications whose impact on most people's lives appears to be in refusing access to things we need - asylum, probation in the criminal justice system, welfare in the benefits system, credit in the financial system - and assistance systems that answer questions and offer help, such as recommendation algorithms, search engines, voice assistants, and so on. I forgot about systems playing games, and since then a fourth type has accelerated into public use, in the form of identification systems, almost all of them deeply flawed but being deployed anyway: automated facial recognition, emotion recognition, smile detection, and fancy lie detectors.

I also forgot about medical applications, but despite many genuine breakthroughs - such as today's story that machine learning has helped develop a blood test to detect 50 types of early-stage cancer - many highly touted efforts have been failures.

"It"ifying AI makes many machine learning systems sound more successful than they are. Today's facial recognition is biased and inaccurate . Even in the pandemic, Benedict Dellot told a recent Westminster Health Forum seminar on AI in health care, the big wins in the pandemic have come from conventional data analysis underpinned by new data sharing arrangements. As examples, he cited sharing lists of shielding patients with local authorities to ensure they got the support they needed, linking databases to help local authorities identify vulnerable people, and repurposing existing technologies. But shove "AI" in the name and it sounds more exciting; see also "nano" before this and "e-" before that.

Maybe - *maybe* - one day we will say "AI" and mean a conscious, superhuman brain as originally imagined by science fiction writers and Alan Turing. Machine learning is certainly not that, as Kate Crawford writes in her recent Atlas of AI. Instead, we're talking about a bunch of computers calculating statistics from historical data, forever facing backward. And, as authors such as Sarah T. Roberts and Mary L. Gray and Siddharth Suri have documented, very often today's AI is humans all the way down. Direct your attention to the poorly-paid worker behind the curtain.

Crawford's book reminded me of Arthur C. Clarke's famous line, "Any sufficiently advanced technology is indistinguishable from magic." After reading her structural analysis of machine-learning-AI, it morphed into: "Any technology that looks like magic is hiding something." For Crawford, what AI is hiding is its essential nature as an extractive industry. Let's not grant these systems any more power than we have to. Breaking "it" apart into "them" allows us to pick and choose the applications we want.

Illustrations: IBM's Watson winning at Jeopardy; its later adventures in health care were less successful.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 18, 2021

Libera me

A man walks into his bar and finds...no one there.

OK, so the "man" was me, and the "bar" was a Reddit-descended IRC channel devoted to tennis...but the shock of emptiness was the same. Because tennis is a global sport, this channel hosts people from Syracuse NY, Britain, Indonesia, the Netherlands. There is always someone commenting, checking the weather wherever tennis is playing, checking scores, or shooting (or befriending) the channel's frequent flying ducks.

Not now: blank, empty void, like John Oliver's background for the last, no-audience year. Those eight listed users are there in nickname only.

A year ago at this time, this channel's users were comparing pandemic restrictions. In our lockdowns, I liked knowing there was always someone in another time zone to type to in real time. So: slight panic. Where *are* they?

IRC dates to the old cooperative Internet. It's a protocol, not a service, so anyone can run an IRC server, and many people do, even though the mainstream, especially the younger mainstream, long since moved on through instant messaging and on to Twitter, WhatsApp groups, Telegram channels, Slack, and Discord. All of these undoubtedly look prettier and are easier to use, but the base functionality hasn't changed all that much.

IRC's enduring appeal is that it's all plain text and therefore bandwidth-light, it can host any size of conversation from a two-person secret channel to a public channel of thousands, multiple clients are available on every platform, and it's free. Genuinely free, not pay-with-data free - no ads! Accordingly, it's still widely used in the open source community. Individual channels largely set their own standards and community norms...and their own games. Circa 2003, I played silly trivia quizzes on a TV-related channel. On this one...ducks. A sample:

゜゜・。 ​ 。・゜゜\_o​< FLAP​ FLAP!
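That plain-text simplicity is the whole protocol. As a hedged illustration - the nickname and channel here are invented placeholders, not the real tennis channel - a workable Python client fits in a screenful:

    # Minimal IRC client sketch; the server is real, the nickname and
    # channel are hypothetical.
    import socket

    HOST, PORT = "irc.libera.chat", 6667
    NICK, CHANNEL = "netwars_demo", "#demo-channel"

    sock = socket.create_connection((HOST, PORT))

    def send(line):
        # Every IRC command is a single line of plain text ending in CRLF.
        sock.sendall((line + "\r\n").encode("utf-8"))

    send("NICK " + NICK)
    send("USER " + NICK + " 0 * :net.wars demo")

    buffer = b""
    while True:
        data = sock.recv(4096)
        if not data:
            break                            # server closed the connection
        buffer += data
        *lines, buffer = buffer.split(b"\r\n")
        for raw in lines:
            line = raw.decode("utf-8", errors="replace")
            print(line)
            if line.startswith("PING"):      # answer keepalives or get dropped
                send("PONG" + line[4:])
            elif " 001 " in line:            # numeric 001 = welcome; safe to join
                send("JOIN " + CHANNEL)
                send("PRIVMSG " + CHANNEL + " :FLAP FLAP!")

Everything else - the games, the duck scripts, the services that register nicknames - is built out of the same one-line text commands.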

However, the fact that anyone *can* run their own server doesn't mean that everyone *does*, and like other Internet services (see also: open web, email), IRC gravitated towards larger networks that enable discovery. If you host your own server, strangers can only find it if you let them; on a large network users can search for channels, find topics they're interested in, and connect to the nearest server. While many IRC networks still survive, in recent years by far the biggest, according to Netsplit, is Freenode, largely because of its importance in providing connections and support for the open source community. Freenode is also where the missing tennis channel was hosted until about Tuesday, three days before I noticed it was silent. As you'll see in the Netsplit image above, that was when Freenode traffic plummeted, countered by a near-vertical rise in traffic on Libera Chat. That is where my channel turned out to be restored to its usual bustling self.

What happened is both complicated and pretty simple: ownership changed hands without anyone's quite realizing what it was going to mean. To say that IRC is free to use does not mean there are no costs: besides computers and bandwidth, the owners of IRC servers must defend their networks against attacks. Freenode, Wikipedia explains, began as a Linux support channel on another network run by four people, who went on to set up their own network, which eventually became the largest support network for the open source community. A series of ownership changes led from a California charity through a couple of steps to today's owner, the UK-based private company Freenode Ltd, which is owned by Andrew Lee, a technology entrepreneur and founder of the Private Internet Access VPN. No one appears to have thought much about this until last month, when 20 to 30 of the volunteers who run Freenode ("staff") resigned, accusing Lee of executing a hostile takeover. Some of them promptly set up Libera as an alternative.

What makes this story about a somewhat arcane piece of the old Internet interesting - aside from the book that demands to be written about IRC's rich history, culture, and significance - is that this is the second time in the last 18 months that a significant piece of the non-profit infrastructure has been targeted for private ownership. The other was the .org top-level domain. These underpinnings need better protection.

On the day traffic plummeted, Lee made deciding to move really easy: as part of changing the network's underlying software, he decided to remove the entire database of registered names and channels - committing suicide, some called it. Because, really: if you're going to have to reregister and reconstruct everything anyway, the barrier to moving to that identical new network over there with all the familiar staff and none of the new owner mishegoss is gone. Hence the mass exodus.

This is why IRC never spawned a technology giant: no lock-in. Normally when you move a conversation it dies. In this case, the entire channel, with its scripts and games and familiar interface, could be recreated at speed and resume as if nothing had happened. All they had to do was tell people. Five minutes after I posted a plaintive query on Reddit, someone came to retrieve me.

So, now: a woman logs into an IRC channel and finds all the old regulars. A duck flaps past. I have forgotten the ".bang" command. I type ".bef" instead. The duck is saved.

Illustrations: Netsplit's graph of IRC network traffic from June 2021.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 11, 2021

The fragility of strangers

This week, someone you've never met changed the configuration settings on their individual account with a company you've never heard of and knocked out 85% of that company's network. Dumb stuff like this probably happens all the time without attracting attention, but in this case the company, Fastly, is a cloud provider that also runs an intermediary content delivery network intended to speed up Internet connections. Result: people all over the world were unable to reach myriad major Internet sites such as Amazon, Twitter, Reddit, and the Guardian for about an hour.

The proximate cause of these outages, Fastly has now told the world, was a bug that was introduced (note lack of agency) into its software code in mid-May, which lay dormant until someone did something completely normal to trigger it.
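Fastly hasn't published the offending code, so any concrete example is guesswork; but as a purely hypothetical sketch, dormant bugs of this shape are easy to ship and hard to spot, because every test exercises the happy path:

    # Hypothetical sketch only - nothing to do with Fastly's actual software.
    ROUTES = {"eu": "eu-cache", "us": "us-cache"}

    def pick_cache(customer_config):
        region = customer_config.get("region", "us")
        if region not in ROUTES:
            # The dormant bug: this fallback branch was never exercised in
            # testing, and it looks up the unknown region instead of a default.
            return ROUTES[region]            # KeyError - the handler dies
        return ROUTES[region]

    pick_cache({"region": "us"})    # works for weeks after deployment...
    pick_cache({"region": "ap"})    # ...until one valid setting trips the bug

The specific mistake doesn't matter; the shape does: valid input, an untested branch, and a blast radius the customer flipping the setting could never have guessed.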

In the early days, we all assumed that as more companies came onstream and admins built experience and expertise, this sort of thing would happen less and less. But as the mad complexity of our computer systems and networks continues to increase - Internet of Things! AI! - now it's more likely that stuff like this will also increase, will be harder to debug, and will cause far more ancillary damage - and that damage will not be limited to the virtual world. A single random human, accidentally or intentionally, is now capable of creating physical-world damage at scale.

Ransomware attacks earlier this month illustrate this. Attackers used a single leaked password for a disused VPN account to get into the systems that run the Colonial Pipeline, compromising gasoline supplies down a large swathe of the US east coast. Near-simultaneously, a ransomware attack on the world's largest meatpacker, JBS, briefly halted production, threatening food security in North America and Australia. In December, an attack on network management software supplied by the previously little-known SolarWinds compromised more than 18,000 companies and government agencies. In all these cases, random strangers reached out across the world and affected millions of personal lives by leveraging a vulnerability inside a company that is not widely known but that provides crucial services to companies we do know and use every day.

An ordinary person just trying to live their life has no defense except to have backups of everything - not just data, but service providers and suppliers. Most people either can't afford that or don't have access to alternatives, which means that precarious lives are made even more so by hidden vulnerabilities they can't assess.

An earlier example: in 2012, journalist Matt Honan's data was entirely wiped out through an attack that leveraged quirks of two unrelated services - Apple and Amazon - against each other to seize control of his email address and delete all his data. Moral: data "in the cloud" is not a backup, even if the hosting company says they keep backups. Second moral: if there is a vulnerability, someone will find it, sometimes for motives you would never guess.

If memory serves, Akamai, founded in 1998, was the first CDN. The idea was that even though the Internet means the death of distance, physics matters. Michael Lewis captured this principle in detail in his book Flash Boys, in which a handful of Wall Street types pay extraordinary amounts to shave a few split-seconds off the time it takes to make a trade by using a ruler and map to send fiber optic cables along the shortest possible route between exchanges. Just so, CDNs cache frequently accessed content on mirror servers around the world. When you call up one of those pages, it, or frequently-used parts of it in the case of dynamically assembled pages, is served up from the nearest of those servers, rather than from the distant originator. By now, there are dozens of these networks and what they do has vastly increased in sophistication, just as the web itself has. A really major outlet like Amazon will have contracts with more than one, but apparently switching from one to the other isn't always easy, and because so many outages are very short it's often easier to wait it out. Not in this case!
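In miniature, the caching logic works something like the following toy sketch; the edge names and latencies are invented for illustration, not any real CDN's topology:

    # Toy model of CDN edge selection and caching; not any real CDN's code.
    EDGES = {"london": 10, "frankfurt": 18, "virginia": 80}   # invented RTTs, in ms
    cache = {edge: {} for edge in EDGES}

    def fetch(url, rtts):
        edge = min(rtts, key=rtts.get)           # serve from the nearest edge
        if url not in cache[edge]:
            # Cache miss: one slow round trip to the distant origin server.
            cache[edge][url] = "<origin copy of %s>" % url
        return edge, cache[edge][url]            # later requests stay local

    print(fetch("/front-page", EDGES))   # miss: fetched from origin, then cached
    print(fetch("/front-page", EDGES))   # hit: served straight from "london"

Multiply that by thousands of edge servers and dozens of competing networks, and you get both the sophistication and the switching costs described above.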

At The Conversation, criminology professor David Wall also sees this outage as a sign of the future for the same reason I do: centralization and consolidation have concentrated, and continue to concentrate, the Internet onto a shrinking number of providers, each of them a single point of widespread failure. Yes, it's true the Internet was built to withstand a bomb outage - but as we have been writing for 20 years now, this Internet is not that Internet. The path to today's Internet has led from the decentralized era of Usenet, IRC, and own-your-own mail server to web hosting farms to the walled gardens of Facebook, Google, and Apple, and the AI-dominating Big Nine. In 2013, Edward Snowden's revelations made plain how well that suits surveillance-hungry governments, and it's only gotten worse since, as companies seek to insert themselves into every aspect of our lives - intermediaries that bring us a raft of new insecurities that we have no time or ability to audit.

Increasing complexity, hidden intermediation, increasing numbers of interferers, and increasing scale all add up to a brittle and fragile Internet, onto which we continue to pile all our most critical services and activities. What could possibly go wrong?


Illustrations: Map of the Colonial Pipeline.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 20, 2021

Ontology recapitulates phylogeny

I may be reaching the "get off my lawn!" stage of life, except the things I'm yelling at are not harmless children but new technologies, many of which, as Charlie Stross writes, leak human stupidity into our environment.

Case in point: a conference this week chose for its platform an extraordinarily frustrating graphic "virtual congress center" that was barely more functional than Second Life (b. 2003). The big board displaying the agenda was not interactive; road signs and menu items pointed to venues by name, but didn't show what was going on in them. Yes, there was a reception desk staffed with helpful avatars. I do not want to ask for help, I want simplicity. The conference website advised: "This platform requires the installation of a dedicated software in your computer and a basic training." Training? To watch people speak on my computer screen? Why can't I just "click here to attend this session" and see the real, engaged faces of speakers, instead of motionless cartoon avatars?

This is not a new-technology issue but a usability issue that hasn't changed since Donald Norman's 1988 The Design of Everyday Things sought to do away with user manuals.

I tell myself that this isn't just another clash between generational habits.

Even so, if current technology trends continue I will be increasingly left behind, not just because I don't *want* to join in but because, through incalculable privilege, much of the time I don't *need* to. My house has no smart speakers, I see no reason to turn on open banking, and much of the time I can leave my mobile phone in a coat pocket, ignored.

But Out There in the rest of the world, where I have less choice, I read that Amazon is turning on Sidewalk, a proprietary mesh network that uses Bluetooth and 900MHz radio connections to join together Echo speakers, Ring cameras, and any other compatible device the company decides to produce. The company is turning this thing on by default (free software update!), though if you're lucky enough to read the right press articles you can turn it off. When individuals roam the streets piggybacking on open wifi connections, they're dubbed "hackers". But a company - just ask forgiveness, not permission, yes?

The idea appears to be that the mesh network will improve the overall reliability of each device when its wifi connection is iffy. How it changes the range and detail of the data each device collects is unclear. Connecting these devices into a network is a step change in physical tracking; CNet suggests that a Tile tag attached to a dog, while offering the benefit of an alert if the dog gets loose, could also provide Amazon with detailed tracking of all your dog walks. Amazon says the data is protected with three layers of encryption, but protection from outsiders is not the same as protection from Amazon itself. Even the minimal data Amazon says in its white paper (PDF) it receives - the device serial number and application server ID - reveal the type of device and its location.

We have always talked about smart cities as if they were centrally planned, intended to offer greater efficiency, smoother daily life, and a better environment, and built with some degree of citizen acceptance. But the patient public deliberation that image requires does not fit the "move fast and break things" ethos that continues to poison organizational attitudes. Google failed to gain acceptance for its Toronto plan; Amazon is just doing it. In London in 2019, neither private operators nor police bothered to inform or consult anyone when they decided to trial automated facial recognition.

In the white paper, Amazon suggests benefits such as finding lost pets, diagnostics for power tools, and supporting lighting where wifi is weak. Nice use cases, but note that the benefits accrue to the devices' owner while the costs belong to neighbors who may not have actively consented, but simply not known they had to change the default settings in order to opt out. By design, neither device owners nor server owners can see what they're connected to. I await the news of the first researcher to successfully connect an unauthorized device.

Those external costs are minimal now, but what happens when Amazon is inevitably joined by dozens more similar networks, like the collisions that famously plague the more than 50 companies that dig up London streets? It's disturbingly possible to look ahead and see our public spaces overridden by competing organizations operating primarily in their own interests. In my mind, Amazon's move opens up the image of private companies and government agencies all actively tracking us through the physical world the way they do on the web and fighting over the resulting "insights". Physical tracking is a sizable gap in GDPR.

Again, these are not new-technology issues, but age-old ones of democracy, personal autonomy, and the control of public and private spaces. As Nicholas Couldry and Ulises A. Mejias wrote in their 2020 book The Costs of Connection, this is colonialism in operation. "What if new ways of appropriating human life, and the freedoms on which it depends, are emerging?" they asked. Even if Amazon's design is perfect, Sidewalk is not a comforting sign.


Illustrations: A mock-up from Google's Sidewalk Labs plan for Toronto.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 14, 2021

Pre-crime

Much is being written about this week's Queen's speech, which laid out plans to restrict protests (the Police, Crime, Sentencing, and Courts bill), relax planning measures to help developers override communities, and require photo ID in order to vote even though millions of voters have neither passport nor driver's license and there was just one conviction for voting fraud in the 2019 general election. We, however, will focus here on the Online Safety bill, which includes age verification and new rules for social media content moderation.

At Politico, technology correspondent Mark Scott picks three provisions: the exemption granting politicians free rein on social media; the move to require moderation of content that is not illegal or criminal (however unpleasant it may be); and the carve-outs for "recognised news publishers". I take that to mean they wanted to avoid triggering the opposition of media moguls like Rupert Murdoch. Scott read it as "journalists".

The carve-out for politicians directly contradicts a crucial finding in last week's Facebook oversight board ruling on the suspension of former US president Donald Trump's account: "The same rules should apply to all users of the platform; but context matters when assessing issues of causality and the probability and imminence of harm. What is important is the degree of influence that a user has over other users." Politicians, in other words, may not be more special than other influencers. Given the history of this particular government, it's easy to be cynical about this exemption.

In 2019, Heather Burns, now policy manager for the Open Rights Group, predicted this outcome while watching a Parliamentary debate on the white paper: "Boris Johnson's government, in whatever communication strategy it is following, is not going to self-regulate its own speech. It is going to double down on hard-regulating ours." At ORG's blog, Burns has critically analyzed the final bill.

Few have noticed the not-so-hidden developing economic agenda accompanying the government's intended "world-leading package of online safety measures". Jen Persson, director of the children's rights advocacy group DefendDigitalMe, is the exception, pointing out that in May 2020 the Department of Culture, Media, and Sport released a report that envisions the UK as a world leader in "Safety Tech". In other words, the government views online safety (PDF; see Annex C) as not just an aspirational goal for the country's schools and citizens but also as a growing export market the UK can lead.

For years, Persson has been tirelessly highlighting the extent to which children's online use is monitored. Effectively, monitoring software watches every use of any school-owned device, as well as any session in which the child is logged into their school G Suite account; some types can even record photos of the child at home, a practice that became notorious when it was tried in Pennsylvania.

Meanwhile, outside of DefendDigitalMe's work - for example, its case study of eSafe, its discussion of NetSupport DNA, and this discussion of school safeguarding - we know disturbingly little about the different vendors: how they fit together in the education ecosystem, how their software works, how capabilities vary from vendor to vendor, how well they handle multiple languages, what they block, what data they collect, how they determine risk, what inferences are drawn and retained and by whom, and the rate of errors and their consequences. We don't even really know if any of it works - or what "works" means. "Safer online" does not provide any standard against which the cost to children's human rights can be measured. Decades of government policy have all trended toward increased surveillance and filtering, yet wherever "there" is we never seem to arrive. DefendDigitalMe has called for far greater transparency.

Persson notes both mission creep and scope creep: "The scope has shifted from what was monitored to who is being monitored, then what they're being monitored for." The move from harmful and unlawful content to lawful but "harmful" content is what's being proposed now, and along with that, Persson says, "children being assessed for potential risk". The controversial Prevent program is about this: monitoring children for signs of radicalization. For their safety, of course.

UK children's rights campaigners have long said that successive governments use children as test subjects for the controversial policies they wish to impose on adults, normalizing them early. Persson suggests the next market for safetytech could be employers monitoring employees for mental health issues. I imagine elderly people will be next.

DCMS's comments support market expansion: "Throughout the consultations undertaken when compiling this report there was a sector consensus that the UK is likely to see its first Safety Tech unicorn (i.e. a company worth over $1bn) emerge in the coming years, with three other companies also demonstrating the potential to hit unicorn status within the early 2020s. Unicorns reflect their namesake - they are incredibly rare, and the UK has to date created 77 unicorn businesses across all sectors (as of Q4 2019)." (Are they counting the much-litigated Autonomy?)

There's something peculiarly ghastly about this government's staking the UK's post-Brexit economic success on exporting censorship and surveillance to the rest of the world, especially alongside its stated desire to opt out of parts of human rights law. This is what "global Britain" wants to be known for?

Illustrations: Unicorn sculpture at York Crown Court (by Tim Green via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 7, 2021

Decision not decision

It is the best of decisions, it is the worst of decisions.

For some, this week's decision by Facebook's Oversight Board in the matter of "the former guy" Donald J. Trump is a deliberate PR attempt at distraction. For many, it's a stalling tactic. For a few, it is a first, experimental stab at calling the company to account.

It can be all these things at once.

But first, some error correction. Nothing the Facebook Oversight Board does or doesn't do tells us anything much about governing the Internet. Although there are countries where zero-rating deals with telcos make Facebook effectively the only online access most people have, Facebook is not the Internet and it's not the web. Facebook is a commercial company's walled garden that is reached over the Internet and via both the web and apps that bypass the web entirely. Governing Facebook is about how we regulate and govern commercial companies that use the Internet to achieve global reach. Like Trump, Facebook has no exact peer, so it is difficult to generalize from decisions about either to reach wider principles of content moderation.

It's also important to recognize that Trump used/uses different social media sites in different ways. Facebook was important to Trump for organizing campaigns and advertising, as well as getting his various messages amplified and spread by supporters. But there's little doubt that personally he'd rather have Twitter back; its public nature and instant response made it his id-to-fingers direct connection to the media. Twitter fed him the world's attention. Those were the postings that had everyone waking up in the middle of the night panicked in case he had abruptly declared war on North Korea. After his ban, the service was full of tweets expressing relief at the silence.

The board's decision has several parts. First, it says the company was right to suspend Trump's account. However, it goes on to say, the company erred in applying an "indeterminate and standardless penalty of indefinite suspension". It then tells Facebook to develop "clear, necessary, and proportionate policies that promote public safety and freedom of expression". The board's charter requires Facebook to make an initial response within 30 days, and the decision itself orders Facebook to review the case to "determine and justify a proportionate response that is consistent with the rules that are applied to other users of its platform". It appears that the board is at least trying not to let itself be used as a shield.

At the New York Times, Kara Swisher calls the non-decision kind of perfect. At the Washington Post, Margaret Sullivan calls the board a high-priced fig leaf. At Lawfare, Evelyn Douek believes the decision shows promise but deplores the board's reluctance to constrain Facebook. On Wednesday's episode of Ben Wittes's and Kate Klonick's In Lieu of Fun, panelists speculated about what indicators would show the board was achieving legitimacy. Carole Cadwalladr, who broke the Cambridge Analytica story in 2018, calls Facebook, simply, cancer and views the oversight board as a "dangerous distraction".

When the board first began issuing decisions, Jeremy Lewin commented that the only way the board - "a dangerous sham" - could show independence was to reverse Facebook's decisions, which in every case would mean restoring deleted posts, since the board has no role in evaluating decisions to retain posts. It turns out that's not true. In the Trump decision, the board found a third way: calling out Facebook for refusing to answer its questions, failing to establish and follow clear procedures, and punting on its responsibilities.

However, despite the decision's legalish language, the Oversight Board is not a court, and Facebook's management is not a government. For both good and bad: as Orin Kerr reminds us, Facebook can't fine, jail, or kill its users; as many others will note, as a commercial company its goals are profits and happy shareholders, not fairness, transparency, or a commitment to uphold democracy. If it adopts any of those latter goals, it's because the company has calculated that it will cost more not to. Therefore, *every* bit of governance it attempts is a PR exercise. In pushing the ultimate decision back to Facebook and demanding that the company write and publish clear rules, the board is trying to make itself more than that. We will know soon whether it has any hope of success.

But even if the board succeeds in pushing Facebook into clarifying its approach to this case, "success" will be constrained. Here's the board's mission: "The purpose of the board is to protect free expression by making principled, independent decisions about important pieces of content and by issuing policy advisory opinions on Facebook's content policies." Nothing there permits the board to raise its own cases, examine structural defects, or query the company's business model. There is also no option for the board to survey Trump's case and the January 6 Capitol invasion and place them in the context of evidence on Facebook's use to incite violence in other countries - Myanmar, Sri Lanka, India, Indonesia, Mexico, Germany, and Ethiopia. In other words, the board can consider individual cases when it is assigned them, but not the patterns of behavior that Facebook facilitates and that are most in need of disruption. That will take governments and governance.


Illustrations: The January 6 invasion of the US Capitol.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 30, 2021

The tonsils of the Internet

Last week the US Supreme Court decided the ten-year-old Google v. Oracle copyright case. Unlike anyone in Jarndyce v. Jarndyce, which bankrupted all concerned, Google will benefit financially, and in other ways so will the rest of us.

Essentially, the case revolved around whether Google violated Oracle's copyright by copying about 11,500 lines (out of millions) of the code that makes up the Java platform's application programming interface. Google claimed fair use. Oracle disagreed.
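
To make the distinction at the heart of the case concrete, here is a toy example - my own purely hypothetical code, not the actual Java SE source at issue. The "declaring code" Google copied is the part that names a function and its inputs so programmers can call it the way they always have; the "implementing code", which Google wrote itself for Android, is the part that does the actual work:

```java
// Hypothetical illustration only - not the Java SE code litigated in the case.
public final class MathExample {
    // Declaring code: the method's name, parameter types, and return type.
    // This is the API surface programmers memorize and reuse.
    public static int max(int a, int b) {
        // Implementing code: the working internals. Google wrote its own
        // versions of parts like this; the dispute was over the declarations.
        return (a >= b) ? a : b;
    }
}
```

Copying the declarations preserved compatibility: programmers who already knew Java could write for Android without relearning how to call familiar functions.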

Tangentially: Oracle owns Java because in 2010 it bought its developer, Sun Microsystems, which open-sourced the software in 2006. Google bought Android in 2005; it, too, is open source. If the antitrust authorities had blocked the Oracle acquisition, which they did consider, there would have been no case.

The history of disputes over copying and interoperability goes back to the 1996 case Lotus v. Borland, in which Borland successfully argued that copying the way Lotus organized its menus was copying function, not expression. By opening the way for software programs to copy functional elements (like menus and shortcut keys), the Borland case was hugely important: it paved the way for industry-wide interface standards, and thereby improved overall usability and made it easier for users to switch from one program to another if they wanted to. This decision, similarly, should enable innovation in the wider market for apps and services.

Also last week, the US Congress held both the latest in its series of antitrust hearings and a hearing to question Lina Khan, who has been nominated for a position at the Federal Trade Commission. Biden's decision to appoint her, as well as Tim Wu to the National Economic Council, has been taken as a sign of increasing seriousness about reining in Big Tech.

The antitrust hearing focused on the tollbooths known as app stores; in his opening testimony, Mark Cooper, director of research at the Consumer Federation of America, noted that the practices described by the chair, Senator Amy Klobuchar (D-MN), were all found illegal in the Microsoft case, filed in 1998. A few minutes later, Horacio Gutierrez, Spotify's head of global affairs and chief legal officer, noted that "even" Microsoft never demanded a 30% commission from software developers to run on its platform.

Watching this brought home the extent to which the mobile web, with its culture of walled gardens and network operator control, has overwhelmed the open web we Old Net Curmudgeons are so nostalgic about. "They have taken the Internet and moved it into the app stores", Jared Sine told the committee, and that's exactly right. Opening the Internet back up requires opening up the app stores. Otherwise, the mobile web will be little different than CompuServe, circa 1991.

BuzzFeed technology reporter Ryan Mac posted on Twitter a just-quit Accenture employee's anonymous account of their two and a half years as a content analyst for Facebook. The main points: the work is a constant stream of trauma; there are insufficient breaks and mental health support; the NDAs they are forced to sign block them from turning to family and friends for help; and they need the chance to move around to other jobs for longer periods of respite. "We are the tonsils of the Internet," they wrote. Medically, we now know that the tonsils that doctors used to cheerfully remove play an important role in immune system response. Human moderation is essential if you want online spaces to be tolerably civil; machines simply aren't good enough, and likely never will be, and abuse appears to be endemic in online spaces above a certain size. But just as the exhausted health workers who have helped so many people survive this pandemic should be viewed as a rare and precious resource instead of interchangeable parts whose distress the anti-lockdown, no-mask crowd are willing to overlook, the janitors of the worst and most unpleasant parts of the Internet need to be treated with appropriate care.

The power differential, the geographic spread, their arms-length subcontractor status, and the technology companies' apparent lack of interest combine to make that difficult. Exhibit B: Protocol reports that contract workers in Google's data centers are required to leave the company for six months every two years and reapply for their jobs, apparently just so they won't gain the rights of permanent employees.

You would think that one thing that could potentially help these underpaid, traumatized content moderators - as well as the drivers, warehouse workers, and others who are kept at second-class arm's length by technology companies that so diligently ensure they don't become full employees - is a union. In hopes of change, and because of the potential impact on the industry at large, many were closely watching the Bessemer, Alabama Amazon warehouse workers' vote on unionizing - both the organizing efforts and Amazon's drive to oppose them. Now the results are in: 1,798 to 738 against.

Nonetheless, this isn't over. Moves toward unionizing have been growing for years in pockets all over the technology industry, and eventually the trend will be inescapable. We're used to thinking about technology companies' power in terms of industry consolidation and software licensing; workers are the ones who most directly feel its effects.


Illustrations: The chancellor (Ian Richardson), announcing the end of Jarndyce and Jarndyce in the BBC's 2005 adaptation of Bleak House.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 23, 2021

Fast, free, and frictionless

"I want solutions," Sinan Aral challenged at yesterday's Social Media Summit, "not a restatement of the problems". Don't we all? How many person-millennia have we spent laying out the issues of misinformation, disinformation, harassment, polarization, platform power, monopoly, algorithms, accountability, and transparency? Most of these have been debated for decades. The big additions of the last decade are the privatization of public speech via monopolistic social media platforms, the vastly increased scale, and the transmigration from purely virtual into physical-world crises like the January 6 Capitol Hill invasion and people refusing vaccinations in the middle of a pandemic.

Aral, who leads the MIT Initiative on the Digital Economy and is author of the new book The Hype Machine, chose his panelists well enough that some actually did offer some actionable ideas.

The issues, as Aral said, are all interlinked (see also 20 years of net.wars). Maria Ressa connected the spread of misinformation to system design that enables distribution and amplification at scale. These systems are entirely opaque to us even while we are open books to them, as Guardian journalist Carole Cadwalladr noted, adding that while US press outrage is the only pressure that moves Facebook to respond, the company no longer even acknowledges questions from anyone at her newspaper. Cadwalladr also highlighted the Securities and Exchange Commission's complaint, which says clearly: Facebook misled journalists and investors. This dismissive attitude also shows in the leaked email in which Facebook plans to "normalize" the leak of 533 million users' data.

This level of arrogance is the result of concentrated power, and countering it will require antitrust action. That in turn leads back to questions of design and free speech: what can we constrain while respecting the First Amendment? Where is the demarcation line between free speech and speech that, like crying "Fire!" in a crowded theater, can reasonably be regulated? "In technology, design precedes everything," Roger McNamee said; real change for platforms at global or national scale means putting policy first. His Exhibit A of the level of cultural change that's needed was February's fad, Clubhouse: "It's a brand-new product that replicates the worst of everything."

In his book, Aral opposes breaking up social media companies as was done in cases such as Standard Oil and AT&T. Zephyr Teachout agreed in seeing breakup, whether horizontal (Facebook divests WhatsApp and Instagram, for example) or vertical (Google forced to sell Maps), as just one tool.

The question, as Joshua Gans said, is, what is the desired outcome? As Federal Trade Commission nominee Lina Khan wrote in 2017, assessing competition by the effect on consumer pricing is not applicable to today's "pay-with-data-but-not-cash" services. Gans favors interoperability, saying it's crucial to restoring consumers' lost choice. Lock-in is your inability to get others to follow when you want to leave a service, a problem interoperability solves. Yes, platforms say interoperability is too difficult and expensive - but so did the railways and telephone companies, once. Break-ups were a better option, Albert Wenger added, when infrastructures varied; today's universal computers and data mean copying is always an option.

Unwinding Facebook's acquisition of WhatsApp and Instagram sounds simple, but do we want three data hogs instead of one, like cutting off one of the Lernaean Hydra's heads? One idea that emerged repeatedly is slowing "fast, free, and frictionless"; Yael Eisenstat wondered why we allow experimental technology to launch at global scale while expecting policy to be painfully perfected before it can take effect.

MEP Marietje Schaake (Democrats 66-NL) explained the EU's proposed Digital Markets Act, which aims to improve fairness by setting rules and responsibilities up front, preempting the too-long process of punishing bad behavior after the fact. Current proposals would bar platforms from combining user data from multiple sources without permission, self-preferencing, and spying (say, Amazon exploiting marketplace sellers' data), and would require data portability and interoperability for ancillary services such as third-party payments.

The difficulty with data portability, as Ian Brown said recently, is that even services that let you download your data offer no way to import data downloaded elsewhere. I can't add the downloaded data from my current electric utility account to the one I switch to, or send my Twitter feed to my Facebook account. Teachout finds that interoperability isn't enough because "You still have acquire, copy, kill" and lock-in via existing contracts. Wenger argued that the real goal is not interoperability but programmability, citing open banking as a working example. That is also the open web, where a third party can write an ad blocker for my browser, but Facebook, Google, and Apple built walled gardens. As Jared Sine told this week's antitrust hearing, "They have taken the Internet and moved it into the app stores."

Real change will require pulling together all four of the levers Aral discusses in his book - money, code, norms, and laws, which Lawrence Lessig's 1999 book, Code and Other Laws of Cyberspace, called market, software architecture, norms, and laws. The national commission on democracy and technology Aral is calling for will have to be very broadly constituted in terms of disciplines and national representation. As Safiya Noble said, diversifying the engineers in development teams is important, but not enough: we need "people who know society and the implications of technologies" at the design stage.


Illustrations: Sinan Aral, hosting the summit.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 2, 2021

Medical apartheid

Ever since 1952, when Clarence Willcock took the British government to court to force the end of wartime identity cards, UK governments have repeatedly tried to bring them back, always claiming they would solve the most recent public crisis. The last effort ended in 2010 after a five-year battle. This backdrop is a key factor in the distrust that's greeting government proposals for "vaccination passports" (previously immunity passports). Yesterday, the Guardian reported that British prime minister Boris Johnson backs certificates that show whether you've been vaccinated, have had covid and recovered, or had a test. An interim report will be published on Monday; trials later this month will see attendees to football matches required to produce proof of negative lateral flow tests 24 hours before the game and on entry.

Simultaneously, England chief medical officer Chris Whitty told the Royal Society of Medicine that most experts think covid will become like the flu, a seasonal disease that must be perennially managed.

Whitty's statement is crucial because it means we cannot assume that the forthcoming proposal will be temporary. A deeply flawed measure in a crisis is dangerous; one that persists indefinitely is even more so. Particularly when, as this morning, culture secretary Oliver Dowden tries to apply spin: "This is not about a vaccine passport, this is about looking at ways of proving that you are covid secure." Rebranding as "covid certificates" changes nothing.

Privacy advocates and human rights NGOs saw this coming. In December, Privacy International warned that a data grab in the guise of immunity passports will undermine trust and confidence while they're most needed. "Until everyone has access to an effective vaccine, any system requiring a passport for entry or service will be unfair." We are a long, long way from that universal access and likely to remain so; today's vaccines will have to be updated, perhaps as soon as September. There is substantial, but not enough, parliamentary opposition.

A grassroots Labour discussion Wednesday night showed this will become yet another highly polarized debate. Opponents and proponents combine issues of freedom, safety, medical efficacy, and public health in unpredictable ways. Many wanted safety - "You have no civil liberties if you are dead," one person said; others foresaw segregation, discrimination, and exclusion; still others cited British norms in opposing making compulsory either vaccinations or carrying any sort of "papers" (including phone apps).

Aside from some specific use cases - international travel, a narrow range of jobs - vaccination passports in daily life are a bad idea medically, logistically, economically, ethically, and functionally. Proponents' concerns can be met in better - and fairer - ways.

The Independent SAGE advisory group, especially Susan Michie, has warned repeatedly that vaccination passports are not a good solution for daily life. The added pressure to accept vaccination will increase distrust, she says, particularly among victims of structural racism.

Instead of trying to identify which people are safe, she argues that the government should be guiding employers, businesses, schools, shops, and entertainment venues to make their premises safer - see for example the CDC's advice on ventilation and list of tools. Doing so would not only help prevent the spread of covid and keep *everyone* safe but also help prevent the spread of flu and other pathogens. Vaccination passports won't do any of that. "It again puts the burden on individuals instead of spaces," she said last night in the Labour discussion. More important, high-risk individuals and those who can't be vaccinated will be better protected by safer spaces than by documentation.

In the same discussion, Big Brother Watch's Silkie Carlo predicted that it won't make sense to have vaccination passports and then use them in only a few places. "It will be a huge infrastructure with checkpoints everywhere," she predicted, calling it "one of the civil liberties threats of all time" and "medical apartheid" and imagining two segregated lines of entry to every venue. While her vision is dramatic, parts of it don't go far enough: imagine when this all merges with systems already in place to bar access to "bad people". Carlo may sound unduly paranoid, but it's also true that for decades successive British governments at every decision point have chosen the surveillance path.

We have good reason to be suspicious of this government's motives. Throughout the last year, Johnson has been looking for a magic bullet that will fix everything. First it was contact tracing apps (failed through irrelevance), then test and trace (failing in the absence of "and isolate and support"), now vaccinations. Other than vaccinations, which have gone well because the rollout was given to the NHS, these failed high-tech approaches have handed vast sums of public money to private contractors. If by "vaccination certificates" the government means the cards the NHS gives fully-vaccinated individuals listing the shots they've had, the dates, and the manufacturer and lot number, well, fine. Those are useful for the rare situations where proof is really needed and for our own information in case of future issues; they're simple and not particularly expensive. If the government means a biometric database system that, as Michie says, individualizes the risk while relieving venues of responsibility, just no.

Illustrations: The Swiss Cheese Respiratory Virus Defence, created by virologist Ian McKay.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 26, 2021

Curating the curators

One of the longest-running conflicts on the Internet surrounds whether and what restrictions should be applied to the content people post. These days, those rules are known as "platform governance", and this week saw the first conference by that name. In the background, three of the big four CEOs returned to Congress for more questioning; the EU is planning the Digital Services Act; the US looks serious about antitrust action; debate about revising Section 230 of the Communications Decency Act continues even though few understand what it does; and the UK continues to push "online harms".

The most interesting thing about the Platform Governance conference is how narrow it makes those debates look. The second-most interesting thing: it was not a law conference!

For one thing, which platforms? Twitter may be the most-studied, partly because journalists and academics use it themselves and data is more available; YouTube, Facebook, and subsidiaries WhatsApp and Instagram are the most complained-about. The discussion here included not only those three but less "platformy" things like Reddit, Tumblr, Amazon's livestreaming subsidiary Twitch, games, Roblox, India's ShareChat, labor platforms UpWork and Fiverr, edX, and even VPN apps. It's unlikely that the problems of Facebook, YouTube, and Twitter that governments obsess over are limited to them; they're just the most visible and, especially, the most *here*. Granting differences in local culture, business model, purpose, and platform design, human behavior doesn't vary that much.

For example, Jenny Domino reminded - again - that the behaviors now sparking debates in the West are not new or unique to this part of the world. What most agree *almost* happened in the US on January 6 *actually* happened in Myanmar with far less scrutiny despite a 2018 UN fact-finding mission that highlighted Facebook's role in spreading hate. We've heard this sort of story before, regarding Cambridge Analytica. In Myanmar and, as Sandeep Mertia said, India, the Internet of the 1990s never existed. Facebook is the only "Internet". Mertia's "next billion users" won't use email or the web; they'll go straight to WhatsApp or a local or newer equivalent, and stay there.

Mehitabel Glenhaber, whose focus was Twitch, used it to illustrate another way our usual discussions are too limited: "Moderation can escape all up and down the stack," she said. Near the bottom of the "stack" of layers of service, after the January 6 Capitol invasion Amazon denied hosting services to the right-wing chat app Parler; higher up the stack, Apple and Google removed Parler's app from their app stores. On Twitch, Glenhaber found a conflict between the site's moderation decision and the handling of that decision by two browser extensions that replace text with graphics, one of which honored the site's ruling and one of which overturned it. I had never thought of ad blockers as content moderators before, but of course they are, and few of us examine them in detail.

Separately, in a recent lecture on the impact of low-cost technical infrastructure, Cambridge security engineer Ross Anderson also brought up the importance of the power to exclude. Most often, he said, social exclusion matters more than technical; taking out a scammer's email address and disrupting all their social network is more effective than taking down their more easily-replaced website. If we look at misinformation as a form of cybersecurity challenge - as we should - that's an important principle.

One recurring frustration is our general lack of access to the insider view of what's actually happening. Alice Marwick is finding from interviews that members of Trust and Safety teams at various companies have a better and broader view of online abuse than even those who experience it. Their data suggests that rather than being gender-specific, harassment affects all groups of people; in niche groups the forms disagreements take can be obscure to outsiders. Most important, each platform's affordances are different; you cannot generalize from a peer-to-peer site like Facebook or Twitter to Twitch or YouTube, where the site's relationships are less equal and more creator-fan.

A final limitation in how we think about platforms and abuse is that the options are so limited: a user is banned or not, content stays up or is taken down. We never think, Sarita Schoenebeck said, about other mechanisms or alternatives to criminal justice such as reparative or restorative justice. "Who has been harmed?" she asked. "What do they need? Whose obligation is it to meet that need?" And, she added later, who is in power in platform governance, and what harms have they overlooked and how?

In considering that sort of issue, Bharath Ganesh found three separate logics in his tour through platform racism and the governance of extremism: platform, social media, and free speech. Mark Zuckerberg offers a prime example of the latter, the Silicon Valley libertarian insistence that the marketplace of ideas will solve any problems and that sees the First Amendment freedom of expression as an absolute right, not one that must be balanced against others - such as "freedom from fear". Following the end of the conference by watching the end of yesterday's Congressional hearings, you couldn't help thinking about that as Zuckerberg embarked on yet another pile of self-serving "Congressman..." preambles rather than the simple "yes or no" he was asked to deliver.


Illustrations: Mark Zuckerberg, testifying in Congress on March 25, 2021.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 19, 2021

Dystopian non-fiction

How dumb do you have to be to spend decades watching movies and reading books about science fiction dystopias with perfect surveillance and then go on and build one anyway?

*This* dumb, apparently, because that's what Shalini Kantayya discovers in her documentary Coded Bias, which premiered at the 2020 Sundance Film Festival. I had missed it until European Digital Rights (EDRi) arranged a streaming this week.

The movie deserves the attention paid to The Social Dilemma. Consider the cast Kantayya has assembled: "math babe" Cathy O'Neil, data journalism professor Meredith Broussard, sociologist Zeynep Tufekci, Big Brother Watch executive director Silkie Carlo, human rights lawyer Ravi Naik, Virginia Eubanks, futurist Amy Webb, and "code poet" Joy Buolamwini, who is the film's main protagonist and provides its storyline, such as it is. This film wastes no time on technology industry mea non-culpas, opting instead to hear from people who together have written a year's worth of reading on how modern AI disassembles people into piles of data.

The movie is framed by Buolamwini's journey, which begins in her office at MIT. At nine, she saw a presentation on TV from MIT's Media Lab, and, entranced by Cynthia Breazeal's Kismet robot, she instantly decided: she was going to be a robotics engineer and she was going to MIT.

When she eventually arrived, she says, she imagined that coding was detached from the world - until she started building the Aspire Mirror and had to get a facial detection system working. At that point, she discovered that none of the computer vision tracking worked very well...until she put on a white mask. She started examining the datasets used to train the facial algorithms and found that every system she tried showed the same results: top marks for light-skinned men, inferior results for everyone else, especially the "highly melanated".
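
Conceptually, the audit that grew out of that discovery is simple: run the same classifier over benchmark images labeled by subgroup and compare error rates. A minimal sketch - the numbers below are my own illustrative placeholders, not Buolamwini's actual benchmark results:

```java
// A toy per-group error-rate audit. Counts are invented for illustration;
// see Buolamwini's published work for the real methodology and data.
import java.util.Map;

public class BiasAuditSketch {
    public static void main(String[] args) {
        // subgroup -> {misclassified, total} (hypothetical counts)
        Map<String, int[]> results = Map.of(
            "lighter-skinned men",  new int[]{3, 385},
            "darker-skinned women", new int[]{93, 271});
        results.forEach((group, r) ->
            System.out.printf("%-22s error rate: %.1f%%%n",
                group, 100.0 * r[0] / r[1]));
    }
}
```

The arithmetic is trivial; what mattered was disaggregating accuracy by skin type and gender at all, something vendors had not publicly done before shipping their systems.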

Teaming up with Deborah Raji, in 2018 Buolamwini published a study (PDF) of racial and gender bias in Amazon's Rekognition system, then being trialed with law enforcement. The company's response leads to a cameo, in which Buolamwini chats with Timnit Gebru about the methods technology companies use to discredit critics. Poignantly, today's viewers know that Gebru, then still at Google, was only months away from becoming the target of exactly that behavior, fired over her own critical research on the state of AI.

Buolamwini's work leads Kantayya into an exploration of both algorithmic bias generally, and the uncontrolled spread of facial recognition in particular. For the first, Kantayya surveys scoring in recruitment, mortgage lending, and health care, and visits the history of discrimination in South Africa. Useful background is provided by O'Neil, whose Weapons of Math Destruction is a must-read on opaque scoring, and Broussard, whose Artificial Unintelligence deplores the math-based narrow conception of "intelligence" that began at Dartmouth in 1956, an arrogance she discusses with Kantayya on YouTube.

For the second, a US unit visits Brooklyn's Atlantic Plaza Towers complex, where the facial recognition access control system issues warnings for tiny infractions. A London unit films the Oxford Circus pilot of live facial recognition that led Carlo, with Naik's assistance, to issue a legal challenge in 2018. Here again the known future intervenes: after the pandemic stopped such deployments, BBW ended the challenge and shifted to campaigning for a legislative ban.

Inevitably, HAL appears to remind us of what evil computers look like, along with a red "I'm an algorithm" blob with a British female voice that tries to sound chilling.

But HAL's goals were straightforward: it wanted its humans dead. The motives behind today's algorithms are opaque. Amy Webb, whose book The Big Nine profiles the nine companies - six American, three Chinese - who are driving today's AI, highlights the comparison with China, where the government transparently tells citizens that social credit is always watching and bad behavior will attract penalties for your friends and family as well as for you personally. In the US, by contrast, everyone is being scored all the time by both government and corporations, but no one is remotely transparent about it.

For Buolamwini, the movie ends in triumph. She founds the Algorithmic Justice League and testifies in Congress, where she is quizzed by Alexandria Ocasio-Cortez (D-NY) and Jamie Raskin (D-MD), who looks shocked to learn that Facebook has patented a system for recognizing and scoring individuals in retail stores. Then she watches as facial recognition is banned in San Francisco, Somerville, Massachusetts, and Oakland, and the electronic system is removed from the Brooklyn apartment block - for now.

Earlier, however, Eubanks, author of Automating Inequality, issued a warning that seems prescient now, when the coronavirus has exposed all our inequities and social fractures. When people cite William Gibson's "The future is already here - it's just not evenly distributed", she says, they typically mean that new tools spread from rich to poor. "But what I've found is the absolute reverse, which is that the most punitive, most invasive, most surveillance-focused tools that we have, they go into poor and working communities first." Then they get ported out, if they work, to those of us with higher expectations that we have rights. By then, it may be too late to fight back.

See this movie!


Illustrations: Joy Buolamwini, in Coded Bias.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 5, 2021

Voter suppression in action

The clowder of legislation to restrict voting access that's popping up across the US is casting the last 20 years of debate over online voting in a new light.

For anyone who, like me, has never spent more than a few minutes casting their vote, the scenes from the 2020 US election were astounding. In response to a photo of a six-*hour* line of waiting voters, someone on Twitter observed, "That is democracy in action." Almost immediately a riposte: "That is voter suppression in action."

I had no idea of the tactics of voter suppression until the 2008 Computers, Freedom, and Privacy conference, when Lillie Coney led a panel on updates to deceptive election practices. Among those Coney and Tova Wang listed were robocalls advising Democrats and Republicans to vote on different days (one the real election day, one not) or saying that the polling location had changed, and letters sent to voters with Latino names threatening deportation if they voted illegally. Crude tactics, but effective, especially among new voters. Coney and Wang imagined these shifting to much better-targeted email and phony websites. It was too soon for anyone to spot two-year-old Facebook as the eventual vector.

By 2020, voter suppression was much more blatant. Republicans planted fake drop boxes in California; Texas selectively closed polling places, especially those in central locations easily accessed by public transport; and everywhere Donald Trump insisted that mail-in ballots meant fraud. Nonetheless, even Fox News admitted that the 2020 election was the most secure in US history and there's no evidence of fraud in any jurisdiction. The ability to audit and recount, not just read a number off an electronic counter, is crucial to being able to say this.

It now appears that this election was just a warm-up. The Brennan Center is currently tracking 253 bills that restrict voting access in 43 states, and 704 bills with provisions to expand it in a different set of 43 states. Sometimes both approaches coexist in the same bill. Outside the scope of legislation, later this year congressional districts will be redrawn based on the 2020 census, another process that can be gamed. At the federal level, Democrats are pushing the passage of H.R.1, the For the People Act, to reform many aspects of the US electoral system including financing, districting, and ethics. One section of the bill provides grants to update voting systems, creates security requirements for private companies that sell voting machines and election equipment, and requires those companies to report cybersecurity incidents. Citizens for Ethics supplies the sources of the ideas enshrined in the act. For even more, see Democracy Docket, whose founder, Marc Elias, has been fighting the legal cases with a remarkable record of success. Ensuring fairness is not specifically about Republicans; historically both parties have gamed the system to hang onto power when they've had the chance.

Ever since 1999, when Bill Clinton asked the National Science Foundation to look into online voting, the stated reasons have always *sounded* reasonable - basically, to increase turnout by improving convenience. In the UK, this argument was taken up by the now-defunct organization Webroots Democracy, which argued that it could improve access for younger people used to doing everything on their phones, and would especially grant better access for groups such as visually impaired people who are not well provided for under the present system. These problems still need to be solved.

The reasons against adopting online voting haven't changed since 2000, when Rebecca Mercuri first outlined the security problems. In the UK very little has changed since 2007, when a couple of pilots led the Electoral Commission to advise against pursuing the idea for sound reasons. Tl;dr: computer scientists prefer pencils.

In 2016, to celebrate its second anniversary, Webroots founder Areeq Chowdhury said national adoption in the UK was achievable by the "next general election", then expected in 2020. He had some reason to believe this; in 2015 then Speaker of the House John Bercow suggested online voting should be used for the 2020 election. But, oh, timing! Chowdhury could have no idea that a month after that Webroots meeting the UK was going to vote (using paper and pencils) to leave the EU. In the resulting change in the political climate, two general elections have passed, in 2017 and 2019, both conducted using pencils and paper. So will May's delayed London mayoral election. The government's 2019 plan to bring in mandatory photographic voter ID by 2023 will diminish, not increase, access.

In the US, only 55.7% of eligible voters participated in the 2016 election, and the turnout for congressional primaries can be as low as 11%. Again, time changed everything: between 2000 and 2016 it seemed as though turnout would go on dropping. Then came 2020. Loving or hating incumbent Donald Trump broke records: 66.3% of eligible voters cast ballots, the highest percentage since 1900. That result bears out what many have said: turnout depends on voters believing that their vote matters.

The aggregate picture suggests that the appeal of online voting may have been to encourage the kinds of voters politicians wanted at a time when it was mostly younger, affluent, and educated people who had smartphones and Internet access. Follow the self-interest.


Illustrations: Officials recount a ballot in the narrow Bush-Gore 2000 election.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 12, 2021

The spirit of Mother Jones

This week a commenter on one of the mailing lists I follow asked, perhaps somewhat plaintively, why, after watching 20 years of attempts to organize Silicon Valley workers that have led nowhere, suddenly the push of workers at Big Tech to unionize seems to be gaining traction. "What has changed?"

Well, for one thing, the existence of a history of 20 years of attempts to organize tech workers - which could be the nearly-flat portion of the famous venture capital hockey stick - is by itself a profound change. "Why is she running when she has no chance?" people asked about Shirley Chisholm in 1972. Her campaign opened minds for Hillary Clinton and Kamala Harris, VPOTUS.

The next month should give a solid indication of whether tech worker unions' moment is now. It very well might be. The same trend toward unaccountable power that has led the US Congress and many other countries to scrutinize the practices of the big platforms is surely felt even more by their employees. It shouldn't be a surprise; when you recruit people with the promise that they can improve the lives of millions of people, you should expect them to be angry when they realize their efforts are being used to cause worldwide damage, especially when they see that little progress has been made on long-standing complaints such as the lack of diversity surrounding them.

One reason today's unionizing moves may come as a surprise is that the image of the tech worker has remained stuck on highly-compensated programmers and engineers and the perks, stock options, and salaries they receive. And yet, in 2014, Silicon Valley software engineers discovered that they, too, were just workers to their employers, who were limiting their career prospects via a no-poaching agreement in which Apple, Google, Intel, Dell, IBM, Pixar, Lucasfilm, Intuit, and dozens of other companies agreed not to recruit from each other's workforce. The result was to depress compensation across the board for millions of engineers and programmers.

And these are the high-caste workers; for years "lower-class" occupations have been filled at many companies by workers under all sorts of arrangements designed to keep them from being classed as employees to whom the company would owe medical insurance, paid leave, and other hard-won benefits. In 2018, Microsoft bug testers cited the Republican environment in Washington as the reason they gave up on a successful unionizing effort that had won them the right to negotiate directly with their temp agency. More recently, Uber and Lyft drivers have demanded employee status in numerous countries.

At Google, temporary, vendor, and contract workers, the majority of the workforce, have complained of being invisible. In November 2018, after the New York Times reported that the company had given seven-figure payouts to two executives accused of sexual harassment, 20,000 of these workers walked out demanding transparency, accountability, and structural change. Google's response was apparently enough to get them back to work at the time.

However, in December 2020, the National Labor Relations Board filed a complaint on behalf of two employees who said they were fired for their organizing efforts. Last month, hundreds of Google workers created the Alphabet Workers' Union, open to both full-time and contract workers. This union won't be formally recognized for collective bargaining, but will use other means to push for change. More than 200 of its members have signed on with the Communications Workers of America.

In an op-ed in the New York Times, software engineers Parul Koul and Chewy Shaw, the leaders of the new Alphabet Workers Union, cite that earlier walkout and the recent firing of leading AI researcher Timnit Gebru, as well as the company's general behavior. "Each time workers organize to demand change, Alphabet's executives make token promises, doing the bare minimum in the hopes of placating workers," they write.

The original question was, I think, inspired by the news that voting began Monday at an Amazon fulfillment center in Bessemer, Alabama on whether to unionize. As Lee Fang reports at The Intercept, Amazon has been campaigning against this development, hiring the union-busting law firm Morgan Lewis to mastermind a website, Facebook ads, and mass texts to workers. This is not really comparable to Google's union. The fact that these warehouse staff and delivery drivers work for a technology company is largely irrelevant except for the extra-creepiness of the surveillance Amazon is able to install in its warehouses and delivery vans. The same goes for Apple's retail store staff, whose efforts to organize failed in 2011.

Plus, the overall environment has changed. The pandemic has cast many issues of structural unfairness into sharper relief, and the US's new president has promised to strengthen unions. Add in a generational shift to a group whose bleak present includes burdensome education debt, the climate crisis, and shrinking prospects. Yes, it really might be different now.


Illustrations: Union organizer "Mother" Mary G. Harris Jones, "the most dangerous woman in America", in 1902, (via Wikipedia). The title is a reference to the folksinger Andy Irvine's biographical ode to the Union Maid, The Spirit of Mother Jones.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 22, 2021

In the balance

As the year gets going, two conflicts look like setting precedents for Internet regulation: Australia's push to require platforms to pay license fees for linking to their articles; and Facebook's pending decision whether to make former president Donald Trump's ban permanent, as Twitter already has.

Facebook has referred Trump's case to its new Oversight Board and asked it to make policy recommendations for political leaders. The Board says it will consider whether Trump's content violated Facebook community standards and "values", and whether its removal respected human rights standards. It expects to report within 90 days; the decision will be binding on Facebook.

On Twitter, Kate Klonick, an assistant professor at St. John's University School of Law, who has been following the Oversight Board's creation and development in detail, says the important aspect is not the inevitably polarizing decision itself, but the creation of what she hopes will be a "transparent global process to adjudicate these human rights issues of speech". In a Yale Law Journal article documenting the board's history so far, she suggests that it could set a precedent for collaborative governance of private platforms.

Or - and this seems more likely - it could become the place where Facebook dumps the controversial cases where making its own decision gains the company nothing. Trump is arguably one of these. No matter how much money Trump's presidential campaign (which seems unlikely to have any future) netted the company, it surely must be a drop in the ocean of its overall revenues. With antitrust suits pending and a politically controversial decision, why *wouldn't* Facebook want to hand it off? Would the company do the same in a case where the company's business model was at stake, though? If it does and the decision goes against Facebook's immediate business interests, will shareholders sue?

Those questions won't be answered for some years. Meanwhile, this initial case will be a milestone in Internet history, as Klonick says. If the board does not create durable principles that can be applied across other countries and political systems, it will have failed. The larger question, however, which is the circulation of deliberate lies and misinformation, is more complex.

For that, letters sent this week by US Congress members Anna Eshoo (D-CA) and Tom Malinowski (D-NJ) may be more germane: they have asked the CEOs of Facebook, Google, YouTube, and Twitter to alter their algorithms to stop promoting conspiracy theories at scale. Facebook has been able to ignore previous complaints it was inciting violence in markets less essential to its bottom line and of less personal significance.

The Australian case is smaller, and kind of a rerun, but still interesting. We noted in September that the Australian government had announced the draft News Media Bargaining Code, a law requiring Google and Facebook (to start with) to negotiate license fees for displaying snippets of news articles. By including YouTube, user postings, and search engine results, Australia hoped to ensure the companies could not avoid the law by shutting down, which was what happened in 2014 when Spain enacted a similar law that caught only Google News. Early reports indicated that its withdrawal resulted in a dramatic loss of traffic to publishers' sites.

However, by 2015, Spain's Association of Newspaper Editors was saying members were reporting just a 12% loss of traffic, and a 2019 assessment argues that in fact the closure (which persists) made little long-term difference to publishers. If this is true, it's unarguably better for publishers not to be dependent on a third-party company to send them traffic out of the goodness of their hearts. The more likely underlying reality, however, is that people have learned to use generic search engines and social media to find news stories - in which case the Australian law could still be damaging to publishers' revenues.

It is, as journalist Michael West points out, exceptionally difficult to tease out what portion of Google's or Facebook's revenues are attributable to news content. West argues that a better solution to those companies' rise is regulating their power and taxing them appropriately; neither Google nor Facebook is in the business of reporting the news, and they are not in direct competition with the traditional publishers - the biggest of which, in Australia, are owned by Rupert Murdoch and so filled with climate change denial that Murdoch's own son left the company because of it.

In December, Google and Facebook won a compromise that will allow Google to include in the negotiations the value it brings in the form of traffic; limit the data it has to share with publishers; and lower the requirement for platforms to share algorithm changes with the publishers. Prediction: the publishers aren't going to wind up getting much out of this.

For the rest of us, though, the notion that users could be stopped from sharing news links (as Facebook is threatening) should be alarming; open, royalty-free linking, as web inventor Tim Berners-Lee told Bloomberg, is the fundamental characteristic of the web. We take the web so much for granted now that it's easy to forget that the biggest decision Berners-Lee made, with the backing of his employers at CERN, was to make it open instead of proprietary. The Australian law is the latest attempt to modify that decision. I wish I could say it will never catch on.

Illustrations: Justitia outside the Delft Town Hall, the Netherlands (via Dennis Jarvis at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 15, 2021

One thousand

In many ways, this 1,000th net.wars column is much like the first (the count is somewhat artificial: net.wars began as a 1998 book, itself presaged by four years of news analysis pieces for the Daily Telegraph, and was followed by another book in 2001...and a lot of my other writing also fits under "computers, freedom, and privacy"; *however*). That November 2001 column was sparked by former Home Office minister Jack Straw's smug assertion that after 9/11 those of us who had defended access to strong cryptography must be feeling "naive". Here, just over a week after the Capitol invasion, three long-running issues are pertinent: censorship; security and the intelligence failures that enabled the attack; and human rights when demands for increased surveillance capabilities surface, as they surely will.

Censorship first. The US First Amendment only applies to US governments (a point that apparently requires repeating). Under US law, private companies can impose their own terms of service. Most people expected Twitter would suspend Donald Trump's account approximately one second after he ceased being a world leader. Trump's incitement of the invasion moved that up, and led Facebook, including its subsidiaries Instagram and WhatsApp, Snapchat, and, a week after the others, YouTube to follow suit. Less noticeably, a Salesforce-owned email marketing company ceased distributing emails from the Republican National Committee.

None of these social media sites is a "public square", especially outside the US, where they've often ignored local concerns. They are effectively shopping malls, and ejecting Trump is the same as throwing out any other troll. Trump's special status kept him active when many others were unjustly banned, but ultimately the most we can demand from these services is clearly stated rules, fairly and impartially enforced. This is a tough proposition, especially when you are dependent on social media-driven engagement.

Last week's insurrection was planned on numerous openly accessible sites, many of which are still live. After Twitter suspended 70,000 accounts linked to QAnon, numerous Republicans complaining they had lost followers seemed to be heading to Parler, a relatively new and rising alt-right Twitterish site backed by Rebekah Mercer, among others. Moving elsewhere is an obvious outcome of these bans, but in this crisis short-term disruption may be helpful. The cost will be longer-term adoption of channels that are harder to monitor.

By January 9 Apple was removing Parler from the App Store, followed quickly by Google Play (albeit less comprehensively, since Android allows side-loading). Amazon then kicked Parler off its host, Amazon Web Services. It is unknown when, if ever, the site will return.

Parler promptly sued Amazon claiming an antitrust violation. AWS retaliated with a crisp brief that detailed examples of the kinds of comments the site felt it was under no obligation to host and noted previous warnings.

Whether or not you think Parler should be squashed - stipulating that the imminent inauguration requires an emergency response - three large Silicon Valley platforms have combined to destroy a social media company. This is, as Jillian C. York, Corynne McSherry, and Danny O'Brien write at EFF, a more serious issue. The "free speech stack", they write, requires the cooperation of numerous layers of service providers and other companies. Twitter's decision to ban one - or 70,000 - accounts has limited impact; companies lower down the stack can ban whole populations. If you were disturbed in 2010, when, shortly after the diplomatic cables release, PayPal effectively defunded WikiLeaks after Amazon booted it off its servers, then you should be disturbed now. These decisions are made at obscure layers of the Internet where we have little influence. As the Internet continues to centralize, we do not want just these few oligarchs making these globally significant decisions.

Security. Previous attacks - 9/11 in particular - led to profound damage to the sense of ownership with which people regard their cities. In the UK, the early 1990s saw the ease of walking into an office building vanish, replaced by demands for identification and appointments. The same happened in New York and some other US cities after 9/11. Meanwhile, CCTV monitoring proliferated. Within months of 9/11, the US had passed the PATRIOT Act, and the UK had put in place a series of expansions to surveillance powers.

Currently, residents report that Washington, DC is filled with troops and fences. Clearly, it can't stay that way permanently. But DC is highly unlikely to return to the openness of just ten days ago. There will be profound and permanent changes, starting with decreased access to government buildings. This will be Trump's most visible legacy.

Which leads to human rights. Among the videos of insurrectionists shocked to discover that the laws do apply to them were several in which prospective airline passengers discovered they'd been placed preemptively on the controversial no-fly list. Many others who congregated at the Capitol were on a (separate) terrorism watch list. If the post-9/11 period is any guide, the fact that the security agencies failed to connect any of the dots available to them into actionable intelligence will be elided in favor of insisting that they need more surveillance powers. Just remember: eventually, those powers will be used to surveil all the wrong people.


Illustrations: net.wars, the book at the beginning.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 7, 2021

The most dangerous game

Screenshot from 2021-01-07 13-17-20.pngThe chaos is the point.

Among all the things to note about Wednesday's four-hour occupation of the US Capitol Building - the astoundingly ineffective blue line of police, the attacks on journalists, the haphazard mix of US, Trump, Confederate, and Nazi costumes and flags, the chilling scenes in a hotel lobby - is this: no one seemed very clear about the plan. In accounts and images, once inside, some of the mob snap pictures, go "oh, look! Emails!", and grab mementos like dangerous and destructive tourists. Let's not glorify them and their fantasies of heroism; they are vandals, they are criminals, they are incipient felons, they are thugs. They are certainly not patriots.

One reason, of course, is that their leader, having urged them to storm the Capitol, went home to his protective Secret Service and the warmth of watching the wreckage on TV inside one of the most secure buildings on the planet. Trump is notoriously petty and vengeful against anyone who has crossed him. Why wouldn't he push the grievance-filled conspiracy theorists whose anger he harnessed for personal gain to destroy the country that dared to reject him? The festering anger that Trump's street-bully smarts (and those of his detonator, Roger Stone) correctly spotted as a political opportunity was perfectly poised for Trump's favorite chaos creation game: "Let's you and him fight".

"We love you," and "You are very special," Trump told the rioters to close out the video clip he issued to tell them to go home, as if this were a Hollywood movie and with a bit of sprinkled praise his special effects crew could cage the Kraken until he next wanted it.

The someday child studying this period in history class will marvel at our willful blindness to openly fomented white violence while maximum deterrence was applied to Black Lives Matter.

Our greatest ire should be reserved for the cynically exploitative, opportunistic Trump; for supporting senators Josh Hawley (R-MO) and Ted Cruz (R-TX), who George F. Will says will permanently wear a scarlet "S" for "seditionist"; and for Trump's many other politicians and enablers who consciously lied, a list to which Marcy Wheeler adds senator Tommy Tuberville (R-AL). It's fashionable to despise former Trump fixer-lawyer Michael Cohen, but we should listen to him; his book, Disloyal, is an addict's fourth and fifth steps (moral inventory and admitting wrongs) that unflinchingly lays bare his collaboration in Trump's bullying exploitation.

The invasion perversely hastened Biden/Harris's final anointing; Republicans dropped most challenges in the interests of Constitutional honor (read: survival). Mitch McConnell (R-KY), who as Senate Majority Leader has personally made governance impossible, sounded like a man abruptly defibrillated into sanity, and Senator Lindsey Graham's (R-SC) careening wait-for-his-laugh "That's it! I'm done!" speech led some on Twitter to surmise he was drunk. Only Hawley (R-MO), earlier seen fist-pumping the rioters-in-waiting, seemed undeterred.

High-level Trump administration members - those who can afford health insurance - are fleeing. Apparently we have finally found the line they won't cross, though it may not be the violence but the prospect of having to vote on invoking the 25th Amendment.

An under-discussed aspect of the gap between politics - Beltway or Westminster - and life as ordinary people know it is that for many politicians and media, making preposterous claims they don't really believe is a game. Playing exhibitionist contrarian for provocation is a staple of British journalism. Boris Johnson famously wrote pre-referendum columns arguing both Leave and Remain before choosing the personal opportunities Leave offered. They appear to care little for the consequences, measured in covid deaths, food bank use, deportations, and shattered lives.

All these posturers score against each other from comfortable berths and comfortably assume they are beyond repercussions. It's the same dynamic as the one at work among the advocates of letting the virus rip through the population at large, as if infection is for the little people and our desperately overstressed, traumatized health care workers are replaceable parts rather than a precious resource.

Perhaps the most extraordinary aspect is that this entire thing was planned out in the open. There was no need to backdoor encryption. They had merch; Trump repeatedly tweeted his intentions; planning was on public forums. In September, the Department of Homeland Security warned that white supremacy is the "most lethal threat" to the US. On Tuesday, Bellingcat warned that a dangerous meld of numerous right-wing constituencies was setting out for DC. Talia Lavin's 2020 book, Culture Warlords, thoroughly documented the online hate growing into real-world violence.

Wednesday also saw myriad mostly peaceful statehouse protests: Texas, Utah, Michigan, California, Oregon, Arizona, Arkansas, Kansas, Wisconsin, Nevada (with a second protest in Las Vegas), Florida, and Georgia. Pause to remember Wednesday's opener: Democrats Jon Ossoff and Raphael Warnock won Georgia's Senate seats.

Trump has 12 more days. Twitter and Facebook, which CNN reporter Donie O'Sullivan calls complicit, have locked Trump's accounts; Shopify has closed his shops. The far-right forums are considering the results while the FBI makes arrests and Biden builds his administration.

The someday child will know the next part faster than we will.


Illustrations: Screenshot of Wednesday's riot in progress.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 11, 2020

Facebook in review

parliament-whereszuck.jpgLed by New York attorney general Letitia James, this week 46 US states, plus Guam and Washington, DC, and, separately, the Federal Trade Commission filed suits against Facebook alleging that it has maintained an illegal monopoly while simultaneously reducing privacy protections and services to boost its bottom line. The four missing states: Alabama, Georgia, South Carolina, and South Dakota.

As they say, we've had this date from the beginning.

It's seemed likely for months that legal action against Facebook was on the way. There were the we-mean-business Congressional hearings and the subsequent committee report, followed by the suit against Google the Department of Justice filed in October.

Facebook seems peculiarly deserving. It began in 2004 as a Harvard-only network, using its snob appeal to expand to the other Ivy League schools, then thousands of universities and high schools, and finally the general public. Mass market adoption grew in tandem with the post-2009 explosion of smartphones. By then, Facebook had frequently tweaked its privacy settings and repeatedly annoyed users with new privacy-invasive features in the arrogant (and sadly correct) belief they'd never leave. By 2010, Zuckerberg was claiming that "privacy is no longer a social norm", adding that were he starting then he would make everything public by default, like Twitter.

It's hard to pick Facebook's creepiest moments out of so many, but here are a few: in 2011 it began auto-recognizing user photographs, in 2012 it dallied with in-network "democracy" - a forerunner of today's unsatisfactory oversight board - and in 2014 it tested emotionally manipulating its users.

In 2011, based on the rise and fall of earlier services like CompuServe, AOL, Geocities, LiveJournal, and MySpace - you can practically carbon-date people by their choice of social media - some of us wrongly surmised that perhaps Facebook had peaked. "The [online] party keeps moving" is certainly true; what was different was that Zuckerberg knew it and launched his program of aggressive and defensive acquisitions.

The 2012 $1 billion acquisition of Instagram and 2014 $19 billion purchase of WhatsApp are the heart of the suits. The lawsuits suggest that without Facebook's intervention we'd have social media successfully competing on privacy. In his summary, Matt Stoller credits this idea to Dina Srinivasan, who argued in 2019 that Facebook saw off then-dominant MySpace by presenting itself as "privacy-centered" at a time when the press was claiming that MySpace's openness made it unsafe for children. Once in pole position, Facebook began gradually pushing greater openness on its users - bait and switch, I called it in 2010.

I'm less convinced that MySpace's continued existence could have curbed Facebook's privacy invasion. In 2004, the year of Facebook's birth, Australian privacy activist Roger Clarke surveyed the earliest social networks - chiefly Plaxo - and predicted that all social networks would inevitably exploit their users. "The only logical business model is the value of consumers' data," he told me for the Independent (TXT). I think, therefore, that the privacy-destructive race to the bottom of the business model was inevitable given the US's regulatory desert. Google began heading that way soon after its 2004 IPO; by 2006 privacy advocates were already warning of its danger.

Srinivasan details Facebook's progressive privacy invasion: the cooption of millions of third-party sites, via logins and the Like button, to propagandize its service and to collect and leverage vast amounts of personal data, while it became a vector for the unscrupulous to hack elections. This is all without considering non-US issues such as Free Basics, which has made Facebook effectively the only Internet service in parts of the world. Facebook also had Silicon Valley's venture capital ethos at its back, as well as a share structure that awards Zuckerberg full and permanent control.

In a useful paper on nascent competitors, Tim Wu and C. Scott Hemphill discuss how to spot anticompetitive acquisitions. As I recall, though, many - notably the ever-prescient Jeff Chester - protested the WhatsApp and Instagram acquisitions at the time; the EU only agreed because Facebook promised not to merge the user databases, and issued a €110 million fine when it realized the company lied. Last year Facebook announced it would merge the databases, which critics saw as a preemptive move to block a potential breakup. Allowing the mergers to go ahead seems less dumb, however, if you remember that it took until 2017 and Lina Khan to realize that the era of two guys in a garage up-ending entrenched monopolists was over.

The suits ask the court to find Facebook guilty under Section 2 of the Sherman Act (which is a felony) and Section 7 of the Clayton Act, block it from making further acquisitions valued at $10 million or above, and require it to divest or restructure illegally acquired companies or current Facebook assets or business lines. Restoring some competition to the Internet ecosystem in general and social media in particular seems within reach of this action - though there are many other cases that also need attention. It won't be enough to fix the damage to democracy and privacy, but perhaps the change in attitude it represents will ensure the next Facebook doesn't become a monster.


Illustrations: Mark Zuckerberg's empty chair at last year's Grand Committee hearing.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 4, 2020

Scraped

Somehow I had missed the hiQ Labs v. LinkedIn case until this week, when I struggled to explain on Twitter why condemning web scraping is a mistake. Over the years, many have made similar arguments to ban ordinary security tools and techniques because they may also be abused. The usual real-world analogy: we don't ban cars just because criminals can use them to escape.

The basics: hiQ, which styles itself as a "talent management company", used automated bots to scrape public LinkedIn profiles and analyze them into a service advising companies what training they should invest in or which employees might be on the verge of leaving. All together now: *so* creepy! LinkedIn objected that the practice violates its terms of service and harms its business. In response, hiQ accused LinkedIn of purely anti-competitive motives, and claimed it only objected now because it was planning its own version.

LinkedIn wanted the court to rule that hiQ's scraping its profiles constitutes felony hacking under the Computer Fraud and Abuse Act (1986). Meanwhile, hiQ argued that because the profiles it scraped are public, no "hacking" was involved. EFF, along with DuckDuckGo and the Internet Archive, which both use web scraping as a basic tool, filed an amicus brief arguing correctly that web scraping is a technique in widespread use to support research, journalism, and legitimate business activities. Sure, hiQ's version is automated, but that doesn't make it different in kind.
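For readers who have never seen one, here is a minimal sketch of an automated scraper in Python, using the widely available requests and BeautifulSoup libraries; the URL and page structure are invented for illustration, not taken from any real site. The point is how little code separates "public web page" from "structured database":

# Minimal scraping sketch: fetch a public page, pull out structured
# data. The URL and CSS selectors below are hypothetical.
import requests
from bs4 import BeautifulSoup

def scrape_public_profiles(url):
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    profiles = []
    for card in soup.select("div.profile-card"):  # hypothetical markup
        name = card.select_one("h2")
        title = card.select_one("p.title")
        if name and title:
            profiles.append({"name": name.get_text(strip=True),
                             "title": title.get_text(strip=True)})
    return profiles

print(scrape_public_profiles("https://example.com/directory"))

Run in a loop over thousands of pages, that is the whole of hiQ's "hacking".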

There are two separate issues here. The first is web scraping itself, which, as EFF says, has many valid uses that don't involve social media or personal data. The TrainTimes site, for example, is vastly more accessible than the National Rail site it scrapes and re-presents. Over the last two decades, the same author, Matthew Somerville, has built numerous other such sites that avoid the heavy graphics and scripts that make so many information sites painful to use. He has indeed gotten in trouble for it sometimes; in this example, the Odeon movie theaters objected to his making movie schedules more accessible. (Query: what is anyone going to do with the Odeon movie schedule beyond choosing which ticket to buy?)

As EFF writes in its summary of the case, web scraping has also been used by journalists to investigate racial discrimination on Airbnb and find discriminatory pricing on Amazon; in the early days of the web, civic-minded British geeks used web scraping to make information about Parliament and its debates more accessible. Web scraping should not be illegal!

However, that doesn't mean that all information that can be scraped should be scraped or that all information that can be scraped should be *legal* to scrape. Like so many other basic techniques, web scraping has both good and bad uses. This is where the tricky bit lies.

Intelligence agency personnel these days talk about OSINT - "open source intelligence". "Open source" in this context (not software!) means anything they can find and save, which includes anything posted publicly on social media. Journalists also tend to view anything posted publicly as fair game for quotation and reproduction - just look at the Guardian's live blog any day of the week. Academic ethics require greater care.

There is plenty of abuse-by-scraping. As Olivia Solon reported last year, IBM scraped Flickr users' innocently posted photographs and repurposed them into a database to train facial recognition algorithms, later used by Immigration and Customs Enforcement to identify people to deport. (In June, the protests after George Floyd's murder led IBM to pull back from selling facial recognition "for mass surveillance or racial profiling".) Clearview AI scraped billions of photographs off social media and collated them into a database service to sell to law enforcement. It's safe to say that no one posted their profile on LinkedIn with the intention of helping a third-party company get paid by their employer to spy on them.

Nonetheless, those abuse cases do not make web scraping "hacking" or a crime. They are difficult to rectify in the US because, as noted in last week's review of 30 years of data protection, the US lacks relevant privacy laws. Here in the UK, since the data Somerville was scraping was not personal, his complainants typically argued that he was violating their copyright. The hiQ case, if brought outside the US, would likely be based in data protection law.

In 2019, the Ninth Circuit ruled in favor of hiQ, saying it did not violate CFAA because LinkedIn's servers were publicly accessible. In March, LinkedIn asked the Supreme Court to review the case. SCOTUS could now decide whether scraping publicly accessible data is (or is not) a CFAA violation.

What's wrong in this picture is the complete disregard for the users in the case. As the National Review says, a ruling for hiQ could deprive users of all control over their publicly posted information. So, call a spade a spade: at its heart this case is about whether LinkedIn has an exclusive right to abuse its users' data or whether it has to share that right with any passing company with a scraping bot. The profile data hiQ scraped is public, to be sure, but to claim that opens it up for any and all uses is no more valid than claiming that because this piece is posted publicly it is not copyrighted.


Illustrations: I simply couldn't think of one.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 27, 2020

Data protection in review

Thumbnail image for 2015_Max_Schrems_(17227117226).jpg"A tax on small businesses," a disgusted techie called data protection, circa 1993. The Data Protection Directive became EU law in 1995, and came into force in the UK in 1998.

The narrow data protection story of the last 25 years, like that of copyright, falls into three parts: legislation, government bypasses to facilitate trade, and enforcement. The broader story, however, includes a power struggle between citizens and both public and private sector organizations; a brewing trade war; and the difficulty of balancing conflicting human rights.

Like free software licenses, data protection laws seed themselves across the world by requiring that protection travel with the data: onward transfers are permitted only where adequate safeguards follow. Adopting this approach therefore set the EU on a collision course with the US, where the data-driven economy was already taking shape.

Ironically, privacy law began in the US, with the Fair Credit Reporting Act (1970), which gives Americans the right to view and correct the credit files that determine their life prospects. It was joined by the Privacy Act (1974), which covers personally identifiable information held by federal agencies, and the Electronic Communications Privacy Act (1986), which restricts government wiretaps on transmitted and stored electronic data. Finally, the 1996 Health Insurance Portability and Accountability Act protects health data (with now-exploding exceptions). In other words, the US's consumer protection-based approach leaves huge unregulated swathes of the economy. The EU's approach, by contrast, grew out of the clear historical harms of the Nazis' use of IBM's tabulation software and the Stasi's endemic spying on the population, and regulates data use regardless of sector or actor, minus a few exceptions for member state national security and airline passenger data. Little surprise that the results are not compatible.

In 1999, Simon Davies told me for Scientific American (TXT) that this was impossible to solve: "They still think that because they're American they can cut a deal, even though they've been told by every privacy commissioner in Europe that Safe Harbor is inadequate...They fail to understand that what has happened in Europe is a legal, constitutional thing, and they can no more cut a deal with the Europeans than the Europeans can cut a deal with your First Amendment." In 2000, he looked wrong: the compromise Safe Harbor agreement enabled EU-US data flows.

In 2008, the EU began discussing an update to encompass the vastly changed data ecosystem brought by Facebook, YouTube, and Twitter, the smartphone explosion, new types of personally identifiable information, and the rise and fall of what Andres Guadamuz last year called "peak cyber-utopianism". By early 2013, it appeared that reforms might weaken the law, not strengthen it. Then came Snowden, whose revelations reanimated privacy protection. In 2016, the upgraded General Data Protection Regulation was passed despite a massive opposing lobbying operation. It came into force in 2018, but even now many US sites still block European visitors rather than adapt, because "you are very important to us".

Everyone might have been able to go on pretending the fundamental incompatibility didn't exist but for two things. The first is the 2014 European Court of Justice decision requiring Google to honor "right to be forgotten" requests (aka Costeja). Americans still see Costeja as a terrible abrogation of free speech; Europeans more often see it as a balance between conflicting rights and a curb on the power of large multinational companies to determine your life.

The second is Austrian lawyer Max Schrems. While still a student, Schrems saw that Snowden's revelations utterly up-ended the Safe Harbor agreement. He filed a legal case - and won it, in 2015, just as GDPR was being finalized. The EU and US promptly negotiated a replacement, Privacy Shield. Schrems challenged again. And won again, this year. "There must be no Schrems III!", EU politicians said in September. In other words: some framework must be found to facilitate transfers that passes muster within the law. The US's approach appears to be trying to get data protection and localization laws barred via trade agreements despite domestic opposition. One of the Trump administration's first acts was to require federal agencies to exempt foreigners from Privacy Act protections.

No country is more affected by this than the UK, which as a new non-member can't maintain EU data flows without an adequacy decision and no longer gets the member-state exception for its surveillance regime. This dangerous high-wire moment traps the UK in that EU-US gap.

Last year, I started hearing complaints that "GDPR has failed". The problem, in fact, is enforcement. Schrems took action because the Irish data protection regulator, in pole position because companies like Facebook have sited their European headquarters there, was failing to act. The UK's Information Commissioner's Office was under-resourced from the beginning. This month, the Open Rights Group sued the ICO to force it to act on the systemic breaches of the GDPR it acknowledged in a June 2019 report (PDF) on adtech.

Equally a problem are the emerging limitations of GDPR and consent, which are entirely unsuited to protecting privacy in the onrushing "smart" world in which you are at the mercy of others' Internet of Things. The new masses of data that our cities and infrastructure will generate will need a new approach.


Illustrations: Max Schrems in 2015.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 6, 2020

Crypto in review

Caspar_Bowden-IMG_8994-2013-rama.jpgBy my count, this is net.wars number 990; the first one appeared on November 2, 2001. If you added in its predecessors - net.wars-the-book and its sequel, From Anarchy to Power, as well as the more direct precursors, the news analysis pieces I wrote for the Daily Telegraph between 1997 and early 2001 - you'd get a different number I don't know how to calculate. Therefore: this is net.wars #990, and the run-up to 1,000 seems a good moment to review some durable themes of the last 20 years via what we wrote at the time.

net.wars #1 has, sadly, barely aged; it could almost be published today unchanged. It was a ticked-off response to former Home Secretary Jack Straw, who weeks after the 9/11 attacks told Britain's radio audience that the people who had opposed key escrow were now realizing they'd been naive. We were not! The issue Straw was talking about was the use of strong cryptography, and "key escrow" was the rejected plan to require each individual to deposit a copy of their cryptographic key with a trusted third party. "Trusted", on its surface, meant someone *we* trusted to guard our privacy; in subtext, it meant someone the government trusted to disclose the key when ordered to do so - the digital equivalent of being required to leave a copy of the key to your house with the local police in case they wanted to investigate you. The last half of the 1990s saw an extended public debate that concluded with key escrow being dropped from the final version of the Regulation of Investigatory Powers Act (2000) in favor of requiring individuals to produce cleartext when law enforcement demands it. A 2014 piece for IEEE Security & Privacy explains RIPA and its successors and the communications surveillance framework they've created.
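To make the escrow idea concrete, here is a minimal sketch in Python, using the cryptography library's Fernet recipe; the enrollment function and escrow database are invented for illustration, not anyone's actual design:

# Minimal sketch of the key-escrow idea: every user's secret key is
# deposited with a "trusted" third party, which can decrypt on demand.
from cryptography.fernet import Fernet

escrow_database = {}  # the third party's copy of everyone's keys

def enroll(user):
    key = Fernet.generate_key()
    escrow_database[user] = key  # the mandatory deposit - the whole point
    return key

alice_key = enroll("alice")
message = Fernet(alice_key).encrypt(b"private correspondence")

# With a disclosure order, the authorities read it without Alice's help:
print(Fernet(escrow_database["alice"]).decrypt(message))

The objection is visible in the sketch itself: whoever controls - or compromises - escrow_database can read everything, which is why the plan was fought so hard.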

With RIPA's passage, a lot of us thought the matter was settled. We were so, so wrong. It did go quiet for a decade. Surveillance-related public controversy appeared to shift, first to data retention and then to ID cards, which were proposed soon after the 2005 attacks on London's tube and finally canned in 2010, when the incoming coalition government found a note from the outgoing Treasury chief secretary: "There's no money".

As the world discovered in 2013, when Edward Snowden dropped his revelations of government spying, the security services had taken the crypto debate into their own hands, undermining standards and making backroom access deals. The Internet community reacted quickly, first with advice and then with technical remediation.

In a sense, though, the joke was on us. For many netheads, crypto was a cause in the 1990s; the standard advice was that we should all encrypt all our email so the important stuff wouldn't stand out. To make that a reality, however, crypto software had to be frictionless to use - and the developers of the day were never interested enough in usability to make it so. In 2011, after I was asked to write an instruction manual for installing PGP (or GPG), the lack of usability was maddening enough for me to write: "There are so many details you can get wrong to mess the whole thing up that if this stuff were a form of contraception desperate parents would be giving babies away on street corners."

The only really successful crypto at that point were backend protocols - SSL and its successor, TLS, which secure web connections (as HTTPS) and ecommerce transactions - and the encryption built into mobile phone standards. Much has changed since, most notably Facebook's and Apple's decisions to protect user messages and data, at a stroke turning crypto on for billions of users. The result, as Ross Anderson predicted in 2018, was to shift the focus of governments' demands for access toward hacking devices rather than cracking individual messages.

The arguments have not changed in all those years; they were helpfully collated by a group of senior security experts in 2015 in the report Keys Under Doormats (PDF). Encryption is mathematics; you cannot create a hole that only "good guys" can use. Everyone wants uncrackable encryption for themselves - but to be able to penetrate everyone else's. That scenario is no more possible than the suggestion some of Donald Trump's team are making that the same votes that are electing Republican senators and Congresspeople are not legally valid when applied to the presidency.
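The mathematics can be demonstrated in a few lines. In this minimal sketch, again using Python's cryptography library, decryption either succeeds with the exact key or fails completely; there is no intermediate "authorized access" mode for anyone to switch on:

# Authenticated symmetric encryption: without the exact key,
# decryption fails outright. The math offers no partial hole.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"meet at the usual place")

print(Fernet(key).decrypt(ciphertext))  # b'meet at the usual place'

try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("Wrong key: no plaintext, and no partial access either.")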

Nonetheless, we've heard repeated calls from law enforcement for breakable encryption: in 2015, 2017, and, most recently, six weeks ago. In between, while complaining that communications were going dark, in 2016 the FBI tried to force Apple to crack its own phones to enable an investigation. When the FBI found someone else to crack the phone to order, Apple turned on end-to-end encryption.

I no longer believe that this dispute can be settled. Because it is built on logic proofs, mathematics will always be hard, non-negotiable, and unyielding, and because of their culture and responsibilities security services and law enforcement will always want more access. For individuals, before you adopt security precautions, think through your threat model and remember that most attacks will target the endpoints, where cleartext is inevitable. For nations, remember that whatever holes you poke in others' security will be driven through your own.


Illustrations: The late Caspar Bowden (1961-2015), who did so much to improve and explain surveillance policy in general and crypto policy in particular (via rama at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 30, 2020

The reckoning

parliament-whereszuck.jpgIt seems clear that we're approaching a reckoning for Big Tech as the societal costs of their success keep becoming bigger and clearer. Like so many other things, the pandemic has made these issues more urgent, as the money these companies suck away from local businesses and communities is now badly needed to help rebuild suffering economies. Twenty-five years ago, some were celebrating the dawn of cyberspace as the approaching end of the nation-state. Today's crises remind us that some problems only governments can solve.

In the US, two types of legal actions are heading GAFA's way, as suggested by the recent two-pronged antitrust hearing. The first, which led to the Democrat-led antitrust report of a few weeks ago, has spawned a lawsuit against Google alleging anticompetitive behavior surrounding its search engine. The second, reflecting the Republican-led grievance that conservative voices are being suppressed, has led to this week's Commerce Committee hearing on platform censorship. Thoughts on that one, which will likely result in a push to reform S230, will have to wait for concrete proposals.

Pending elsewhere: both users and Epic Games are suing Apple over the 30% commissions charged by its App Store. Meanwhile, in France, a coalition of trade groups has filed an antitrust complaint ($) asking the French competition authority to stop Apple from following through on its plans to restrict mobile trackers for advertising. This is, as the FT puts it, "one of the first legal actions alleging that big tech groups are using privacy arguments to abuse their market power". On Twitter, Lukasz Olejnik rightly says that this case about "privacy-competition trade-off" will be fascinating. It will, not least because privacy has not in general been a market mover.

Tech-related antitrust suits are typically ten years late, largely because the industry's speed makes it hard to see where to push until the damage has become deeply entrenched. In 2014, I thought Google's purchase of Nest would be the antitrust case of 2024. Instead, Google is being accused of abusing its position by illegally tying its search engine, its main revenue source, to its Chrome browser and Android licensing agreements, and paying other browser makers such as Apple for pole position as their default search engine. (Query: if Google search is so great, why do they need to do this? The steady degradation of the Google experience is clearest to those of us who have stopped using it.)

Both Sarah Miller and Matt Stoller see the Google case as a near-copy of the late 1990s case against Microsoft, which also focused on tying. In that case, Microsoft used its Windows dominance to make its Internet Explorer the default for browsing the web. The current complaint specifically references that case, calling Google's tactics "the same playbook". Privacy is not among its concerns, though it does at least note that the key to Google's success and scale is the data it collects as the price consumers pay for its "free" services.

It's rare that an antitrust case scores a hit on an entirely different company. Google pays Apple $8 to $12 billion a year - compared to Apple's Q4 2019 $13.7 billion in profits. Apple will survive if Google is enjoined from making such payments. Firefox, however, might not, since its Google contract represents most of its income. Diversifying the search market is good for competition; shrinking the browser market is not.

My suspicion is that an additional factor in the answer to "why now?" is the arrogance and indifference to complaints that these companies have often displayed. Facebook founder Mark Zuckerberg has been particularly resistant, refusing in 2018 to show up to testify in front of representatives of nine countries.

It's tempting to divide these companies into those still run by their founders - Amazon and Facebook - and those that are on their second (Google) or later (Apple) generation of leaders. But the better division is between normal share structures (Apple and Amazon) and kingmaker share structures. Google has ensured that founders Sergey Brin and Larry Page, along with original company chair Eric Schmidt, could never lose control of the company. Facebook's share structure is even more tightly controlled, giving Zuckerberg 60% of the voting rights; he is the company's king.

Neither hearings nor complaint mention this, but I think it's crucial. The benefit of these structures was supposed to be to keep the companies nimble and innovative. It's not clear it's worked. The downside is the showrunners can be unresponsive to complaints; Facebook will never change as long as Zuckerberg is in charge - and no one can push him out. For this reason, ownership structures should be a consideration in modernizing antitrust law.

In the end, the Microsoft case was largely abandoned - but it reportedly nonetheless left a mark by changing the company's culture into one vastly more cautious and risk-averse, like IBM before it. Today's biggest technology companies have been less easily intimidated by big and bigger fines or adverse decisions. But governments won't give up; these cases, like others before them, are all part of the long arc of the power struggle between global technology and national governments. We are just at the beginning.


Illustrations: Mark Zuckerberg's empty chair in front of the Grand Committee.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 16, 2020

The rights stuff

Humans-Hurt-synth.pngIt took a lake to show up the fatuousness of the idea of granting robots legal personality rights.

The story, which the AI policy expert Joanna Bryson highlighted on Twitter, goes like this: in February 2019 a small group of people, frustrated by their inability to reduce local water pollution, successfully spearheaded a proposition in Toledo, Ohio that created the Lake Erie Bill of Rights. Its history since has been rocky. In February 2020, a farmer sued the city and a US district judge invalidated the bill. This week, a three-judge panel from Ohio's Sixth District Court of Appeals ruled that the February judge made a mistake. For now, the lake still has its rights. Just.

We will leave aside the question of whether giving rights to lakes and the other ecosystems listed in the above-linked Vox article is an effective means of environmental protection. But given that the idea of giving robots rights keeps coming up - the EU is toying with the possibility - it seems worth teasing out the difference.

In response to Bryson, Nicholas Bohm noted the difference between legal standing and personality rights. The General Data Protection Regulation, for example, grants legal standing in two new ways: collective action and civil society representing individuals seeking redress. Conversely, even the most-empowered human often lacks legal standing; my outrage that a brick fell on your head from the top of a nearby building does not give me the right to sue the building's owner on your behalf.

Rights as a person, however, would allow the brick to sue on its own behalf for the damage done to it by landing on a misplaced human. We award that type of legal personhood to quite a few things that aren't people - corporations, most notoriously. In India, idols have such rights, and Bohm cites a case in which the trustee of a temple, because the idol they represented had these rights in India, was allowed to join a case claiming improper removal in England.

Or, as Bohm put it more succinctly, "Legal personality is about what you are; standing is about what it's your business to mind."

So if lakes, rivers, forests, and idols, why not robots? The answer lies in what these things represent. The lakes, rivers, and forests on whose behalf people seek protection were not human-made; they are parts of the larger ecosystem that supports us all, and most intimately the people who live on their banks and verges. The Toledoans who proposed granting legal rights to Lake Erie were looking for a way to force municipal action over the lake's pollution, which was harming them and all the rest of the ecosystem the lake feeds. At the bottom of the lake's rights, in other words, are humans in existential distress. Granting the lake rights is a way of empowering the humans who depend on it. In that sense, even though the Indian idols are, like robots, human-made, giving them personality rights enables action to be taken on behalf of the human community for whom they have significance. Granting the rights does not require either the lake or the idol to possess any form of consciousness.

In a paper to which Bryson linked, S.G. Solaiman argues that animals don't qualify for rights, even though they have some consciousness, because a legal personality must be able to "enjoy rights and discharge duties". The Smithsonian National Zoo's giant panda, who has been diligently caring for her new cub for the last two months, is not doing so out of legal obligation.

Nothing like any of this can be said of rights for robots, certainly not now and most likely not for a long time into the future, if ever. Discussions such as David Gunkel's How to Survive a Robot Invasion, which compactly summarizes the pros and cons, generally assume that robots will only qualify for rights after a certain threshold of intelligent consciousness has been met. Giving robots rights in order to enable suffering humans to seek redress does not come up at all, even when the robots' owners hold funerals because the manufacturer has discontinued the product. Those discussions rightly focus on manufacturer liability.

In the 2015 British TV series Humans (a remake of the 2012 Swedish series Äkta människor), an elderly Alzheimer's patient (William Hurt) is enormously distressed when his old-model carer robot is removed, taking with it the only repository of his personal memories, which he can no longer recall unaided. It is not necessary to give the robot the right to sue to protect the human it serves, since family or health workers could act on his behalf. The problem in this case is an uncaring state.

The broader point, as Bryson wrote on Twitter, is that while lakes are unique and can be irreparably damaged, digital technology - including robots - "is typically built to be fungible and upgradeable". Right: a compassionate state merely needs to transfer George's memories into a new model. In a 2016 blog posting, Bryson also argues against another commonly raised point, which is whether the *robots* suffer: if designers can install suffering as a feature, they can take it out again.

So, the tl;dr: sorry, robots.


Illustrations: George (William Hurt) and his carer "synth", in Humans.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 9, 2020

Incoming

fifth-element-destroys-evil.pngThis week saw the Antitrust Subcommittee of the (US) House Judiciary Committee release the 449-page report (PDF) on its 16-month investigation into Google, Apple, Facebook, and Amazon - GAFA, as we may know them. Or, if some of the recommendations in this report get implemented, *knew* them. The committee has yet to vote on the report, and the Republican members have yet to endorse it. So this is very much a Democrats' report...but depending on how things go over the next month, come January that may be sufficient to ensure action.

At BIG, Matt Stoller has posted a useful and thorough summary. As he writes, the subcommittee focused on a relatively new idea of "gatekeeper power", which each of the four exercises in its own way (app stores, maps, search, phone operating systems, personal connections), and each of which is aided by its ability to surveil the entirety of the market and undermine current and potential rivals. It also attacks the agencies tasked with enforcing the antitrust laws for permitting the companies to make some 500 acquisitions. The resulting recommendations fall into three main categories: restoring competition in the digital economy, strengthening the antitrust laws, and reviving antitrust enforcement.

In a discussion while the report was still just a rumor, a group of industry old-timers seemed dismayed at the thought of breaking up these companies. A major concern was the impact on research. The three great American corporate labs of the 1950s to 1980s were AT&T's Bell Labs, Xerox PARC, and IBM's Watson. All did basic research, developing foundational ideas that would matter for decades to come but might never provide profits for the company itself. The 1984 AT&T breakup effectively killed Bell Labs. Xerox famously lost out on the computer market. IBM redirected its research priorities toward product development. GAFA and Microsoft operate substantial research labs today, but they are more focused on the technologies, such as AI and robotics, that they envision as their own future.

The AT&T case is especially interesting. Would the Internet have disrupted AT&T's business even without the antitrust case, or would AT&T, kept whole, have been able to use its monopoly power to block the growth of the Internet? Around the same time, European countries were deliberately encouraging competition by ending the monopolies of their legacy state telcos. Without that - or with AT&T left intact - anyone wanting to use the arriving Internet would have been paying a small fortune to the telcos just to buy a modem to access it with. Even as it was, the telcos saw Voice over IP as a threat to their lucrative long distance business, and it was only network neutrality that kept them from suppressing it. Today, Zoom-like technology might be available, but likely out of reach for most of us.

The subcommittee's enlistment of Lina Khan as counsel suggests GAFA had this date from the beginning. Khan made waves while still a law student by writing a lengthy treatise on Amazon's monopoly power and its lessons for reforming antitrust law, back when most of us still thought Amazon was largely benign. One of her major points was that much opposition to antitrust enforcement in the technology industry is based on the idea that every large company is always precariously balanced because at any time, a couple of guys in a garage could be inventing the technology that will make them obsolete. Khan argued that this was no longer true, partly because those two garage guys were enabled by antitrust enforcement that largely ceased after the 1980s, and partly because GAFA are so powerful that few start-ups can find funding to compete with them directly - and GAFA are rich enough to buy and absorb, or shut down, anyone who tries. The report, like the hearings, notes the fear of reprisal among business owners asked for their experiences, as well as the disdain with which these companies - particularly Facebook - have treated regulators. All four companies have been repeat offenders, apparently not inspired to change their behavior by even the largest fines.

Stoller thinks that we may now see real action because our norms have shifted. In 2011, admiration for monopolists was so widespread, he writes, that Occupy Wall Street honored Steve Jobs' death, whereas today US and EU politicians of all stripes are targeting monopoly power and intermediary liability. Stoller doesn't speculate about causes, but we can think of several: the rapid post-2010 escalation of social media and smartphones; Snowden's 2013 revelations; the 2016 Cambridge Analytica scandal; and the widespread recognition that, as Kashmir Hill found, it's incredibly difficult to extricate yourself from these systems once you are embedded in them. Other small things have added up, too, such as Mark Zuckerberg's refusal to appear in front of a grand committee assembled by nine nations.

Put more simply, ten years ago GAFA and other platforms and monopolists made the economy look good. Today, the costs they impose on the rest of society - precarious employment, lost privacy, a badly damaged media ecosystem, and the difficulty of containing not just misinformation but anti-science - are clearly visible. This, too, is a trend that the pandemic has accelerated and exposed. When the cost of your doing business is measured in human deaths, people start paying attention pretty quickly. You should have paid your taxes, guys.


Illustrations: The fifth element breaks up the approaching evil in The Fifth Element.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 25, 2020

The zero on the phone

WeRobot2020-Poster.jpegAmong the minor casualties of the pandemic has been the appearance of a Swiss prototype robot at this year's We Robot, the ninth year of this unique conference that crosses engineering, technology policy, and law to identify future conflicts and pre-emptively suggest solutions. The result was to leave the robots considered by this virtual We Robot remarkably (appropriately) abstract.

We Robot was founded to get a jump on the coming conflicts that robots will bring to law and policy, in part so that we don't repeat the Internet experience of rehashing the same arguments for decades on end. This year's event pre-empted the Internet experience in a new way: many authors have drawn on the failed optimism and cooperation of the 1990s to begin defining ways to ensure that robotics and AI do not follow the same path. Where in the conference's early years we were all eager to embrace robots, this year their disembodied AIs are being done *to* us.

In the one slight exception to this rule, Hallie Siegel's exploration of senior citizens' attitudes towards new technologies found that the seniors she studies are pragmatic, concerned about their privacy and autonomy, and only really interested in technologies that provide benefits they actually need.

Jason Millar and Elizabeth Gray drew directly on the Internet experience by comparing network neutrality to the issues surrounding the mapping software that controls turn-by-turn navigation systems in a discussion of "mobility shaping". Should navigation services be common carriers, as telephone lines are? The idea appeals to me, if only because the potential for physical control of where our vehicles are allowed to go seems so clear.

The theme of exploitation was particularly visible in the two papers on Africa. In the first, Arthur Gwagwa (Strathmore University, Nairobi), Erika Kraemer-Mbula, Nagla Rizk, Isaac Rutenberg, and Jeremy de Beer warn that the combination of foreign capital and local resources is likely to reproduce the power structures of previous forms of colonialism, an argument also seen recently in a paper by Abeba Birhane. Women in particular, who run the majority of start-ups in some African countries, may be ignored, and the authors suggest that a GDPR-like rule awarding individuals control over their own data could be crucial in creating value for, rather than extracting it from, Africa.

In the second, Laura Foster (Indiana University), Bram Van Wiele, and Tobias Schönwetter extracted a database of press stories about AI in Africa from LexisNexis, to find the familiar set of claims for new technology: happy, value-neutral disruption, yay! The failure of most of these articles to consider gender and race, they observed, doesn't make the emerging picture neutral, but serves to reinforce the default of the straight, white male.

One way we push back against AI/robot control is the "human in the loop" to whom the final decision is delegated. This human has featured in every We Robot conference, most notably in 2016 as Madeleine Elish's moral crumple zone. In his paper, Liam McCoy argues for the importance of meaningful control, because the middle ground, where the human is expected to solve the most complex situations - the ones where AI fails - without support or authority, is truly dangerous. The middle ground may be profitable; at UK IGF a few weeks ago, Gus Hosein noted that automating dispute resolution is what's made GAFA rich. But in the higher stakes of cyber-physical systems, the human you summon by pushing zero has to be able to make a difference.

Silvia de Conca's idea of "human-centered legal design", which sought to give autonomous agents a duty of care as a way of filling the gap in liability that presently exists, and Cynthia Khoo's interest in vulnerable communities who are harmed by behavior that emerges from combined business models, platform scale, human nature, and algorithm design, presented different methods of putting a human in the loop. Often, Khoo has found in investigating this idea, the potential harm was in fact known and simply ignored; how much can and should be foreseen when system parts interact in unexpected ways is a rising issue.

Several papers explored previously unnoticed vectors for bias and control. Sentiment analysis, last seen being called "the snake oil of 2011", and its successor, emotion analysis, which I first saw explored in the 1990s by Rosalind Picard at MIT, are creeping into AI systems. Some are particularly dubious: aggression detection systems and emotion recognition cameras.

Emily McBain-Ashfield and Jason Millar are the first I'm aware of to study how stereotyping gets into these systems. Yes, it's in the data - but the problem lies in the process of analyzing and tagging it. The authors found three methods of doing this: manual (human, slow), dictionary-based using seed words (automated), and crowdsourced (see also Mary L. Gray and Siddharth Suri's 2019 book, Ghost Work). All have problems: automated tagging makes notoriously crude mistakes, and the participants in crowdsourcing may come from very different linguistic and cultural contexts.
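To see just how crude the dictionary-based method can be, consider this minimal sketch in Python; the seed lists and example are invented, not taken from the paper. A text is scored purely by counting matches against seed words, so negation, sarcasm, and cultural context are invisible to it:

# Minimal sketch of dictionary-based sentiment tagging with seed words.
# The seed lists are invented; real systems expand them automatically.
POSITIVE_SEEDS = {"good", "happy", "love", "excellent"}
NEGATIVE_SEEDS = {"bad", "angry", "hate", "terrible"}

def tag_sentiment(text):
    words = text.lower().split()
    score = (sum(w in POSITIVE_SEEDS for w in words)
             - sum(w in NEGATIVE_SEEDS for w in words))
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# Negation defeats it: "not" is in neither list, so the match on
# "good" wins and the sentence is mislabeled.
print(tag_sentiment("this is not good at all"))  # -> positive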

The discussant for this paper, Osonde Osoba, sounded appalled: "By having these AI models of emotion out in the wild in commercial products we are essentially sanctioning the unregulated experimentation on humans and their emotional processes without oversight or control."

Remedies have to contend, however, with the legacy infrastructure. Alice Xiang discovered a conflict between traditional anti-discrimination law, which bars decision-making based on a set of protected classes, and the technical methods of mitigating algorithmic bias. "If we're not careful," she said, "the vast majority of approaches proposed in machine learning literature might actually be illegal if they are ever tested in court."

We Robot 2020 was the first to be held outside the US, and chairs Florian Martin-Bariteau, Jason Millar, and Katie Szilagyi set out to widen its international character and diversity. When the pandemic hit, the resulting exceptional breadth of location of authors and discussants made it infeasible to ask everyone to pretend they were in Ottawa's time zone. The conference therefore has recorded the authors' and discussants' conversations as if live - which means that you, too, can experience the originals. Just follow the links. We Robot events not already linked here: 2013; 2015; 2016 workshop; 2017; 2018 workshop and conference; 2019 workshop and conference.


Illustrations: Our robot avatars attend the conference for us on the We Robot 2020 poster.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 28, 2020

Through the mousehole

Rodchenkov-Fogel-Icarus.pngIt's been obvious for a long time that if you want to study a thoroughly dysfunctional security system you could hardly do better than doping control in sports. Anti-doping has it all: perverse incentives, wrong assumptions, conflicts of interest, and highly motivated opponents. If you doubt this premise, consider: none of the highest-profile doping cases were caught by the anti-doping system. Lance Armstrong (2010) was outed by a combination of dogged journalistic reporting by David Walsh and admissions by his former teammate Floyd Landis; systemic Russian doping (2014) was uncovered by journalist Hajo Seppelt, who has also broadcast investigations of China, Kenya, Germany, and weightlifting; BALCO (2002) was exposed by a coach who sent samples to the UCLA anti-doping lab; and Willy Voet (1998), soigneur to the Festina cycling team, was busted by French Customs.

I bring this up - again - because two insider tales of the Russian scandal have just been published. The first, The Russian Affair, by David Walsh, tells the story of Vitaly and Yuliya Stepanov, who provided Seppelt with material for The Secrets of Doping: How Russia Makes Its Winners (2014); the second, The Rodchenkov Affair, is a first-person account of the Russian system by Grigory Rodchenkov, from 2006 to 2015 the director of Moscow's testing lab. Together or separately, these books explain the Russian context that helped foster its particular doping culture. They also show an anti-doping system that isn't fit for purpose.

The Russian Affair is as much the story of the Stepanovs' marriage as of contrasting and complementary views of the doping system. Vitaly was an idealistic young recruit at the Russian Anti-Doping Agency; Yuliya Rusanova was an aspiring athlete willing to do anything to escape the desperate unhappiness and poverty of her native area, Kursk. While she lectured him about not understanding "the real world", he continued hopefully writing letters to contacts at the World Anti-Doping Agency describing the violations he was seeing. Yuliya came to see the exploitation built into a system that protects winners but lets others test positive to make the system look functional. Under Vitaly's guidance, she recorded the revealing conversations that Seppelt's documentary featured. Rodchenkov makes a cameo appearance; the Stepanovs believed he was paid to protect specific athletes from positive tests.

In the vastly more entertaining The Rodchenkov Affair, Rodchenkov denies receiving payment, calling Yuliya a "has-been" he'd never met. Instead, Rodchenkov describes developing new methods of detecting performance-enhancing substances, then finding methods to beat those same tests. If the nearest analogue to the Walsh-described Stepanovs' marriage is George and Kellyanne Conway, Rodchenkov's story is straight out of Philip K. Dick's A Scanner Darkly, in which an undercover narcotics agent is assigned to spy on himself.

Russia has advantages for dopers. For example, its enormous land mass allows athletes to sequester themselves in training camps so remote they are out of range for testers. More important may be the pervasive sense of resignation that Vitaly Stepanov describes as his boss slashes WADA's 80 English pages of anti-doping protocols to ten in Russian translation because various aspects are "not possible to do in Russia". Rodchenkov, meanwhile, plans the Sochi anti-doping lab that the McLaren report later made famous for swapping positive samples for pre-frozen clean ones through a specially built "mousehole" operated by the FSB.

If you view this whole thing as a security system, it's clear that WADA's threat model was too simple, something like "athletes dope". Even in 1988, when Ben Johnson tested positive at the Seoul Olympics, it was obvious that everyone's interests depended on not catching star athletes. International sports depend on their stars - as do their families, coaches, support staff, event promoters, governments, fans, and even other athletes, who know the star attractions make their own careers possible. Anti-doping agencies must thread their way through this thicket.

In Rodchenkov's description, WADA appears inept, even without its failure to recognize this ecosystem. In one passage, Rodchenkov writes about the double-blind samples the IOC planted from time to time to test the lab: "Those DBs were easily detectable because they contained ridiculous compounds...which were never seen in doping control routine analysis." In another, he says: "[WADA] also assumed that all accredited laboratories were similarly competent, which was not the case. Some WADA-accredited laboratories were just sloppy, and would reach out to other countries' laboratories when they had to process quality control samples to gain re-accreditation."

Flaws are always easy to find once you know they're there. But WADA was founded in 1999. Just six years earlier, the opening of the Stasi records exposed the comprehensive East German system. The possibility of state involvement should have been high on the threat list from the beginning, as should the role of coaches and doctors who guide successive athletes to success.

It's hard to believe this system can be successfully reformed. Incentives to dope will always be with us, just as it would be impossible to eliminate all incentives to break into computer systems. Rodchenkov, who frequently references Orwell's 1984, insists that athletes dope because otherwise their bodies cannot cope with the necessary training, which he contends is more physically damaging than doping. This much is clear: a system that insists on autonomy while failing to fulfill its most basic mission is wrong. Small wonder that Rodchenkov concludes that sport will never be clean.


Illustrations: Grigory Rodchenkov and Bryan Fogel in Fogel's documentary, Icarus.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 21, 2020

The end of choice

new-22portobelloroad.jpgAt the Congressional hearings a few weeks ago, all four CEOs who appeared - Mark Zuckerberg (Facebook), Jeff Bezos (Amazon), Sundar Pichai (Google), and Tim Cook (Apple) - said essentially the same thing in their opening statements: they have lots of competitors, they have enabled millions of people to build small businesses on their platforms, and they do not have monopoly power. The first of these is partly true, the second is true, and the third...well, it depends which country you're talking about, how you look at it, and what you think they're competing for. In some countries outside the US, for example, Facebook *is* the Internet because of its Free Basics program.

In the weeks since: Google still intends to buy Fitbit, which for $2.1 billion would give it access to a huge pile of health-data-that's-not-categorized-as-health data; both the US and the EU are investigating.

In California, an appeals court has found that Amazon can be liable for defective products sold by third-party sellers.

Meanwhile, Apple, which this week became the first company in history to hit a $2 trillion market cap, deleted Epic's hugely popular game Fortnite from the App Store because its latest version breaks Apple's rules by allowing players to bypass the Apple payment system (and 30% commission) to pay Epic directly for in-game purchases. In response, Epic has filed suit - and, writes Matt Stoller, if a company with Epic's clout can't force Apple to negotiate terms, who can? Stoller describes the Apple-Epic suit as certainly about money but even more about "the right way to run an economy". Stoller goes on to find this thread running through other current disputes, and believes this kind of debate leads to real change.

At Stratechery, Ben Thompson argues that the Democrats didn't prove their case. Most interesting of the responses to the hearings, though, is an essay by Benedict Evans, who argues that breaking up the platforms will achieve nothing. Instead, he says, citing relevant efforts by the EU and UK competition authorities, it would be better to dig into how the platforms operate and write rules to limit the potential for abuse. I like this idea, in part because it is genuinely difficult to see how break-ups would work. However, the key issue is enforcement: the EU made not merging databases a condition of Facebook's acquisition of WhatsApp - and three years later Facebook decided to do it anyway. The resulting fine of €110 million was less than 1% of the $19 billion purchase price.

In 1998, when the Evil Borg of Tech was Microsoft, it, too, was the subject of antitrust actions. Echoing the 1984 breakup of AT&T, people speculated about creating "Baby Bills", either by splitting the company between operating systems and productivity software or by splitting it into clones and letting them compete with each other. Instead, in 2004 the EU ordered Microsoft to unbundle its media player and, in 2009, Internet Explorer to avoid new fines. The company changed, but so did the world around it: the web, online services, free software, smartphones, and social media all made Microsoft less significant. Since 2010, the landscape has changed again. As the legal scholar Lina Khan wrote in 2017, two guys in a garage can no longer knock off the current crop by creating the next new big technology.

Today's expanding hybrid cyber-physical systems will entrench choices none of us made into infrastructure none of us can avoid. In 2017, for example, San Diego began installing "smart" streetlights intended to do all sorts of good things: drive down energy costs, monitor air pollution, point out empty parking spaces, and so on. The city also thought it might derive some extra income from allowing third parties to run apps on its streetlight network. Instead, as Tekla S. Perry reported at IEEE Spectrum in January, to date the system's sole use has been to provide video footage to law enforcement, which has used it to solve serious crimes but also to investigate vandalism and illegal dumping.

In the UK, private developers and police have been rolling out automated facial recognition without notifying the public; this week, in a case brought by Liberty, the UK Court of Appeal ruled that its use breaches privacy rights and data protection and equality laws. This morning, I see that, undeterred, Lincolnshire Police will trial a facial recognition system that is supposed to be able to detect people's moods.

The issue of monopoly power is important. But even if we find a way to ensure fair competition we won't have solved a bigger problem that is taking shape: individuals increasingly have no choice about whether to participate in the world these companies are building. For decades we have had no choice about being credit-scored. Three years ago, despite the fatuous comments of senior politicians, it was obvious that the only people who can opt out of using the Internet are those who are economically inactive or highly privileged; last year journalist Kashmir Hill proved the difficulty of doing without GAFA. The pandemic response is making opting out either antisocial, a health risk, or both. And increasingly, going out of your house means being captured on video and analyzed whether you like it or not. No amount of controlling individual technology companies will solve this loss of agency. That is up to us.

Illustrations: Orwell's house at 22 Portobello Road, London, complete with CCTV camera.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 14, 2020

Revenge of the browser wars

Netscape-1.0N.pngThis week, the Mozilla Foundation announced major changes. As is the new norm these days, Mozilla is responding to a problem that existed BCV (before coronavirus) but has been exposed, accelerated, and compounded by the pandemic. But the response sounds grim: approximately a quarter of the workforce to be laid off and a warning that the company needs to find new business models. Just a couple of numbers explain the backdrop: according to Statcounter, Firefox's second-position share of desktop/laptop browser usage has dropped to 8.61% behind Chrome at 69.55%. On mobile and tablets, where the iPhone's Safari takes a large bite out of Chrome's share, Firefox doesn't even crack 1%. You might try to trumpify those percentages by suggesting it's a smaller share but a larger user population, but unfortunately no; at CNet, Stephen Shankland reports that usage is shrinking in raw numbers, too, down to 210 million monthly users from 300 million in 2017.

Yes, I am one of those users.

In its 2018 annual report and 2018 financial statement (PDF), Mozilla explains that most of its annual income - $430 million - comes from royalty deals with search engines, which pay Firefox to make them the default (users can change this at will). The default varies across countries: Baidu (China), Yandex (Russia, Belarus, Kazakhstan, Turkey, and Ukraine), and Google everywhere else, including the US and Canada. It derives a relatively small amount - $20 million or so in total - of additional income from subscriptions, advertising, donations, and dividends and interest on the investments where it has parked its capital.

The pandemic has of course messed up everyone's financial projections. In the end, though, the underlying problem is that long-term drop in users; fewer users must eventually generate fewer search queries on which to collect royalties. Presumably this lies behind Mozilla's acknowledgment that it needs to find new ways to support itself - which, the announcement also makes clear, it has so far struggled to do.

The problem for the rest of us is that the Internet needs Firefox - or if not Firefox itself, another open source browser with sufficiently significant clout to keep the commercial browsers and their owners honest. At the moment, Mozilla and Firefox are the only ones in a position to lead that effort, and it's hard to imagine a viable replacement.

As so often, the roots of the present situation go back to 1995, when - no Google then and Apple in its pre-Jobs-return state - the browser kings were Microsoft's Internet Explorer and Netscape Navigator, both seeking world wide web domination. Netscape's 1995 IPO is widely considered the kickoff for the dot-com boom. By 1999, Microsoft was winning, and the then-high-flying AOL was buying Netscape. It was all too easy to imagine both building out proprietary protocols that only their browsers could read, dividing the net up into incompatible walled gardens. The first versions of what became Firefox were, literally, built out of a fork of Netscape whose source code was released before the AOL acquisition.

The players have changed and the commercial web has grown explosively, but the danger of slowly turning the web into a proprietary system has not. Statcounter has Google (Chrome) and Apple (Safari) as the two most significant players, followed by Samsung Internet (on mobile) and Microsoft's Edge (on desktop), with a long tail of others including Opera (which pioneered many now-common features), Vivaldi (built by the Opera team after Telenor sold it to a Chinese consortium), and Brave, which markets itself as a privacy browser. All these browsers have their devoted fans, but they are only viable because websites observe open standards. If Mozilla can't find a way to reverse Firefox's user base shrinkage, web access will be dominated by two of the giant companies that two weeks ago were called in to the US Congress to answer questions about monopoly power. Browsers are a chokepoint they can control. I'd love to say the hearings might have given them pause, but two weeks later Google is still buying Fitbit, Apple and Google have removed Fortnite from their app stores for violating their in-app payment rules, and Facebook has launched TikTok clone Instagram Reels.

There is, at the moment, no suggestion that either Google or Apple wants to abuse its dominance in browser usage. If they're smart, they'll remember the many benefits of the standards-based approach that built the web. They may also remember that in 2009 the threat of EU fines led Microsoft to unbundle its Internet Explorer browser from Windows.

The difficulty of finding a viable business model for a piece of software that millions of people use is one of the hidden costs of the Internet as we know it. No one has ever been able to persuade large numbers of users to pay for a web browser; Opera tried in the late 1990s, and wound up switching first to advertising sponsorship and then, like Mozilla, to a contract with Google.

Today, Catalin Cimpanu reports at ZDNet that Google and Mozilla will extend their deal until 2023, providing Mozilla with perhaps $400 million to $500 million a year. Assuming it goes through as planned, it's a reprieve - but it's not a solution - as Mozilla, fortunately, seems to know.

Illustrations: Netscape 1.0, in 1994 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 7, 2020

The big four

vlcsnap-2020-08-06-22h38m37s848.png"Companies aren't bad just because they're big," Mark Zuckerberg told the US Congress ten days ago, though he failed to suggest aspirational counterexamples. Of course, the point isn't *that* a company is big - but *how*.

July 28, 2020 saw Zuckerberg, Jeff Bezos, Tim Cook, and Sundar Pichai lined up to face the House Judiciary committee in a hearing on Online Platforms and Market Power. As so often these days - and as Julia Angwin writes at The Markup - Democrats and Republicans (excepting Kelly Armstrong, R-ND) conducted different hearings. Both were essentially hostile. Democrats plus Armstrong asked investigative journalism-style questions about company practices, citing detailed historical examples: unfair competition, abuse of a dominant position (Apple, Amazon), editorial manipulation (Facebook, Google), past acquisitions, third-party cookies (Google), targeted advertising, content moderation, hate speech, Russian interference in the 2016 election (Facebook), smart speakers as home hubs (Amazon), counterfeit products (Amazon), and so on for five and a half hours. Each of the four, but particularly Cook, spent a fair bit of time waiting through other people's questions. Overall response: this stuff is *hard*, we're doing a *lot*, and we have lots of competition - while their questioners fretted at the loss of every second of their limited time. It must be years since any of these guys has been so frequently peremptorily interrupted while waffling: "Yes or no?"

The Markup kept a tally of "I'll get back to you on that": Bezos edged out Zuckerberg by a hair. (Not entirely fair, since Cook had many fewer chances to play.)

At one point, Pramila Jayapal (D-WA) explained to Bezos that the point of the committee's work was to ensure that more companies like these four could be created. (Maybe start by blocking Google from buying Fitbit.) She was particularly impressive asking about multi-sided markets and revenue sharing, and also pushed Zuckerberg to quickly implement the recommendations in Facebook's recent civil rights audit (PDF). But will her desired focus be reflected in the final report, or will it get derailed by arguments over political bias?

Aggrieved Republicans pushed hard on their claim that social media stifles conservative voices, perhaps not achieving the effect they hoped. Jim Sensenbrenner (R-WI) asked Zuckerberg why Donald J. Trump Jr's account was suspended (for sharing a bizarre video full of misinformation about the coronavirus). Zuckerberg had to tell him that was Twitter, although Facebook did remove that same video. Greg Steube (R-FL) demanded of Pichai why Google sorted his campaign emails into his parents' spam folder: "This appears to only be happening to conservative Republicans." (The Markup has found this is non-partisan sorting of "marketing" email, and Val Demings (D-FL) noted it happens to her.) Steube also claimed that soon after the hearing was agreed conservative websites had jumped back up out of obscurity in Google's search results. Why was that? While Pichai struggled to answer, someone quipped on Twitter, "This is everyone trying to explain the Internet to their parents."

Jim Jordan (R-OH), whose career aspiration is apparently Court Jester, opened with: "Big Tech is out to get conservatives - that is not a suspicion, not a hunch, it's a fact." He reeled off a list of incidents and dates: the removal of right-wing news website Breitbart, donations from Google employees to then-presidential-candidate Hillary Clinton in 2016, and Twitter removing posts from Donald Trump calling for violence against protesters, and claimed he'd been "shadowbanned" when Twitter (still not present) demoted his tweets to make them less visible, adding that he tried to call Twitter CEO Jack Dorsey as "our" witness. Was Google going to tailor its features to help Joe Biden in the upcoming election? "It's against our core values," said Pichai. Jordan pounced: "But you did it in 2016." He had emails.

Matt Gaetz (R-FL) also seemed offended that - as an American company - Google had withdrawn from the Department of Defense's Project Maven and asked Pichai to promise the company would not withdraw from cooperating with law enforcement, accusing the company of "bigoted, anti-police policies". Gaetz was also disturbed by Google's technical center and collaboration on AI in China - a complaint seemingly pioneered by Peter Thiel.

Steube also found time to take a swipe at the EU: "It's no secret that Europe seems to have an agenda of attacking large, successful US tech companies, yet Europe's approach to regulation in general, and antitrust in particular, seems to have been much less successful than America's approach. America is a remarkable nursery for market innovation and entrepreneurship in pursuit of the American Dream." The irony of saying this while investigating the resulting monopoly power appeared lost on him.

In their opening statements, all four CEOs had embraced only-in-America. At last week's gikii, Chris Marsden countered with this list of technology inventions by Europeans: the Linux kernel (Finland); the Opera browser (Norway); Skype (Estonia); the chip maker ARM (UK); the Raspberry Pi (UK); the VLC media player (France); and an obscure technology called the World Wide Web (UK, working in Switzerland). "Social good," Marsden concluded, "rather than unicorns". Some of those - Skype, ARM, Opera - were certainly sold off to other parts of the world. But all of the big four have benefited from at least one of them.


Illustrations: Jeff Bezos, Mark Zuckerberg, Sundar Pichai, and Tim Cook are sworn in via Webex.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 10, 2020

Trading digital rights

The_Story_of_Mankind_-_Mediæval_Trade.pngUntil this week I hadn't fully appreciated the number of ways Brexiting UK is trapped between the conflicting demands of major international powers of the size it imagines itself still to be. On the question of whether to allow Huawei to participate in building the UK's 5G network, the UK is caught between the US and China. On conditions of digital trade - especially data protection - the UK is trapped between the US and the EU with Northern Ireland most likely to feel the effects. This was spelled out on Tuesday in a panel on digital trade and trade agreements convened by the Open Rights Group.

ORG has been tracking the US-UK trade negotiations and their effect on the UK's continued data protection adequacy under the General Data Protection Regulation. As discussed here before, the basic problem with respect to privacy is that outside the state of California, the US has only sector-specific (mainly health, credit scoring, and video rentals) privacy laws, while the EU regards privacy as a fundamental human right, and for 25 years data protection has been an essential part of implementing that right.

In 2018, when the General Data Protection Regulation came into force, it automatically became part of British law. On exiting the EU at the end of January, the UK replaced it with equivalent national legislation. Four months ago, Boris Johnson said the UK intends to develop its own policies. This is risky; according to Oliver Patel and Nathan Lea at UCL, 75% of the UK's data flows are with the EU (PDF). If it deviates from GDPR, the UK will need the EU to issue an adequacy decision finding its data protection framework compatible. The UK's data retention and surveillance policies may make obtaining that adequacy decision difficult; as Anna Fielder pointed out in Tuesday's discussion, this didn't arise before because national security measures are the prerogative of EU member states. The alternatives - standard contractual clauses and binding corporate rules - are more expensive to operate, are limited to the organization that uses them, and are being challenged in the European Court of Justice.

So the UK faces a quandary: does it remain compatible with the EU, or choose the dangerous path of deviation in order to please its new best friend, the US? The US, says Public Citizen's Burcu Kilic, wants unimpeded data flows and prohibitions on requirements for data localization and disclosure of source code and algorithms (as proposals for regulating AI might mandate).

It is easy to see these issues purely in terms of national alliances. The bigger issue for Kilic - and for others such as Transatlantic Consumer Dialogue - is the inclusion of these issues in trade agreements at all, a problem we've seen before with intellectual property provisions. Even when the negotiations aren't secret, which they generally are, international agreements are relatively inflexible instruments, changeable only via the kinds of international processes that created them. The result is to severely curtail the ability of national governments and legislatures to make changes - and the ability of civil society to participate. In the past, most notably with respect to intellectual property rights, corporate interests' habit of shopping their desired policies around from country to country until one bit and then using that leverage to push the others to "harmonize" has been called "policy laundering". This is a new and updated version, in which you bypass all that pesky, time-consuming democracy nonsense. Getting your desired policies into a trade agreement gets you two - or more - countries for the price of one.

In the discussion, Javier Ruiz called it "forum shifting" and noted that the latest example is intermediary liability, which is included in the US-Mexico-Canada agreement that replaced NAFTA. This is happening just as countries - including the US - are responding to longstanding problems of abuse on online platforms by considering how to regulate the big online platforms. In the US, the debate is whether and how to amend S230 of the Communications Decency Act, which offers a shield against intermediary liability; in the UK, it's the online harms bill and the age-appropriate design code.

Every country matters in this game. Kilic noted that the US is also in the process of negotiating a trade deal with Kenya that will also include digital trade and intellectual property - small in and of itself, but potentially the model for other African deals - and for whatever deal Kenya eventually makes with the UK.

Kilic traces the current plans to the Trans-Pacific Partnership, which included the US during the Obama administration and which attracted public anger over provisions for investor-state dispute settlement. On assuming the presidency, Trump withdrew, leaving the other countries to recreate it as the Comprehensive and Progressive Agreement for Trans-Pacific Partnership, which was formally signed in March 2018. There has been some discussion of the idea that a newly independent Britain could join it, but it's complicated. What the US wanted in TPP, Kilic said, offers a clear guide to what it wants in trade agreements with the UK and everywhere else - and the more countries enter into these agreements, the harder it becomes to protect digital rights. "In trade world, trade always comes first."


Illustrations: Medieval trade routes (from The Story of Mankind, 1921).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 12, 2020

Getting out the vote

bush-gore-hanging-chad-florida.jpg"If voting changed anything, they'd abolish it," the maverick British left-wing politician Ken Livingstone wrote in 1987.

In 2020, the strategy appears to be to lecture people about how they should vote if they want to change things, and then make sure they can't. After this week's denial-of-service attack on Georgia voters and widespread documentation of voter suppression tactics, there should be no more arguments about whether voter suppression is a problem.

Until a 2008 Computers, Freedom, and Privacy tutorial on "e-deceptive campaign practices", organized by Lillie Coney, I had no idea how much effort was put into disenfranchising eligible voters. The tutorial focused on the many ways new technology - the pre-social media Internet - was being adapted to do very old work to suppress the votes of those who might have undesired opinions. The images from the 2018 mid-term elections and from this week in Georgia tell their own story.

In a presentation last week, Rebecca Mercuri noted that there are two types of fraud surrounding elections. Voter fraud, which is efforts by individuals to vote when they are not entitled to do so and is the stuff proponents of voter ID requirements get upset about, is vanishingly rare. Election fraud, where one group or another tries to game the election in their favor, is and has been common throughout history, and there are many techniques. Election fraud is the big thing to keep your eye on - and electronic voting is a perfect vector for it. Paper ballots can be reexamined and recounted, and can't easily be altered without trace. Yes, they can be stolen or spoiled, but it's hard to do at scale because the boxes of ballots are big, heavy, and not easily vanished. Scale is, however, what computers were designed for, and just about every computer security expert agrees that computers and general elections do not mix. Even in a small, digitally literate country like Estonia a study found enormous vulnerabilities.

Mercuri, along with longtime security expert Peter Neumann, was offering an update on the technical side of voting. Mercuri is a longstanding expert in this area; in 2000, she defended her PhD thesis, the first serious study of the security problems for online voting, 11 days before Bush v. Gore burst into the headlines. TL;DR: electronic voting can't be secured.

In the 20 years since, the vast preponderance of computer security experts have continued to agree with her. Naturally, people keep trying to find wiggle room, as if some new technology will change the math; besides election systems vendors there are well-meaning folks with worthwhile goals, such as improving access for visually impaired people, ensuring access for a widely scattered membership, such as unions, or motivating younger people.

Even apart from voter suppression tactics, US election systems continue to be a fragmented mess. People keep finding new ways to hack into them; in 2017, Bloomberg reported that Russia hacked into voting systems in 39 US states before the US presidential election and targeted election systems in all 50. Defcon has added a voting machine hacking village, where, in 2018, an 11-year-old hacked into a replica of the Florida state voting website in under ten minutes. In 2019, Defcon hackers were able to buy a bunch of voting machines and election systems on eBay - and cracked every single one for the Washington Post. The only sensible response: use paper.

Mercuri has long advocated for voter-verified paper ballots (including absentee and mail-in ballots) as the official votes that can be recounted or audited as needed. The complexity and size of US elections, however, means electronic counting.

In Congressional testimony, Matt Blaze, a professor at Georgetown University, has made three recommendations (PDF): immediately dump all remaining paperless direct recording electronic voting machines; provide resources, infrastructure, and training to local and state election officials to help them defend their systems against attacks; and conduct risk-limiting audits after every election to detect software failures and attacks. RLAs, proposed in a 2012 paper by Mark Lindeman and Philip B. Stark (PDF), involve counting a statistically significant random sample of ballots and checking the results against the machine count. The proposal has a fair amount of support, including from the Electronic Frontier Foundation.
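
As a rough illustration of the idea - the sample-size rule below is a stand-in heuristic, not the Lindeman-Stark statistics - the sampling step might look like this:

```python
# Minimal sketch of the sampling step in a risk-limiting audit.
# The sample-size rule is an illustrative heuristic, not the
# published Lindeman-Stark method; a real RLA also escalates the
# sample on a discrepancy rather than simply passing or failing.
import math
import random

def initial_sample_size(margin: float, risk_limit: float = 0.05) -> int:
    """Crude heuristic: tighter margins demand bigger samples."""
    return math.ceil(-math.log(risk_limit) / margin)

def sample_matches(paper_ballots: list, machine_records: list,
                   margin: float) -> bool:
    """Check a random sample of paper ballots against machine records."""
    n = min(initial_sample_size(margin), len(paper_ballots))
    for i in random.sample(range(len(paper_ballots)), n):
        if paper_ballots[i] != machine_records[i]:
            return False   # discrepancy: escalate to a bigger sample
    return True

# Under this heuristic a 10% reported margin needs a sample of only
# ~30 ballots; a 0.5% margin pushes it toward 600.
```

Getting numbers like these right - and getting them trusted - is exactly the administrative burden Mercuri worries about below.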

Mercuri has doubts; she argues that election administrators don't understand the math that determines how many ballots to count in these audits, and thinks the method will fail to catch "dispersed fraud" - that is, a few votes changed across many precincts rather than large clumps of votes changed in a few places. She is undeniably right when she says that RLAs are intended to avoid counting the full set of ballots; proponents see that as a *good* thing - faster, cheaper, and just as good. As a result, some states - Michigan, Colorado (PDF) - are beginning to embrace them. My guess is there will be many mistakes in implementation and resulting legal contests until everyone either finds a standard for best practice or decides they're too complicated to make work.

Even more important, however, is whether RLAs can successfully underpin public confidence in election integrity. Without that, we've got nothing.

Illustrations: Hanging chad, during the 2000 Bush versus Gore vote.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 29, 2020

Tweeted

sbisson-parrot-49487515926_0c97364f80_o.jpgAnyone who's ever run an online forum has at some point grappled with a prolific poster who deliberately spreads division, takes over every thread of conversation, and aims for outraged attention. When your forum is a few hundred people, one alcohol-soaked obsessive bent on suggesting that anyone arguing with him should have their shoes filled with cement before being dropped into the nearest river is enormously disruptive, but the decision you make about whether to ban, admonish, or delete their postings matters only to you and your forum members. When you are a public company, your forum is several hundred million people, and the poster is a world leader...oy.

Some US Democrats have been calling Donald Trump's outrage this week over having two tweets labeled with a fact-check an attempt to distract us all from the terrible death toll of the pandemic under his watch. While this may be true, it's also true that the tweets Trump is so fiercely defending form part of a sustained effort to spread misinformation that effectively acts as voter suppression for the upcoming November election. In the 12 hours since I wrote this column, Trump has signed an Executive Order to "prevent online censorship", and Twitter has hidden, for "glorifying violence", Trump tweets suggesting shooting protesters in Minneapolis. It's clear this situation will escalate over the coming week. Twitter has a difficult balance to maintain: it's important not to hide the US president's thoughts from the public, but it's equally important to hold the US president to the same standards that apply to everyone else. Of course he feels unfairly picked on.

Rewind to Tuesday. Twitter applied its recently-updated rules regarding election integrity by marking two of Donald Trump's tweets. The tweets claimed that conducting the November presidential election via postal ballots would inevitably mean electoral fraud. Trump, who moved his legal residence to Florida last year, voted by mail in the last election. So did I. Twitter added a small, blue line to the bottom of each tweet: "! Get the facts about mail-in ballots". The link leads to numerous articles debunking Trump's claim. At OneZero, Will Oremus explains Twitter's decision making process. By Wednesday, Trump was threatening to "shut them down" and sign an Executive Order on Thursday.

Thursday morning, a leaked draft of the proposed executive order surfaced, and Daphne Keller color-coded it to show which bits matter. In a fact-check of what power Trump actually has for Vox, Shirin Ghaffary quotes a tweet from Laurence Tribe, who calls Trump's threat "legally illiterate". Unlike Facebook, Twitter doesn't accept political ads that Trump can threaten to withdraw, and unlike Facebook and Google, Twitter is too small for an antitrust action. Plus, Trump is addicted to it. At the Washington Post, Tribe adds that Trump himself *is* violating the First Amendment by continuing to block people who criticize his views, a direct violation of a 2019 court order.

What Trump *can* do - and what he appears to intend to do - is push the FTC and Congress to tinker with Section 230 of the Communications Decency Act (1996), which protects online platforms from liability for third-party postings spreading lies and defamation. S230 is widely credited with having helped create the giant Internet businesses we have today; without liability protection, it's generally believed that everything from web comment boards to big social media platforms will become non-viable.

On Twitter, US Senator Ron Wyden (D-OR), one of S230's authors, explains what the law does and does not do. At the New York Times, Peter Baker and Daisuke Wakabayashi argue, I think correctly, that the person a Trump move to weaken S230 will hurt most is...Trump himself. Last month, the Washington Post put the count of Trump's "false or misleading claims" while in office at 18,000 - and the rate has grown over time. Probably most of them have been published on Twitter.

As the lawyer Carrie A. Goldberg points out on Twitter, there are two very different sets of issues surrounding S230. The victims she represents cannot sue the platforms where they met serial rapists who preyed on them or continue to tolerate the revenge porn their exes have posted. Compare that very real damage to the victimhood conservatives are claiming: that the social media platforms are biased against them and disproportionately censor their posts. Goldberg wants access to justice for the victims she represents, who are genuinely harmed, and warns against altering S230 for purposes such as "to protect the right to spread misinformation, conspiracy theory, and misinformation".

However, while Goldberg's focus on her own clients is understandable, Trump's desire to tweet unimpeded about mail-in ballots or shooting protesters is not trivial. We are going to need to separate the issue of how and whether S230 should be updated from Trump's personal behavior and his clearly escalating war with the social medium that helped raise him from joke to viable presidential candidate. The S230 question and how it's handled in Congress is important. Calling out Trump when he flouts clearly stated rules is important. Trump's attempt to wield his power for a personal grudge is important. Trump versus Twitter, which unfortunately is much easier to write about, is a sideshow.


Illustrations: Drunk parrot in a Putney garden (by Simon Bisson; used by permission).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 20, 2020

The beginning of the world as we don't know it

magnolia-1.jpgOddly, the most immediately frightening message of my week was the one from the World Future Society, subject line "URGENT MESSAGE - NOT A DRILL". The text began, "The World Future Society over its 60 years has been preparing for a moment of crisis like this..."

The message caused immediate flashbacks to every post-disaster TV show and movie, from The Leftovers (in which 2% of the world's population mysteriously vanishes) to The Last Man on Earth (in which everyone who isn't in the main cast has died of a virus). In my case, it also reminded me, unfortunately, of the very detailed scenarios I saw posted in the late 1990s to the comp.software.year-2000 Usenet newsgroup, in which survivalists were certain that the Millennium Bug would cause the collapse of society. In one scenario I recall, that collapse was supposed to begin with the banks failing, pass through food riots and cities burning, and end with four-fifths of the world's population dead: the end of the world as we know it (TEOTWAWKI). So what I "heard" in the World Future Society's tone was that the "preppers", who built bunkers, stored sacks of beans, rice, and dried meat, and stockpiled guns, were finally right and this was their chance to prove it.

Naturally, they meant no such thing. What they *did* mean was that futurists have long thought about the impact of various types of existential risks, and that what they want is for as many people as possible to join their effort to 1) protect local government and health authorities, 2) "co-create back-up plans for advanced collaboration in case of societal collapse", and 3) collaborate on possible better futures post-pandemic. Number two still brings those flashbacks, but I like the first goal very much, and the third is on many people's minds. If you want to see more, it's here.

It was one of the notable aspects of the early Internet that everyone looked at what appeared to be a green field for development and sought to fashion it in their own desired image. Some people got what they wanted: China, for example, defying Western pundits who claimed it was impossible, successfully built a controlled national intranet. Facebook, while coming along much later, through zero rating deals with local telcos for its Free Basics, is basically all the Internet people know in countries like Ghana and the Philippines, a phenomenon Global Voices calls "digital colonialism". Something like that mine-to-shape thinking is visible here.

I don't think WFS meant to be scary; what they were saying is in fact what a lot of others are saying, which is that when we start to rebuild after the crisis we have a chance - and a need - to do things differently. At Wired, epidemiologist Larry Brilliant tells Steven Levy he hopes the crisis will "cause us to reexamine what has caused the fractional division we have in [the US]".

At Singularity University's virtual summit on COVID-19 this week, similar optimism was on display (some of it probably unrealistic, like James Ehrlich's land-intensive sustainable villages). More usefully, Jamie Metzl compared the present moment to 1941, when US president Franklin Delano Roosevelt, in the Atlantic Charter, began to imagine how the world might be reshaped after the war ended. Today, Metzl said, "We are the beneficiaries of that process." Therefore, like FDR we should start now to think about how we want to shape our upcoming different geopolitical and technological future. Like net.wars last week and John Naughton at the Guardian, Metzl is worried that the emergency powers we grant today will be hard to dislodge later. Opportunism is open to all.

I would guess that the people who think it's better to bail out businesses than support struggling people also fear permanence will become true of the emergency support measures being passed in multiple countries. One of the most surreal aspects of a surreal time is that in the space of a few weeks actions that a month ago were considered too radical to live are suddenly happening: universal basic income, grounding something like 80% of aviation, even support for *some* limited free health care and paid sick leave in the US.

The crisis is also exposing a profound shift in national capabilities. China could build hospitals in ten days; the US, which used to be able to do that sort of thing, is instead the object of charity from Chinese billionaire Alibaba founder Jack Ma, who sent over half a million test kits and 1 million face masks.

Meanwhile, all of us, with a few billionaire exceptions, are turning to the governments we held in so little regard a few months ago to lead, provide support, and solve problems. Libertarians who want to tear governments down and replace all their functions with free-market interests are exposed as a luxury none of us can afford. Not that we ever could; read Paulina Borsook's 1996 Mother Jones article Cyberselfish if you doubt this.

"It will change almost everything going forward," New York State governor Andrew Cuomo said of the current crisis yesterday. Cuomo, who is emerging as one of the best leaders the US has in an emergency, and his counterparts are undoubtedly too busy trying to manage the present to plan what that future might be like. That is up to us to think about while we're sequestered in our homes.


Illustrations: A local magnolia tree, because it *is* spring.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 12, 2020

Privacy matters

china-alihealth.jpegSometime last week, Laurie Garrett, the Pulitzer Prize-winning author of The Coming Plague, proposed a thought experiment to her interviewer on MSNBC. She had been describing the lockdown procedures in place in China, and mulling the much more limited actions available to the US to mitigate the spread. Imagine, she said (more or less), the police out on the interstate pulling over a truck driver "with his gun rack" and demanding a swab, running a test, and then and there ordering the driver to abandon the truck and putting him in isolation.

Um...even without the gun rack detail...

The 1980s AIDS crisis may have been the first time my generation became aware of the tension between privacy and epidemiology. Understanding what was causing the then-unknown "gay cancer" involved tracing contacts, asking intimate questions, and, once it was better understood, telling patients to contact their former and current sexual partners. At a time when many gay men were still closeted, this often meant painful conversations with wives as well as ex-lovers. (Cue a well-known joke from 1983: "What's the hardest part of having AIDS? Trying to convince your wife you're Haitian.")

The descriptions emerging of how China is working to contain the virus indicate a level of surveillance that - for now - is still unthinkable in the West. In a Hangzhou project, for example, citizens are required to install on their phones the Alipay Health Code app, which assigns them a traffic light code based on their recent contacts and movements - which in turn determines which public and private spaces they're allowed to enter. Paul Mozur, who co-wrote that piece for the New York Times with Raymond Zhong and Aaron Krolik, has posted on Twitter video clips of how this works on the ground, while Ryutaro Uchiyama marvels at Singapore's command and open publication of highly detailed data. This is a level of control that severely frightened people, even in the West, might accept temporarily or in specific circumstances - we do, after all, accept being data-scanned and physically scanned as part of the price of flying. I have no difficulty imagining we might accept barriers and screening before entering nursing homes or hospital wards, but under what conditions would the citizens of democratic societies accept being stopped randomly on the street and having their phones scanned for location and personal contact histories?

The Chinese health code has automated just such intervention. Quite reasonably, at the Guardian, Lily Kuo wonders if the system will be made permanent, essentially hijacking this virus outbreak in order to implement a much deeper system of social control than existed before. Along with all the other risks of this outbreak - deaths, widespread illness, overwhelmed hospitals and medical staff, widespread economic damage, and the mental and emotional stress of isolation, loss, and lockdown - there is a genuine risk that "the new normal" that emerges post-crisis will have vastly more surveillance embedded in it.
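
By way of illustration only - the app's actual inputs and thresholds were never published - the core of such a traffic-light code could be as simple as a few lines of scoring logic. Everything below is an assumption:

```python
# Illustrative sketch of a traffic-light health code. The inputs,
# thresholds, and colour rules are guesses; Alipay's real algorithm
# is opaque, which is part of what makes it alarming.
from dataclasses import dataclass

@dataclass
class CitizenRecord:
    days_since_outbreak_area: int   # inferred from location history
    known_case_contacts: int        # inferred from proximity/contact data

def health_code(record: CitizenRecord) -> str:
    """Map recent movements and contacts to a colour that gates entry."""
    if record.known_case_contacts > 0 or record.days_since_outbreak_area < 7:
        return "red"      # quarantine; barred from public spaces
    if record.days_since_outbreak_area < 14:
        return "yellow"   # restricted entry
    return "green"        # free movement

print(health_code(CitizenRecord(days_since_outbreak_area=10,
                                known_case_contacts=0)))   # -> yellow
```

The simplicity is the point: once location and contact histories flow into a function like this, the same few lines can gate anything.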

Not everyone may think this is bad. On Twitter, Stewart Baker, whose long-held opposition to "warrant-proof" encryption we noted last week, suggested it was time for him to revive his "privacy kills" series. What set him off was a New York Times piece about a Washington-based lab that was not allowed to test swabs they'd collected from flu patients for coronavirus, on the basis that the patients would have to give consent for the change of use. Yes, the constraint sounds stupid and, given the situation, was clearly dangerous. But it would be more reasonable to say that either *this* interpretation or *this* set of rules needs to be changed than to conclude unilaterally that "privacy is bad". Making an exemption for epidemics and public health emergencies is a pretty easy fix that doesn't require up-ending all patient confidentiality on a permanent basis. The populations of even the most democratic, individualistic countries are capable of understanding the temporary need for extreme measures in a crisis. Even the famously national ID-shy UK accepted identity papers during wartime (and then rejected them after the war ended (PDF)).

The irony is that lack of privacy kills, too. At The Atlantic, Zeynep Tufekci argues that extreme surveillance and suppression of freedom of expression paradoxically result in what she calls "authoritarian blindness": a system designed to suppress information can't find out what's really going on. At The Bulwark, Robert Tracinski applies Tufekci's analysis to Donald Trump's habit of labeling anything he doesn't like "fake news" and blaming any events he doesn't like on the "deep state", and concludes that this, too, engenders widespread and dangerous distrust. It's just as hard for a government to know what's really happening when the leader doesn't want to know as when the leader doesn't want anyone *else* to know.

At this point in most countries it's early stages, and as both the virus and fear of it spread, people will be willing to consent to any measure that they believe will keep them and their loved ones safe. But, as Access Now agrees, there will come a day when this is past and we begin again to think about other issues. When that day comes, it will be important to remember that privacy is one of the tools needed to protect public health.


Illustrations: Alipay Health Code in action (press photo).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 6, 2020

Transitive rage

cropped-Spies_and_secrets_banner_GCHQ_Bude_dishes.jpg"Something has changed," a privacy campaigner friend commented last fall, observing that it had become noticeably harder to get politicians to understand and accept the reasons why strong encryption is a necessary technology to protect privacy, security, and, more generally, freedom. This particular fight had been going on since the 1990s, but some political balance had shifted. Mathematical reality of course remains the same. Except in Australia.

At the end of January, Bloomberg published a leaked draft of the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act (EARN IT), backed by US Senators Lindsey Graham (R-SC) and Richard Blumenthal (D-CT). In its analysis, the Center for Democracy and Technology finds the bill authorizes a new government commission, led by the US attorney general, to regulate online speech and, potentially, ban end-to-end encryption. At Lawfare, Stewart Baker, a veteran opponent of strong cryptography, dissents, seeing the bill as combating child exploitation by weakening the legal liability protection afforded by Section 230. Could the attorney general mandate that encryption never qualifies as "best practice"? Yes, even Baker admits, but he still thinks the concerns voiced by CDT and EFF are overblown.

In our real present, our actual attorney general, William Barr, believes "warrant-proof encryption" is dangerous. His office is actively campaigning in favor of exactly the outcome CDT and EFF fear.

Last fall, my friend connected the "change" to recent press coverage of the online spread of child abuse imagery. Several - such as Michael H. Keller and Gabriel J.X. Dance's November story - specifically connected encryption to child exploitation, complaining that Internet companies fail to use existing tools, and that Facebook's plans to encrypt Messenger, "the main source of the imagery", will "vastly limit detection".

What has definitely changed is *how* encryption will be weakened. The 1990s idea was key escrow, a scheme under which individuals using encryption software would deposit copies of their private keys with a trusted third party. After years of opposition, the rise of ecommerce and its concomitant need to secure in-transit financial details eventually led the UK government to drop key escrow before the passage of the Regulation of Investigatory Powers Act (2000), which closed that chapter of the crypto debates. RIPA and its current successor, the Investigatory Powers Act (2016), require individuals to decrypt information or disclose keys to government representatives. There have been three prosecutions.
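
In code terms, key escrow amounts to very little - which was the problem. A minimal sketch, with the "trusted third party" reduced to a dict (real proposals such as the Clipper chip split the deposited key between two agencies):

```python
# Minimal sketch of 1990s-style key escrow. The agency is a stand-in
# dict, and the key is generic random key material rather than any
# specific escrow design of the era.
import secrets

ESCROW_AGENCY = {}   # stand-in for the trusted third party

def generate_escrowed_key(user_id: str) -> bytes:
    key = secrets.token_bytes(32)    # the user's private key material
    ESCROW_AGENCY[user_id] = key     # the escrow step: deposit a copy
    return key

# Any warrant against - or breach of - ESCROW_AGENCY exposes every
# deposited key at once: the structural weakness of the scheme.
```

A single repository of everyone's keys is a single point of catastrophic failure, which is much of why the scheme never survived contact with the security community.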

In 2013, we learned from Edward Snowden's revelations that the security services had not accepted defeat but had gone dark, deliberately weakening standards. The result: the Internet engineering community began the work of hardening the Internet as much as they could.

In those intervening years, though, outside of a few very limited cases - SSL, used to secure web transactions - very few individuals actually used encryption. Email and messaging remained largely open. The hardening exercise Snowden set off eventually included companies like Facebook, which turned on end-to-end encryption for all of WhatsApp in 2016, overnight turning 1 billion people into crypto users and making real the long-ago dream of the crypto nerds of being lost in the noise. If 1 billion people use messaging and only a few hundred use encryption, the encryption itself is a flag that draws attention. If 1 billion people use encrypted messaging, those few hundred are indistinguishable.

In June 2018, at the 20th birthday of the Foundation for Information Policy Research, Ross Anderson predicted that the battle over encryption would move to device hacking. The reasoning is simple: if they can't read the data in transit because of end-to-end encryption, they will work to access it at the point of consumption, since it will be cleartext at that point. Anderson is likely still to be right - the IPA includes provisions allowing the security services to engage in "bulk equipment interference", which means, less politely, "hacking".

At the same time, however, it seems clear that those governments that are in a position to push back at the technology companies now figure that a backdoor in the few giant services almost everyone uses brings back the good old days when GCHQ could just put in a call to BT. Game the big services, and the weirdos who use Signal and other non-mainstream services will stick out again.

At Stanford's Center for Internet and Society, Riana Pfefferkorn believes the DoJ is opportunistically exploiting the techlash much the way the security services rushed through historically and politically unacceptable surveillance provisions in the first few shocked months after the 9/11 attacks. Pfefferkorn calls it "transitive rage": Congresspeople are already mad at the technology companies for spreading false news, exploiting personal data, and not paying taxes, so encryption is another thing to be mad about - and pass legislation to prevent. The IPA and Australia's Assistance and Access Act are suddenly models. Plus, as UN Special Rapporteur David Kaye writes in his book Speech Police: The Global Struggle to Govern the Internet, "Governments see that company power and are jealous of it, as they should be."

Pfefferkorn goes on to point out the inconsistency of allowing transitive rage to dictate banning secure encryption: it protects user privacy, sometimes against the very companies Congress is mad at. We'll let Alec Muffett have the last word, reminding us that tomorrow's children's freedom is also worth protecting.


Illustrations: GCHQ's Bude listening post, at dawn (by wizzlewick at Wikimedia, CC3.0).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


February 14, 2020

Pushy algorithms

One consequence of the last three and a half years of British politics, which saw everything sucked into the Bermuda Triangle of Brexit debates, is that things that appeared to have fallen off the back of the government's agenda are beginning to reemerge like so many sacked government ministers hearing of an impending cabinet reshuffle and hoping for reinstatement.

One such is age verification, which was enshrined in the Digital Economy Act (2017) and last seen being dropped to wait for the online harms bill.

A Westminster Forum seminar on protecting children online, held shortly before the UK's December 2019 general election, reflected that uncertainty. "At one stage it looked as if we were going to lead the world," Paul Herbert lamented before predicting it would be back "sooner or later".

The expectation for this legislation was set last spring, when the government released the Online Harms white paper. The idea was that a duty of care should be imposed on online platforms, effectively defined as any business-owned website that hosts "user-generated content or user interactions, for example through comments, forums, or video sharing". Clearly they meant to target everyone's current scapegoat, the big social media platforms, but "comments" is broad enough to include any ecommerce site that accepts user reviews. A second difficulty is the variety of harms they're concerned about: radicalization, suicide, self-harm, bullying. They can't all have the same solution even if, like one bereaved father, you blame "pushy algorithms".

The consultation exercise closed in July, and this week the government released its response. The main points:

- There will be plentiful safeguards to protect freedom of expression, including distinguishing between illegal content and content that's legal but harmful; the new rules will also require platforms to publish and transparently enforce their own rules, with mechanisms for redress. Child abuse and exploitation and terrorist speech will have the highest priority for removal.

- The regulator of choice will be Ofcom, the agency that already oversees broadcasting and the telecommunications industry. (Previously, enforcing age verification was going to be pushed to the British Board of Film Classification.)

- The government is still considering what liability may be imposed on senior management of businesses that fall under the scope of the law, which it believes is less than 5% of British businesses.

- Companies are expected to use tools to prevent children from accessing age-inappropriate content "and protect them from other harms" - including "age assurance and age verification technologies". The response adds, "This would achieve our objective of protecting children from online pornography, and would also fulfill the aims of the Digital Economy Act."

There are some obvious problems. The privacy aspects of the mechanisms proposed for age verification remain disturbing. The government's 5% estimate of businesses that will be affected is almost certainly a wild underestimate. (Is a Patreon page with comments the responsibility of the person or business that owns it, or of Patreon itself?) At the Guardian, Alex Hern explains the impact on businesses. The nastiest tabloid journalism is not within scope.

On Twitter, technology lawyer Neil Brown identifies four fallacies in the white paper: the "Wild West web"; that privately operated computer systems are public spaces; that those operating public spaces owe their users a duty of care; and that the offline world is safe by default. The bigger issue, as a commenter points out, is that the privately operated computer systems the UK government seeks to regulate are foreign-owned. The paper suggests enforcement could include punishing company executives personally and ordering UK ISPs to block non-compliant sites.

More interesting and much less discussed is the push for "age-appropriate design" as a method of harm reduction. This approach was proposed by Lorna Woods and Will Perrin in January 2019. At the Westminster eForum, Woods explained, "It is looking at the design of the platforms and the services, not necessarily about ensuring you've got the latest generation of AI that can identify nasty comments and take it down."

It's impossible not to sympathize with her argument that the costs of move fast and break things are imposed on the rest of society. However, when she started talking about doing risk assessments for nascent products and services I could only think she's never been close to software developers, who've known for decades that from the instant software goes out into the hands of users they will use it in ways no one ever imagined. So it's hard to see how it will work, though last year the ICO proposed a code of practice.

The online harms bill also has to be seen in the context of all the rest of the monitoring that is being directed at children in the name of keeping them - and the rest of us - safe. DefendDigital.me has done extensive work to highlight the impact of such programs as Prevent, which requires schools and libraries to monitor children's use of the Internet to watch for signs of radicalization, and the more than 20 databases that collect details of every aspect of children's educational lives. Last month, one of these - the Learning Records Service - was caught granting betting companies access to personal data about 28 million children. DefendDigital.me has called for an Educational Rights Act. This idea could be usefully expanded to include children's online rights more broadly.


Illustrations: Time magazine's 1995 "Cyberporn" cover, which marked the first children-Internet panic.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 24, 2020

The inevitability narrative

"We could create a new blueprint," Woody Hartzog said in a rare moment of hope on Wednesday at this year's Computers, Privacy, and Data Protection conference, in a panel on facial recognition. He went on to stress the need to move outside the model that has governed privacy for the last two decades: get consent, roll out technology. Not necessarily in that order.

A few minutes earlier, he had said, "I think facial recognition is the most dangerous surveillance technology ever invented - so attractive to governments and industry to deploy in many ways and so ripe for abuse, and the mechanisms we have so weak to confront the harms it poses that the only way to mitigate the harms is to ban it."

This week, a leaked draft white paper revealed that the EU is considering, as one of five options, banning the use of facial recognition in public places. In general, the EU has been pouring money into AI research, largely in pursuit of economic opportunity: if the EU doesn't develop its own AI technologies, the argument goes, Europe will have to buy them from China or the United States. Who wants to be sandwiched between those two?

This level of investment is not available to most of the world's countries, as Julia Powles elsewhere pointed out with respect to AI more generally. Her country, Australia, is destined to be a "technology importer and data exporter", no matter how the three-pronged race comes out. "The promises of AI are unproven, and the risks are clear," she said. "The real reason we need to regulate is that it imposes a dramatic acceleration on the conditions of the unrestrained digital extractive economy." In other words, the companies behind AI will have even greater capacity to grind us up as dinosaur bones and use the results to manipulate us to their advantage.

At this event last year there was a general recognition that, less than a year after the passage of the General Data Protection Regulation, it wasn't going to be an adequate approach to the growth of tracking through the physical world. This year, the conference is awash in AI to a truly extraordinary extent - literally dozens of sessions: if it's not AI in policing, it's AI and data protection, ethics, human rights, algorithmic fairness, or AI embedded in autonomous vehicles. Hartzog's panel was one of at least half a dozen on facial recognition, which is AI plus biometrics plus CCTV and other cameras. As interesting are the omissions: in two full days I have yet to hear anything about smart speakers or Amazon Ring doorbells, both proliferating wildly in the soon-to-be non-EU UK.

These technologies are landing on us shockingly fast. This time last year, automated facial recognition wasn't even on the map. It blew up just last May, when Big Brother Watch pushed the issue into everyone's consciousness by launching a campaign to stop the police from using what is still a highly flawed technology. But we can't lean too heavily on the ridiculous - 98%! - inaccuracy of its real-world trials, because as it becomes more accurate it will become even more dangerous to anyone on the wrong list. Here, it has become clear that it's being rapidly followed by "emotional recognition", a build-out of technology pioneered 25 years ago at MIT by Rosalind Picard under the rubric "affective computing".

"Is it enough to ban facial recognition?" a questioner asked. "Or should we ban cameras?"

Probably everyone here is carrying at least two cameras (pause to count: two on my phone, one on my laptop).

Everyone here is also conscious that last week, Kashmir Hill broke the story that the previously unknown, Peter Thiel-backed company Clearview AI had scraped 3 billion facial images off social media and other sites to create a database that enables its law enforcement customers to grab a single photo and get back matches from dozens of online sites. As Hill reminds us, companies like Facebook have been able to do this since 2011, though at the time - just eight and a half years ago! - this was technology that Google (though not Facebook) thought was "too creepy" to implement.

In the 2013 paper A Theory of Creepy, Omer Tene and Jules Polonetsky cite three kinds of "creepy" that apply to new technologies or new uses: it breaks traditional social norms; it shows the disconnect between the norms of engineers and those of the rest of society; or applicable norms don't exist yet. AI often breaks all three. Automated, pervasive facial recognition certainly does.

And so it seems legitimate to ask: do we really want to live in a world where it's impossible to go anywhere without being followed? "We didn't ban dangerous drugs or cars," has been a recurrent rebuttal. No, but as various speakers reminded, we did constrain them to become much safer. (And we did ban some drugs.) We should resist, Hartzog suggested, "the inevitability narrative".

Instead, the reality is that, as Lokke Moerel put it, "We have this kind of AI because this is the technology and expertise we have."

One panel pointed us at the AI universal guidelines, and encouraged us to sign. We need that - and so much more.


Illustrations: Orwell's house at 22 Portobello Road, London, complete with CCTV camera.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 29, 2019

Open season

With no ado, here's the money quote:

The [US Trade Representative] team is keen to move into the formal phase of negotiations. Ahead of the publication of UK negotiating objectives, there [is] now little that we will be able to achieve in further pre-negotiation engagement. USTR officials noted continued pressure from their political leadership to pursue an FTA [free trade agreement] and a desire to be fully prepared for the launch of negotiations after the end of October. They envisage a high cadence negotiation - with rounds every 6 weeks - but it was interesting that my opposite number thought that there would remain a political and resource commitment to a UK negotiation even if it were thought that the chances of completing negotiations in a Trump first term were low. He felt that being able to point to advanced negotiations with the UK was viewed as having political advantages for the President going in to the 2020 elections. USTR were also clear that the UK-EU situation would be determinative: there would be all to play for in a No Deal situation but UK commitment to the Customs Union and Single Market would make a UK-U.S. FTA a non-starter.

This quote appears on page two of one of the six leaked reports that UK Labour leader Jeremy Corbyn flourished at a press conference this week. The reports summarize the US-UK Trade and Investment Working Group's efforts to negotiate a free trade agreement between the US and post-Brexit Britain (if and when). The quote dates to mid-July 2019; to recap, Boris Johnson became prime minister on July 24 swearing the UK would exit the EU on October 31.

Three key points jump out:

- Donald Trump thinks a deal with Britain will help him win re-election next year. This is not a selling point to most people in Britain.

- The US negotiators condition the agreement on a no-deal Brexit - the most damaging option for the UK and European economies. Despite the last Parliament's efforts, this could still happen because two cliff edges still loom: the revised January 31 exit date, and December 2020, when the transition period is due to end (and which Johnson swears he won't extend). Whose interests is Johnson prioritizing here?

- Wednesday's YouGov model poll predicts that Johnson will win a "comfortable" majority, suggesting that the cliff edge remains a serious threat.

At Open Democracy, Nick Dearden sums up the worst damage. Among other things, it shows the revival of some of the most-disliked provisions in the abandoned Transatlantic Trade Investment Partnership treaty, most notably investor-state dispute settlement (ISDS), which grants corporations the right to sue, in secret tribunals, governments that pass laws they oppose. As Dearden writes, these documents make clear that "taking back control" means "giving the US control". The Trade Justice Movement's predictions from earlier this year seem accurate enough.

On Twitter, UKTrade Forum co-founder David Henig has posted a thread explaining why adopting a US-first trade policy will be disastrous for British farmers and manufacturers.

Global Justice's analysis highlights both the power imbalance, and the US's demands for free rein. It's also clear that Johnson can say the NHS is not on the table, Trump can say the opposite, and both can be telling some value of truth, because the focus is on pharmaceutical pricing and patent extension. An unscrupulous government filled with short-term profiteers might figure that they'll be gone by the time the costs become clear.

For net.wars, this is all background and outside our area of expertise. The picture is equally alarming for digital rights. In 1999, Simon Davies predicted that data protection would become a trade war between the US and EU. Even a partial reading of these documents suggests that now, 20 years on, may be the moment. Data protection is a hinge, in that you might, at some expense, manage varying food standards for different trading regions, but data regimes want to be unitary. The UK can either align with the EU and GDPR, which enshrines privacy and data protection as human rights, or with the US and its technology giants. This goes double if Max Schrems, whose legal action brought down the Safe Harbor agreement, wins his NOYB case against Privacy Shield. Choose the EU and GDPR, and the US likely walks, as the February 2019 summary of negotiation objectives (PDF) makes plain. That document is also clear that the US wants to bar the UK from mandating local data storage, restricting cross-border data flows, imposing customs duties on digital products, requiring the disclosure of computer code or algorithms, and holding online platforms liable for third-party content. Many of these are opposite to the EU's general direction of travel.

The other hinge issue is the absolute US ban on mentioning climate change. The EU just declared a climate emergency and set out an action list.

The UK cannot hope to play both sides. It's hard to overstress how much worse a position these negotiations offer the UK than the one it now holds as a full EU partner - to the US, the UK will always be the lesser entity.

Illustrations: A large bird attacking a stag (Hendrik Hondius, 1610; from LA County Museum of Art, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 8, 2019

Burn rate

One of my favorite moments in the 1996 sitcom 3rd Rock from the Sun was when Dick (John Lithgow), the high commander of the aliens' mission to Earth, marveled at humans' ability to live every day as though they didn't know they were going to die. For everyone but Woody Allen and the terminally ill, that denial is useful: it allows us to get up every day and do things like watch silly sitcoms without being overwhelmed by the sense of doom.

In other contexts, the denial of existential limits is less helpful: being aware of the limits of capital reminds us to use it wisely. During those 3rd Rock years, I was baffled by the recklessly rapid adoption of the Internet for serious stuff - banking, hospital systems - apparently without recognizing that the Internet was still a somewhat experimental network and lacked the service level agreements and robust engineering provided by the legacy telephone networks. During Silicon Valley's 2007 to 2009 bout of climate change concern it was an exercise in cognitive dissonance to watch CEOs explain the green values they were imposing on themselves and their families while simultaneously touting their companies' products and services, which required greater dependence on electronics, power grids, and always-on connections. At an event on nanotechnology in medicine, it was striking that the presenting researchers never mentioned power use. The mounting consciousness of the climate crisis has proceeded in a separate silo from the one in which the "there's an app for that" industries have gone on designing a lifestyle of total technological dependence, apparently on the basis that electrical power is a constant and the Internet is never interrupted. (Tell that to my broadband during those missing six hours last Thursday.)

The last few weeks in California have shown that we need to completely rethink this dependence. At The Verge, Nicole Wetsman examines the fragility of American hospital systems. Many do have generators, but few have thought-out plans for managing during a blackout. As she writes, hospitals may be overwhelmed by unexpected influxes of patients from nursing homes that never mentioned the hospital was their fallback plan and local residents searching for somewhere to charge their phones. And, Wetsman notes, electronic patient records bring hard choices: do you spend your limited amount of power on keeping the medicines cold, or do you keep the computer system running?

Right now, with paper records still so recent, staff may be able to dust off their old habits and revert, but ten years hence that won't be true. British Airways' 2017 holiday-weekend IT collapse at Heathrow provides a great example of what happens when there is (apparently) no plan and less experience.

At the Atlantic, Alexis Madrigal warns that California's blackouts and wildfires are samples of our future; the toxic "technical debt" of accumulated underinvestment in American infrastructure is being exposed by the abruptly increased weight of climate change. How does it happen that the fifth largest economy in the world has millions of people with no electric power? The answer, Madrigal (and others) writes, is the diversion to shareholders' dividends of capital that should have been spent improving the grid and burying power lines. Add higher temperatures, less rainfall, and exceptional drought, and here's your choice: power outages or fires?

Someone like me, with a relatively simple life, a lot of paper records, sufficient resources, and a support network of friends and shopkeepers, can manage. Someone on a zero-hours contract, whose life and work depend on their phone, who can't cook, and doesn't know how to navigate the world of people if they can't check the website to find out why the water is out...can't. In these crises we always hear about the sick and the elderly, but I also worry about the 20-somethings whose lives are predicated on the Internet always being there because it always has been.

A forgotten aspect is the loss of social infrastructure, as Aditya Chakrabortty writes in the Guardian. Everyone notes that since online retail has bitten great chunks off Britain's high streets, stores have closed and hub businesses like banks have departed. Chakrabortty points out that this is only half of the depredation in those towns: the last ten years of Conservative austerity have sliced away social support systems such as youth clubs and libraries. Those social systems are the caulk that gives resilience in times of stress, and they are vanishing.

Both pieces ought to be taken as a serious warning about the many kinds of capital we are burning through, especially when read in conjunction with Derek Thompson's contention that the "millennial lifestyle" is ending. "If you wake up on a Casper mattress, work out with a Peloton before breakfast, Uber to your desk at a WeWork, order DoorDash for lunch, take a Lyft home, and get dinner through Postmates, you've interacted with seven companies that will collectively lose nearly $14 billion this year," he observes. He could have added Netflix, whose 2019 burn rate is $3 billion. And, he continues, WeWork's travails are making venture capitalists and bond markets remember that losing money, long-term, is not a good bet, particularly when interest rates start to rise.

So: climate crisis, brittle systems, and unsustainable lifestyles. We are burning through every kind of capital at pace.

Illustrations: California wildfire, 2008.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 18, 2019

I never paid for it in my life

So Jaron Lanier is back, arguing that we should be paid for our data. He was last seen in net.wars two years back, arguing that if people had started by charging for email we would not now be the battery fuel for "behavior modification empires". In a 2018 TED talk, he continued that we should pay for Facebook and Google in order to "fix the Internet".

Lanier's latest disquisition goes like this: the big companies are making billions from our data. We should have some of it. That way lies human dignity and the feeling that our lives are meaningful. And fixing Facebook!

The first problem is that fixing Facebook is not the same as fixing the Internet, a distinction Lanier surely understands. The Internet is a telecommunications network; Facebook is a business. You can profoundly change a business by changing who pays for its services and how, but changing a telecommunications network that underpins millions of organizations and billions of people in hundreds of countries is a wholly different proposition. If you mean, as Lanier seems to, that what you want to change is people's belief that content on the Internet should be free, then what you want to "fix" is the people, not the network. And "fixing" people at scale is insanely hard. Just ask health professionals or teachers. We'd need new incentives.

Paying for our data is not one of those incentives. Instead of encouraging people to think more carefully about privacy, being paid to post to Facebook would encourage people to indiscriminately upload more data. It would add payment intermediaries to today's merry band of people profiting from our online activities, thereby creating a whole new class of metadata for law enforcement to claim it must be able to access.

A bigger issue is that even economists struggle to understand how to price data; as Diane Coyle asked last year, "Does data age like fish or like wine?" Google's recent announcement that it would allow users to set their browser histories to auto-delete after three or 12 months has been met by the response that such data isn't worth much three months on, though the privacy damage may still be incalculable. We already do have a class of people - "influencers" - who get paid for their social media postings, and as Chris Stokel-Walker portrays some of their lives, it ain't fun. Basically, while paying us all for our postings would put a serious dent into the revenues of companies like Google and Facebook, it would also turn our hobbies into jobs.

So a significant issue is that we would be selling our data with no concept of its true value or what we were actually selling to companies that at least know how much they can make from it. Financial experts call this "information asymmetry". Even if you assume that Lanier's proposed "MID" intermediaries that would broker such sales will rapidly amass sufficient understanding to reverse that, the reality remains that we can't know what we're selling. No one happily posting their kids' photos to Flickr 14 years ago thought that in 2014 Yahoo, which owned the site from 2005 to 2015, was going to scrape the photos into a database and offer it to researchers to train their AI systems that would then be used to track protesters, spy on the public, and help China surveil its Uighur population.

Which leads to this question: what fire sales might a struggling company with significant "data assets" consider? Lanier's argument is entirely US-centric: data as commodity. This kind of thinking has already led Google to pay homeless people in Atlanta to scan their faces in order to create a more diverse training dataset (a valid goal, but oh, the execution).

In a paywalled paper for Harvard Business Review, Lanier apparently argues for viewing data not as commodity but as labor. That view, he claims, opens the way to collective bargaining via "data labor unions" and mass strikes.

Lanier's examples, however, are all drawn from active data creation: uploading and tagging photos, writing postings. Yet much of the data the technology companies trade in is stuff we unconsciously create - "data exhaust" - as we go through our online lives: trails of web browsing histories, payment records, mouse movements. At Tech Liberation, Will Rinehart critiques Lanier's estimates, both the amount (Lanier suggests a four-person household could gain $20,000 a year) and the failure to consider the differences between and interactions among the three classes of volunteered, observed, and inferred data. It's the inferences that Facebook and Google really get paid for. I'd also add the difference between data we can opt to emit (I don't *have* to type postings directly into Facebook knowing the company is saving every character) and data we have no choice about (passport information to airlines, tax data to governments). The difference matters: you can revise, rethink, or take back a posting; you have no idea what your unconscious mouse movements reveal and no ability to edit them. You cannot know what you have sold.
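As a toy illustration of that split - every field name below is invented - note that only the first layer is something the user ever saw, let alone can revise or take back:

    # Python sketch: volunteered vs. observed vs. inferred data.
    volunteered = {"posting": "Great concert last night!"}         # typed deliberately
    observed = {"mouse_path_ms": [13, 9, 41], "pages_viewed": 27}  # data exhaust
    inferred = {"persuadability": 0.83, "price_ceiling": 40.0}     # the real product

    # Deleting the posting removes only the volunteered layer; the exhaust,
    # and the inferences built on it, persist.

It's the third dictionary that would be hardest for the seller to value - and the one the buyer already knows how to monetize.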

Outside the US, the growing consensus is that data protection is a fundamental human right. There's an analogy to be made here between bodily integrity and personal integrity more broadly. Even in the US, you can't sell your kidney. Isn't your data just as intimate a part of you?


Illustrations: Jaron Lanier in 2017 with Luke Robert Mason (photo by Eva Pascoe).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 11, 2019

The China syndrome

About five years ago, a friend commented that despite the early belief - promulgated by, among others, then-US president Bill Clinton and vice-president Al Gore - that the Internet would spread democracy around the world, so far the opposite seemed to be the case. I suggested perhaps it's like the rising sea level, where local results don't give the full picture.

Much longer ago, I remember wondering how Americans would react when large parts of the Internet were in Chinese. My friend shrugged. Why should they care? They don't have to read them.

This week's news shows that we may both have been wrong. The reality, as the veteran technology journalist Charles Arthur suggested in the Wednesday and Thursday editions of his weekday news digest, The Overspill, is that the Hong Kong protests are exposing and enabling the collision between China's censorship controls and Western standards for free speech, aided by companies anxious to access the Chinese market. We may have thought we were exporting the First Amendment, but it doesn't apply to non-government entities.

It's only relatively recently that it's become generally acknowledged that governments can harness the Internet themselves. In 2008, the New York Times thought there was a significant domestic backlash against China's censors; by 2018, the Times was admitting China's success, first in walling off its own edited version of the Internet, and second in building rival giant technology companies and speeding past the US in areas such as AI, smartphone payments, and media creation.

So, this week. On Saturday, Demos researcher Carl Miller documented an ongoing edit war at Wikipedia: 1,600 "tendentious" edits across 22 articles on topics such as Taiwan, Tiananmen Square, and the Dalai Lama to "systematically correct what [officials and academics from within China] argue are serious anti-Chinese biases endemic across Wikipedia".

On Sunday, the general manager of the Houston Rockets, an American professional basketball team, withdrew a tweet supporting the Hong Kong protesters after it caused an outcry in China. Who knew China was the largest international market for the National Basketball Association? On Tuesday, China responded that it wouldn't show NBA pre-season games, and Chinese fans may boycott the games scheduled for Shanghai. The NBA commissioner eventually released a statement saying the organization would not regulate what players or managers say. The Americanness of basketball: restored.

Also on Tuesday, Activision Blizzard suspended Chung Ng Wai, a professional player of the company's digital card game, Hearthstone, after he expressed support for the Hong Kong protesters in a post-win official interview; the company also fired the two interviewers. Chung's suspension is set to last for a year, and includes forfeiting his thousands of dollars of 2019 prize money. A group of the company's employees walked out in protest, and the gamer backlash against the company was such that the moderators briefly took the Blizzard subreddit private in order to control the flood of angry posts (it was reopened within a day). By Wednesday, EU-based Hearthstone gamers were beginning to consider mounting a denial-of-service attack against Blizzard by sending so many subject access requests under the General Data Protection Regulation that complying with the legal requirement to fulfill them would swamp the company's resources.

On Wednesday, numerous media outlets reported that in its latest iOS update Apple has removed the Taiwan flag emoji from the keyboard for users who have set their location to Hong Kong or Macau - you can still use the emoji, but the procedure for doing so is more elaborate. (We will save the rant about the uselessness of these unreadable blobs for another time.)

More seriously, also on Wednesday, the New York Times reported that Apple has withdrawn the HKmap.live app that Hong Kong protesters were using to track police, after China's state media accused the company of protecting the protesters.

Local versus global is a long-standing variety of net.war, dating back to the 1991 Amateur Action bulletin board case. At Stratechery, Ben Thompson discusses the China-US cultural clash, with particular reference to TikTok, the first Chinese company to reach a global market; a couple of weeks ago, the Guardian revealed the site's censorship policies.

Thompson argues that, "Attempts by China to leverage market access into self-censorship by U.S. companies should also be treated as trade violations that are subject to retaliation." Maybe. But American companies can't win at this game.

In her recent book, The Big Nine, Amy Webb discusses China's AI advantage as it pours resources and, above all, data into becoming the world leader via Baidu, Alibaba, and Tencent, which have grown to rival Google, Amazon, and Facebook without ever needing to leave home. Beyond that, China has been spreading its influence by funding telecommunications infrastructure. The Belt and Road initiative has projects in 152 countries. In this, China is taking advantage of the present US administration's inward turn and worldwide loss of trust.

After reviewing the NBA's ultimate decision, Thompson writes, "I am increasingly convinced this is the point every company dealing with China will reach: what matters more, money or values?" The answer will always be money; whose values count will depend on which market they can least afford to alienate. This week is just a coincidental concatenation of early skirmishes; just wait for the Internet of Things.

Illustrations: The Great Wall of China (by Hao Wei, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 20, 2019

Jumping the shark

This week, the Wall Street Journal claimed that Amazon has begun ordering item search results according to their profitability for the company. (The story is summarized at Ars Technica, for non-WSJ subscribers.) Amazon has called the story "not factually accurate", though, unsurprisingly, it declined to explain its algorithm's inner workings.

My reaction: "Well, that's a jump the shark moment."

Of course we know that every business seeks to optimize profits. Supermarkets - doubtless including Amazon's Whole Foods - choose the products to place at the ends of aisles and at cash registers only partly because those are the ones that tempt customers to make impulse buys but also because the product manufacturers pay them to do so. Both halves of that motivation have to be there. But Amazon's business and reputation are built on being fiercely devoted to putting customers first. So what makes this story different is the - perhaps only very slight - change in the weighting given to customer welfare.

In this, Amazon is following a time-honored Silicon Valley tradition (despite being based 800 miles north, in Seattle). In 2017, the EU fined Google $2.7 billion for favoring its own services in its shopping search results.

Obviously, Amazon has done and is doing far worse things. Just a few days earlier, the company announced changes that will remove health benefits for nearly 2,000 part-time employees at Whole Foods. It seems capriciously cruel: the richest man in the world, who last year told Business Insider he couldn't think of anything to spend his money on other than space travel, is willing to actively harm (given the US health system) some of the most vulnerable people who work for him. Even if he can't see it himself, you'd think the company's PR department would.

And that's just the latest in the catalogue. The company's warehouse workers regularly tell horror stories about their grueling jobs - and have for years. It will pay no US federal taxes this year for the second year in a row.

Whether or not it's true, one reason the story is so plausible is that increasingly we have no idea how businesses make their money. We *assume* we know that Coca-Cola's primary business is selling soft drinks, airlines' is selling seats on planes, and Spotify's is the sort of combination of subscriptions and advertising that has sustained many different media for a century. But not so fast: in 2017, Bloomberg reported that actually airlines make more money selling miles than they do from selling seats. Maybe the miles can't exist without the seats, but motives go where the money is, so this business reality must have consequences. Spotify, it turns out, has been building itself up into the third-largest player in digital advertising, collaborating with the PR and advertising holding company WPP to mine the billions of data points collected daily from its users' playlists and giving advertisers a new meaning for the term "mood music".

In the simplest mental model, we might expect Amazon to profit more from items it sells itself than from those sold on its platform by marketplace sellers. In fact, Amazon noted in its 2008 annual report (PDF, see p32) that its profits were about the same either way. This year, however, the EU opened an investigation into whether the company is taking advantage of the data it collects about third-party sales to identify profitable products it can cherry-pick and make for itself. No one, Lina Khan wrote in 2017 in a discussion of the modern failings of the US's antitrust enforcement, valued the data Amazon collects from smaller sellers' transactions, not even in those annual reports. Revenue-neutral, indeed.

In fact, Amazon's biggest source of profits is not its retail division, whose profitability even The Motley Fool can't work out. Amazon's biggest profit center is Amazon Web Services; *Netflix* was built on it. It may in fact be the case that the cloud business enables Amazon to act as an increasingly rapacious predator feasting on the rest of retail, a business model familiar from Uber (though it's far from the only one).

So Spotify is a music service in the same sense that Adobe and Oracle are software companies. Probably none of their original business plans focused on data exploitation, and their "pivot" (or bait and switch) into data passes us by while Facebook and Google get all the stick. Amazon may be the most problematic; it is, as Kashmir Hill discovered earlier this year, hard to do without Google but impossible to excise Amazon from your life. Finding alternatives for retail can still be done with enough diligence, but opting out of every business that depends on its cloud services can't be done.

Amazon was doing very well at escaping the negative scrutiny accruing to Facebook, Uber, and Google, all while becoming arguably the bigger threat, in part because we think of it as a nice company that sends us things. But if its retail customers are becoming just fungible piles of data to be optimized, that's a systemic failure the company can't reverse by restoring 2,000 people's health benefits, or paying taxes, or getting its owner to say, oh, yeah, space travel...what was I thinking?


Illustrations: Great white shark (via Sharkcrew at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 30, 2019

The Fregoli delusion

In biology, a monoculture is a bad thing. If there's only one type of banana, a fungus can wipe out the entire species instead of, as now, just the most popular one. If every restaurant depends on Yelp to find its customers, Yelp's decision to replace restaurants' phone numbers with ones under its own control is a serious threat. And if, as we wrote here some years ago, everyone buys everything from Amazon, gets all their entertainment from Netflix, and gets all their mapping, email, and web browsing from Google, what difference does it make that you're iconoclastically running Ubuntu underneath?

The same should be true in the culture of software development. It ought to be obvious that a monoculture is as dangerous there as on a farm. Because: new ideas, robustness, and innovation all come from mixing. Plenty of business books even say this. It's why research divisions create public spaces, so people from different disciplines will cross-fertilize. It's why people and large businesses live in cities.

And yet, as the journalist Emily Chang documents in her 2018 book Brotopia: Breaking Up the Boys' Club of Silicon Valley, Silicon Valley technology companies have deliberately spent the last couple of decades progressively narrowing their culture. To a large extent, she blames the spreading influence of the Paypal Mafia. At Paypal's founding, she writes, this group, which includes Palantir founder Peter Thiel, LinkedIn founder Reid Hoffman, and Tesla supremo Elon Musk, adopted the basic principle that to make a startup lean, fast-moving, and efficient you needed a team who thought alike. Paypal's success and the diaspora of its early alumni disseminated a culture in which hiring people like you was a *strategy*. This is what #MeToo and fights for equality are up against.

Businesses are as prone to believing superstitions as any other group of people, and unicorn successes are unpredictable enough to fuel weird beliefs, especially in an already-insular place like Silicon Valley. Yet, Chang finds much earlier roots. In the mid-1960s, System Development Corporation hired psychologists William Cannon and Dallis Perry to create a profile to help it to identify recruits who would enjoy the new profession of computer programming. They interviewed 1,378 mostly male programmers, and found this common factor: "They don't like people." And so the idea that "antisocial" was a qualification was born, spreading outwards through increasingly popular "personality tests" and, because of the cultural differences in the way girls and boys are socialized, gradually and systematically excluding women.

Chang's focus is broad, surveying the landscape of companies and practices. For personal inside experiences, you might try Ellen Pao's Reset: My Fight for Inclusion and Lasting Change, which documents the experiences at Kleiner Perkins, which led her to bring a lawsuit, and at Reddit, where she was pilloried for trying to reduce some of the system's toxicity. Or, for a broader range, try Lean Out, a collection of personal stories edited by Elissa Shevinsky.

Chang finds that even Google, which began with an aggressive policy of hiring female engineers that netted it technology leaders Susan Wojcicki, CEO of YouTube, Marissa Mayer, who went on to try to rescue Yahoo, and Sheryl Sandberg, now COO of Facebook, failed in the long term. Today its male-female ratio is average for Silicon Valley. She cites Slack as a notable exception; founder Stewart Butterfield set out to build a different kind of workplace.

In that sense, Slack may be the opposite of Facebook. In Zucked: Waking Up to the Facebook Catastrophe, Roger McNamee tells the mea culpa story of his early mentorship of Mark Zuckerberg and the company's slow pivot into posing what he believes are truly dangerous problems. What's interesting to read in tandem with Chang's book is his story of the way Silicon Valley hiring changed. Until around 2000, hiring rewarded skill and experience; the limitations on memory, storage, and processing power meant companies needed trained and experienced engineers. Facebook, however, came along at the moment when those limitations had vanished and as the dot-com bust finished playing out. Suddenly, products could be built and scaled up much faster; open source libraries and the arrival of cloud suppliers meant they could be developed by less experienced, less skilled, *younger*, much *cheaper* people; and products could be free, paid for by advertising. Couple this with 20 years of Reagan deregulation and the influence, which he also cites, of the Paypal Mafia, and you have the recipe for today's discontents. McNamee writes that he is unsure what the solution is; his best effort at the moment appears to be advising the Center for Humane Technology, led by former Google design ethicist Tristan Harris.

These books go a long way toward explaining the world Caroline Criado-Perez describes in 2019's Invisible Women: Data Bias in a World Designed for Men. Her discussion is not limited to Silicon Valley - crash test dummies, medical drugs and practices, and workplace design all appear - but her main point applies. If you think of one type of human as "default normal", you wind up with a world that's dangerous for everyone else.

You end up, as she doesn't say, with a monoculture as destructive to the world of ideas as those fungi are to Cavendish bananas. What Zucked and Brotopia explain is how we got there.


Illustrations: Still from Anomalisa (2015).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 9, 2019

Collision course

The walk from my house to the tube station has changed very little in 30 years. The houses and their front gardens look more or less the same, although at least two have been massively remodeled on the inside. More change is visible around the tube station, where shops have changed hands as their owners retired. The old fruit and vegetable shop now sells wine; the weird old shop that sold crystals and carved stones is now a chain drug store. One of the hardware stores is a (very good) restaurant and the other was subsumed into the locally-owned health food store. And so on.

In the tube station itself, the open platforms have been enclosed with ticket barriers and the second generation of machines has closed down the ticket office. It's imaginable that had the ID card proposed in the early 2000s made it through to adoption the experience of buying a ticket and getting on the tube could be quite different. Perhaps instead of an Oyster card or credit card tap, we'd be tapping in and out using a plastic ID smart card that would both ensure that only I could use my free tube pass and ensure that all my local travel could be tracked and tied to me. For our safety, of course - as we would doubtless be reminded via repetitive public announcements like the propaganda we hear every day about the watching eye of CCTV.

Of course, tracking still goes on via Oyster cards, credit cards, and, now, wifi, although I do believe Transport for London when it says its goal is to better understand traffic flows through stations in order to improve service. However, what new, more intrusive functions TfL may choose - or be forced - to add later will likely be invisible to us until an expert outsider closely studies the system.

In his recently published memoir, the veteran campaigner and Privacy International founder Simon Davies tells the stories of the ID cards he helped to kill: in Australia, in New Zealand, in Thailand, and, of course, in the UK. What strikes me now, though, is that what seemed like a win nine years ago, when the incoming Conservative-Liberal Democrat alliance killed the ID card, is gradually losing its force. (This is very similar to the early 1990s First Crypto Wars "win" against key escrow; the people who wanted it have simply found ways to bypass public and expert objections.)

As we wrote at the time, the ID card itself was always a brightly colored decoy. To be sure, those pushing the ID card played on British wartime associations to swear blind that no one would ever be required to carry the card or forced to produce it. This was an important gambit because to much of the population at the time being forced to carry and show ID was the end of the freedoms two world wars were fought to protect. But it was always obvious to those who were watching technological development that what mattered was the database, because identity checks would be carried out online, on the spot, via wireless connections and handheld computers. All that was needed was a way of capturing a biometric that could be sent into the cloud to be checked. Facial recognition fits perfectly into that gap: no one has to ask you for papers - or a fingerprint, iris scan, or DNA sample. So even without the ID card we *are* now moving stealthily into the exact situation that would have prevailed if we had adopted it. Increasing numbers of police forces - South Wales, London, LA, India, and, notoriously, China - are deploying the technology, as Big Brother Watch has been documenting for the UK. There are many more remotely observable behaviors to be pressed into service, enhanced by AI, as the ACLU's Jay Stanley warns.

The threat now of these systems is that they are wildly inaccurate and discriminatory. The future threat of these systems is that they will become accurate and discriminatory, allowing much more precise targeting that may even come to seem reasonable *because* it only affects the bad people.

This train of thought occurred to me because this week Statewatch released a leaked document indicating that most of the EU would like to expand airline-style passenger data collection to trains and even roads. As Daniel Boffey explains at the Guardian (and as Edward Hasbrouck has long documented), the passenger name records (PNRs) airlines create for every journey include as many as 42 pieces of information: name, address, payment card details, itinerary, fellow travelers... This is information that gets mined in order to decide whether you're allowed to fly. So what this document suggests is that many EU countries would like to turn *all* international travel into a permission-based system.

What is astonishing about all of this is the timing. One of the key privacy-related objections to building mass surveillance systems is that you do not know who may be in a position to operate them in future or what their motivations will be. So at the very moment that many democratic countries are fretting about the rise of populism and the spread of extremism, those same democratic countries are proposing to put in place a system that extremists who get into power can operate in anti-democratic ways. How can they possibly not see this as a serious systemic risk?


Illustrations: The light of the oncoming train (via Andrew Gray at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 26, 2019

Hypothetical risks

"The problem isn't privacy," the cryptography pioneer Whitfield Diffie said recently. "It's corporate malfeasance."

This is obviously right. Viewed that way, when data profiteers claim that "privacy is no longer a social norm", as Facebook CEO Mark Zuckerberg did in 2010, the correct response is not to argue about privacy settings or plead with users to think again, but to find out if they've broken the law.

Diffie was not, but could have been, talking specifically about Facebook, which has blown up the news this week. The first case grabbed most of the headlines: the US Federal Trade Commission fined the company $5 billion. As critics complained, the fine was insignificant to a company whose Q2 2019 revenues were $16.9 billion and whose quarterly profits are approximately equal to the fine. Medium-term, such fines have done little to dent Facebook's share prices. Longer-term, as the cases continue to mount up...we'll see. Also this week, the US Department of Justice launched an antitrust investigation into Apple, Amazon, Alphabet (Google), and Facebook.

The FTC fine and ongoing restrictions have been a long time coming; EPIC executive director Marc Rotenberg has been arguing ever since the Cambridge Analytica scandal broke that Facebook had violated the terms of its 2011 settlement with the FTC.

If you needed background, this was also the week when Netflix released the documentary The Great Hack, in which directors Karim Amer and Jehane Noujaim investigate the role Cambridge Analytica and Facebook played in the 2016 EU referendum and US presidential election votes. The documentary focuses primarily on three people: David Carroll, who mounted a legal action against Cambridge Analytica to obtain his data; Brittany Kaiser, a director of Cambridge Analytica who testified against the company; and Carole Cadwalladr, who broke the story. In his review at the Guardian, Peter Bradwell notes that Carroll's experience shows it's harder to get your "voter profile" out of Cambridge Analytica than it is to get your file from the Stasi, as per Timothy Garton Ash. (Also worth viewing: the 2006 movie The Lives of Others.)

Cadwalladr asks, in her own piece about The Great Hack and in her 2019 TED talk, whether we can ever have free and fair elections again. It's a difficult question to answer because although it's clear from all these reports that the winning side of both the US and UK 2016 votes used Facebook and Cambridge Analytica's services, unless we can rerun these elections in a stack of alternative universes we can never pinpoint how much difference those services made. In a clip taken from the 2018 hearings on fake news, Damian Collins (Conservative, Folkestone and Hythe), the chair of the Digital, Culture, Media, and Sport Committee, asks Chris Wylie, a whistleblower who worked for Cambridge Analytica, that same question (The Great Hack, 00:25:51). Wylie's response: "When you're caught doping in the Olympics, there's not a debate about how much illegal drug you took or, well, he probably would have come in first, or, well, he only took half the amount, or - doesn't matter. If you're caught cheating, you lose your medal. Right? Because if we allow cheating in our democratic process, what about next time? What about the time after that? Right? You shouldn't win by cheating."

Later in the film (1:08:00), Kaiser, testifying to DCMS, sums up the problem this way: "The sole worth of Google and Facebook is the fact that they own and possess and hold and use the personal data from people all around the world." In this statement, she unknowingly confirms the prediction made by the veteran Australian privacy advocate Roger Clarke, who commented in a 2009 interview about his 2004 paper, Very Black "Little Black Books", warning about social networks and privacy: "The only logical business model is the value of consumers' data."

What he got wrong, he says now, was that he failed to appreciate the importance of micro-pricing, highlighted in 1999 by the economist Hal Varian. In his 2017 paper on the digital surveillance economy, Clarke explains the connection: large data profiles enable marketers to gauge the precise point at which buyers begin to resist and pitch their pricing just below it. With goods and services, this approach allows sellers to extract greater overall revenue from the market than pre-set pricing would; with politics, you're talking about a shift from public sector transparency to private sector black-box manipulation. Or, as someone puts it in The Great Hack, a "full-service propaganda machine". Load, aim at "persuadables", and set running.
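A minimal sketch of that pricing logic, with invented buyers and numbers (a real system would infer each ceiling from thousands of behavioral signals):

    # Python sketch: price each buyer just below their estimated resistance point.
    def personalized_price(estimated_ceiling: float, margin: float = 0.05) -> float:
        """Charge just under the point at which this buyer is predicted to balk."""
        return round(estimated_ceiling * (1 - margin), 2)

    profiles = {"buyer_a": 12.00, "buyer_b": 40.00}  # hypothetical inferred ceilings
    for buyer, ceiling in profiles.items():
        print(buyer, personalized_price(ceiling))    # buyer_a 11.4, buyer_b 38.0

A single pre-set price either prices buyer_a out or leaves buyer_b's surplus uncollected; per-profile pricing extracts both, which is the extra revenue Clarke describes.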

Less noticed than either of these is the Securities and Exchange Commission settlement with Facebook, also announced this week. While the fine is relatively modest - a mere $100 million - the SEC has nailed the company's conflicting statements. On Twitter, Jason Kint has helpfully highlighted the SEC's statements laying out the case that Facebook knew in 2016 that it had sold Cambridge Analytica some of the data underlying the 30 million personality profiles CA had compiled - and then "misled" both the US Congress and its own investors. Besides the fine, the SEC has permanently enjoined Facebook from further violations of the laws it broke in continuing to refer to actual risks as "hypothetical". The mills of trust have been grinding exceeding slow; they may yet grind exceeding small.


Illustrations: Data connections in The Great Hack.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 28, 2019

Failure to cooperate

sweat-nottage.jpgIn her Pulitzer Prize-winning 2015 play, Sweat, running nightly in London's West End until mid-July, Lynn Nottage explores class and racial tensions in the impoverished, post-industrial town of Reading, PA. In scenes alternating between 2000 and 2008, she traces the personal-level effects of twin economic crashes, corporate outsourcing decisions, and tribalism: friends become opposing disputants; small disagreements become violent; and the prize for "winning" shrinks to scraps. Them who has, gets; and from them who have little, it is taken.

Throughout, you wish the characters would recognize their real enemies: the company whose steel tubing factory has employed them for decades, their short-sighted union, and a system that structurally short-changes them. The pain of the workers when they are locked out is that of an unwilling divorce, abruptly imposed.

The play's older characters, who would be in their mid-60s today, are of the age to have been taught that jobs were for life. They were promised pensions and could look forward to wage increases at a steady and predictable pace. None are wealthy, but in 2000 they are financially stable enough to plan vacations, and their children see summer jobs as a viable means of paying for college and climbing into a better future. The future, however, lies in the Spanish-language leaflets the company is distributing to frustrated immigrants the union has refused to admit and who will work for a quarter the price. Come 2008, the local bar is run by one of those immigrants, who of necessity caters to incoming hipsters. Next time you read an angry piece attacking Baby Boomers for wrecking the world, remember that it's a big demographic and only some were the destructors. *Some* Baby Boomers were born wreckage, some achieved it, and some had it thrust upon them.

We leave the characters there in 2008: hopeless, angry, and alienated. Nottage, who has a history of researching working class lives and the loss of heavy industry, does not go on to explore the inner workings of the "digital poorhouse" they're moving into. The phrase comes from Virginia Eubanks' 2018 book, Automating Inequality, which we unfortunately missed reviewing before now. If Nottage had pursued that line, she might have found what Eubanks finds: a punitive, intrusive, judgmental, and hostile benefits system. Those devastated factory workers must surely have done something wrong to deserve their plight.

Eubanks presents three case studies. In the first, struggling Indiana families navigate the state's new automated welfare system, a $1.3 billion, ten-year privatization effort led by IBM. Soon after its 2006 launch, it began sending tens of thousands of families notices of refusal on this Kafkaesque basis: "Failure to cooperate". Indiana eventually canceled IBM's contract, and the two have been suing each other ever since. Not represented in court is, as Eubanks says, the incalculable price paid in the lives of the humans the system spat out.

In the second, "coordinated entry" matches homeless Los Angelenos to available resources in order of vulnerability. The idea was that standardizing the intake process across all possible entryways would help the city reduce waste and become more efficient while reducing the numbers on Skid Row. The result, Eubanks finds, is an unpredictable system that mysteriously helps some and not others, and that ultimately fails to solve the underlying structural problem: there isn't enough affordable housing.

In the third, a Pennsylvania predictive system is intended to identify children at risk of abuse. Such systems are proliferating widely and controversially for varying purposes, and all raise concerns about fairness and transparency: custody decisions (Durham, England), gang membership and gun crime (Chicago and London), and identifying children who might be at risk (British local councils). All these systems gather and retain, perhaps permanently, huge amounts of highly intimate data about each family. The result in Pennsylvania was to deter families from asking for the help they're actually entitled to, lest they become targets to be watched. Some future day, those same records may resurface when a hostile neighbor files a minor complaint, or haunt their now-grown children when they raise children of their own.

All these systems, Eubanks writes, could be designed to optimize access to benefits instead of optimizing for efficiency or detecting fraud. I'm less sanguine. In prior art, Danielle Citron has written about the difficulties of translating human law accurately into programming code, and the essayist Ellen Ullman warned in 1996 that even those with the best intentions eventually surrender to computer system imperatives of improving data quality, linking databases, and cross-checking, the bedrock of surveillance.

Eubanks repeatedly writes that middle class people would never put up with this level of intrusion. They may have no choice. As Sweat highlights, many people's options are shrinking. Refusal is possible only for those who can afford to buy help privately, an option increasingly reserved for a privileged few. Poor people, Eubanks is frequently told, are the experimental models for surveillance that will eventually be applied to all of us.

In 2016, Cathy O'Neil argued in Weapons of Math Destruction that algorithmic systems can be designed for fairness. Eubanks' analysis suggests that view is overly optimistic: the underlying morality dates back centuries. Digitization has, however, exacerbated its effects, as Eubanks concludes. County poorhouse inmates at least had the community of shared experience. Its digital successor squashes and separates, leaving each individual to drink alone in that Reading bar.


Illustrations: Sweat's London production poster.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 17, 2019

Genomics snake oil

DNA_Double_Helix_by_NHGRI-NIH-PD.jpgIn 2011, as part of an investigation she conducted into the possible genetic origins of the streak of depression that ran through her family, the Danish neurobiologist Lone Frank had her genome sequenced and interviewed many participants in the newly-opening field of genomics that followed the first complete sequencing of the human genome. In her resulting book, My Beautiful Genome, she commented on the "Wild West" developing around retail genetic testing being offered to consumers over the web. Absurd claims such as using DNA testing to find your perfect mate or direct your child's education abounded.

This week, at an event organized by Breaking the Frame, New Zealand researcher Andelka M. Phillips presented the results of her ongoing study of the same landscape. The testing is just as unreliable, the claims even more absurd - choose your diet according to your DNA! find out what your superpower is! - and the number of companies she's collected has reached 289 while the cost of the tests has shrunk and the size of the databases has ballooned. Some of this stuff makes astrology look good.

To be perfectly clear: it's not, or not necessarily, the gene sequencing itself that's the problem. To be sure, the best lab cannot produce a reading that represents reality from poor-quality samples. And many samples are indeed poor, especially those snatched from bed sheets or excavated from garbage cans to send to sites promising surreptitious testing (I have verified these exist, but I refuse to link to them) to those who want to check whether their partner is unfaithful or whether their child is in fact a blood relative. But essentially, for health tests at least, everyone is using more or less the same technology for sequencing.

More crucial is the interpretation and analysis, as Helen Wallace, the executive director of GeneWatch UK, pointed out. For example, companies differ in how they identify geographical regions, how they frame populations, and the makeup of their databases of reference contributions. This is how a pair of identical Canadian twins got varying, non-matching test results from five companies, one Ashkenazi Jew got six different ancestry reports, and, according to one study, up to 40% of DNA results from consumer genetic tests are false positives. As I type, the UK Parliament is conducting an inquiry into commercial genomics.
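It's worth pausing on why false positives pile up so easily. A toy Bayes calculation - with invented numbers, not figures from the cited study - shows how a test that is individually quite accurate can still return mostly wrong positives when the variant it hunts is rare:

```python
# Toy Bayes calculation with invented numbers: even an accurate test
# produces mostly false positives when the target variant is rare.

prevalence = 0.001    # 1 in 1,000 people actually carry the variant
sensitivity = 0.99    # true carriers are flagged 99% of the time
specificity = 0.995   # non-carriers are correctly cleared 99.5% of the time

true_positives = prevalence * sensitivity
false_positives = (1 - prevalence) * (1 - specificity)

ppv = true_positives / (true_positives + false_positives)
print(f"Share of positive results that are real: {ppv:.1%}")  # ~16.5%
```

With numbers like these, roughly five out of six "positive" results are wrong, before interpretation and reference-database problems even enter the picture.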

Phillips makes the data available to anyone who wants to explore it. Meanwhile, so far she's examined the terms of service and privacy policies of 71 companies, and finds them filled with technology company-speak, not medical information. They do not explain these services' technical limitations or the risks involved. Yet it's so easy to think of disastrous scenarios: this week, an American gay couple reported that their second child's birthright citizenship is being denied under new State Department rules. A false DNA test could make a child stateless.

Breaking the Frame's organizer, Dave King, believes that a subtle consequence of the ancestry tests - the things everyone was quoting in 2018 that tell you you're 13% German, 1% Somali, and whatever else - is to reinforce the essentially racist notion that "Germanness" has a biological basis. He particularly disliked the services that claim to identify children's talents; as Phillips highlighted, they promise that testing can save parents money they might otherwise waste on impossible dreams. That way lies Gattaca and generations of children who don't get to explore their own abilities because they've already been written off.

Even more disturbing questions surround what happens with these large databases of perfect identifiers. In the UK, last October the Department of Health and Social Care announced its ambition to sequence 5 million genomes. Included was a plan to begin, in 2019, offering whole genome sequencing to all seriously ill children and to adults with specific rare diseases or hard-to-treat cancers as part of their care. In other words, the most desperate people are being asked first, a prospect Phil Booth, coordinator of medConfidential, finds disquieting. Since so much of this is still research, not medical care, he said, it - like the late, despised care.data - "blurs the line around what is your data, and between what the NHS was and what some would like it to be". Exploitation of the nation's medical records as raw material for commercial purposes is not what anyone thought they were signing up for. And once you have that giant database of perfect identifiers...there's the Home Office, which has already been caught using NHS records to hunt illegal immigrants and DNA-testing immigrants.

So Booth asked this: why now? Genetic sequencing is 20 years old, and it has yet to come close to producing the benefits predicted for it. We do not have personalized medicine or, except in a very few cases (such as some breast cancers), drugs tailored to genetic makeup. "Why not wait until it's a better bet?" he asked. Instead of spending billions today - billions that, as an audience member pointed out, would produce better health more widely if spent on improving the environment, nutrition, and water - the proposal is to spend them on a technology that may still not be producing results 20 years from now. Why not wait, say, ten years and see if it's still worth doing?


Illustrations: DNA double helix (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 12, 2019

The Algernon problem

charly-movie-image.jpgLast week we noted that it may be a sign of a maturing robotics industry that it's possible to have companies specializing in something as small as fingertips for a robot hand. This week, the workshop day kicking off this year's We Robot conference provides a different reason to think the same thing: more and more disciplines are finding their way to this cross-the-streams event. This year, joining engineers, computer scientists, lawyers, and the odd philosopher are sociologists, economists, and activists.

The result is oddly like a meeting of the Research Institute for the Science of Cyber Security, where a large part of the point from the beginning has been that human factors and economics are as important to good security as technical knowledge. This was particularly true in the face-off between the economist Rob Seamans and the sociologist Beth Bechky, which pitted quantitative "things we can count" thinking against qualitative "study the social structures" thinking. The range of disciplines needed to think about what used to be "computer" security keeps growing as the ways we use computers become more complex; robots are computer systems whose mechanical manifestations interact with humans. This broadening has to happen.

One sign is a change in language. Madeleine Elish, currently in the news for her newly published 2016 We Robot paper, Moral Crumple Zones, said she's trying to replace the term "deploying" with "integrating" for arriving technologies. "They are integrated into systems," she explained, "and when you say 'integrate' it implies into what, with whom, and where." By contrast, "deployment" is military-speak, devoid of context. I like this idea; by 2015, it was already clear from a machine learning conference at the Royal Society that many had begun seeing robots as partners rather than replacements.

Later, three Japanese academics - the independent researcher Hideyuki Matsumi, Takayuki Kato, and Fumio Shimpo - tried to explain why Japanese people like robots so much - more, it seems, than "we" do (whoever "we" are). They suggested three theories: the influence of TV and manga; the influence of the mainstream Shinto religion, which sees a spirit in everything; and the Japanese government's strategy to make the country a robotics powerhouse. The latter has produced a 356-page guideline for research and development.

"Japanese people don't like to draw distinctions and place clear lines," Shinto said. "We think of AI as a friend, not an enemy, and we want to blur the lines." Shimpo had just said that even though he has two actual dogs he wants an Aibo. Kato dissented: "I personally don't like robots."

The MIT researcher Kate Darling, who studies human responses to robots, found encouragement in studies showing that autistic kids respond well to robots. "One theory is that they're social, but not too social." An experiment that placed these robots in homes for 30 days last summer had "stellar results". But: when the robots were removed at the end of the experiment, follow-up studies found that the kids were losing the skills the robots had brought them. The story evokes the 1959 Daniel Keyes story Flowers for Algernon, but then you have to ask: what were the skills? Did they matter to the children or just to the researchers, and how is "success" defined?

The opportunities anthropomorphization opens for manipulation are an issue everywhere. Woody Hartzog called the tendency to believe what the machine says "automation bias", but that understates the range of motivations: you may believe the machine because you like it, because it's manipulated you, or because you're working in a government benefits agency where you can't be sure you won't get fired if you defy the machine's decision. Would that everyone could see Bill Smart and Cindy Grimm follow up their presentation from last year to show: AI is just software; it doesn't "know" things; and it's the complexity that gets you. Smart hates the term "autonomous" for robots "because in robots it means deterministic software running on a computer. It's teleoperation via computer code."

This is the "fancy hammer" school of thinking about robots, and it can be quite valuable. Kevin Bankston soon demonstrated this: "Science fiction has trained us to worry about Skynet instead of housing discrimination, and expect individual saviors rather than communities working together to deal with community problems." AI is not taking our jobs; capitalists are using AI to take our jobs - a very different problem. As long as we see robots and AI as autonomous, we miss that instead they ares agents carrying out others' plans. This is a larger example of a pervasive problem with smartphones, social media sites, and platforms generally: they are designed to push us to forget the data-collecting, self-interested, manipulative behemoth behind them.

Returning to Elish's comment, we are one of the things robots integrate with. At the moment, this is taking the form of making random people research subjects: the pedestrian killed in Arizona by a supposedly self-driving car, the hapless prisoners whose parole is decided by it's-just-software, the people caught by the Metropolitan Police's staggeringly flawed facial recognition, the homeless people who feel threatened by security robots, the Caltrain passengers sharing a platform with an officious delivery robot. Did any of us ask to be experimented on?


Illustrations: Cliff Robertson in Charly, the movie version of "Flowers for Algernon".

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 15, 2019

Schrödinger's Brexit

Parliament_Clock_Westminster-wikimedia.jpg

"What's it like over there now?" American friends keep asking as the clock ticks down to midnight on March 29. Even American TV seems unusually interested: last week's Full Frontal with Samantha Bee had Amy Hoggart explain in detail; John Oliver made it a centerpiece two weeks ago, and US news outlets are giving it as much attention as if it were a US story. They're even - so cute! - trying to pronounce "Taoiseach". Everyone seems fascinated by the spectacle of the supposedly stoic, intellectual British holding meaningless "meaningful" votes and avoiding making any decisions that could cause anyone to lose face. So this is what it's like to live through a future line in the history books: other countries fret on your behalf while you're trying to get lunch.

In 14 days, Britain will either still be a member of the European Union or it won't. It will have a deal describing the future relationship or it won't. Ireland will be rediscovering civil war or it won't. In two months, we will be voting in the European Parliamentary elections as if nothing has happened, or we won't. All possible outcomes lead to protests in Parliament Square.

No one expects to be like Venezuela. But no one knows what will happen, either. We were more confident approaching Y2K. At least then you knew that thousands of people had put years of hard work into remediating the most important software that could fail. Here...in January, returning from CPDP and flowing seamlessly via Eurostar from Brussels to London, my exit into St Pancras station raised the question: is this the last time this journey will be so simple? Next trip, will there be Customs channels and visa checks? Where will they put them? There's no space.

A lot of the rhetoric both at the time of the 2016 vote and since has been around taking back control and sovereignty. That's not the Britain I remember from the 1970s, when the sense of a country recovering from the loss of its empire was palpable, middle class people had pay-as-you-go electric and gas meters, and the owner of a Glasgow fruit and vegetable shop stared at me when I asked for fresh garlic. In 1974, a British friend visiting an ordinary US town remarked, "You can tell there's a lot more money around in this country." And another, newly expatriate and struggling: "But at least we're eating real meat here." This is the pre-EU Britain I remember.

"I've worked for them, and I know how corrupt they are," a 70-something computer scientist said to me of the EU recently. She would, she said, "man the barriers" if withdrawal did not go through. We got interrupted before I could ask if she thought we were safer in the hands of the Parliament whose incompetence she had also just furiously condemned.

The country remains profoundly in disagreement. There may be as many definitions of "Brexit" as there are Leave voters. But the last three years have brought everyone together on one thing: no matter how they voted, where they're from, which party they support, or where they get their news, everyone thinks the political class has disgraced itself. Casually-met strangers laugh in disbelief at MPs' inability to put country before party or self-interest, or say things like "It's sickening". Even Wednesday's hair's-breadth vote taking No Deal off the table is absurd: the clock ticks inexorably toward exiting the EU with nothing unless someone takes positive action, either by revoking Article 50, or by asking for an extension, or by signing a deal. But action can get you killed politically. I've never cared for Theresa May, but she's prime minister because no one else was willing to take this on.

NB for the confused: in the UK "tabling a motion" means to put it up for discussion; in the US it means to drop it.

Quietly, people are making just-in-case preparations. One friend scheduled a doctor's appointment to ensure that he'd have in hand six months' worth of the medications he depends on. Others stockpile EU-sourced food items that may become scarce or massively more expensive. Anyone who can is applying for a passport from an EU country; many friends are scrambling to research their Irish grandparents and assemble documentation. So the people in the best position are the recent descendants of immigrants who would not now be welcome. It is unfair and ironic, and everyone knows it. A critical underlying issue, Danny Dorling and Sally Tomlinson write in their excellent and eye-opening Rule Britannia: Brexit and the End of Empire, is an education system that stresses the UK's "glorious" imperial past. Within the EU, they write, UK MEPs make up much of the extreme right, and the EU may be better off - more moderate, less prone to populism - without the UK, while British people may achieve a better understanding of their undistinguished place in the world. Ouch.

The EU has never seemed irrelevant to digital rights activists. Computers, freedom, and privacy - that is, "net.wars" - show the importance of the EU in our time, when the US refuses to regulate and the Internet is challenging national jurisdiction. International collaboration matters.

Just as I wrote that, Parliament finally voted to take the smallest possible action and ask the EU for a two-month extension. Schrödinger needs a bigger box.

Illustrations: "Big Ben" (Aldaron, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 8, 2019

Pivot

parliament-whereszuck.jpgWould you buy a used social media platform from this man?

"As I think about the future of the internet, I believe a privacy-focused communications platform will become even more important than today's open platform," Mark Zuckerberg wrote this week at the Facebook blog, also summarized at the Guardian.

Zuckerberg goes on to compare Facebook and Instagram to "the digital equivalent of a town square".

So many errors, so little time. Neither Facebook nor Instagram is open. "Open information," Rufus Pollock explained last year in The Open Revolution, "...can be universally and freely used, built upon, and shared." Whereas, "In a Closed world information is exclusively 'owned' and controlled, its attendant wealth and power more and more concentrated".

The alphabet is open. I do not need a license from the Oxford English Dictionary to form words. The web is open (because Tim Berners-Lee made it so). One of the first social media, Usenet, is open. Particularly in the early 1990s, Usenet really was the Internet's town square.

*Facebook* is *closed*.

Sure, anyone can post - but only in the ways that Facebook permits. Running apps requires Facebook's authorization, and if Facebook makes changes, SOL. Had Zuckerberg said - as some have paraphrased him - "town hall", he'd still be wrong, but less so: even small town halls have metal detectors and guards to control what happens inside. However, they're publicly owned. Under the structure Zuckerberg devised when the company went public, even the shareholders have little control over Facebook's business decisions.

So, now: this week Zuckerberg announced a seeming change of direction for the service. Slate, the Guardian, and the Washington Post all find skepticism among privacy advocates that Facebook can change in any fundamental way, and they wonder about the impact on Facebook's business model of the shift to focusing on secure private messaging instead of the more public newsfeed. Facebook's former chief security officer Alex Stamos calls the announcement a "judo move" that both removes the privacy complaints (Facebook now can't read what you say to your friends) and allows the site to say that complaints about circulating fake news and terrorist content are outside its control (Facebook now can't read what you say to your friends *and* doesn't keep the data).

But here's the thing. Facebook is still proposing to unify the WhatsApp, Instagram, and Facebook user databases. Zuckerberg's stated intention is to build a single unified secure messaging system. In fact, as Alex Hern writes at the Guardian, that's the one concrete action Zuckerberg has committed to, and it was announced back in January, to immediate privacy queries from the EU.

The point that can't be stressed enough is that although Facebook is trading away the ability to look at the content of what people post, it will retain oversight of all the traffic data. We have known for decades that metadata is even more revealing than content; I remember the late Caspar Bowden explaining the issues in detail in 1999. Even if Facebook's promise to vape the messages means keeping no copies for itself (a stretch, given that we found out in 2013 that the company keeps every character you type), it will be able to keep its insights into the connections between people and the conclusions it draws from them. Or, as Hern also writes, Zuckerberg "is offering privacy on Facebook, but not necessarily privacy from Facebook".

Siva Vaidhyanathan, author of Antisocial Media, seems to be the first to get this, and to point out that Facebook's supposed "pivot" is really just a decision to become more dominant, like China's WeChat. WeChat thoroughly dominates Chinese life: it provides messaging, payments, and a de facto identity system. This is where Vaidhyanathan believes Facebook wants to go, and if encrypting messages means it can't compete in China...well, WeChat already owns that market anyway. Let Google get the bad press.

Facebook is making a tradeoff. The merged database will give it the ability to inspect redundancy - are these two people connected on all three services or just one? - and therefore far greater certainty about which contacts really matter and to whom. The social graph that emerges from this exercise will be smaller because duplicates will have been merged, but far more accurate. The "pivot" does, however, look like it might enable Facebook to wriggle out from under some of its numerous problems - uh, "challenges". The calls for regulation and content moderation focus on the newsfeed. "We have no way to see the content people write privately to each other" ends both discussions, quite possibly along with any liability Facebook might have if the EU's copyright reform package passes with Article 11 (the "link tax") intact.
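To see how much that redundancy check yields without any message content, consider a minimal sketch - the names, services, and connections are entirely invented, and the metadata graphs stand in for whatever Facebook's merged database would actually hold:

```python
# Sketch of the redundancy check a merged database enables: metadata
# only, no message content. All names and edges are invented.

from collections import Counter

# Per-service contact graphs: who messages whom on each platform.
graphs = {
    "facebook":  {("alice", "bob"), ("alice", "carol")},
    "whatsapp":  {("alice", "bob"), ("bob", "dave")},
    "instagram": {("alice", "bob")},
}

# Count how many of the three services each pair is connected on.
redundancy = Counter()
for edges in graphs.values():
    for pair in edges:
        redundancy[tuple(sorted(pair))] += 1

# Merging collapses duplicate edges into one smaller, more accurate
# graph; the counts separate strong ties from incidental contacts.
for pair, count in redundancy.most_common():
    print(pair, count)
# ('alice', 'bob') 3   <- connected everywhere: a tie that "really matters"
# ('alice', 'carol') 1
# ('bob', 'dave') 1
```

Nothing in this exercise requires reading a single message; the strength of every relationship falls out of who talks to whom, and where.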

Even calls that the company should be broken up - appropriate enough, since the EU only approved Facebook's acquisition of WhatsApp when the company swore that merging the two databases was technically impossible - may founder against a unified database. Plus, as we know from this week's revelations, the politicians calling for regulation depend on it for re-election, and in private they accommodate it, as Carole Cadwalladr and Duncan Campbell write at the Guardian and Bill Goodwin writes at Computer Weekly.

Overall, then, no real change.


Illustrations: The international Parliamentary committee, with Mark Zuckerberg's empty seat.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 28, 2019

Systemic infection

2001-hal.png"Can you keep a record of every key someone enters?"

This question brought author and essayist Ellen Ullman up short when she was still working as a software engineer and it was posed to her circa 1996. "Yes, there are ways to do that," she replied after a stunned pause.

In her 1997 book Close to the Machine, Ullman describes the incident as "the first time I saw a system infect its owner". After a little gentle probing, her questioner, the owner of a small insurance agency, explained that now that he had installed a new computer system he could find out what his assistant, who had worked for him for 26 years and had picked up his children from school when they were small, did all day. "The way I look at it," he explained, "I've just spent all this money on a system, and now I get to use it the way I'd like to."

Ullman appeared to have dissuaded this particular business owner on this particular occasion, but she went on to observe that over the years she saw the same pattern repeated many times. Sooner or later, someone always realizes that the systems they have commissioned for benign purposes can be turned to making checks and finding out things they couldn't know before. "There is something...in the formal logic of programs and data, that recreates the world in its own image," she concludes.

I was reminded of this recently when I saw a report at The Register that the US state of New Jersey, along with two dozen others, may soon require any contractor working on a contract worth more than $100,000 to install keylogging software to ensure that they're actually working all the hours - one imagines that eventually, it will be minutes - they bill for. Veteran reporter Thomas Claburn goes on to note that the text of the bill was provided by TransparentBusiness, a maker of remote-work management software, itself a growing category.

Speaking as a taxpayer, I can see the point of ensuring that governments are getting full value for our money. But speaking as a freelance writer who occasionally has had to work on projects where I'm paid by the hour or day (a situation I've always tried to avoid by agreeing a rate for the whole job), the distrust inherent in such a system seems poisonous. Why are we hiring people we can't trust? Most of us who have taken on the risks of self-employment do so because one of the benefits is autonomy and a certain freedom from bosses. And now we're talking about the kind of intensive monitoring that in the past has been reserved for full-time employees - and that none of them have liked much either.

One of the first sectors that is already fighting its way through this kind of transition is trucking. In 2014, Cornell sociologist Karen Levy published the results of three years of research into the arrival of electronic monitoring in truckers' cabs as a response to safety concerns. For truckers, whose cabs are literally their part-time homes, electronic monitoring is highly intrusive; effectively, the trucking company is installing a camera and other sensors not just in their office but also in their living room and bedroom. Instead of using electronics to try to change unsafe practices, she argues, alter the economic incentives. In particular, she finds that the necessity of making a living at low per-mile rates pushes truckers to squeeze the unavoidable hours of unpaid work - waiting for loading and unloading, for example - into their statutory hours of "rest".

The result sounds like it would be familiar to Uber drivers or modern warehouse workers, even if Amazon never deploys the wristbands it patented in 2016. In an interview published this week, Alex Rosenblat, a researcher at the Data & Society Research Institute, outlines the results of a four-year study of ride-hail drivers across the US and Canada. Forget the rhetoric that these drivers are entrepreneurs, she writes; they have a boss, and it's the company's algorithm, which dictates their on-the-job behavior and withholds the data they need to make informed decisions.

If we do nothing, this may be the future of all work. In a discussion last week, University of Leicester associate professor Phoebe Moore located "quantified work" at the intersection of two trends: first, the health-oriented quantified-self movement, and second, the succeeding waves of workplace management, from industrialization through time-and-motion study and scientific management to today's organizational culture, where, as Moore put it, we're supposed to "love our jobs and identify with our employer". The first of these has led to "wellness" programs that, particularly in the US, have helped grant employers access to vastly more detailed personal data about their employees than has ever been available before.

Quantification, the combination of the two trends, Moore warns at Medium, will alter the workplace's social values by tending to pit workers against each other, racetrack style. Vendors now claim predictive power for AI: which prospective employees fit which jobs, or when staff may be about to quit or take sick leave. One can, as Moore does, easily imagine that, despite the improvements AI can bring, the AI-quantified workplace will be intensely worker-hostile. The infection continues to spread.


Illustrations: HAL, from 2001: A Space Odyssey (1968).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.