Main

October 20, 2022

The laws they left behind

In the spring of 2020, as country after country instituted lockdowns, mandated contact tracing, and banned foreign travelers, many, including Britain, hastily passed laws enabling the state to take such actions. Even in the strange airlessness of the time, it was obvious that someday there would have to be a reckoning and a reevaluation of all that new legislation. Emergency powers should not be allowed to outlive the emergency. I spent many of those months helping Privacy International track those new laws across the world.

Here in 2022, although Western countries believe the acute emergency phase of the pandemic is past, the reality is that covid is still killing thousands of people a week across the world, and there is no guarantee we're safe from new variants with vaccine escape. Nonetheless, the UK and US at least appear to accept this situation as if it were the same old "normal". Except: there's a European war, inflation, strikes, a cost of living crisis, energy shortages, and a load of workplace monitoring and other privacy invasions that would have been heavily resisted in previous times. (And, in the UK, a government that has lost its collective mind; as I type no one dares move the news cameras away from the doors of Number 10 Downing Street in case the lettuce wins.)

Laws last longer than pandemics, as the human rights lawyer Adam Wagner writes in his new book, Emergency State: How We Lost Our Freedoms in the Pandemic and Why It Matters. For the last couple of years, Wagner has been a constant presence in my Twitter feed, alongside numerous scientists and health experts posting and examining the latest research. Wagner studies a different pathology: the gaps between what the laws actually said and what was merely guidance, and between overactive police enforcement and people's reasonable beliefs of what the laws should be.

In Emergency State, Wagner begins by outlining six characteristics of the emergency-empowered state: it is mighty, concentrated, ignorant, corrupt, self-reinforcing, and, crucially, we want it to happen. As a comparison, Wagner notes the surveillance laws and technologies rapidly adopted after 9/11. Much of the rest of the book investigates a seventh characteristic: these emergency-expanded states are hard to reverse. In an example that's frequently come up here, see Britain's World War II ID card, which took until 1952 to remove, and even then only after Harry Willcock won in court, having refused to show his papers on demand.

Most of us remember the shock and sudden silence of the first lockdown. Wagner remembers something most of us either didn't know or forgot: when Boris Johnson announced the lockdown and listed the few exceptional circumstances under which we were allowed to leave home, there was as yet no law in place on which law enforcement could rely. That only came days later. The emergency to justify this was genuine: dying people were filling NHS hospital beds. And yet: the government response overturned the basis of Britain's laws, which traditionally presume that everything is permitted unless it's specifically forbidden. Suddenly, the opposite - everything is forbidden unless explicitly permitted - was the foundation of daily life. And it happened with no debate.

Wagner then works methodically through Britain's Emergency State, beginning by noting that the ethos of Boris Johnson's government, continuing the Conservatives' direction of travel, was already disdainful of Parliamentary scrutiny (see also: prorogation of Parliament) and ready to weaken both the Human Rights Act and the judiciary. As the pandemic wore on, Parliamentary attention to successive waves of incoming laws did not improve; sometimes, the laws had already changed by the time they reached the chamber. In two years, Parliament failed to amend any of them. Meanwhile, Wagner notes, behind closed doors government members ignored the laws they made.

The press dubbed March 18, 2022, Freedom Day, to signify the withdrawal of all restrictions. And yet: if scientists' worst fears come true, we may need them again. Many covid interventions - masks, ventilation, social distancing, contact tracing - are centuries old, because they work. The novelty here was the comprehensive lockdowns and widespread business closures, which Wagner suggests may have come about because the first country to suffer and therefore to react was China, where this approach was more acceptable to its authoritarian government. Would things have gone differently had the virus surfaced in a democratic country? We will never know. Either way, the effects of the cruelest restrictions - the separation of families and friends, the isolation imposed on the elderly and dying - cannot be undone.

In Britain's case, Wagner points to flaws in the Public Health Act (1984) that made it too easy for a prime minister only months into the job, with a distaste for formalities, to bypass democratic scrutiny. He suggests four remedies: urgently amend the act to include safeguards; review all prosecutions and fines under the various covid laws; codify stronger human rights, either in a written constitution or a bill of rights; and place human rights at the heart of emergency decision making. I'd add: elect leaders who will transparently explain which scientific advice they have and haven't followed and why, and who will plan ahead. The Emergency State may be in abeyance, but current UK legislation in progress seeks to undermine our rights regardless.


Illustrations: The Daily Star's QE2 lettuce declaring victory as 44-day prime minister Liz Truss resigns.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 22, 2022

Parting gifts

All national constitutions are written to a threat model that is clearly visible if you compare what they say to how they are put into practice. Ireland, for example, has the same right to freedom of religion embedded in its constitution as the US bill of rights does. Both were reactions to English abuse, yet they chose different remedies. The nascent US's threat model was a power-abusing king, and that focus coupled freedom of religion with a bar on the establishment of a state religion. Although the Founding Fathers were themselves Protestants and likely imagined a US filled with people in their likeness, their threat model was not other beliefs or non-belief but the creation of a supreme superpower derived from merging state and church. In Ireland, for decades, "freedom of religion" meant "freedom to be Catholic". Campaigners for the separation of church and state in 1980s Ireland, when I lived there, advocated fortifying the constitutional guarantee with laws that would make it true in practice for everyone from atheists to evangelical Christians.

England, famously, has no written constitution to scrutinize for such basic principles. Instead, its present Parliamentary system has survived for centuries under a "gentlemen's agreement" - a tradition of trust that in our modern era translates to "the good chaps rule of government". Many feel Boris Johnson has exposed the limitations of this approach. Yet it's not clear that a written constitution would have prevented this: a significant lesson of Donald Trump's US presidency is how many of the systems protecting American democracy rely on "unwritten norms" - the "gentlemen's agreement" under yet another name.

It turns out that tinkering with even an unwritten constitution is tricky. One such attempt took place in 2011, with the passage of the Fixed-term Parliaments Act. Without the act, a general election must be held at least once every five years, but may be called earlier if the prime minister advises the monarch to do so; one may also be called at any time following a vote of no confidence in the government. Because past prime ministers were felt to have abused their prerogative by timing elections for their political benefit, the act removed it in favor of a set five-year interval, with early elections possible only if two-thirds of MPs voted for one or the government lost a vote of no confidence. There were general elections in 2010 and 2015 (the first under the act). The next should have been in 2020. Instead...

No one counted on the 2016 vote to leave the EU or David Cameron's next-day resignation. In 2017, Theresa May, trying to negotiate a deal with an increasingly divided Parliament and thinking an election would win her a more workable majority and a mandate, got the necessary super-majority to call a snap election. Her reward was a hung Parliament; she spent the rest of her time in office hamstrung by having to depend on the good will of Northern Ireland's Democratic Unionist Party to get anything done. Under the act, the next election should have been 2022. Instead...

In 2019, a Conservative party leadership contest replaced May with Boris Johnson, who, after several attempts blocked by opposition MPs determined to stop the most reckless Brexit possibilities, finally secured a snap election via a short bill requiring only a simple majority, and won it with a majority of 80 seats. The next election should be in 2024. Instead...

They repealed the act in March 2022. As we were. Now, Johnson is going, leaving both party and country in disarray. An election in 2023 would be no surprise.

Watching the FTPA in action led me to this conclusion: British democracy is like a live frog. When you pin down one bit of it, as the FTPA did, it throws the rest into distortion and dysfunction. The obvious corollary is that American democracy is a *dead* frog that is being constantly dissected to understand how it works. The disadvantage to a written constitution is that some parts will always age badly. The advantage is clarity of expectations. Yet both systems have enabled someone who does not care about norms to leave behind a generation's worth of continuing damage.

All this is a long preamble to saying that last year's concerns about the direction of the UK's computers-freedom-privacy travel have not abated. In this last week before Parliament rose for the summer, while the leadership contest and the heat saturated the news, Johnson's government introduced the Data Protection and Digital Information bill, which will undermine the rights granted by 25 years of data protection law. The widely disliked Online Safety bill was postponed until September. The final two leadership candidates are, to varying degrees, determined to expunge EU law, revamp the Human Rights Act, and withdraw from the European Convention on Human Rights. In addition, lawyer Gina Miller warns, the Northern Ireland Protocol bill expands executive power by giving ministers Henry VIII powers to make changes without Parliamentary consent: "This government of Brexiteers are eroding our sovereignty, our constitution, and our ability to hold the government to account."

The British convention is that "government" is collective: the government *are*. Trump wanted to be a king; Johnson wishes to be a president. The coming months will require us to ensure that his replacement knows their place.


Illustrations: Final leadership candidates Rishi Sunak and Liz Truss in debate on ITV.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 27, 2022

Well may the bogeyman come

It's only an accident of covid that this year's Computers, Privacy, and Data Protection conference - delayed from late January - coincided with the fourth anniversary of the EU's General Data Protection Regulation. Yet its failures and frustrations were on everyone's mind as they considered new legislation forthcoming from the EU: the Digital Services Act, the Digital Markets Act, and, especially, the AI Act.

Two main frustrations: despite GDPR, privacy invasions continue to expand, and, related, enforcement has been extremely limited. The first is obvious to everyone here. For the second...as Max Schrems explained in a panel on GDPR enforcement, none of the cross-border cases his NGO, noyb, filed on May 25, 2018, the day GDPR came into force, have been decided, and even decisions on simpler cases have failed to deal with broader questions.

In one of his examples, Spain rejected a complaint because it wasn't doing historic cases, and Austria claimed the case was solved because the organization involved had changed its procedures. "But my rights were violated then," Schrems said. There was no redress.

Schrems is the data protection bogeyman; because legal actions he has brought have twice struck down US-EU agreements to enable data flows, the possibility of "Schrems III" if the next version gets it wrong is frequently mentioned. This particular panel highlighted numerous barriers that block effective action.

Other speakers highlighted numerous gaps between countries that impede cross-border complaints: some authorities have tight deadlines that expire while other authorities are working to more leisurely schedules; there are many conflicts between national procedural laws; each data protection authority has its own approach and requirements; and every cross-border complaint must be time-consumingly translated into English, even when both relevant authorities speak, say, German. "Getting an answer to a two-minute question takes four months," Nina Herbort said, highlighting the common underlying problem: underresourcing.

"Weren't they designed to fail?" Finn Myrstad asked.

Even successful enforcement has largely been limited to levying fines - and despite some of the eye-watering numbers, they're still just a cost of doing business to major technology platforms.

"We have the tools for structural sanctions," Johnny Ryan said in a discussion on judicial actions. Some of that is beginning to happen. A day earlier, the UK'a Information Commissioner's Office fined Clearview AI £7.5 million and ordered it to delete the images it holds of UK residents. In February, Canada issued a similar order; a few weeks ago, Illinois permanently banned the company from selling its database to most private actors and businesses nationwide, and barred it from selling its service to any entity within Illinois for five years. Sanctions like these hurt more than fines as does requiring companies to delete the algorithms they've based on illegally acquired data.

Other suggestions included building sovereignty by ensuring that public procurement does not default to off-the-shelf products from a few foreign companies but is built on local expertise, advocated by Jan-Philipp Albrecht, the former MEP, who told a panel on the impact of Schrems II that he is now building up cloud providers using locally-built hardware and open source software for the province of Schleswig-Holstein. Quang-Minh Lepescheux suggested requiring transparency in how people are trained to use automated decision making systems and forcing technology providers to accept third-party testing. Cristina Caffarra, probably the only antitrust lawyer in sight, wants privacy advocates and antitrust lawyers to work together; the economists inside competition authorities insist that more data means better products, so it's good for consumers. Rebecca Slaughter wants to give companies the clarity they say they want (until they get it): clear, regularly updated rules banning a list of practices, with a catchall. Ryan also noted that some sanctions can vastly improve enforcement efficiency: there's nothing to investigate after banning a company from making acquisitions. Enforcing purpose limitation and banning the single "OK to everything" is more complicated but, "Purpose limitation is Kryptonite to Big Tech when it's misusing data."

Any and all of these are valuable. But new kinds of thinking are also needed. The more complex issue, and another major theme, was the limitations of focusing on personal data and individual rights. This was long predicted as a particular problem for genetic data - the former science journalist Tom Wilkie was first to point out the implications, sounding a warning in his book Perilous Knowledge, published in 1994, at the beginning of the Human Genome Project. Singling out individuals who have been harmed can easily obfuscate collective damage. The obvious example is Cambridge Analytica and Facebook: the damage to national elections can't be captured one Friends list at a time; the increasing use of aggregated data requires protection at scale; and, perversely, monitoring for bias and discrimination requires data collection.

In response to a panel on harmful patterns in recent privacy proposals, an audience member suggested the African philosophy of ubuntu as a useful source of ideas for thinking about collective and, even more important, *interdependent* data. This is where we need to go. Many forms of data - including both genetic data and financial data - cannot be thought of any other way.


Illustrations: The Norwegian Consumer Council receives EPIC's International Privacy Champion award at CPDP 2022.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 20, 2022

Mona Lisa smile

A few weeks ago, Zoom announced that it intends to add emotion detection technology to its platform. According to Mark DeGeurin at Gizmodo, in response, 27 human rights groups from across the world, led by Fight for the Future, have sent an open letter demanding that the company abandon this little plan, calling the software "invasive" and "inherently biased". On Twitter, I've seen it called "modern phrenology", a deep insult for those who remember the pseudoscience of studying the bumps on people's heads to predict their personalities.

It's an insult, but it's not really wrong. In 2019, Angela Chen at MIT Technology Review highlighted a study showing that facial expressions on their own are a poor guide to what someone is feeling. Culture, context, and personal style all affect how we present ourselves, and the posed faces AI developers use as part of their training of machine learning systems are even worse indicators, since few of us really know how our faces look under the influence of different emotions. In 2021, Kate Crawford, author of Atlas of AI, used the same study to argue in The Atlantic that the evidence that these systems work at all is "shaky".

Nonetheless, Crawford goes on to report, this technology is being deployed in hiring systems and added into facial recognition. A few weeks ago, Kate Kaye reported at Protocol that Intel and virtual school software provider Classroom Technologies are teaming up to offer a version that runs on top of Zoom.

Cue for a bit of nostalgia: I remember the first time I heard of someone proposing computer emotion detection over the Internet. It was the late 1990s, and the source - or the perpetrator, depending on your point of view - was Rosalind Picard at the MIT Media Lab. Her book on the subject, Affective Computing, came out in 1997.

Picard's main idea was that to be truly intelligent - or at least, seem that way to us - computers would have to learn to recognize emotions and produce appropriate responses. One of the potential applications I remember hearing about was online classrooms, where the software could monitor students' expressions for signs of boredom, confusion, or distress and alert the teacher - exactly what Intel and Classroom Technologies want to do now. I remember being dubious: shouldn't teachers be dialed in on that sort of thing? Shouldn't they know their students well enough to notice? OK, remote, over a screen, maybe dozens or hundreds of students at a time...not so easy.... (Of course, the expensive schools offer mass online education schemes to exploit their "brands", but they still keep the small, in-person classes that create those "brands" by churning out prime ministers and Silicon Valley dropouts.)

That wasn't Picard's main point, of course. In a recent podcast interview, she explains her original groundbreaking insight: that computers need to have emotional intelligence in order to make them less frustrating for us to deal with. If computers can capture the facial expressions we choose to show, the changes in our vocal tones, our gestures and muscle tension, perhaps they can respond more appropriately - or help humans to do so. Twenty-five years later, the ideas in Picard's work are now in use in media companies, ad agencies, and call centers - places where computer-human communication happens.

It seems a doubtful proposition. Humans learn from birth to read faces, and even we have argued for centuries over the meaning of the expression on the face of the Mona Lisa.

In 1997, Picard did not foresee the creepiness or the giant technology exploiters to come. It's hard to know whether to be more alarmed about the technology's inaccuracy or its potential improvement. While it's inaccurate and biased, the dangers are the consequences of mistakes in interpretation; a student marked "inattentive", for example, may be penalized in their grade. But improving and debiasing the technology opens the way for fine-tuned manipulation and far more pervasive and intimate surveillance as it becomes embedded in every company, every conference, every government agency, every doctor's office, all of law enforcement. Meanwhile, the technological imperative of improving the system will require the collection of more and more data: body movements, heart rates, muscle tension, posture, gestures, surroundings.

I'd like to think that by this time we are smarter about how technology can be abused. I'm sure many of Zoom's corporate clients want emotion recognition technology; as in so many other cases, we are pawns because we're largely not the ones paying the bills or making the choice of platform. There's an analogy here to Elon Musk's negotiations with Twitter shareholders; the millions who use the service every day and find it valuable have no say in what will happen to it. If Zoom adopts emotion recognition, how long before law enforcement starts asking for user data in order to feed it into predictive policing systems? One of this week's more startling revelations was Aaron Gordon's report at Vice that San Francisco police are using driverless cars as mobile surveillance cameras, taking advantage of the fact that they are continuously recording their surroundings.

Sometimes the only way to block abuse of technology is to retire the idea entirely. If you really want to know what I'm thinking and feeling, just ask. I promise I'll tell you.


Illustrations: The emotional enigma that is the Mona Lisa.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 29, 2022

The abundance of countries

This week, some updates.

First up is the Court of Justice of the European Union's ruling largely upholding Article 17 of the 2019 Copyright Directive. Article 17, also known as the "upload filter", was last seen leading many to predict it would break the web. Poland challenged the provision, arguing that requiring platforms to check user-provided material for legality infringed the rights to freedom of expression and information.

The CJEU dismissed Poland's complaint, and Article 17 stands. However, at a panel convened by Communia, former Pirate Party MEP Felix Reda said the disappointment is outweighed by the court's opinion regarding safeguards, which bans general monitoring and, as João Pedro Quintais explained, restricts content removal to material whose infringing nature is obvious.

More than half of EU countries have failed to meet the June 2021 deadline to transpose the directive into national law, and some that have did so by simply copying and pasting the directive's two most contentious articles - Article 17 and Article 15 (the "link tax") - rather than attempting to resolve the directive's internal contradictions. As Glyn Moody explains at Walled Culture, the directive requires the platforms to both block copyright-infringing content from being uploaded and make sure legal content is not removed. Moody also reports that Finland's attempts at resolution have attracted complaints from the copyright industries, who want the country to make its law more restrictive. Among the other countries that have transposed the directive, Reda believes only Germany's and Austria's interpretations provide safeguards in line with the court's ruling - and Austria's only with some changes.

***

The best response I've seen to the potential sale of Twitter comes from writer Racheline Maltese, who tweeted, "On the Internet, your home will always leave you."

In a discussion sparked by the news, Twitter user Yishan argues that "free speech" isn't what it used to be. In the 1990s version, the threat model was religious conservatives in the US. This isn't entirely true; some feminist groups also sought to censor pornography, and 1980s Internet users had to bypass Usenet hierarchy administrators to create newsgroups for sex and drugs. However, the understanding that abuse and trolling drive people away and chill them into silence definitely took longer to accept as a denial of free speech rights. Today, Yishan writes, *everyone* feels their free speech is under threat from everyone else. And they're likely right.

***

It's also worth noting the early stages of work on a new cybercrime treaty. It's now 20 years since the Convention on Cybercrime was formulated; as of December 2020, 65 states had ratified it and four more had signed it without ratifying. The push for a new treaty is coming from countries that either opposed the original or weren't involved in drafting it - Russia in particular, ironically enough. At Human Rights Watch, Deborah Brown warns of risks to fundamental rights: "cybercrime" has no agreed definition, and some states want expansion to include "incitement to terrorism" and copyright infringement. In addition, while many states back including human rights protections, detail is lacking. However, we might get some clues from this week's White House declaration for the future of the Internet, which seeks to "reclaim the promise of the Internet" and embed human rights. It's backed by 60 countries - but not China or Russia.

There is general agreement that the vast escalation of cybercrime means better cross-border cooperation is needed, as Summer Walker writes at Foreign Policy. However, she notes that as work progressed in 2021 a number of states already felt excluded from the decision-making process.

The goal is to complete an agreement by early 2024.

***

Finally... 20 years ago I wrote (in a piece now lost from the web) about the new opportunities for plagiarism afforded by the Internet. That led to a new industry sector: online services that check each new paper against a database of known material. The services do manage to find previously published text; six days after publication, even a free example service rates the first two paragraphs of last week's net.wars as "100% plagiarized". Even so, the concept is flawed, particularly for academics, whose papers have been flagged or rejected for citations, standardized descriptions of experimental methodology, or reused passages describing their own previous work - "self-plagiarism". In some cases, academics have reported on Twitter, the automated systems in use at some journals reject their work before an editor can see it.
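To see why the concept is flawed, it helps to see how crude the underlying matching is. Here is a minimal sketch in Python, assuming word five-gram "shingles" scored by Jaccard overlap - the texts, threshold, and method are illustrative assumptions, far simpler than whatever the commercial services actually run:

```python
# A minimal sketch of overlap detection: shingle each text into word
# n-grams and score Jaccard similarity against known material.
# Texts and threshold are illustrative, not any real service's method.

def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a: str, b: str) -> float:
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

known_corpus = ["samples were centrifuged at four degrees for ten minutes before analysis"]
submission = "samples were centrifuged at four degrees for ten minutes as described"

for document in known_corpus:
    score = similarity(submission, document)
    if score > 0.5:  # illustrative threshold
        print(f"possible plagiarism: {score:.0%} shingle overlap")
```

Note what the sketch cannot do: it has no idea whether the matching passage is a lifted argument, a quoted source, or a standard methods paragraph every paper in the field repeats - which is exactly why honest academics keep getting flagged.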

Now there's a new twist in this little arms race: rephrasing services that freshen up published material so it will pass muster. The only problem is (of course) that the AI is supremely stupid and poorly educated. Last year, Nature reported on "tortured phrases" that indicated plagiarized research papers, particularly rife in computer science. This week Essex senior lecturer Matt Lodder reported on Twitter his sightings of AI-rephrased material in students' submissions. First clue: "It read oddly." Well, yes. When I ran last week's posting through several of these services, they altered direct quotes (bad journalism), rewrote active sentences into passive ones (bad writing), and changed the meaning (bad editing). In Lodder's student's text, the AI had substituted "graph" for "chart"; in a paper submitted to a friend of his, "the separation of powers" had been rendered as "the sundering of puissances" and Adam Smith's classic had become "The Abundance of Countries". People: when you plagiarize, read what you turn in!


Illustrations: Adam Smith, author of The Wealth of Nations (portrait from the National Gallery of Scotland, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 25, 2022

Irreparable harm

The anti-doping systems in sports have long intrigued me as a highly visible example of a failed security system. The case of Kamila Valieva at the recent Winter Olympics provides yet another example.

I missed the event itself because I don't watch the Olympics. The corruption in the bidding process, documented by Andrew Jennings in 1992, was the first turn-off. Joan Ryan's 1995 Little Girls in Pretty Boxes, which investigated abuse in women's gymnastics and figure skating, made the tiny teens in both sports uncomfortable to watch. Then came the death of Georgian luger Nodar Kumaritashvili in a training run at the 2010 Vancouver Winter Olympics. The local organizing committee had been warned that the track was dangerous as designed, and did - nothing. Care for the athletes is really the bottom line.

Anti-doping authorities have long insisted that athletes are responsible for every substance found in their bodies. However, as a minor, Valieva is subject to less harsh rules. When the delayed results of a December test emerged, the Russian Anti-Doping Agency determined that she should be allowed to compete. The World Anti-Doping Agency, the International Olympic Committee, and the International Skating Union all appealed the decision. Instead, the Court of Arbitration for Sport upheld RUSADA's decision, writing in its final report: "...athletes should not be subject to the risk of serious harm occasioned by anti-doping authorities' failure to function effectively at a high level of performance and in a manner designed to protect the integrity of the operation of the Games". In other words, the lab and the anti-doping authorities should have gotten all this resolved out of the world's sight, before the Games began, and because Valieva was a leading contender for the gold medal, denying her the right to compete could do her "irreparable harm".

The overlooked nuance here appears to be that Valieva had been issued a *provisional* suspension. As a "Protected Person" - that is, a child - she does not have to meet the same threshold of proof that adults do. The CAS judges accepted the possibility that, as her legal team argued, her positive test could have been due to contamination or accidental ingestion, as her grandfather used this medication. If you take the view that further investigation may eventually exonerate her, but too late for this Olympics, they have a point. If you take the strict view that the fight against doping requires hard lines to be drawn, then she should have been sent home.

But. But. But. On her doping control form, Valieva had acknowledged taking two more heart medications that aren't banned: L-carnitine and hypoxen. Why is a 15-year-old testing positive for *three* heart medications? I don't care that two of them are legal.

Similarly: why is RUSADA involved when it's still suspended following Russia's state-sponsored doping scandal, which still has Russian athletes competing under the flag of the Russian Olympic Committee in a pretense that Russia is being punished?

Skating experts have had a lot to say about Valieva's coaches. We know from gymnastics as well as figure skating that the way women's bodies mature through their teens puts the most difficult - and most exciting - acrobatics out of reach. That reality has led to young female athletes being put on strict diets and doped with puberty blockers to keep them pre-pubescent. In her book, Ryan advocated age restrictions and greater oversight for both gymnastics and figure skating. Reviews complained that her more than 100 interviews with current and former gymnasts did not include the world's major successes, but that's the point: the 0.01% for whom the sport brings stardom are not representative. At Slate, Rita Wenxin Wang describes the same "culture of child abuse" Ryan described 25 years ago, pinpointing in particular Valieva's coach, Eteri Tutberidze, whose work with numerous young winning Russian teens won her Russia's Order of Honour from Vladimir Putin in 2018.

At The Ringer, Michael Baumann reports that Tutberidze burns through young skaters at a frantic pace; they wow the world for two or three years, and vanish. That could help explain CAS's conviction that this medal shot was irreplaceable.

At the Guardian, former gymnast Sarah Clarke calls out the IOC for its failure to protect Valieva. Clarke was one of the hundreds of victims of sexual predator Larry Nassar, and notes that while Nassar has been jailed, his many enablers have never been prosecuted, and the IOC never acted against any of the organizations (USA Gymnastics, USADA) that looked the other way. Also at the Guardian, Sean Ingle calls the incident clear evidence of abuse of a minor. At Open Democracy, Aiden McQuade calls Valieva's treatment "child trafficking" and an indictment of the entire Olympic movement.

Given that minors should not be put in the position Valieva was, there's just one answer: bring in the age restrictions that Ryan advocated in 1995 and that gymnastics and tennis brought in 25 years ago - tennis, after watching a series of high-profile teenaged stars succumb to injuries and burnout. This is a different definition of "harm".

The sports world has long insisted that it should be self-regulating, independent of all governments. The evidence continues to suggest the conflicts of interest run too deep.


Illustrations: Russian women's figure skating coach Eteri Tutberidze, at the 2018 award ceremony with Vladimir Putin (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 11, 2022

Freedom fries

"Someone ratted me out," a friend complained recently. They meant: after a group dinner, one of the participants had notified everyone to say they'd tested positive for covid a day later, and a third person had informed the test and trace authorities and now my friend was getting repeated texts along the lines of "isolate and get tested". Which they found invasive and offensive, and...well, just plain *unreasonable*.

Last night, Boris Johnson casually said in Parliament that he thought we could end all covid-related restrictions in a couple of weeks. Today there's a rumor that the infection survey that has produced the most reliable data on the prevalence and location of covid infections may be discontinued soon. There have been rumors, too, of charging for covid tests.

Fifteen hundred people died of covid in this country in the past week. Officially, there were more than 66,000 new infections yesterday - and that doesn't include all the people who felt like crap and didn't do a test, or did do a test and didn't bother to report the results (because the government's reporting web form demands a lot of information each time, most of which it only needs if you tested positive), or didn't know they were infected. If he follows through, Johnson's announcement would mean that if said dinner happened a month from now, my friend wouldn't be told to isolate. They could be exposed, and perhaps infected, and mingle as normal in complete ignorance. The tradeoff is the risk for everyone else: how do we decide when it's safe enough to meet? Is the plan to normalize high levels of fatalities?

Brief digression: no one thinks Johnson's announcement is a thought-out policy. Instead, given the daily emergence of new stories about rule-breaking parties at 10 Downing Street during lockdown, his comment is widely seen as an attempt to distract us and quiet fellow Conservatives who might vote to force him out of office. Ironically, a key element in making the party stories so compelling is the hundreds of pictures from CCTV, camera phones, social media, Johnson's official photographer... Teenagers have known for a decade to agree to put down cameras at parties, but British government officials are apparently less afraid anything bad will happen to them if they're caught.

At the beginning of the pandemic, we wrote about the inevitable clash between privacy and the needs of public health and epidemiology. Privacy was indeed much discussed then, at the design stage for contact tracing apps, test and trace, and other measures. Democratic countries had to find a balance between the needs of public health and human rights. In the end, Google and Apple wound up largely dictating the terms on which contact tracing apps could operate on their platforms.

To the chagrin of privacy activists, "privacy" has rarely been a good motivator for activism. The arguments are too complicated, though you can get some people excited over "state surveillance". In this pandemic, the big rallying cry has been "freedom", from the media-friendly Freedom Day, July 19, 2021, when Johnson removed that round of covid restrictions, to anti-mask and anti-vaccination protesters, such as the "Freedom Convoy" currently blocking up normally bland, government-filled downtown Ottawa, Ontario, and an increasing number of other locations around the world. Understanding what's going on there is beyond the scope of net.wars.

More pertinent is the diverging meaning of "freedom". As the number of covid prevention measures shrinks, the freedom available to vulnerable people shrinks in tandem. I'm not talking about restrictions like how many people may meet in a bar, but simple measures like masking on public transport, or getting restaurants and bars to publish information about their ventilation that would make assessing risk easier.

Elsewise, we have many people who seem to define "freedom" to mean "It's my right to pretend the pandemic doesn't exist". Masks, even on other people, then become intolerable reminders that there is a virus out there making trouble. In that scenario, however, self-protection, even for reasonably healthy people who just don't want to get sick, becomes near-impossible. The "personal responsibility" approach doesn't work in a situation where what's most needed is social collaboration.

The people landed with the most risk can do the least about it. As the aftermath of Hurricane Sandy highlighted, the advent of the Internet has opened up a huge divide between the people who have to go to work and the people who can work anywhere. I can Zoom into my friend's group dinner rather than attend in person, but the caterers and waitstaff can't. If "your freedom ends where my nose begins" (Zechariah Chafee, Jr., it says here) applies to physical violence, shouldn't it include infection by virus?

Many human rights activists warned against creating second-class citizens via vaccination passports. The idea was right, but privacy was the wrong lens, because we still view it predominantly as a right for the individual. You want freedom? Instead of placing the burden on each of us, as health psychologist Susan Michie has been advocating for months, make the *places* safer - set ventilation standards, have venues publish their protocols, display CO2 readings, install HEPA air purifiers. Less risk, greater freedom, and you'd get some privacy, too - and maybe fewer of us would be set against each other in standoffs no one knows how to fix.


Illustrations: Trucks protesting in Ottawa, February 2022 (via ΙΣΧΣΝΙΚΑ-888 at Wikimedia, CC-BY-SA-4.0).


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 7, 2022

Resolutions

We start 2022 with some catch-ups.

On Tuesday, the verdict came down in the trial of Theranos founder Elizabeth Holmes: guilty on four counts of fraud and conspiracy, acquitted on four counts, jury hung on three. The judge said he would declare a mistrial on those three, but given that Holmes will go to prison anyway, expectations are that there will be no retrial.

The sad fact is that the counts on which Holmes was acquitted were those regarding fraud against patients. While investment fraud should be punished, the patients were the people most harmed by Theranos' false claims to be able to perform multiple accurate tests on very small blood samples. The investors whose losses saw Holmes found guilty could by and large afford them (though that's no justification). I know the $350 million collectively lost by Trump education secretary Betsy DeVos, Rupert Murdoch, and the Cox family is a lot of money, but it's a vanishingly tiny percentage of their overall wealth (which may help explain DeVos family investment manager Lisa Peterson's startlingly casual approach to research). By contrast, for a woman who's already had three miscarriages, the distress of being told she's losing a fourth, despite the eventual happy ending, is vastly more significant.

I don't think this case by itself will make a massive difference in Silicon Valley's culture, despite Holmes's likely prison sentence - how much did bankers change after the 2008 financial crisis? Yet we really do need the case to make a substantial difference in how regulators approach diagnostic devices, as well as other cyber-physical hybrid offerings, so that future patients don't become experimental subjects for the unscrupulous.

***

On New Year's Eve, Mozilla, the most important browser that only 3% of the market uses, reminded people it accepts donations in cryptocurrencies through BitPay. The message set off an immediate storm, not least among two of the organization's co-founders, one of whom, Jamie Zawinski, tweeted that everyone involved in the decision should be "witheringly ashamed". At The Register, Liam Proven points out that it's not new for Mozilla to accept cryptocurrencies; it's just changed payment providers.

One reason to pay attention to this little fiasco is that while Mozilla (and other Internet-related non-profits and open software projects) appeals greatly to people who care about the environment, believe that cryptocurrency mining is wasteful and energy-intensive, and deplore the anti-government rhetoric of its most vocal libertarian promoters, the richest people willing to donate to such projects are often those libertarians. Trying to keep both onside is going to become increasingly difficult. Mozilla has now suspended its acceptance of cryptocurrencies to consider its position.

***

In 2010, fatally frustrated with Google, I went looking for a replacement search engine and found DuckDuckGo. It took me a little while to get the hang of formulating successful queries, but both it and I got better. It's a long time since I needed to direct a search elsewhere.

At the time, a lot of people thought it was bananas for a small startup to try to compete against Google. In an interview, founder Gabriel Weinberg explained that the decision had been driven by his own frustration with Google's results. Weinberg talked most about getting to the source you want more efficiently.

Even at that early stage, embracing privacy was part of his strategy. Nearly 12 years on from the company's founding, its 35.3 billion searches last year - up 46% from 2020 - remain a rounding error compared to Google's trillions per year. But the company continues to offer things I actually want. I have its browser on my phone, and (despite still having a personal email server) have signed up for one of its email addresses because it promises to strip out the extensive tracking inserted into many email newsletters. And all without having to buy into Apple's ecosystem.

Privacy has long been a harder sell than most privacy advocates would like to admit, usually because it involves giving up a lot of convenience to get it. In this case...it's easy. So far.

***

Never doubt that tennis is where cultural clashes come home to roost. Tennis had the first transgender athlete; it was at the forefront of second-wave feminism; and now it's the venue for science versus anti-science. As even people who *aren't* interested in tennis have seen, it is the foremost venue for the clash between vaccine mandates and anti-vaxx refuseniks. Result: the men's world number one, Serbian player Novak Djokovic (and, a day later, doubles specialist Renata Voracova), was diverted to a government quarantine hotel room like any non-famous immigrant awaiting deportation.

Every tennis watcher saw this coming months ago. On one side, Australian rules; on the other, a tennis tournament that apparently believed it could accommodate a star's balking at an immigration requirement as unyieldingly binary as pregnancy or the Northern Ireland protocol.

Djokovic is making visible to the world a reality that privacy advocates have been fighting to expose: you have no rights at borders. If you think Djokovic, with all his unique resources, deserves better treatment, then demand better treatment for everyone, legal or illegal, at all borders, not just Australia's.


Illustrations: Winnie the Pooh, discovering the North Pole, by Ernest Howard Shepard, finally in the public domain (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 25, 2021

Lawful interception

For at least five years the stories have been coming about the Israeli company NSO Group. For most people, NSO is not a direct threat. For human rights activists, dissidents, lawyers, politicians, journalists, and others targeted by hostile authoritarian states, however, its elite hackers are dangerous. NSO itself says it supplies lawful interception, and only to governments to help catch terrorists.

Now, finally, someone is taking action. Not, as you might reasonably expect, a democratic government defending human rights, but Apple, which is suing the company on the basis that NSO's exploits cost it resources and technical support. Apple has also alerted targets in Thailand, El Salvador, and Uganda.

On Twitter, intelligence analyst Eric Garland picks over the complaint. Among his more scathing quotes: "Defendants are notorious hackers - amoral 21st century mercenaries who have created highly sophisticated cyber-surveillance machinery that invites routine and flagrant abuse", "[its] practices threaten the rules-based international order", and "NSO's products...permit attacks, including from sovereign governments that pay hundreds of millions of dollars to target and attack a tiny fraction of users with information of particular interest to NSO's customers".

The hidden hero in this story is the Canadian research group Citizen Lab, which calls NSO's work "despotism as a service".

Citizen Lab began highlighting NSO's "lawful intercept" software in 2016, when analysis it conducted with Lookout Security showed that a suspicious SMS message forwarded by UAE-based Ahmed Mansoor contained links belonging to NSO Group's infrastructure. The links would have led Mansoor to a chain of zero-day exploits that would have turned his iPhone 6 into a comprehensive, remotely operated spying device. As Citizen Lab wrote, "Some governments cannot resist the temptation to use such tools against political opponents, journalists, and human rights defenders." It went on to note the absence of human rights policies and due diligence at spyware companies; the economic incentives all align the wrong way. An Android version was found shortly afterwards.

Among the targets Citizen Lab found in 2017: Mexican scientists working on obesity and soda consumption, and Amnesty International researchers. In 2018, Citizen Lab reported that Internet scans found 45 countries where Pegasus appeared to be in operation, at least ten of them working cross-border. The same year, Citizen Lab found Pegasus on the phone of Canadian resident Omar Abdulaziz, a Saudi dissident linked to murdered journalist Jamal Khashoggi. In September 2021, Citizen Lab discovered NSO was using a zero-click, zero-day vulnerability in the image rendering library used in Apple's iMessage to take over targets' iOS, watchOS, and macOS devices. Apple patched 1.65 billion products.

Both Privacy International and the Pegasus project, a joint investigation into the company by media outlets including the Guardian and coordinated by Forbidden Stories, have found dozens more examples.

In July 2021, a leaked database of 50,000 phone numbers believed to belong to people of interest to NSO clients since 2016 included human rights activists, business executives, religious figures, academics, journalists, lawyers, and union and government officials around the world. It was not clear if their devices had been hacked. Shortly afterwards, Rappler reported that NSO spyware can successfully infect even the latest, most secure iPhones.

Citizen Lab began tracking litigation and formal complaints against spyware companies in 2018. In a complaint filed in 2019, WhatsApp and Facebook are arguing that NSO and Q Cyber used their servers to distribute malware; on November 8, the US Ninth Circuit Court of Appeals rejected NSO's claim of sovereign immunity, opening the way to discovery. Privacy International promptly urged the British government to send a clear message, given that NSO's target was a UK-based lawyer challenging the company over human rights violations in Mexico and Saudi Arabia.

Some further background is to be found at Lawfare, where shortly *before* the suit was announced, security expert Stephanie Pell and law professor David Kaye discussed how to regulate spyware. In 2019, Kaye wrote a report calling for a moratorium on the sale and transfer of spyware and noting that its makers "are not subject to any effective global or national control". Kaye proposes adding human rights-based export rules to the Wassenaar Arrangement export controls for conventional arms and dual-use technologies. In that vein, on November 3 the US Commerce Department blacklisted NSO along with fellow Israeli company Candiru, Russian company Positive Technologies, and Singapore-based Computer Security Initiative Consultancy as national security threats. And there are still more, such as the surveillance system sold to Egypt by France-based Thales subsidiary Dassault and Nexa Technologies.

The story proves the point many have made throughout 30 years of fighting for the right to use strong encryption: while governments and their law enforcement agencies insist they need access to keep us safe, there is no magic hole that only "good guys" can use, and any system created to give special access will always end up being abused. We can't rely on the technology companies to defend human rights; that's not in their business model. Governments need to accept and act on the reality that exceptional access for anyone makes everyone everywhere less safe.

Illustrations: Citizen Lab's 2021 map of the distribution of suspected NSO infections (via Democracy Now).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 29, 2021

Majority report

How do democracy and algorithmic governance live together? This was the central question of a workshop this week on computational governance. This is only partly about the Internet; many new tools for governance are appearing all the time: smart contracts, for example, and AI-powered predictive systems. Many of these are being built with little idea of how they can go wrong.

The workshop asked three questions:

- What can technologists learn from other systems of governance?
- What advances in computer science would be required for computational systems to be useful in important affairs like human governance?
- Conversely, are there technologies that policy makers can use to improve existing systems?

Implied is this: who gets to decide? On the early Internet, for example, decisions were reached by consensus among engineers who all knew each other, funded by hopeful governments. Mass adoption, not legal mandate, helped the Internet's TCP/IP protocols dominate over many other 1990s networking systems: it was free, it worked well enough, and it was *there*. The same factors applied to other familiar protocols and applications: the web, email, communications between routers and other pieces of infrastructure. Proposals circulated as Requests for Comments, and those that found the greatest acceptance were adopted. In those early days, as I was told in a nostalgic moment at a conference in 1998, anyone pushing a proposal because it was good for their company would have been booed off the stage. It couldn't last; incoming new stakeholders demanded a voice.

If you're designing an automated governance system, the fundamental question is this: how do you deal with dissenting minorities? In some contexts - most obviously the US Supreme Court - dissenting views stay on the record alongside the majority opinion. In the long run of legal reasoning, it's important to know how judgments were reached and what issues were considered. You must show your work. In other contexts where only the consensus is recorded, minority dissent is disappeared - AI systems, for example, where the labelling that's adopted is the result of human votes we never see.

In one intriguing example, a panel of judges may rule a defendant is guilty or not guilty depending on whether you add up votes by premise - the defendant must have both committed the crime and possessed criminal intent - or by conclusion, in which each judge casts a final vote and only these are counted. In a small-scale human system the discrepancy is obvious. In a large-scale automated system, which type of aggregation do you choose, and what are the consequences, and for whom?
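The discrepancy is easy to see in miniature. Here is a minimal Python sketch of that judicial-panel example - the three judges, the two premises, and their votes are hypothetical illustrations of the aggregation problem, not any real case:

```python
# Same three votes, two aggregation rules, opposite verdicts.
from dataclasses import dataclass

@dataclass
class Vote:
    committed_act: bool   # premise 1: the defendant committed the act
    had_intent: bool      # premise 2: the defendant had criminal intent

    @property
    def conclusion(self) -> bool:
        # each judge's individual verdict requires both premises
        return self.committed_act and self.had_intent

def majority(values: list[bool]) -> bool:
    return sum(values) > len(values) / 2

votes = [
    Vote(committed_act=True,  had_intent=True),   # judge 1: guilty
    Vote(committed_act=True,  had_intent=False),  # judge 2: not guilty
    Vote(committed_act=False, had_intent=True),   # judge 3: not guilty
]

# By premise: take a majority on each premise, then apply the legal rule.
by_premise = majority([v.committed_act for v in votes]) and \
             majority([v.had_intent for v in votes])

# By conclusion: take a majority over each judge's final verdict.
by_conclusion = majority([v.conclusion for v in votes])

print("by premise:   ", "guilty" if by_premise else "not guilty")     # guilty
print("by conclusion:", "guilty" if by_conclusion else "not guilty")  # not guilty
```

Each premise commands a two-to-one majority, so premise-based aggregation convicts; only one judge's overall verdict is "guilty", so conclusion-based aggregation acquits. An automated system must hard-code one of these rules, and whoever it affects will likely never see the choice.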

Decentralization poses a similarly knotty conundrum. We talk about the Internet's decentralized origins, but its design fundamentally does not prevent consolidation. Centralized layers such as the domain name system and anti-spam blocking lists are single points of control and potential failure. If decentralization is your goal, the Internet's design has proven to be fundamentally flawed. Lots of us have argued that we should redecentralize the Internet, but if you adopt a truly decentralized system, where do you seek redress? In a financial system running on blockchains and smart contracts, this is a crucial point.

Yet this fundamental flaw in the Internet's design means that over time we have increasingly become second-class citizens on the Internet, all without ever agreeing to any of it. Some US newspapers are still, three and a half years on, ghosting Europeans for fear of GDPR; videos posted to web forums may be geoblocked from playing in other regions. Deeper down the stack, design decisions have enabled surveillance and control by exposing routing metadata - who connects to whom. Efforts to superimpose security have led to a dysfunctional system of digital certificates that average users either don't know is there or don't know how to use to protect themselves. Efforts to cut down on attacks and network abuse have spawned a handful of gatekeepers like Google, Akamai, Cloudflare, and SORBS that get to decide what traffic gets to go where. Few realize how much Internet citizenship we've lost over the last 25 years; in many of our heads, the old cooperative Internet is just a few steps back. As if.

As Jon Crowcroft and I concluded in our paper on leaky networks for this year's Gikii, "leaky" designs can be useful to speed development early on even though they pose problems later, when issues like security become important. The Internet was built by people who trusted each other and did not sufficiently imagine it being used by people who didn't, shouldn't, and couldn't. You could say it this way: in the technology world, everything starts as an experiment and by the time there are problems it's lawless.

So this was the main point of the workshop: how do you structure automated governance to protect the rights of minorities? Opting to slow decision making to consider the minority report impedes action in emergencies. If you limit Internet metadata exposure, security people lose some ability to debug problems and trace attacks.

We considered possible role models: British corporate governance; smart contracts; and, presented by Miranda Mowbray, the wacky system by which Venice elected a new Doge. It could not work today: it's crazily complex, and impossible to scale. But you could certainly code it.
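As a thought experiment, here is a toy sketch in Python of that protocol's structure, using the group sizes recorded for the thirteenth-century procedure; random draws stand in for both the lot and the real rounds of approval balloting, so treat it as an outline rather than a faithful implementation.

    import random

    # Alternating stages of the 1268 protocol: ("lot", k) reduces the
    # current group to k by drawing lots; ("elect", k) has the group
    # choose k members from the whole council (approximated here by a
    # random draw).
    STAGES = [("lot", 9), ("elect", 40), ("lot", 12), ("elect", 25),
              ("lot", 9), ("elect", 45), ("lot", 11), ("elect", 41)]

    def elect_doge(council):
        group = random.sample(council, 30)         # initial draw of 30
        for kind, k in STAGES:
            if kind == "lot":
                group = random.sample(group, k)    # reduce by lot
            else:
                group = random.sample(council, k)  # co-opt from council
        # The final 41 ballot until one candidate has at least 25 votes;
        # the drift toward the leader crudely models rounds of persuasion.
        candidates = random.sample(council, 2)
        votes = [random.choice(candidates) for _ in group]
        while True:
            leader = max(set(votes), key=votes.count)
            if votes.count(leader) >= 25:
                return leader
            votes[random.randrange(len(votes))] = leader  # one elector yields

    print(elect_doge([f"noble_{i}" for i in range(480)]))

A procedure that exhausted medieval Venice reduces to a couple of dozen lines - complex for humans, trivial for a computer, which is rather the point.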


Illustrations: Monument to the Doge Giovanni Pesaro (via Didier Descouens at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 22, 2021

It's about power

It is tempting to view every legislative proposal that comes from the present UK government as an act of revenge against the people and institutions that have disagreed with it.

The UK's Supreme Court undid Boris Johnson's decision to prorogue Parliament in the 2019 stages of the Brexit debates; the government now proposes to limit judicial review. The Election Commission recommended codes of conduct to keep political advertising fair; the Elections Bill - one element in what retiring House of Lords member David Puttnam, writing at the Guardian, calls a long list of anti-democratic moves - prioritizes registration and voter ID, measures likely, here as in the US, to disenfranchise opposition voters.

The UK government's proposals for reforming data protection law - the consultation is open until November 19 - also seem to fit this scenario. Granted, the UK wasn't a fan, even in 2013, when the EU's General Data Protection Regulation was being negotiated. Today's proposals would roll back some aspects of the law. Notably, they suggest discouraging individuals from filing subject access requests by introducing fees, last seen in the 1998 Data Protection Act that GDPR replaced, and giving organizations greater latitude to refuse. This thinking is familiar from the 2013 discussions about freedom of information requests. The difference here: it's *our* data we want to access.

More pervasive, though, is the consultation's general assumption that data protection is a burden that impedes innovation and needs to be lightened to unlock economic growth. The EU, reading it, may be relieved it only granted the UK's data protection regime adequacy for four years.

It is impossible to read the subject access rights section (page 69ff) without concluding that the "burden" the government seeks to relieve is its own. In a panel on the proposed changes at the UK Internet Governance Forum, speakers agreed that businesses are not calling for this. What they *do* want is guidance. Diverging from GDPR makes life more complicated by creating multiple regimes that all require compliance. If you're a business, you want consistency and clarity. It's hard to see how these proposals provide them.

This is even more true for individuals who depend on their rights under GDPR (and equivalent) to understand the decisions that have been made about them. As Renate Samson put it at UKIGF, viewing their data is crucial in obtaining redress for erroneous immigration and asylum decisions. "Understanding why the computer says no is critical for redress purposes." In May, the Open Rights Group and the3million won this very battle against the government - under GDPR.

These issues are familiar ground for net.wars. What's less so is the UK's behavior. As in other areas - the widely criticized covid response, its dealings throughout the Brexit negotiations - Britain seems to assume it can dictate terms. At UKIGF, Michael Veale tried to point out the reality: "The UK has to engage with GDPR in a way that shows it understands it's now a rule-taker." It's almost impossible to imagine this government understanding any such thing.

A little earlier, the MP Chris Philip had said the UK is determined to be a scientific and technology "superpower". This country, he said, is number three behind the US and China; we need to get to "an even better position".

Pause for puzzlement. Does Philip think the UK can pass either the US or China in AI? What would that even mean? AI, of all technologies, requires collaboration. Is he really overlooking the EU's technical weight as a bloc? Is the argument that data is essential for AI, AI is the economic future of Britain, so therefore individuals should roll over and open up for...Apple and Google? Do Apple and Google see their role in life as helping the UK to become a world leader in AI?

After all, "the US" isn't really the US as a nation in this discussion; in AI "the US" is the six giant multinational companies Amy Webb that all want to dominate (Google, Microsoft, Apple, Facebook, IBM, Amazon). Data protection law is one of the essential tools for limiting their ability to slurp up everyone's data.

Meanwhile, this government's own policies seem to be in conflict with each other. It is simultaneously at work on a digital identity framework. Getting people to use it will require trust, which proposals to reform data protection law undermine. And trust in this government with respect to data is already faltering, because of the fiasco over our medical data back in June. It's not clear the government is making any of these connections.

Twenty years ago, data protection was about privacy and the big threat was governments. Gradually, as the online advertising industry formed and start-ups became giant companies, the view of data protection law expanded to include helping to redress the imbalance of power between individuals and large companies. Now, with those companies dominating the landscape, data protection is also about restructuring power and ensuring that small players have a chance when faced with giant competitors who can corral everyone's devices and extract their data. The more complicated the regulations, as European Digital Rights keeps saying, the more it's only the biggest companies that can afford the infrastructure to comply with them. "Data protection" sounds abstract and boring. Don't be fooled. It's about power.


Illustrations: Vampire squid (via Anne-Lise Heinrichs, on Flickr, following Michael Veale's comparison to Big Tech at UKIGF).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 10, 2021

Globalizing Britain

Brexit really starts now. It was easy to forget, during the dramas that accompanied the passage of the Withdrawal Agreement and the disruption of the pandemic, that the really serious question had still not been answered: given full control, what would Britain do with it? What is a reshaped "independent global Britain" going to be when it grows up? Now is when we find out, as this government, which has a large enough majority to do almost anything it wants, pursues the policies it announced in the Queen's Speech last May.

Some of the agenda is depressingly cribbed from the current US Republican playbook. First and most obvious in this group is the Elections bill. The most contentious change is requiring voter ID at polling stations (even though there was a total of one conviction for voter fraud in 2019, the year of the last general election). What those in other countries may not realize is how many eligible voters in Britain lack any form of photo ID. The Guardian reports that 11 million people - a fifth of eligible voters - have neither driver's license nor passport. Naturally they are disproportionately from black and Asian backgrounds, older and disabled, and/or poor. The expected general effect, especially coupled with the additional proposal to remove the 15-year cap on expatriate voting, is to put the thumb on the electoral scale to favor the Conservatives.

More nettishly, the government is gearing up for another attack on encryption, pulling out all the same old arguments, mixed with some rhetoric copied from the FBI's "going dark" campaign. As Gareth Corfield explains at The Register, the current target is Facebook, which intends to roll out end-to-end encryption for messaging and other services.

This is also the moment when the Online Safety bill (previously online harms) arrives. The push against encryption, which includes funding technical development, is part of that, because the bill makes service providers responsible for illegal content users post - and also, as Heather Burns points out at the Open Rights Group, for legal but harmful content. Burns also details the extensive scope of the bill's age verification plans.

These moves are not new or unexpected. Slightly more so was the announcement that the UK will review data protection law with an eye to diverging from the EU; it opened the consultation today. This is, as many have pointed out before, dangerous for UK businesses that rely on data transfers to the EU for survival. The EU's decision a few months ago to grant the UK an adequacy decision - that is, the EU's acceptance of the UK's data protection laws as providing equivalent protection - will last for four years. It seems unlikely the EU will revisit it before then, but even before divergence Ian Brown and Douwe Korff have argued that the UK's data protection framework should be ruled inadequate. It *sounds* great when they say it will mean getting rid of the incessant cookie pop-ups, but at risk are privacy protections that have taken years to build. The consultation document wants to promise everything: "even better data protection regime" and "unlocking the power of data" appear in the same paragraph, and the new regime will also both be "pro-growth and innovation-friendly" and "maintain high data protection standards".

Recent moves have not made it easier to trust this government with respect to personal data - first the postponed-for-now medical data fiasco and second this week's revelation that the government is increasingly using our data and hiring third-party marketing firms to target ads and develop personalized campaigns to manipulate the country's behavior. This "influence government" is the work of the ten-year-old Behavioural Insights Team - the "nudge unit", whose thinking is summed up in its behavioral economy report.

Then there's the Police, Crime, Sentencing, and Courts bill currently making its way through Parliament. This one has been the subject of street protests across the UK because of provisions that permit police and Home Secretary Priti Patel to impose various limits on protests.

Patel's Home Office also features in another area of contention, the Nationality and Borders bill. This bill would make criminal offenses out of arriving in the UK without permission and helping an asylum seeker enter the UK. The latter raises many questions, and the Law Society lists many legal issues that need clarification. Accompanying this is this week's proposal to turn back migrant boats, which breaks maritime law.

A few more entertainments await: for one, the review of network neutrality announced by Ofcom, the communications regulator. At this stage, it's unclear what dangers lurk, but it's another thing to watch, along with the ongoing consultation on digital identity.

More expected, no less alarming, this government also has an ongoing independent review of the 1998 Human Rights Act, which Conservatives such as former prime minister Theresa May have long wanted to scrap.

Human rights activists in this country aren't going to get much rest between now and (probably) 2024, when the next general election is due. Or maybe ever, looking at this list. This is the latest step in a long march, and it reminds us that underneath Britain's democracy lies its ancient feudalism.


Illustrations: Derbyshire stately home Chatsworth (via Trevor Rickards at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 27, 2021

The threat we left behind

Be careful what systems you build with good intentions. The next owner may not be so kind.

It has long been a basic principle among privacy activists that a significant danger in embedding surveillance technologies is regime change: today's government is benign, but tomorrow's may not be, so let's not build the technologies that could support a police state for that hostile government to wield. Equally - although it's often politic not to say this explicitly - the owner may remain the same but their own intentions may change as the affordances of the system give them new ideas about what it's possible for them to know.

I would be hard-pressed to produce evidence of a direct connection, but one of the ideas floating around Virtual Diplomacy, a 1997 conference that brought together the Internet and diplomacy communities, was that the systems that are privacy-invasive in Western contexts could save lives and avert disasters on the ground in crisis situations. Not long afterwards, the use of biometric identification and other technologies was being built into refugee systems in the US and EU.

In a 2018 article for The New Humanitarian, Paul Currion observes that the systems' development was "driven by the interests of national governments, technology companies, and aid agencies - in that order". Refugees quoted in the article express trust in the UN, but not much understanding of the risks of compliance.

Currion dates the earliest use of "humanitarian biometrics" to 2003 - and identifies the location of that groundbreaking use as...Afghanistan, which used iris testing to verify the identities of Afghans returning from Pakistan to prevent fraud. In 2006, then-current, now just-departed, president Ashraf Ghani wrote a book pinpointing biometric identification as the foundation of Afghanistan's social policy. Afghanistan, the article concludes, is "the most biometrically identifiable country in the world" - and, it adds, "although UNHCR and the Afghan government have both invested heavily in biometric databases, the US military has been the real driving force." It bases this latter claim on a 2014 article in Public Intelligence that studies US military documents on the use of biometrics in Afghanistan.

These are the systems that now belong to the Taliban.

Privacy International began warning of the issues surrounding privacy and refugees in the mid-2000s. In 2011, by which time it had been working with UNHCR to improve its practices for four years, PI noted how little understanding there was among funders and the public of why privacy mattered to refugees.

Perhaps it's the word: "privacy" sounds like a luxury, a nice-to-have rather than a necessity, and anyway, how can people held in camps waiting to be moved on to their next location care about privacy when what they need is safety, food, shelter, and a reunion with the rest of their families? PI's answer: "Putting it bluntly, getting privacy wrong will get people arrested, imprisoned, tortured, and may sometimes lead to death." Refugees are at risk from both the countries they're fleeing *from* and the countries they're fleeing *to*, which may welcome and support them - or reject, return, deport, or imprison them, or hold them in bureaucratic purgatory. (As I type this, HIAS president and CEO Mark Hetfield is telling MSNBC that the US's 14-step checking process is stopping Afghan-Americans from getting their families out.)

As PI goes on to explain, there is no such thing as "meaningful consent" in these circumstances. At The New Humanitarian, in a June 2021 article, Zara Rahman agrees. She was responding to a Human Rights Watch report that the United Nations High Commissioner for Refugees had handed a detailed biometric database covering hundreds of thousands of Rohingya refugees to the Myanmar government from which they fled. HRW accused the agency of breaking its own rules for collecting and protecting data, and failing to obtain informed consent; UNHCR denies this charge. But you're desperate and in danger, and UNHCR wants your fingerprint. Can you really say no?

In many countries UNHCR is the organization that determines refugee status. Personal information is critical to this process. The amount of information has increased in some areas to include biometrics; as early as 2008 the US was considering using genetic information to confirm family relationships. More important, UNHCR is not always in control of the information it collects. In 2013, PI published a detailed analysis of refugee data collection in Syria. Last week, it published an even more detailed explanation of the systems built in Afghanistan over the last 20 years and that now have been left behind.

Shortly after the current crisis began, April Glaser and Sephora Smith reported at NBC News that Afghans were hastily deleting photographs and documents on their phones that might link them to Westerners, international human rights groups, the Afghan military, or the recently-departed Afghan government. It's an imperfect strategy: instructions on how to do this in local Afghan languages are not always available, and much of the data and the graph of their social connections are stored on social media that don't necessarily facilitate mass deletions. Facebook has released tools to help, including a one-click locking button and pop-up instructions on Instagram. Access Now also offers help and is telling international actors to close down access to these databases before leaving.

This aspect of the Afghan crisis was entirely avoidable.


Illustrations: Afghan woman being iris-scanned for entry into the Korean hospital at Bagram Airfield, Afghanistan, 2012 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 6, 2021

Privacy-preserving mass surveillance

Every time it seems like digital rights activists need to stop quoting George Orwell so much, stuff like this happens.

In an abrupt turnaround, on Thursday Apple announced the next stage in the decades-long battle over strong cryptography: after years of resisting law enforcement demands, the company is U-turning to backdoor its cryptography to scan personal devices and cloud stores for child abuse images. EFF sums up the problem nicely: "even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor". Or, more simply, a hole is a hole. Most Orwellian moment: Nicholas Weaver framing it on Lawfare as "privacy-sensitive mass surveillance".

Smartphones, particularly Apple phones, have never really been *our* devices in the way that early personal computers were, because the supplying company has always been able to change the phone's software from afar without permission. Apple's move makes this reality explicit.

The bigger question is: why? Apple hasn't said. But the pressure has been mounting on all the technology companies in the last few years, as an increasing number of governments have been demanding the right of access to encrypted material. As Amie Stepanovich notes on Twitter, another factor may be the "online harms" agenda that began in the UK but has since spread to New Zealand, Canada, and others. The UK's Online Safety bill is already (controversially) in progress, as Ross Anderson predicted in 2018. Child exploitation is a terrible thing; this is still a dangerous policy.

Meanwhile, 2021 is seeing some of the AI hype of the last ten years crash into reality. Two examples: health and autonomous vehicles. At MIT Technology Review, Will Douglas Heaven notes the general failure of AI tools in the pandemic. Several research studies - the British Medical Journal, Nature, and the Turing Institute (PDF) - find that none of the hundreds of algorithms were of any clinical use and some were actively harmful. The biggest problem appears to have been poor-quality training datasets, leading the AI to either identify the wrong thing, miss important features, or appear deceptively accurate. Finally, even IBM is admitting that Watson, its Jeopardy! champion, has not become a successful AI medical diagnostician. Medicine is art as well as science; who knew? (Doctors and nurses, obviously.)

As for autonomous vehicles, at Wired Andrew Kersley reports that Amazon is abandoning its drone delivery business. The last year has seen considerable consolidation among entrants in the market for self-driving cars, as the time and resources it will take to achieve them continue to expand. Google's Waymo is nonetheless arguing that the UK should not cap the number of self-driving cars on public roads and the UK-grown Oxbotica is proposing a code of practice for deployment. However, as Christian Wolmar predicted in 2018, the cars are not here. Even some Tesla insiders admit that.

The AI that has "succeeded" - in the narrow sense of being deployed, not in any broader sense - has been the (Orwellian) surveillance and control side of AI - the robots that screen job applications, the automated facial recognition, the AI-driven border controls. The EU, which invests in this stuff, is now proposing AI regulations; if drafted to respect human rights, they could be globally significant.

However, we will also have to ensure the rules aren't abused against us. Also this week, Facebook blocked the tool a group of New York University social scientists were using to study the company's ad targeting, along with the researchers' personal accounts. The "user privacy" excuse: Cambridge Analytica. The 2015 scandal around CA's scraping a bunch of personal data via an app users voluntarily downloaded eventually cost Facebook $5 billion in its 2019 settlement with the US Federal Trade Commission, which also required it to ensure this sort of thing didn't happen again. The NYU researchers' Ad Observatory was collecting advertising data via a browser extension users opted to install. They were, Facebook says, scraping data. Potato, potahto!

People who aren't Facebook's lawyers see the two situations as entirely different. CA was building voter profiles to study how to manipulate them. The Ad Observatory deliberately avoided collecting personal data; instead, it collected displayed ads in order to study their political impact and identify who pays for them. Potato, *tomahto*.

One reason for the universal skepticism is that this move has companions - Facebook has also limited journalist access to CrowdTangle, a data tool that helped establish that far-right news content generates higher numbers of interactions than other types and suffers no penalty for being full of misinformation. In addition, at the Guardian, Chris McGreal finds that InfluenceMap reports that fossil fuel companies are using Facebook ads to promote oil and gas use as part of remediating climate change (have some clean coal).

Facebook's response has been to claim it's committed to transparency and blame the FTC. The FTC was not amused: "Had you honored your commitment to contact us in advance, we would have pointed out that the consent decree does not bar Facebook from creating exceptions for good-faith research in the public interest." The FTC knows Orwellian fiction when it sees it.


Illustrations: Orwell's house on Portobello Road, complete with CCTV camera.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 23, 2021

Immune response

The slight reopening of international travel - at least inbound to the UK - is reupping discussions of vaccination passports, which we last discussed here three months ago. In many ways, the discussion recapitulates not only the ID card battles of 2006-2010 but also last year's concerns about contact tracing apps.

We revisit so soon for two reasons. First, the UK government has been sending out conflicting messages for the last month or more. Vaccination passports may - or may not - be required for university attendance and residence; they may be required for domestic venues - and football games! - in September. One minister - foreign secretary Dominic Raab - says the purpose would be to entice young people to get vaccinated, an approach that apparently worked in France, where proposing to require vaccination passports in order to visit cafes caused an Eiffel Tower-shaped spike in people presenting for shots. Others seem to think that certificates of either vaccination or negative tests will entice people to go out more and spend money. Or maybe the UK won't do them at all; if enough people are vaccinated why would we need proof of any one individual's status? Little has been said about whatever the government may have learned from the test events that were supposed to show if it was safe to resume mass entertainment gatherings.

Second, a panel discussion last month hosted by Allyson Pollock raised some new points. Many of us have thought of covid passports for international travel as roughly equivalent to proof of vaccination for yellow fever. However, Linnet Taylor argues that the only time someone in a high-income country needs one is if they're visiting a country where the disease is endemic. By contrast, every country has covid, and large numbers - children, especially - either can't access or do not qualify for covid vaccinations. The problems that disparity caused for families led Israel to rethink its Green Pass, which expired in June and was not renewed. Therefore, Taylor said, it's more relevant to think about lowering the prevalence of the disease than to try to distinguish between vaccinated and unvaccinated. The chief result of requiring vaccination passports for international travel, she said, will be to add extra barriers for those traveling from low-income countries to high-income countries and cement into place global health inequality and unequal access to vaccines. She concluded that giving the responsibility to technology companies merely shows we have "no plan to solve them any other way".

It also brings other risks. Michael Veale and Seda F. Gürses explain why the computational infrastructure required to support online vaccination verification undercuts public health objectives. Ellen Ullman wrote about this in 1997: computer logic eliminates fuzzy human accommodations, and its affordances foster administrative change from help to surveillance and from inclusion to exclusion. No one using the system - that is, people going to pubs and concerts - will have any control over what it's doing.

Last year, Westerners were appalled at the passport-like controls China put in place. This year, New York state is offering the Excelsior Pass. Once you load the necessary details into the pass, a mobile phone app, a scan gains you admission to a variety of venues. IBM, which built the system, is supposedly already investigating how it can be expanded.

As Veale pointed out, a real-time system to check vaccination certificates will also know everywhere each individual certificate has been checked, adding inevitable intrusion far beyond the vaccinated-yes/no binary. Two stories this week bear Veale out. The first is the New York Times story that highlighted the privacy risks of QR codes that are proliferating in the name of covid safety. Again, the average individual has no way to tell what data is incorporated into the QR code or what's being saved.

The second story is the outing of Monsignor Jeffrey Burrill by The Pillar, a Medium newsletter that covers the Catholic Church. The Pillar says its writers legally obtained 24 months' worth of supposedly anonymized, aggregated app signal data. Out of that aggregated mass they used known locations Burrill frequents to pick out a phone ID with matching history, and used that to track the phone's use of the LGBTQ dating app Grindr and visits to gay nightclubs. Burrill resigned shortly after being informed of the story.
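The matching technique is simple enough to sketch. Researchers have repeatedly shown that a handful of place-and-time points is usually enough to single out one device in an "anonymized" location dataset; the following Python fragment illustrates the idea, with all identifiers and records invented.

    from collections import defaultdict

    # records: (device_id, place, day) rows from an "anonymized" feed.
    # sightings: places and days where the target is known to have been.
    def matching_devices(records, sightings):
        history = defaultdict(set)
        for device, place, day in records:
            history[device].add((place, day))
        # Keep only devices whose history contains every known sighting.
        return [d for d, seen in history.items() if sightings <= seen]

    records = [
        ("id-417", "rectory", "2020-03-01"),
        ("id-417", "conference", "2020-06-14"),
        ("id-802", "rectory", "2020-03-01"),
    ]
    target = {("rectory", "2020-03-01"), ("conference", "2020-06-14")}
    print(matching_devices(records, target))  # ['id-417']

When only one device ID survives the filter, the "anonymized" dataset has named its owner, and everything else in that device's history comes along for free.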

More important is the conclusion Bruce Schneier draws: location data cannot be successfully anonymized. So checking vaccination passports in fact means building the framework of a comprehensive tracking system, whether or not that's the intention.

Like contact tracing apps before them, vaccination passports are a mirage that seems to offer the prospect of living - in this case, to people who've been vaccinated against covid - as if the pandemic does not exist. Whether it "works" depends on what your goal is. If it's to create an airport-style fast track through everyday life, well, maybe. If it's to promote public health, then safety measures such as improved ventilation, moving events outdoors, masks, and so on are likely a better bet. If we've learned anything from the last year and a half, it should be that no one can successfully create an individual bubble in which they can pretend the pandemic is over even while it rages in the rest of the world.


Illustrations: China's Alipay Health Code in March, 2020 (press photo).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 9, 2021

The border-industrial complex*

Most people do not realize how few rights they have at the border of any country.

I thought I did know: not much. EFF has campaigned for years against unwarranted US border searches of mobile phones, where "border" legally extends 100 miles into the country. If you think, well, it's a big country, it turns out that two-thirds of the US population lives within that 100 miles.

No one ever knows what the border of their own country is like for non-citizens. This is one reason it's easy for countries to make their borders hostile: non-citizens have no vote and the people who do have a vote assume hostile immigration guards only exist in the countries they visit. British people have no idea what it's like to grapple with the Home Office, just as most Americans have no experience of ICE. Datafication, however, seems likely to eventually make the surveillance aspect of modern border passage universal. At Papers, Please, Edward Hasbrouck charts the transformation of travel from right to privilege.

In the UK, the Open Rights Group and the3million have jointly taken the government to court over provisions in the post-Brexit GDPR-enacting Data Protection Act (2018) that exempted the Home Office from subject access rights. The Home Office invoked the exemption in more than 70% of the 19,305 data access requests made to its office in 2020, while losing 75% of the appeals against its rulings. In May, ORG and the3million won on appeal.

The Nationality and Borders Bill, announced this week, proposes to make it harder for refugees to enter the country and, according to analyses by the Refugee Council and Statewatch, to make many of them - and anyone who assists them - into criminals.

Refugees have long had to verify their identity in the UK by providing biometrics. On top of that, the cash support they're given comes in the form of prepaid "Aspen" cards, which means the Home Office can closely monitor both their spending and their location, and cut off assistance at will, as Privacy International finds. Scotland-based Positive Action calls the results "bureaucratic slow violence".

That's the stuff I knew. I learned a lot more at this week's workshop run by Security Flows, which studies how datafication is transforming borders. The short version: refugees are extensively dataveilled by both the national authorities making life-changing decisions about them and the aid agencies supposed to be helping them, like the UN High Commissioner for Refugees (UNHCR). Recently, Human Rights Watch reported that UNHCR had broken its own policy guidelines by passing data to Myanmar that had been submitted by more than 830,000 ethnic Rohingya refugees who registered in Bangladeshi camps for the "smart" ID cards necessary to access aid and essential services.

In a 2020 study of the flow of iris scans submitted by Syrian refugees in Jordan, Aalborg associate professor Martin Lemberg-Pedersen found that private companies are increasingly involved in providing humanitarian agencies with expertise, funding, and new ideas - but that those partnerships risk turning their work into an experimental lab. He also finds that UN agencies' legal immunity coupled with the absence of common standards for data protection among NGOs and states in the global South leave gaps he dubs "loopholes of externalization" that allow the technology companies to evade accountability.

At the 2020 Computers, Privacy, and Data Protection conference a small group huddled to brainstorm about researching the "creepy" AI-related technologies the EU was funding. Border security offers such technologies a rare opportunity: deployment that is invisible to most people and justified by "national security". Home Secretary Priti Patel's proposal to penalize the use of illegal routes to the UK is an example, making desperate people into criminals. People like many of the parents I knew growing up in 1960s New York.

The EU's immigration agencies are particularly obscure. I had encountered Warsaw-based Frontex, the European Border and Coast Guard Agency, which manages operational control of the Schengen Area, but not EU-LISA, which since 2012 has managed the relevant large-scale IT systems: SIS II, VIS, EURODAC, and ETIAS (like the US's ESTA). Unappetizing alphabet soup whose errors few know how to challenge.

The behind-the-scenes picture the workshop described sees the largest suppliers of ICT, biometrics, aerospace, and defense provide consultants who help define work plans and formulate the calls to which their companies respond. Javier Sánchez-Monedero's 2018 paper for the Data Justice Lab begins to trace those vendors, a mix of well-known and unknown. A forthcoming follow-up focuses on the economics and lobbying behind all these databases.

In a recent paper on financing border wars, Mark Akkerman analyzes the economic interests behind border security expansion and observes that "Migration will be one of the defining human rights issues of the 21st century." We know it will increase, increasingly driven by climate change; the fires that engulfed the Canadian village of Lytton, BC on July 1 made 1,000 people homeless, and that's just the beginning.

It's easy to ignore the surveillance and control directed at refugees in the belief that they are not us. But take the UK's push to create a hostile environment by pushing border checks into schools, workplaces, and health services as your guide, and it's obvious: their surveillance will be your surveillance.

*Credit the phrase "border-industrial complex" to Luisa Izuzquiza.

Illustrations: Rohingya refugee camp in Bangladesh, 2020 (by Rocky Masum, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 2, 2021

This land

An aging van drives off down a highway into a fantastical landscape of southwestern mountains and mesquite. In 1977, that could have been me, or any of my folksinging friends as we toured the US, working our way into debt (TM Andy Cohen). In 2020, however, the van is occupied by Fern (Frances McDormand), one of the few fictional characters in the film Nomadland, directed by Chloé Zhao, and based on the book by Jessica Bruder, which itself grew out of her 2014 article for Harper's magazine.

Nomadland captures two competing aspects of American life. First, the middle-class dream of the nice house with the car in the driveway, a chicken in a pot inside, and secure finances. Anyone who rejects this dream must be dangerous. But deep within also lurks the other American dream, of freedom and independence, which in the course of the 20th century moved from hopping freight trains to motor vehicles and hitting the open road.

For many of Nomadland's characters, living on the road begins as a necessary accommodation to calamity but becomes a choice. They are "retirees" who can't afford to retire, who balk at depending on the kindness of relatives, and have carved out a circuit of seasonal jobs. Echoing many of the vandwellers Bruder profiles, Fern tells a teen she used to tutor, "I'm not homeless - just houseless."

Linda May, for example, began working at the age of 12, but discovered at 62 that her social security benefits amounted to $550 a month (the fate that perhaps awaits the people Barbara Ehrenreich profiles in Nickel and Dimed). Others lost their homes in the 2008 crisis. Fern, whose story frames the movie, lost job and home in Empire, Nevada when the gypsum factory abruptly shut down, another casualty of the 2008 financial crisis. Six months later, the zipcode was scrubbed. This history appears as a title at the beginning of the movie. We watch Fern select items and lock a storage unit. It's go time.

Fern's first stop is the giant Amazon warehouse in Fernley, Nevada, where the money is good and a full-service parking space is included. Like thousands of other workampers, she picks stock and packs boxes for the Christmas rush until, come January, it's time to gracefully accept banishment. People advise her: go south, it's warmer. Shivering and scraping snow off the van, Fern soon accepts the inevitable. I don't know how cold she is, but it brought flashbacks to a few of those 1977 nights in my pickup-truck-with-camper-top when I slept in a full set of clothes and a hat while the shampoo solidified. I was 40 years younger than Fern, and it was never going to be my permanent life. On the other hand: no smartphone.

At the Rubber Tramp Rendezvous near Quartzsite, Arizona, Fern finds her tribe: Swankie, Bob Wells, and the other significant fictional character, Dave (David Strathairn). She traces the annual job circuit: Amazon, camp hosting, beet harvesting in Nebraska, Wall Drug in South Dakota. Old hands teach her skills she needs: changing tires, inventing and building things out of scrap, remodeling her van, keeping on top of rust. She learns what size bucket to buy and that you must be ready to solve your own emergencies. Finally, she learns to say "See you down the road" instead of "Goodbye".

Earlier this year, at Silicon Flatiron's Privacy at the Margins, Tristia Bauman, executive director of the National Homelessness Law Center, explained that many cities have broadly-written camping bans that make even the most minimal outdoor home impossible. Worse, those policies often allow law enforcement to seize property. It may be stored, but often people still don't get it back; the fees to retrieve a towed-away home (that is, van) can easily be out of reach. This was in my mind when Bob talked about fearing the knock on the van that indicates someone in authority wants you gone.

"I've heard it's depressing," a friend said, when I recommended the movie. Viewed one way, absolutely. These aging Baby Boomers never imagined doing the hardest work of their lives in their "golden years", with no health insurance, no fixed abodes, and no prospects. It's not that they failed to achieve the American Dream. It's that they believed in the American Dream and then it broke up with them.

And yet "depressing" is not how I or my companion saw it, because of that *other* American Dream. There's a sense of ownership of both the land and your own life that comes with living on the road in such a spacious and varied country, as Woody Guthrie knew. Both Guthrie in the 1940s and Zhao now unsparingly document the poverty and struggles of the people they found in those wide-open spaces - but they also understand that here a person can breathe and find the time to appreciate the land's strange, secret wonders. Secret, because most of us never have the time to find them. This group does, because when you live nowhere you live everywhere. We get to follow them to some of these places, share their sense of belonging, and admire their astoundingly adaptable spirit. Despite the hardships they unquestionably face, they also find their way to extraordinary moments of joy.

See you down the road.

Illustrations: Fern's van, heading down the road.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 4, 2021

Data serfs

It is shameful that the UK government has apparently refused to learn anything over decades of these discussions, and is now ordering GPs in England to send their patient data to NHSx beginning on July 1 and continuing daily thereafter. GPs are unhappy about this. Patients - that is, the English population - have until June 23 to opt out. Government information has been so absent that if it were not for medConfidential we might not even know it was happening. The opt-out process is a dark pattern; here's how.

The pandemic has taught us a lot about both upsides and downsides of sharing information. The downside is the spread of covid conspiracy theories, refusal to accept public health measures, and death threats to public health experts.

But there's so much more upside. The unprecedented speed with which we got safe and effective vaccinations was enormously boosted by the Internet. The original ("ancestral") virus was genome-sequenced and shared across the world within days, enabling everyone to get cracking. While the heavy reliance on preprint servers meant some errors have propagated, rapid publication and direct access to experts has done far more good than harm overall.

Crowdsourcing is also proving its worth: by collecting voluntary symptom and test/vaccination status reports from 4.6 million people around the UK, the Covid Symptom Study, to which I've contributed daily for more than a year, has identified additional symptoms, offered early warning of developing outbreaks, and assessed the likelihood of post-vaccination breakthrough covid infections. The project is based on an app built by the startup Joinzoe in collaboration with 15 charities and academic research organizations. From the beginning it has seemed an obviously valuable effort worth the daily five seconds it takes to report - and worth giving up a modest amount of data privacy for - because the society-wide benefit is so obvious. The key points: the data they collect is specific, they show their work and how my contribution fits in, I can review what I've sent them, and I can stop at any time. In the blog, the project publishes ongoing findings, many of which have generated journal papers for peer review.

The government plans meet none of these criteria. The data grab is comprehensive, no feedback loop is proposed, and the subject access rights enshrined in data protection law are not available. How could it be more wrong?

Established in 2019, NHSx is the "digital arm" of the National Health Service. It's the branch that commissioned last year's failed data-collecting contact tracing app ("failed", as in: many people correctly warned that its centralized design was risky and wouldn't work). NHSx is all data and contracts. It has no direct relationship with patients, and many people don't know it exists. This is the organization that is demanding the patient records of 56 million people, a policy Ross Anderson dates to 1992.

If Britain has a national religion it's the NHS. Yes, it's not perfect, and yes, there are complaints - but it's a lot like democracy: the alternatives are worse. The US, the only developed country that has refused a national health system, is near-universally pitied by those outside it. For those reasons, no politician is ever going to admit to privatizing the NHS, and most citizens are suspicious, particularly of conservatives, that this is what they secretly want to do.

Brexit has heightened these fears, especially among those of us who remember 2014, when NHS England announced care.data, a plan to collect and potentially sell NHS patient data to private companies. That plan was rapidly canceled with a promise to retreat and rethink. Reconstructing the UK's economy post-EU membership has always been seen as involving a trade deal with the US, which is likely to demand free data flows and, most people believe, access to the NHS for its private medical companies. Already, more than 50 GPs' practices (1%) are managed by Operose, a subsidiary of US health insurer Centene.

Seven years later, the new plan is the old plan, dusted off, renamed, and expanded. The story here is the same: it's not that people aren't willing to share data; it's that we're not willing to hand over full control. The Joinzoe app has worked because every day each contributor remakes the decision to participate and because the researchers provide a direct feedback loop that shows how the data is being used and the results. NHSx isn't offering any of that. It is assuming the right to put our most sensitive personal data into a black box it owns and controls and keep doing so without granting us any feedback or recourse. This is worse than advertisers pretending that we make free choices to accept tracking. No one in this country has asked for their relationship with their doctor to be intermediated by a bunch of unknown data managers, however well-meaning. If their case for the medical and economic benefits is so strong (and really, it is, *when done right*), why not be transparent and open about it?

The pandemic has made the case for the value of pooling medical data. But it has also been a perfect demonstration of what happens when trust seeps out of a health system - as it does when governments feudally treat citizens as data serfs. *Both* lessons should be learned.


Illustrations: Asklepios, Greek god of medicine.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 28, 2021

Judgments day

This has been quite a week for British digital rights campaigners, who have won two significant cases against the UK government.

First is a case regarding migrants in the UK, brought by the Open Rights Group and the3million. The case challenged a provision in the Data Protection Act (2018) that exempted the Home Office from subject access requests, meaning that migrants refused settled status or immigration visas had no access to the data used to decide their cases, placing them at an obvious disadvantage. ORG and the3million argued successfully in the Court of Appeal that this was unfair, especially given that nearly half the appeals against Home Office decisions before the law came into effect were successful.

This is an important win, but small compared to the second case.

Eight years after Edward Snowden revealed the extent of government interception of communications, the reverberations continue. This week, the Grand Chamber of the European Court of Human Rights found Britain's data interception regime breached the rights to privacy and freedom of expression. Essentially, as Haroon Siddique sums it up at the Guardian, the court found deficiencies in three areas. First, bulk interception was authorized by the secretary of state but not by an independent body such as a court. Second, the application for a warrant did not specify the kinds of communication to be examined. Third, search terms linked to an individual were not subject to prior authorization. The entire process, the court ruled, must be subject to "end-to-end safeguards".

This is all mostly good news. Several of the 18 applicants (16 organizations and two individuals) argue the ruling didn't go far enough because it didn't declare bulk interference illegal in and of itself. Instead, it merely condemned the UK's implementation. Privacy International expects that all 47 members of the Council of Europe, all signatories to the European Convention on Human Rights, will now review their surveillance laws and practices and bring them into line with the ruling, giving the win much broader impact.

Particularly at stake for the UK is the adequacy decision it needs to permit seamless data sharing with EU member states under the General Data Protection Regulation. In February the EU issued a draft decision that would grant adequacy for four years. This judgment highlights the ways the UK's regime is non-compliant.

This case began as three separate cases filed between 2013 and 2015; they were joined together by the court. PI, along with ACLU, Amnesty International, Liberty, and six other national human rights organizations, was among the first group of applicants. The second included Big Brother Watch, Open Rights Group, and English PEN; the third added the Bureau of Investigative Journalism.

Long-time readers will know that this is not the first time the UK's surveillance practices have been ruled illegal. In 2008, the CJEU ruled against the UK's DNA database. More germane, in 2014, the CJEU invalidated the Data Retention Directive as a disproportionate intrusion on fundamental human rights, taking down with it the UK's supporting legislation. At the end of 2014, to solve the "emergency" created by that ruling, the UK hurriedly passed the Data Retention and Investigatory Powers Act (DRIPA). The UK lost the resulting legal case in 2016, when the CJEU largely struck it down again.

Currently, the legislation that enables the UK's communications surveillance regime is the Investigatory Powers Act (2016), which built on DRIPA and its antecedents, plus the Terrorism Prevention and Investigation Measures Act (2011), whose antecedents go back to the Anti-Terrorism, Crime, and Security Act (2001), passed two months after 9/11. In 2014, I wrote a piece explaining how the laws fit together.

Snowden's revelations were important in driving the post-2013 items on that list; the IPA was basically designed to put the practices he disclosed on a statutory footing. I bring up this history because I was struck by a comment in Albuquerque's dissent: "The RIPA distinction was unfit for purpose in the developing Internet age and only served the political aim of legitimising the system in the eyes of the British public with the illusion that persons within the United Kingdom's territorial jurisdiction would be spared the governmental 'Big Brother'".

What Albuquerque is criticizing here, I think, is the distinction made in RIPA between metadata, which the act allowed the government to collect, and content, which is protected. Campaigners like the late Caspar Bowden frequently warned that metadata is often more revealing than content. In 2015, Steve Bellovin, Matt Blaze, Susan Landau, and Stephanie Pell showed that the distinction is no longer meaningful (PDF), in any case.

I understand that in military-adjacent circles Snowden is still regarded as a traitor. I can't judge the legitimacy of all his revelations, but in at least one category it was clear from the beginning that he was doing the world a favor: alerting the world that the intelligence services were compromising crucial parts of the security systems that protect all of us. In ruling that the UK practices he disclosed are illegal, the ECtHR has gone a long way toward vindicating him as a whistleblower in a second category.


Illustrations: Map of cable data by Greg Mahlknecht, map by Openstreetmap contributors (CC-by-SA 2.0), from the Privacy International report on the ruling.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 14, 2021

Pre-crime

Much is being written about this week's Queen's speech, which laid out plans to restrict protests (the Police, Crime, Sentencing, and Courts bill), relax planning measures to help developers override communities, and require photo ID in order to vote even though millions of voters have neither passport nor driver's license and there was just one conviction for voting fraud in the 2019 general election. We, however, will focus here on the Online Safety bill, which includes age verification and new rules for social media content moderation.

At Politico, technology correspondent Mark Scott picks three provisions: the exemption granting politicians free rein on social media; the move to require moderation of content that is not illegal or criminal (however unpleasant it may be); and the carve-outs for "recognised news publishers". I take that to mean they wanted to avoid triggering the opposition of media moguls like Rupert Murdoch. Scott read it as "journalists".

The carve-out for politicians directly contradicts a crucial finding in last week's Facebook oversight board ruling on the suspension of former US president Donald Trump's account: "The same rules should apply to all users of the platform; but context matters when assessing issues of causality and the probability and imminence of harm. What is important is the degree of influence that a user has over other users." Politicians, in other words, may not be more special than other influencers. Given the history of this particular government, it's easy to be cynical about this exemption.

In 2019, Heather Burns, now policy manager for the Open Rights Group, predicted this outcome while watching a Parliamentary debate on the white paper: "Boris Johnson's government, in whatever communication strategy it is following, is not going to self-regulate its own speech. It is going to double down on hard-regulating ours." At ORG's blog, Burns has critically analyzed the final bill.

Few have noticed the not-so-hidden developing economic agenda accompanying the government's intended "world-leading package of online safety measures". Jen Persson, director of the children's rights advocacy group DefendDigitalMe, is the exception, pointing out that in May 2020 the Department of Culture, Media, and Sport released a report that envisions the UK as a world leader in "Safety Tech". In other words, the government views online safety (PDF; see Annex C) as not just an aspirational goal for the country's schools and citizens but also as a growing export market the UK can lead.

For years, Persson has been tirelessly highlighting the extent to which children's online use is monitored. Effectively, monitoring software watches every use of any school-owned device and every session in which the child is logged into their school G Suite account; some types can even record photos of the child at home, a practice that became notorious when it was tried in Pennsylvania.

Meanwhile, outside of DefendDigitalMe's work - for example its case study of eSafe and discussion of NetSupport DNA and this discussion of school safeguarding - we know disturbingly little about the different vendors: how they fit together in the education ecosystem, how their software works, how capabilities vary from vendor to vendor, how well they handle multiple languages, what they block, what data they collect, how they determine risk, what inferences are drawn and retained and by whom, and the rate of errors and their consequences. We don't even really know if any of it works - or what "works" means. "Safer online" does not provide any standard against which the cost to children's human rights can be measured. Decades of government policy have all trended toward increased surveillance and filtering, yet wherever "there" is we never seem to arrive. DefendDigitalMe has called for far greater transparency.

Persson notes both mission creep and scope creep: "The scope has shifted from what was monitored to who is being monitored, then what they're being monitored for." The move from harmful and unlawful content to lawful but "harmful" content is what's being proposed now, and along with that, Persson says, "children being assessed for potential risk". The controversial Prevent program is about this: monitoring children for signs of radicalization. For their safety, of course.

UK children's rights campaigners have long said that successive governments use children as test subjects for the controversial policies they wish to impose on adults, normalizing those policies early. Persson suggests the next market for safety tech could be employers monitoring employees for mental health issues. I imagine elderly people will be next.

DCMS's comments support market expansion: "Throughout the consultations undertaken when compiling this report there was a sector consensus that the UK is likely to see its first Safety Tech unicorn (i.e. a company worth over $1bn) emerge in the coming years, with three other companies also demonstrating the potential to hit unicorn status within the early 2020s. Unicorns reflect their namesake - they are incredibly rare, and the UK has to date created 77 unicorn businesses across all sectors (as of Q4 2019)." (Are they counting the much-litigated Autonomy?)

There's something peculiarly ghastly about this government's staking the UK's post-Brexit economic success on exporting censorship and surveillance to the rest of the world, especially alongside its stated desire to opt out of parts of human rights law. This is what "global Britain" wants to be known for?

Illustrations: Unicorn sculpture at York Crown Court (by Tim Green via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 2, 2021

Medical apartheid

swiss-cheese-virus-defence.jpgEver since 1952, when Clarence Willcock took the British government to court to force the end of wartime identity cards, UK governments have repeatedly tried to bring them back, always claiming they would solve the most recent public crisis. The last effort ended in 2010 after a five-year battle. This backdrop is a key factor in the distrust that's greeting government proposals for "vaccination passports" (previously immunity passports). Yesterday, the Guardian reported that British prime minister Boris Johnson backs certificates that show whether you've been vaccinated, have had covid and recovered, or have had a test. An interim report will be published on Monday; trials later this month will see attendees at football matches required to produce proof of negative lateral flow tests 24 hours before the game and on entry.

Simultaneously, England chief medical officer Chris Whitty told the Royal Society of Medicine that most experts think covid will become like the flu, a seasonal disease that must be perennially managed.

Whitty's statement is crucial because it means we cannot assume that the forthcoming proposal will be temporary. A deeply flawed measure in a crisis is dangerous; one that persists indefinitely is even more so. Particularly when, as happened this morning, culture secretary Oliver Dowden applies spin: "This is not about a vaccine passport, this is about looking at ways of proving that you are covid secure." Rebranding as "covid certificates" changes nothing.

Privacy advocates and human rights NGOs saw this coming. In December, Privacy International warned that a data grab in the guise of immunity passports will undermine trust and confidence when they're most needed. "Until everyone has access to an effective vaccine, any system requiring a passport for entry or service will be unfair." We are a long, long way from that universal access and likely to remain so; today's vaccines will have to be updated, perhaps as soon as September. There is substantial, but not enough, parliamentary opposition.

A grassroots Labour discussion Wednesday night showed this will become yet another highly polarized debate. Opponents and proponents combine issues of freedom, safety, medical efficacy, and public health in unpredictable ways. Many wanted safety - "You have no civil liberties if you are dead," one person said; others foresaw segregation, discrimination, and exclusion; still others cited British norms in opposing making compulsory either vaccinations or carrying any sort of "papers" (including phone apps).

Aside from some specific use cases - international travel, a narrow range of jobs - vaccination passports in daily life are a bad idea medically, logistically, economically, ethically, and functionally. Proponents' concerns can be met in better - and fairer - ways.

The Independent SAGE advisory group, especially Susan Michie, has warned repeatedly that vaccination passports are not a good solution for daily life. The added pressure to accept vaccination will increase distrust, she says, particularly among victims of structural racism.

Instead of trying to identify which people are safe, she argues that the government should be guiding employers, businesses, schools, shops, and entertainment venues to make their premises safer - see for example the CDC's advice on ventilation and list of tools. Doing so would not only help prevent the spread of covid and keep *everyone* safe but also help prevent the spread of flu and other pathogens. Vaccination passports won't do any of that. "It again puts the burden on individuals instead of spaces," she said last night in the Labour discussion. More important, high-risk individuals and those who can't be vaccinated will be better protected by safer spaces than by documentation.

In the same discussion, Big Brother Watch's Silkie Carlo predicted that it won't make sense to have vaccination passports and then use them in only a few places. "It will be a huge infrastructure with checkpoints everywhere," she predicted, calling it "one of the civil liberties threats of all time" and "medical apartheid" and imagining two segregated lines of entry to every venue. While her vision is dramatic, parts of it don't go far enough: imagine when this all merges with systems already in place to bar access to "bad people". Carlo may sound unduly paranoid, but it's also true that for decades successive British governments at every decision point have chosen the surveillance path.

We have good reason to be suspicious of this government's motives. Throughout the last year, Johnson has been looking for a magic bullet that will fix everything. First it was contact tracing apps (failed through irrelevance), then test and trace (failing in the absence of "and isolate and support"), now vaccinations. Other than vaccinations, which have gone well because the rollout was given to the NHS, these failed high-tech approaches have handed vast sums of public money to private contractors. If by "vaccination certificates" the government means the cards the NHS gives fully-vaccinated individuals listing the shots they've had, the dates, and the manufacturer and lot number - well, fine. Those are useful for the rare situations where proof is really needed and for our own information in case of future issues; they're simple and not particularly expensive. If the government means a biometric database system that, as Michie says, individualizes risk while relieving venues of responsibility, just no.

Illustrations: The Swiss Cheese Respiratory Virus Defence, created by virologist Ian McKay.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 26, 2021

Curating the curators

Zuck-congress-20210325_212525.jpgOne of the longest-running conflicts on the Internet surrounds whether and what restrictions should be applied to the content people post. These days, those rules are known as "platform governance", and this week saw the first conference by that name. In the background, three of the big four CEOs returned to Congress for more questioning; the EU is planning the Digital Services Act; the US looks serious about antitrust action; debate about revising Section 230 of the Communications Decency Act continues even though few understand what it does; and the UK continues to push "online harms".

The most interesting thing about the Platform Governance conference is how narrow it makes those debates look. The second-most interesting thing: it was not a law conference!

For one thing, which platforms? Twitter may be the most-studied, partly because journalists and academics use it themselves and data is more available; YouTube, Facebook, and subsidiaries WhatsApp and Instagram are the most complained-about. The discussion here included not only those three but less "platformy" things like Reddit, Tumblr, Amazon's livestreaming subsidiary Twitch, games, Roblox, India's ShareChat, labor platforms UpWork and Fiverr, edX, and even VPN apps. It's unlikely that the problems of Facebook, YouTube, and Twitter that governments obsess over are limited to them; they're just the most visible and, especially, the most *here*. Granting differences in local culture, business model, purpose, and platform design, human behavior doesn't vary that much.

For example, Jenny Domino reminded - again - that the behaviors now sparking debates in the West are not new or unique to this part of the world. What most agree *almost* happened in the US on January 6 *actually* happened in Myanmar with far less scrutiny despite a 2018 UN fact-finding mission that highlighted Facebook's role in spreading hate. We've heard this sort of story before, regarding Cambridge Analytica. In Myanmar and, as Sandeep Mertia said, India, the Internet of the 1990s never existed. Facebook is the only "Internet". Mertia's "next billion users" won't use email or the web; they'll go straight to WhatsApp or a local or newer equivalent, and stay there.

Mehitabel Glenhaber, whose focus was Twitch, used it to illustrate another way our usual discussions are too limited: "Moderation can escape all up and down the stack," she said. Near the bottom of the "stack" of layers of service, after the January 6 Capitol invasion Amazon denied hosting services to the right-wing chat app Parler; higher up the stack, Apple and Google removed Parler's app from their app stores. On Twitch, Glenhaber found a conflict between the site's moderation decision and the handling of that decision by two browser extensions that replace text with graphics, one of which honored the site's ruling and one of which overturned it. I had never thought of ad blockers as content moderators before, but of course they are, and few of us examine them in detail.

Separately, in a recent lecture on the impact of low-cost technical infrastructure, Cambridge security engineer Ross Anderson also brought up the importance of the power to exclude. Most often, he said, social exclusion matters more than technical exclusion; taking out a scammer's email address and disrupting their social network is more effective than taking down their more easily-replaced website. If we look at misinformation as a form of cybersecurity challenge - as we should - that's an important principle.

One recurring frustration is our general lack of access to the insider view of what's actually happening. Alice Marwick is finding from interviews that members of Trust and Safety teams at various companies have a better and broader view of online abuse than even those who experience it. Their data suggest that rather than being gender-specific, harassment affects all groups of people; in niche groups the forms disagreements take can be obscure to outsiders. Most important, each platform's affordances are different; you cannot generalize from a peer-to-peer site like Facebook or Twitter to Twitch or YouTube, where the relationships are less equal and more creator-fan.

A final limitation in how we think about platforms and abuse is that the options are so limited: a user is banned or not, content stays up or is taken down. We never think, Sarita Schoenebeck said, about other mechanisms or alternatives to criminal justice such as reparative or restorative justice. "Who has been harmed?" she asked. "What do they need? Whose obligation is it to meet that need?" And, she added later, who is in power in platform governance, and what harms have they overlooked and how?

In considering that sort of issue, Bharath Ganesh found three separate logics in his tour through platform racism and the governance of extremism: platform, social media, and free speech. Mark Zuckerberg offers a prime example of the last: the Silicon Valley libertarian insistence that the marketplace of ideas will solve any problems, which sees the First Amendment freedom of expression as an absolute right, not one that must be balanced against others - such as "freedom from fear". Watching the end of yesterday's Congressional hearings after the conference closed, you couldn't help thinking about that as Zuckerberg embarked on yet another pile of self-serving "Congressman..." preambles rather than the simple "yes or no" he was asked to deliver.


Illustrations: Mark Zuckerberg, testifying in Congress on March 25, 2021.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 19, 2021

Dystopian non-fiction

Screenshot from 2021-03-18 12-51-27.pngHow dumb do you have to be to spend decades watching movies and reading books about science fiction dystopias with perfect surveillance and then go on and build one anyway?

*This* dumb, apparently, because that's what Shalini Kantayya discovers in her documentary Coded Bias, which premiered at the 2020 Sundance Film Festival. I had missed it until European Digital Rights (EDRi) arranged a streaming this week.

The movie deserves the attention paid to The Social Dilemma. Consider the cast Kantayya has assembled: "math babe" Cathy O'Neil, data journalism professor Meredith Broussard, sociologist Zeynep Tufekci, Big Brother Watch executive director Silkie Carlo, human rights lawyer Ravi Naik, Virginia Eubanks, futurist Amy Webb, and "code poet" Joy Buolamwini, who is the film's main protagonist and provides its storyline, such as it is. This film wastes no time on technology industry mea non-culpas, opting instead to hear from people who together have written a year's worth of reading on how modern AI disassembles people into piles of data.

The movie is framed by Buolamwini's journey, which begins in her office at MIT. At nine, she saw a presentation on TV from MIT's Media Lab, and, entranced by Cynthia Breazeal's Kismet robot, she instantly decided: she was going to be a robotics engineer and she was going to MIT.

When she eventually arrived, she says, she imagined that coding was detached from the world - until she started building the Aspire Mirror and had to get a facial detection system working. At that point, she discovered that none of the computer vision tracking worked very well...until she put on a white mask. She started examining the datasets used to train the facial algorithms and found that every system she tried showed the same results: top marks for light-skinned men, inferior results for everyone else, especially the "highly melanated".
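What she found is, at bottom, a disparity in error rates across subgroups. As a minimal sketch of that kind of audit - the numbers below are invented for illustration, not Buolamwini's actual data - you can group a classifier's results by demographic subgroup and compare error rates:

```python
# Hypothetical audit sketch: compare a face classifier's error rate across
# demographic subgroups. The records below are invented for illustration;
# they are not Buolamwini's data.
from collections import defaultdict

results = [  # (subgroup, prediction_was_correct)
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", False),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False), ("darker-skinned women", False),
]

tally = defaultdict(lambda: [0, 0])  # subgroup -> [errors, trials]
for group, correct in results:
    tally[group][1] += 1
    if not correct:
        tally[group][0] += 1

for group, (errors, trials) in tally.items():
    print(f"{group}: {errors / trials:.0%} error rate over {trials} trials")
# Aggregate accuracy can look respectable while one subgroup bears almost
# all of the errors - the disparity Buolamwini's testing kept exposing.
```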

Teaming up with Deborah Raji, in 2018 Buolamwini published a study (PDF) of racial and gender bias in Amazon's Rekognition system, then being trialed with law enforcement. The company's response leads to a cameo, in which Buolamwini chats with Timnit Gebru about the methods technology companies use to discredit critics. Poignantly, today's viewers know that Gebru, then still at Google, was only months away from becoming the target of exactly that behavior, fired over her own critical research on the state of AI.

Buolamwini's work leads Kantayya into an exploration of both algorithmic bias generally, and the uncontrolled spread of facial recognition in particular. For the first, Kantayya surveys scoring in recruitment, mortgage lending, and health care, and visits the history of discrimination in South Africa. Useful background is provided by O'Neil, whose Weapons of Math Destruction is a must-read on opaque scoring, and Broussard, whose Artificial Unintelligence deplores the math-based narrow conception of "intelligence" that began at Dartmouth in 1956, an arrogance she discusses with Kantayya on YouTube.

For the second, a US unit visits Brooklyn's Atlantic Plaza Towers complex, where the facial recognition access control system issues warnings for tiny infractions. A London unit films the Oxford Circus pilot of live facial recognition that led Carlo, with Naik's assistance, to issue a legal challenge in 2018. Here again the known future intervenes: after the pandemic stopped such deployments, BBW ended the challenge and shifted to campaigning for a legislative ban.

Inevitably, HAL appears to remind us of what evil computers look like, along with a red "I'm an algorithm" blob with a British female voice that tries to sound chilling.

But HAL's goals were straightforward: it wanted its humans dead. The motives behind today's algorithms are opaque. Amy Webb, whose book The Big Nine profiles the nine companies - six American, three Chinese - who are driving today's AI, highlights the comparison with China, where the government transparently tells citizens that social credit is always watching and bad behavior will attract penalties for your friends and family as well as for you personally. In the US, by contrast, everyone is being scored all the time by both government and corporations, but no one is remotely transparent about it.

For Buolamwini, the movie ends in triumph. She founds the Algorithmic Justice League and testifies in Congress, where she is quizzed by Alexandria Ocasio-Cortez (D-NY) and Jamie Raskin (D-MD), who looks shocked to learn that Facebook has patented a system for recognizing and scoring individuals in retail stores. Then she watches as facial recognition is banned in San Francisco; Somerville, Massachusetts; and Oakland, and the electronic system is removed from the Brooklyn apartment block - for now.

Earlier, however, Eubanks, author of Automating Inequality, issued a warning that seems prescient now, when the coronavirus has exposed all our inequities and social fractures. When people cite William Gibson's "The future is already here - it's just not evenly distributed", she says, they typically mean that new tools spread from rich to poor. "But what I've found is the absolute reverse, which is that the most punitive, most invasive, most surveillance-focused tools that we have, they go into poor and working communities first." Then they get ported out, if they work, to those of us with higher expectations that we have rights. By then, it may be too late to fight back.

See this movie!


Illustrations: Joy Buolamwini, in Coded Bias.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 19, 2021

Vaccine connoisseurs

800px-International_Certificates_of_Vaccination.jpgThis is one of those weeks when numerous stories update. Australia's dispute over sharing news has spawned deals that are bad for everyone except Facebook, Google, and Rupert Murdoch; the EU is beginning the final stages of formulating the ePrivacy Regulation; the UK awaits its adequacy decision on data protection; 3D printed guns are back; and the arrival of covid vaccines has revived the push for some form of vaccination certificate, which may (or may not) revive governments' desires for digital identities tied to each of us via biometrics and personal data.

To start with Australia: after the lower house of the Australian parliament passed the law requiring Google and Facebook to negotiate licensing fees with publishers, Facebook began blocking Australian users from sharing "news content" - and the rest of the world from sharing links to Australian publishers - without waiting for final passage. The block is as overbroad as you might expect.

Google has instead announced a three-year deal under which it will pay Rupert Murdoch's News Corporation for the right to showcase its output - which is almost universally paywalled.

Neither announcement is good news. Google's creates a damaging precedent of paying for links, and small public interest publishers don't benefit - and any publisher that does becomes even more dangerously dependent on the platforms to keep them solvent. On Twitter, Kate Crawford calls Facebook's move deplatforming at scale.

Next, as Glyn Moody helpfully explains, where GDPR protects personal data at rest, the ePrivacy Regulation covers personal data in transit. It has been pending since 2017, when the European Commission published a draft, which the European Parliament then amended. Massive amounts of lobbying and internal squabbling over the text within the Council of the EU have finally been resolved, so the three legs of this legislative stool can begin negotiations. Moody highlights two areas to watch: provisions exempting metadata from the prohibition on changing use without consent, and the rules regarding cookie walls. As negotiations proceed, however, there may be more.

No longer an EU member, the UK will have to actively adopt this new legislation. The UK's motivation to do so is simple: it wants - or should want - an adequacy decision. That is, for data to flow between the UK and the EU, the EU has to agree that the UK's privacy framework matches the EU's. On Tuesday, The Register reported that such a decision is imminent, a small piece of good news for British businesses in the sea of Brexit issues arising since January 1.

The original panic over 3D-printed guns was in 2013, when the US Department of Justice ordered the takedown of Defcad. In 2018, Defcad's owner, Cody Wilson, effectively won his case when the DoJ settled. At the time, 3D-printed plastic guns were too limited to worry about, and even by 2018 3D printing had failed to take off at the consumer level. This week Gizmodo reported that home-printing alarmingly functional automatic weapons may now be genuinely possible for someone with the necessary obsession, home equipment, and technical skill.

Finally, ever since the beginning of this pandemic there has been concern that public health would become the vector for vastly expanded permanent surveillance that would be difficult to dislodge later.

The arrival of vaccinations has brought the weird new phenomenon of the vaccine connoisseur. They never heard of mRNA until a couple of months ago, but if you say you've been vaccinated they'll ask which one. And then say something like, "Oh, that's not the best one, is it?" Don't be fussy! If you're offered a vaccination, just take it. Every vaccine should help keep you alive and out of the hospital; like Willie Nelson's plane landings - any one you can walk away from - they're *all* perfect. All will also need updates.

Israel is first up with vaccination certificates, saying that these will be issued to everyone after their second shot. The certificate will exempt them from some of the requirements for testing and isolation associated with visiting public places.

None of the problems surrounding immunity passports (as they were called last spring) has changed. We are still not sure whether the vaccines halt transmission or how long they last, and access is still enormously limited. Certificates will almost certainly be inescapable for international travel, as for other diseases like yellow fever and smallpox. For ordinary society, however, they would be profoundly discriminatory. In agreement on this: the Ada Lovelace Institute, Privacy International, Liberty, and Germany's ethics council. At The Lancet some researchers suggest they may be useful when we have more data, as does the Royal Society; others reject them outright.

There is an ancillary concern. Ever since identity papers were withdrawn after the end of World War II, UK governments have repeatedly tried to reintroduce ID cards. The last attempt, which ended in 2010, came close. There is therefore legitimate concern about immunity passports as ID cards, a concern not allayed by the government's policy paper on digital identities, published last week.

What we need is clarity about what problem certificates are intended to solve. Are they intended to allow people who've been vaccinated greater freedom consistent with the lower risks they face and pose? Or is the point "health theater" for businesses? We need answers.


Illustrations: International vaccination certificates (from SimonWaldherr at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 12, 2021

The spirit of Mother Jones

800px-Mother_Jones_1902-11-04.jpegThis week a commenter on one of the mailing lists I follow asked, perhaps somewhat plaintively, why, after watching 20 years of attempts to organize Silicon Valley workers that have led nowhere, suddenly the push of workers at Big Tech to unionize seems to be gaining traction. "What has changed?"

Well, for one thing, the existence of a history of 20 years of attempts to organize tech workers - which could be the nearly-flat portion of the famous venture capital hockey stick - is by itself a profound change. "Why is she running when she has no chance?" people asked about Shirley Chisholm in 1972. Her campaign opened minds for Hillary Clinton and Kamala Harris, VPOTUS.

The next month should give a solid indication of whether tech worker unions' moment is now. It very well might be. The same trend toward unaccountable power that has led the US Congress and many other countries to scrutinize the practices of the big platforms is surely felt even more by their employees. It shouldn't be a surprise: when you recruit people with the promise that they can improve the lives of millions, you should expect them to be angry when they realize their efforts are being used to cause worldwide damage, especially when they see how little progress has been made on long-standing complaints such as the lack of diversity surrounding them.

One reason today's unionizing moves may come as a surprise is that the image of the tech worker has remained stuck on highly-compensated programmers and engineers and the perks, stock options, and salaries they receive. And yet, in 2014, Silicon Valley software engineers discovered that they, too, were just workers to their employers, who were limiting their career prospects via a no-poaching agreement in which Apple, Google, Intel, Dell, IBM, Pixar, Lucasfilm, Intuit, and dozens of other companies agreed not to recruit from each other's workforce. The result was to depress compensation across the board for millions of engineers and programmers.

And these are the high-caste workers; for years "lower-class" occupations have been filled at many companies by workers under all sorts of arrangements designed to keep them from being classed as employees to whom the company would owe medical insurance, paid leave, and other hard-won benefits. In 2018, Microsoft bug testers cited the Republican environment in Washington as the reason they gave up on a successful unionizing effort that had won them the right to negotiate directly with their temp agency. More recently, Uber and Lyft drivers have demanded employee status in numerous countries.

At Google, temporary, vendor, and contract workers, the majority of the workforce, have complained of being invisible. In November 2018, after the New York Times reported that the company had given seven-figure payouts to two executives accused of sexual harassment, 20,000 of these workers walked out demanding transparency, accountability, and structural change. Google's response was apparently enough to get them back to work at the time.

However, in December 2020, the National Labor Relations Board filed a complaint on behalf of two employees who said they were fired for their organizing efforts. Last month, hundreds of Google workers created the Alphabet Workers Union, open to both full-time and contract workers. This union won't be formally recognized for collective bargaining, but will use other means to push for change. More than 200 of its members have signed on with the Communications Workers of America.

In an op-ed in the New York Times, software engineers Parul Koul and Chewy Shaw, the leaders of the new Alphabet Workers Union, cite that earlier walkout, the recent firing of leading AI researcher Timnit Gebru, and the company's general behavior. "Each time workers organize to demand change, Alphabet's executives make token promises, doing the bare minimum in the hopes of placating workers," they write.

The original question was, I think, inspired by the news that voting began Monday at an Amazon fulfillment center in Bessemer, Alabama on whether to unionize. As Lee Fang reports at The Intercept, Amazon has been campaigning against this development, hiring the union-busting law firm Morgan Lewis to mastermind a website, Facebook ads, and mass texts to workers. This is not really comparable to Google's union. The fact that these warehouse staff and delivery drivers work for a technology company is largely irrelevant except for the extra creepiness of the surveillance Amazon is able to install in its warehouses and delivery vans. The same goes for Apple's retail store staff, whose efforts to organize failed in 2011.

Plus, the overall environment has changed. The pandemic has cast many issues of structural unfairness into sharper relief, and the US's new president has promised to strengthen unions. Add in a generational shift to a group whose bleak present includes burdensome education debt, the climate crisis, and shrinking prospects. Yes, it really might be different now.


Illustrations: Union organizer "Mother" Mary G. Harris Jones, "the most dangerous woman in America", in 1902, (via Wikipedia). The title is a reference to the folksinger Andy Irvine's biographical ode to the Union Maid, The Spirit of Mother Jones.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 15, 2021

One thousand

net.wars-the-book.gifIn many ways, this 1,000th net.wars column is much like the first (the count is somewhat artificial, since net.wars began as a 1998 book, itself presaged by four years of news analysis pieces for the Daily Telegraph and followed by another book in 2001...and a lot of my other writing also fits under "computers, freedom, and privacy"; *however*). That November 2001 column was sparked by former Home Office minister Jack Straw's smug assertion that after 9/11 those of us who had defended access to strong cryptography must be feeling "naive". Here, just over a week after the Capitol invasion, three long-running issues are pertinent: censorship; security and the intelligence failures that enabled the attack; and human rights when demands for increased surveillance capabilities surface, as they surely will.

Censorship first. The US First Amendment only applies to US governments (a point that apparently requires repeating). Under US law, private companies can impose their own terms of service. Most people expected Twitter would suspend Donald Trump's account approximately one second after he ceased being a world leader. Trump's incitement of the invasion moved that up, and led Facebook (including its subsidiaries Instagram and WhatsApp), Snapchat, and, a week after the others, YouTube to follow suit. Less noticeably, a Salesforce-owned email marketing company ceased distributing emails from the Republican National Committee.

None of these social media sites is a "public square", especially outside the US, where they've often ignored local concerns. They are effectively shopping malls, and ejecting Trump is the same as throwing out any other troll. Trump's special status kept him active when many others were unjustly banned, but ultimately the most we can demand from these services is clearly stated rules, fairly and impartially enforced. This is a tough proposition, especially when you are dependent on social media-driven engagement.

Last week's insurrection was planned on numerous openly accessible sites, many of which are still live. After Twitter suspended 70,000 accounts linked to QAnon, numerous Republicans complaining they had lost followers seemed to be heading to Parler, a relatively new and rising alt-right Twitterish site backed by Rebekah Mercer, among others. Moving elsewhere is an obvious outcome of these bans, but in this crisis short-term disruption may be helpful. The cost will be longer-term adoption of channels that are harder to monitor.

By January 9 Apple was removing Parler from the App Store, followed quickly by Google (albeit less comprehensively, since Android allows side-loading). Amazon then kicked Parler off its host, Amazon Web Services. It is unknown when, if ever, the site will return.

Parler promptly sued Amazon claiming an antitrust violation. AWS retaliated with a crisp brief that detailed examples of the kinds of comments the site felt it was under no obligation to host and noted previous warnings.

Whether or not you think Parler should be squashed - stipulating that the imminent inauguration requires an emergency response - three large Silicon Valley platforms have combined to destroy a social media company. This is, as Jillian C. York, Corynne McSherry, and Danny O'Brien write at EFF, a more serious issue. The "free speech stack", they write, requires the cooperation of numerous layers of service providers and other companies. Twitter's decision to ban one - or 70,000 - accounts has limited impact; companies lower down the stack can ban whole populations. If you were disturbed in 2010 when, shortly after the diplomatic cables release, PayPal effectively defunded WikiLeaks after Amazon booted it off its servers, then you should be disturbed now. These decisions are made at obscure layers of the Internet where we have little influence. As the Internet continues to centralize, we do not want just these few oligarchs making these globally significant decisions.

Security. Previous attacks - 9/11 in particular - led to profound damage to the sense of ownership with which people regard their cities. In the UK, the early 1990s saw the ease of walking into an office building vanish, replaced by demands for identification and appointments. The same happened in New York and some other US cities after 9/11. Meanwhile, CCTV monitoring proliferated. Within a year of 9/11, the US passed the PATRIOT Act, and the UK had put in place a series of expansions to surveillance powers.

Currently, residents report that Washington, DC is filled with troops and fences. Clearly, it can't stay that way permanently. But DC is highly unlikely to return to the openness of just ten days ago. There will be profound and permanent changes, starting with decreased access to government buildings. This will be Trump's most visible legacy.

Which leads to human rights. Among the videos of insurrectionists shocked to discover that the laws do apply to them were several in which prospective airline passengers discovered they'd been placed preemptively on the controversial no-fly list. Many others who congregated at the Capitol were on a (separate) terrorism watch list. If the post-9/11 period is any guide, the fact that the security agencies failed to connect any of the dots available to them into actionable intelligence will be elided in favor of insisting that they need more surveillance powers. Just remember: eventually, those powers will be used to surveil all the wrong people.


Illustrations: net.wars, the book at the beginning.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 16, 2020

The rights stuff

Humans-Hurt-synth.pngIt took a lake to show up the fatuousness of the idea of granting robots legal personality rights.

The story, which the AI policy expert Joanna Bryson highlighted on Twitter, goes like this: in February 2019 a small group of people, frustrated by their inability to reduce local water pollution, successfully spearheaded a proposition in Toledo, Ohio that created the Lake Erie Bill of Rights. Its history since has been rocky. In February 2020, a farmer sued the city and a US district judge invalidated the bill. This week, a three-judge panel from Ohio's Sixth District Court of Appeals ruled that the February decision was a mistake. For now, the lake still has its rights. Just.

We will leave aside the question of whether giving rights to lakes and the other ecosystems listed in the above-linked Vox article is an effective means of environmental protection. But given that the idea of giving robots rights keeps coming up - the EU is toying with the possibility - it seems worth teasing out the difference.

In response to Bryson, Nicholas Bohm noted the difference between legal standing and personality rights. The General Data Protection Regulation, for example, grants legal standing in two new ways: collective action and civil society representing individuals seeking redress. Conversely, even the most-empowered human often lacks legal standing; my outrage that a brick fell on your head from the top of a nearby building does not give me the right to sue the building's owner on your behalf.

Rights as a person, however, would allow the brick to sue on its own behalf for the damage done to it by landing on a misplaced human. We award that type of legal personhood to quite a few things that aren't people - corporations, most notoriously. In India, idols have such rights, and Bohm cites a case in which the trustee of a temple, because the idol they represented had these rights in India, was allowed to join a case claiming improper removal in England.

Or, as Bohm put it more succinctly, "Legal personality is about what you are; standing is about what it's your business to mind."

So if lakes, rivers, forests, and idols, why not robots? The answer lies in what these things represent. The lakes, rivers, and forests on whose behalf people seek protection were not human-made; they are parts of the larger ecosystem that supports us all, and most intimately the people who live on their banks and verges. The Toledoans who proposed granting legal rights to Lake Erie were looking for a way to force municipal action over the lake's pollution, which was harming them and all the rest of the ecosystem the lake feeds. At the bottom of the lake's rights, in other words, are humans in existential distress. Granting the lake rights is a way of empowering the humans who depend on it. In that sense, even though the Indian idols are, like robots, human-made, giving them personality rights enables action to be taken on behalf of the human community for whom they have significance. Granting the rights does not require either the lake or the idol to possess any form of consciousness.

In a paper to which Bryson linked, S.G. Solaiman argues that animals don't qualify for rights, even though they have some consciousness, because a legal personality must be able to "enjoy rights and discharge duties". The Smithsonian National Zoo's giant panda, who has been diligently caring for her new cub for the last two months, is not doing so out of legal obligation.

Nothing like any of this can be said of rights for robots, certainly not now and most likely not for a long time into the future, if ever. Discussions such as David Gunkel's How to Survive a Robot Invasion, which compactly summarizes the pros and cons, generally assume that robots will only qualify for rights after a certain threshold of intelligent consciousness has been met. Giving robots rights in order to enable suffering humans to seek redress does not come up at all, even when the robots' owners hold funerals because the manufacturer has discontinued the product. Those discussions rightly focus on manufacturer liability.

In the 2015 British TV series Humans (a remake of the 2012 Swedish series Äkta människor), an elderly Alzheimer's patient (William Hurt) is enormously distressed when his old-model carer robot is removed, taking with it the only repository of his personal memories, which he can no longer recall unaided. It is not necessary to give the robot the right to sue to protect the human it serves, since family or health workers could act on his behalf. The problem in this case is an uncaring state.

The broader point, as Bryson wrote on Twitter, is that while lakes are unique and can be irreparably damaged, digital technology - including robots - "is typically built to be fungible and upgradeable". Right: a compassionate state merely needs to transfer George's memories into a new model. In a 2016 blog posting, Bryson also argues against another commonly raised point, which is whether the *robots* suffer: if designers can install suffering as a feature, they can take it out again.

So, the tl;dr: sorry, robots.


Illustrations: George (William Hurt) and his carer "synth", in Humans.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 29, 2014

Shared space

What difference does the Internet make? This is the modern policy maker's equivalent of "To be, or not to be?" This question has underlain so many net.wars as politicians and activists have wrangled over whether and how the same laws should apply online as offline. Transposing offline law to the cyberworld is fraught with approximately the same dilemmas as transposing a novel to film. What do you keep? What do you leave out? What whole chapter can be conveyed in a single shot? In some cases it's obvious: consumer protection for purchases looks about the same. But the impact of changing connections and the democratization of worldwide distribution? Frightened people whose formerly safe, familiar world is slipping out of control often fail to make rational decisions.

This week's inaugural VOX-Pol conference kept circling around this question. Funded under the EU's FP7, the organizing group is meant to be an "academic research network focused on researching the prevalence, contours, functions, and impacts of Violent Online Political Extremism and responses to it". Attendees included researchers from a wide variety of disciplines, from computer science to social science. If there was any group that was lacking, I'd say it was computer security practitioners and researchers, whose on-the-ground experience studying cyberattacks and investigating the criminal underground this group could usefully emulate.

Some help could also perhaps be provided by journalists with investigative experience. In considering SOCMINT, for example - social media intelligence - people wondered how far to go in interacting with the extremists being studied. Are fake profiles OK? And can you be sure whether you're studying them...or they're studying us? The most impressive presentation on this sort of topic came from Aaron Zelin who, among other things, runs a Web-based clearinghouse for jihadi primary source material.

It's not clear that what Zelin does would be legal, or even possible in the UK. The "lone wolf" theory holds that someone alone in his house can be radicalized simply by accessing Web-based material; if you believe that, the obvious response is to block the dangerous material. Which, TJ McIntyre explained, is exactly what the UK does, unknown to most of its population.

McIntyre knows because he spent three years filing freedom of information requests to find out. So now we know: approximately 1,000 full URLs are blocked under this program, based on criteria derived from Sections 57 and 58 of the 2000 Terrorism Act and Sections 1 and 2 of the 2006 Terrorism Act. The system is "voluntary" - or rather, voluntary for ISPs, not voluntary for their subscribers. McIntyre's FOI answers have found no impact assessment or study of liability for wrongful blocking, and no review of compliance with the 1998 Human Rights Act. It also seems to contradict the Council of Europe's clear statement that filtering must be necessary and transparent.

This is, as Michael Jablonski commented on Twitter yesterday, one of very few conferences that begins by explaining the etiquette for showing gruesome images. Probably more frightening, though, were the presentations laying out the spread - and even mainstreaming - of interlinked extremist groups across the world. Many among Hungary's and Italy's extremist networks host their domains in the US, where the First Amendment ensures their material is not illegal.

This is why the First Amendment can be hard to love: defending free speech inevitably means defending speech you despise. Repeating that "The best answer to bad speech is more, better speech" is not always consoling. Trying to change the minds of the already committed is frustrating and thankless. Jihadi Trending (PDF), a report released a few months ago by the Quilliam Foundation, which describes itself as "the world's first counter-extremism think tank", reminds us that's not the point. The report is a fount of good sense, and Nick Cohen reminds us in the foreword: "The true goal of debate, however, is not to change the minds of your opponents, but the minds of the watching audience."

Among the report's conclusions:
- The vast majority of radicalized individuals make contact first through offline socialization.
- Negative measures - censorship and filtering - are ineffective and potentially counter-productive.
- There are not enough positive measures - the "better speech" above - to challenge extremist ideologies.
- Better ideas are to improve digital literacy and critical consumption skills and debunk propaganda.

So: what difference does the Internet make? It lets extremists use Twitter to tell each other what they had for breakfast. It lets them use YouTube to post videos of their cats. It lets them connect to others with similar views on Facebook, on Web forums, in chat rooms, virtual worlds, and dating sites, and run tabloid news sites that draw in large audiences. Just like everyone else, in fact. And, like the rest of us, they do not own the infrastructure.

The best answer came late on the second day, when someone commented that in the physical world neo-Nazi groups do not hang out with street gangs; extreme right hate groups don't go to the same conferences as jihadis; and Guantanamo detainees don't share the same physical space with white supremacists or teach each other tactics. "But they will online."


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.