net.wars: May 2021 Archives


May 28, 2021

Judgments day

This has been quite a week for British digital rights campaigners, who have won two significant cases against the UK government.

First is a case regarding migrants in the UK, brought by the Open Rights Group and the3million. The case challenged a provision in the Data Protection Act (2018) that exempted the Home Office from subject access requests, meaning that migrants refused settled status or immigration visas had no access to the data used to decide their cases, placing them at an obvious disadvantage. ORG and the3million argued successfully in the Court of Appeal that this was unfair, especially given that nearly half the appeals against Home Office decisions before the law came into effect were successful.

This is an important win, but small compared to the second case.

Eight years after Edward Snowden revealed the extent of government interception of communications, the reverberations continue. This week, the Grand Chamber of the European Court of Human Rights found Britain's data interception regime breached the rights to privacy and freedom of expression. Essentially, as Haroon Siddique sums it up at the Guardian, the court found deficiencies in three areas. First, bulk interception was authorized by the secretary of state but not by an independent body such as a court. Second, the application for a warrant did not specify the kinds of communication to be examined. Third, search terms linked to an individual were not subject to prior authorization. The entire process, the court ruled, must be subject to "end-to-end safeguards".

This is all mostly good news. Several of the 18 applicants (16 organizations and two individuals) argue the ruling didn't go far enough because it didn't declare bulk interception illegal in and of itself. Instead, it merely condemned the UK's implementation. Privacy International expects that all 47 members of the Council of Europe, all signatories to the European Convention on Human Rights, will now review their surveillance laws and practices and bring them into line with the ruling, giving the win much broader impact.

Particularly at stake for the UK is the adequacy decision it needs to permit seamless sharing of data with EU member states under the General Data Protection Regulation. In February the EU issued a draft decision that would grant adequacy for four years. This judgment highlights the ways the UK's regime is non-compliant.

This case began as three separate cases filed between 2013 and 2015; they were joined together by the court. PI, along with ACLU, Amnesty International, Liberty, and six other national human rights organizations, was among the first group of applicants. The second included Big Brother Watch, Open Rights Group, and English PEN; the third added the Bureau of Investigative Journalism.

Long-time readers will know that this is not the first time the UK's surveillance practices have been ruled illegal. In 2008, the European Court of Human Rights ruled against the UK's DNA database. More germane, in 2014, the CJEU invalidated the Data Retention Directive as a disproportionate intrusion on fundamental human rights, taking down with it the UK's supporting legislation. At the end of 2014, to solve the "emergency" created by that ruling, the UK hurriedly passed the Data Retention and Investigatory Powers Act (DRIPA). The UK lost the resulting legal case in 2016, when the CJEU largely struck it down as well.

Currently, the legislation that enables the UK's communications surveillance regime is the Investigatory Powers Act (2016), which built on DRIPA and its antecedents, plus the Terrorism Prevention and Investigation Measures Act (2011), whose antecedents go back to the Anti-Terrorism, Crime, and Security Act (2001), passed two months after 9/11. In 2014, I wrote a piece explaining how the laws fit together.

Snowden's revelations were important in driving the post-2013 items on that list; the IPA was basically designed to put the practices he disclosed on a statutory footing. I bring up this history because I was struck by a comment in Judge Paulo Pinto de Albuquerque's dissent: "The RIPA distinction was unfit for purpose in the developing Internet age and only served the political aim of legitimising the system in the eyes of the British public with the illusion that persons within the United Kingdom's territorial jurisdiction would be spared the governmental 'Big Brother'".

What Albuquerque is criticizing here, I think, is the distinction made in RIPA between metadata, which the act allowed the government to collect, and content, which is protected. Campaigners like the late Caspar Bowden frequently warned that metadata is often more revealing than content. In 2015, Steve Bellovin, Matt Blaze, Susan Landau, and Stephanie Pell showed that the distinction is no longer meaningful in any case (PDF).

I understand that in military-adjacent circles Snowden is still regarded as a traitor. I can't judge the legitimacy of all his revelations, but in at least one category it was clear from the beginning that he was doing the world a favor: alerting us that the intelligence services had compromised crucial parts of the security systems that protect us all. In ruling that the UK practices he disclosed are illegal, the ECtHR has gone a long way toward vindicating him as a whistleblower in a second category.


Illustrations: Map of cable data by Greg Mahlknecht, map by Openstreetmap contributors (CC-by-SA 2.0), from the Privacy International report on the ruling.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 20, 2021

Ontology recapitulates phylogeny

I may be reaching the "get off my lawn!" stage of life, except the things I'm yelling at are not harmless children but new technologies, many of which, as Charlie Stross writes, leak human stupidity into our environment.

Case in point: a conference this week chose for its platform an extraordinarily frustrating graphic "virtual congress center" that was barely more functional than Second Life (b. 2003). The big board displaying the agenda was not interactive; road signs and menu items pointed to venues by name, but didn't show what was going on in them. Yes, there was a reception desk staffed with helpful avatars. I do not want to ask for help, I want simplicity. The conference website advised: "This platform requires the installation of a dedicated software in your computer and a basic training." Training? To watch people speak on my computer screen? Why can't I just "click here to attend this session" and see the real, engaged faces of speakers, instead of motionless cartoon avatars?

This is not a new-technology issue but a usability issue that hasn't changed since Donald Norman's 1988 The Design of Everyday Things sought to do away with user manuals.

I tell myself that this isn't just another clash between generational habits.

Even so, if current technology trends continue I will be increasingly left behind, not just because I don't *want* to join in but because, through incalculable privilege, much of the time I don't *need* to. My house has no smart speakers, I see no reason to turn on open banking, and much of the time I can leave my mobile phone in a coat pocket, ignored.

But Out There in the rest of the world, where I have less choice, I read that Amazon is turning on Sidewalk, a proprietary mesh network that uses Bluetooth and 900MHz radio connections to join together Echo speakers, Ring cameras, and any other compatible device the company decides to produce. The company is turning this thing on by default (free software update!), though if you're lucky enough to read the right press articles you can turn it off. When individuals roam the streets piggybacking on open wifi connections, they're dubbed "hackers". But a company - just ask forgiveness, not permission, yes?

The idea appears to be that the mesh network will improve the overall reliability of each device when its wifi connection is iffy. How it changes the range and detail of the data each device collects is unclear. Connecting these devices into a network is a step change in physical tracking; CNet suggests that a Tile tag attached to a dog, while offering the benefit of an alert if the dog gets loose, could also provide Amazon with detailed tracking of all your dog walks. Amazon says the data is protected with three layers of encryption, but protection from outsiders is not the same as protection from Amazon itself. Even the minimal data Amazon says in its white paper (PDF) it receives - the device serial number and application server ID - reveal the type of device and its location.

We have always talked about smart cities as if they were centrally planned, intended to offer greater efficiency, smoother daily life, and a better environment, and built with some degree of citizen acceptance. But the patient public deliberation that image requires does not fit the "move fast and break things" ethos that continues to poison organizational attitudes. Google failed to gain acceptance for its Toronto plan; Amazon is just doing it. In London in 2019, neither private operators nor police bothered to inform or consult anyone when they decided to trial automated facial recognition.

In the white paper, Amazon suggests benefits such as finding lost pets, diagnostics for power tools, and supporting lighting where wifi is weak. Nice use cases, but note that the benefits accrue to the devices' owner while the costs belong to neighbors who may not have actively consented, but simply not known they had to change the default settings in order to opt out. By design, neither device owners nor server owners can see what they're connected to. I await the news of the first researcher to successfully connect an unauthorized device.

Those external costs are minimal now, but what happens when Amazon is inevitably joined by dozens more similar networks, like the collisions that famously plague the more than 50 companies that dig up London streets? It's disturbingly possible to look ahead and see our public spaces overridden by competing organizations operating primarily in their own interests. In my mind, Amazon's move opens up the image of private companies and government agencies all actively tracking us through the physical world the way they do on the web and fighting over the resulting "insights". Physical tracking is a sizable gap in GDPR.

Again, these are not new-technology issues, but age-old ones of democracy, personal autonomy, and the control of public and private spaces. As Nicholas Couldry and Ulises A. Mejias wrote in their 2020 book The Costs of Connection, this is colonialism in operation. "What if new ways of appropriating human life, and the freedoms on which it depends, are emerging?" they asked. Even if Amazon's design is perfect, Sidewalk is not a comforting sign.


Illustrations: A mock-up from Google's Sidewalk Labs plan for Toronto.


May 14, 2021

Pre-crime

Unicorn_sculpture,_York_Crown_Court-Tim-Green.jpgMuch is being written about this week's Queen's speech, which laid out plans to restrict protests (the Police, Crime, Sentencing, and Courts bill), relax planning measures to help developers override communities, and require photo ID in order to vote even though millions of voters have neither passport nor driver's license and there was just one conviction for voting fraud in the 2019 general election. We, however, will focus here on the Online Safety bill, which includes age verification and new rules for social media content moderation.

At Politico, technology correspondent Mark Scott picks three provisions: the exemption granting politicians free rein on social media; the move to require moderation of content that is not illegal or criminal (however unpleasant it may be); and the carve-outs for "recognised news publishers". I take that to mean they wanted to avoid triggering the opposition of media moguls like Rupert Murdoch. Scott read it as "journalists".

The carve-out for politicians directly contradicts a crucial finding in last week's Facebook oversight board ruling on the suspension of former US president Donald Trump's account: "The same rules should apply to all users of the platform; but context matters when assessing issues of causality and the probability and imminence of harm. What is important is the degree of influence that a user has over other users." Politicians, in other words, may not be more special than other influencers. Given the history of this particular government, it's easy to be cynical about this exemption.

In 2019, Heather Burns, now policy manager for the Open Rights Group, predicted this outcome while watching a Parliamentary debate on the white paper: "Boris Johnson's government, in whatever communication strategy it is following, is not going to self-regulate its own speech. It is going to double down on hard-regulating ours." At ORG's blog, Burns has critically analyzed the final bill.

Few have noticed the not-so-hidden developing economic agenda accompanying the government's intended "world-leading package of online safety measures". Jen Persson, director of the children's rights advocacy group DefendDigitalMe, is the exception, pointing out that in May 2020 the Department of Culture, Media, and Sport released a report that envisions the UK as a world leader in "Safety Tech". In other words, the government views online safety (PDF; see Annex C) as not just an aspirational goal for the country's schools and citizens but also as a growing export market the UK can lead.

For years, Persson has been tirelessly highlighting the extent to which children's online use is monitored. Effectively, monitoring software watches every use of any school-owned device and whenever the child is logged into their school G Suite account; some types can even record photos of the child at home, a practice that became notorious when it was tried in Pennsylvania.

Meanwhile, outside of DefendDigitalMe's work - for example its case study of eSafe and discussion of NetSupport DNA and this discussion of school safeguarding - we know disturbingly little about the different vendors, how they fit together in the education ecosystem, how their software works, how capabilities vary from vendor to vendor, how well they handle multiple languages, what they block, what data they collect, how they determine risk, what inferences are drawn and retained and by whom, and the rate of errors and their consequences. We don't even really know if any of it works - or what "works" means. "Safer online" does not provide any standard against which the cost to children's human rights can be measured. Decades of government policy have all trended toward increased surveillance and filtering, yet wherever "there" is we never seem to arrive. DefendDigitalMe has called for far greater transparency.

Persson notes both mission creep and scope creep: "The scope has shifted from what was monitored to who is being monitored, then what they're being monitored for." The move from harmful and unlawful content to lawful but "harmful" content is what's being proposed now, and along with that, Persson says, "children being assessed for potential risk". The controversial Prevent program is about this: monitoring children for signs of radicalization. For their safety, of course.

UK children's rights campaigners have long said that successive governments have consistently used children as test subjects for the controversial policies they wish to impose on adults, normalizing them early. Persson suggests the next market for safety tech could be employers monitoring employees for mental health issues. I imagine elderly people.

DCMS's comments support market expansion: "Throughout the consultations undertaken when compiling this report there was a sector consensus that the UK is likely to see its first Safety Tech unicorn (i.e. a company worth over $1bn) emerge in the coming years, with three other companies also demonstrating the potential to hit unicorn status within the early 2020s. Unicorns reflect their namesake - they are incredibly rare, and the UK has to date created 77 unicorn businesses across all sectors (as of Q4 2019)." (Are they counting the much-litigated Autonomy?)

There's something peculiarly ghastly about this government's staking the UK's post-Brexit economic success on exporting censorship and surveillance to the rest of the world, especially alongside its stated desire to opt out of parts of human rights law. This is what "global Britain" wants to be known for?

Illustrations: Unicorn sculpture at York Crown Court (by Tim Green via Wikimedia).


May 7, 2021

Decision not decision

It is the best of decisions, it is the worst of decisions.

For some, this week's decision by Facebook's Oversight Board in the matter of "the former guy" Donald J. Trump is a deliberate PR attempt at distraction. For many, it's a stalling tactic. For a few, it is a first, experimental stab at calling the company to account.

It can be all these things at once.

But first, some error correction. Nothing the Facebook Oversight Board does or doesn't do tells us anything much about governing the Internet. Although there are countries where zero-rating deals with telcos make Facebook effectively the only online access most people have, Facebook is not the Internet and it's not the web. Facebook is a commercial company's walled garden that is reached over the Internet and via both the web and apps that bypass the web entirely. Governing Facebook is about how we regulate and govern commercial companies that use the Internet to achieve global reach. Like Trump, Facebook has no exact peer, so it is difficult to generalize from decisions about either to reach wider principles of content moderation.

It's also important to recognize that Trump used/uses different social media sites in different ways. Facebook was important to Trump for organizing campaigns and advertising, as well as getting his various messages amplified and spread by supporters. But there's little doubt that personally he'd rather have Twitter back; its public nature and instant response made it his id-to-fingers direct connection to the media. Twitter fed him the world's attention. Those were the postings that had everyone waking up in the middle of the night panicked in case he had abruptly declared war on North Korea. After his ban, the service was full of tweets expressing relief at the silence.

The board's decision has several parts. First, it says the company was right to suspend Trump's account. However, it goes on to say, the company erred in applying an "indeterminate and standardless penalty of indefinite suspension", and it tells Facebook to develop "clear, necessary, and proportionate policies that promote public safety and freedom of expression". The board's charter requires Facebook to make an initial response within 30 days, and the decision itself orders Facebook to review the case to "determine and justify a proportionate response that is consistent with the rules that are applied to other users of its platform". It appears that the board is at least trying not to let itself be used as a shield.

At the New York Times, Kara Swisher calls the non-decision kind of perfect. At the Washington Post, Margaret Sullivan calls the board a high-priced fig leaf. At Lawfare, Evelyn Douek believes the decision shows promise but deplores the board's reluctance to constrain Facebook. On Wednesday's episode of Ben Wittes's and Kate Klonick's In Lieu of Fun, panelists speculated about what indicators would show the board was achieving legitimacy. Carole Cadwalladr, who broke the Cambridge Analytica story in 2018, calls Facebook, simply, cancer and views the oversight board as a "dangerous distraction".

When the board first began issuing decisions, Jeremy Lewin commented that the only way the board - "a dangerous sham" - could show independence was to reverse Facebook's decisions, which in every case would mean restoring deleted posts, since the board has no role in evaluating decisions to retain posts. It turns out that's not true. In the Trump decision, the board found a third way: calling out Facebook for refusing to answer its questions, failing to establish and follow clear procedures, and punting on its responsibilities.

However, despite the decision's legalish language, the Oversight Board is not a court, and Facebook's management is not a government. For both good and bad: as Orin Kerr reminds us, Facebook can't fine, jail, or kill its users; as many others will note, as a commercial company its goals are profits and happy shareholders, not fairness, transparency, or a commitment to uphold democracy. If it adopts any of those latter goals, it's because the company has calculated that it will cost more not to. Therefore, *every* bit of governance it attempts is a PR exercise. In pushing the ultimate decision back to Facebook and demanding that the company write and publish clear rules, the board is trying to make itself more than that. We will know soon whether it has any hope of success.

But even if the board succeeds in pushing Facebook into clarifying its approach to this case, "success" will be constrained. Here's the board's mission: "The purpose of the board is to protect free expression by making principled, independent decisions about important pieces of content and by issuing policy advisory opinions on Facebook's content policies." Nothing there permits the board to raise its own cases, examine structural defects, or query the company's business model. There is also no option for the board to survey Trump's case and the January 6 Capitol invasion and place it in the context of evidence on Facebook's use to incite violence in other countries - Myanmar, Sri Lanka, India, Indonesia, Mexico, Germany, and Ethiopia. In other words, the board can consider individual cases when it is assigned them, but not the patterns of behavior that Facebook facilitates and that are in greatest need of disruption. That will take governments and governance.


Illustrations: The January 6 invasion of the US Capitol.
