Privacy-preserving mass surveillance
Every time it seems like digital rights activists need to stop quoting George Orwell so much, stuff like this happens.
In an abrupt turnaround, Apple announced on Thursday the next stage in the decades-long battle over strong cryptography: after years of resisting law enforcement demands, the company is U-turning to backdoor its cryptography so it can scan personal devices and cloud stores for child abuse images. EFF sums up the problem nicely: "even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor". Or, more simply, a hole is a hole. Most Orwellian moment: Nicholas Weaver framing it on Lawfare as "privacy-sensitive mass surveillance".
Smartphones, particularly Apple phones, have never really been *our* devices in the way that early personal computers were, because the supplying company has always been able to change the phone's software from afar without permission. Apple's move makes this reality explicit.
The bigger question is: why? Apple hasn't said. But the pressure on all the technology companies has been mounting in the last few years, as an increasing number of governments have demanded the right of access to encrypted material. As Amie Stepanovich notes on Twitter, another factor may be the "online harms" agenda that began in the UK but has since spread to New Zealand, Canada, and others. The UK's Online Safety bill is already (controversially) in progress, as Ross Anderson predicted in 2018. Child exploitation is a terrible thing; this is still a dangerous policy.
Meanwhile, 2021 is seeing some of the AI hype of the last ten years crash into reality. Two examples: health and autonomous vehicles. At MIT Technology Review, Will Douglas Heaven notes the general failure of AI tools in the pandemic. Several research studies - in the British Medical Journal, Nature, and from the Turing Institute (PDF) - found that none of the hundreds of algorithms were of any clinical use and some were actively harmful. The biggest problem appears to have been poor-quality training datasets, leading the AI to identify the wrong thing, miss important features, or appear deceptively accurate. Finally, even IBM is admitting that Watson, its Jeopardy! champion, has not become a successful AI medical diagnostician. Medicine is art as well as science; who knew? (Doctors and nurses, obviously.)
As for autonomous vehicles, at Wired Andrew Kersley reports that Amazon is abandoning its drone delivery business. The last year has seen considerable consolidation among entrants in the market for self-driving cars, as the time and resources it will take to achieve them continue to expand. Google's Waymo is nonetheless arguing that the UK should not cap the number of self-driving cars on public roads, and the UK-grown Oxbotica is proposing a code of practice for deployment. However, as Christian Wolmar predicted in 2018, the cars are not here. Even some Tesla insiders admit that.
The AI that has "succeeded" - in the narrow sense of being deployed, not in any broader sense - has been the (Orwellian) surveillance and control side: the robots that screen job applications, the automated facial recognition, the AI-driven border controls. The EU, which invests in this stuff, is now proposing AI regulations; if drafted to respect human rights, they could be globally significant.
However, we will also have to ensure the rules aren't abused against us. Also this week, Facebook blocked the tool a group of New York University social scientists were using to study the company's ad targeting, along with the researchers' personal accounts. The "user privacy" excuse: Cambridge Analytica. The scandal around CA's scraping of a mass of personal data via an app users voluntarily downloaded, first reported in 2015, eventually cost Facebook $5 billion in its 2019 settlement with the US Federal Trade Commission, which also required the company to ensure this sort of thing didn't happen again. The NYU researchers' Ad Observatory was collecting advertising data via a browser extension users opted to install. They were, Facebook says, scraping data. Potato, potahto!
People who aren't Facebook's lawyers see the two situations as entirely different. CA was building voter profiles to study how to manipulate them. The Ad Observatory deliberately avoided collecting personal data; instead, it collected the ads users were shown in order to study their political impact and identify who pays for them. Potato, *tomahto*.
One reason for the universal skepticism is that this move has companions - Facebook has also limited journalist access to CrowdTangle, a data tool that helped establish that far-right news content generates higher numbers of interactions than other types and suffers no penalty for being full of misinformation. In addition, at the Guardian, Chris McGreal reports InfluenceMap's finding that fossil fuel companies are using Facebook ads to promote oil and gas use as part of the response to climate change (have some clean coal).
Facebook's response has been to claim it's committed to transparency and blame the FTC. The FTC was not amused: "Had you honored your commitment to contact us in advance, we would have pointed out that the consent decree does not bar Facebook from creating exceptions for good-faith research in the public interest." The FTC knows Orwellian fiction when it sees it.
Illustrations: Orwell's house on Portobello Road, complete with CCTV camera.
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.