net.wars: August 2021 Archives


August 27, 2021

The threat we left behind

Be careful what systems you build with good intentions. The next owner may not be so kind.

It has long been a basic principle among privacy activists that a significant danger in embedding surveillance technologies is regime change: today's government is benign, but tomorrow's may not be, so let's not build the technologies that could support a police state for that hostile government to wield. Equally - although it's often politic not to say this explicitly - the owner may remain the same but their own intentions may change as the affordances of the system give them new ideas about what it's possible for them to know.

I would be hard-pressed to produce evidence of a direct connection, but one of the ideas floating around Virtual Diplomacy, a 1997 conference that brought together the Internet and diplomacy communities, was that the systems that are privacy-invasive in Western contexts could save lives and avert disasters on the ground in crisis situations. Not long afterwards, biometric identification and other technologies were being built into refugee systems in the US and EU.

In a 2018 article for The New Humanitarian, Paul Currion observes that the systems' development was "driven by the interests of national governments, technology companies, and aid agencies - in that order". Refugees quoted in the article express trust in the UN, but not much understanding of the risks of compliance.

Currion dates the earliest use of "humanitarian biometrics" to 2003 - and identifies the location of that groundbreaking use as... Afghanistan, which used iris testing to verify the identities of Afghans returning from Pakistan to prevent fraud. In 2006, then-current, now just-departed, president Ashraf Ghani wrote a book pinpointing biometric identification as the foundation of Afghanistan's social policy. Afghanistan, the article concludes, is "the most biometrically identifiable country in the world" - and, it adds, "although UNHCR and the Afghan government have both invested heavily in biometric databases, the US military has been the real driving force." It bases this latter claim on a 2014 article in Public Intelligence that studies US military documents on the use of biometrics in Afghanistan.

These are the systems that now belong to the Taliban.

Privacy International began warning of the issues surrounding privacy and refugees in the mid-2000s. In 2011, by which time it had been working with UNHCR to improve its practices for four years, PI noted how little understanding there was among funders and the public of why privacy mattered to refugees.

Perhaps it's the word: "privacy" sounds like a luxury, a nice-to-have rather than a necessity, and anyway, how can people held in camps waiting to be moved on to their next location care about privacy when what they need is safety, food, shelter, and a reunion with the rest of their families? PI's answer: "Putting it bluntly, getting privacy wrong will get people arrested, imprisoned, tortured, and may sometimes lead to death." Refugees are at risk from both the countries they're fleeing *from* and the countries they're fleeing *to*, which may welcome and support them - or reject, return, deport, or imprison them, or hold them in bureaucratic purgatory. (As I type this, HIAS president and CEO Mark Hetfield is telling MSNBC that the US's 14-step checking process is stopping Afghan-Americans from getting their families out.)

As PI goes on to explain, there is no such thing as "meaningful consent" in these circumstances. At The New Humanitarian, in a June 2021 article, Zara Rahman agrees. She was responding to a Human Rights Watch report that the United Nations High Commissioner for Refugees had handed a detailed biometric database covering hundreds of thousands of Rohingya refugees to the Myanmar government from which they fled. HRW accused the agency of breaking its own rules for collecting and protecting data, and failing to obtain informed consent; UNHCR denies this charge. But you're desperate and in danger, and UNHCR wants your fingerprint. Can you really say no?

In many countries UNHCR is the organization that determines refugee status. Personal information is critical to this process. The amount of information collected has increased in some areas to include biometrics; as early as 2008 the US was considering using genetic information to confirm family relationships. More important, UNHCR is not always in control of the information it collects. In 2013, PI published a detailed analysis of refugee data collection in Syria. Last week, it published an even more detailed explanation of the systems built in Afghanistan over the last 20 years that have now been left behind.

Shortly after the current crisis began, April Glaser and Sephora Smith reported at NBC News that Afghans were hastily deleting photographs and documents on their phones that might link them to Westerners, international human rights groups, the Afghan military, or the recently-departed Afghan government. It's an imperfect strategy: instructions on how to do this in local Afghan languages are not always available, and much of the data and the graph of their social connections are stored on social media that don't necessarily facilitate mass deletions. Facebook has released tools to help, including a one-click locking button and pop-up instructions on Instagram. Access Now also offers help and is telling international actors to close down access to these databases before leaving.

This aspect of the Afghan crisis was entirely avoidable.


Illustrations: Afghan woman being iris-scanned for entry into the Korean hospital at Bagram Airfield, Afghanistan, 2012 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 20, 2021

Outtakes

"One case!" railed a computer industry-adjacent US libertarian on his mailing list recently. He was scathing about the authoritarianism he thought implicit in prime minister Jacinda Ardern's decision to lock down New Zealand because one covid-positive case had been found in Auckland.

You would think that an intelligent guy whose life has been defined by the exponential growth of Moore's Law would understand by now. One *identified* case of unknown origin means a likely couple of dozen others who are all unknowingly going to restaurants, bars, concerts, and supermarkets and infecting other people. Put together the highly-transmissible Delta variant - which has ravaged India, caused huge spikes in the UK and Israel despite relatively high vaccination levels, and is vacuuming up ICU beds in vaccine-resistant US states - and the fact that under 20% of New Zealanders are vaccinated. Ardern, whose covid leadership has been widely admired all along, has absorbed the lessons of elsewhere. Locking down for a few days with so few cases buys time to do forwards and backwards contact tracing; the result is 26 deaths, not tens of thousands, and an unstressed health care system. New Zealand has had months of normality punctuated by days of lockdown instead of, as elsewhere, months of lockdown punctuated by days of nervous attempts at socializing. Her country agrees with her. What more do you want?

The case was found Tuesday; lockdown began Wednesday. By Thursday, the known case count was 21, with models predicting that the number of infected people was probably around 100. If all those people were walking around, that one case - imported, it now appears, from Australia - would be instigating thousands. Ardern has, you should excuse the expression, balls - and a touch of grace. I can't think of any other national leader who's taken the trouble to *thank* the index case for coming forward to get tested and thereby saving countless fellow citizens' lives.
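The arithmetic behind "one case is never one case" can be sketched in a few lines. This is a deliberately crude back-of-envelope model, not the New Zealand modellers' actual method; the reproduction number and generation count are illustrative assumptions (Delta's effective R in an unvaccinated, unrestricted population is often cited around 5-6, with a generation time of roughly four days):

```python
def projected_cases(seed_cases: int, r_eff: float, generations: int) -> int:
    """Total infections after a given number of generations of
    unchecked spread, assuming each case infects r_eff others."""
    total = seed_cases
    current = seed_cases
    for _ in range(generations):
        current = round(current * r_eff)  # new infections this generation
        total += current
    return total

# One undetected case, R=5, three generations (~12 days) of silent spread:
print(projected_cases(1, 5.0, 3))  # 1 + 5 + 25 + 125 = 156
```

Even this toy model shows why "one identified case" plausibly implies dozens of undetected ones, and why a few days' delay multiplies the eventual caseload.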

Long ago - March 2020 - Ardern's public messaging included the advice "Be kind". This message could usefully be copied elsewhere - for example, the US, where anti-maskers are disrupting school board meetings and classrooms, and anti-vaccination protests have left a man stabbed in Los Angeles. On Twitter and in other media, some states' medical staff report that among their hospitals' 97%-unvaccinated covid caseloads are some who express regret, too late. Timothy Bella reports at the Washington Post that a Mobile, Alabama doctor has told patients that as of October 1 he won't treat anyone who is not vaccinated against covid. Alabama's vaccination rate, 36%, is the lowest in the US; the state is reporting nearly 4,000 new cases per day, and its hospitals have run out of ICU beds. His reaction is understandable. Useful motto for 2021: everyone is entitled to be anxious about the pandemic however they want.

Twitter has seen several "more of this, please"-type reactions. Tempting: there's the risk to other patients in the waiting room; the desire to push people to get vaccinated; the human reluctance to help people who won't help themselves to avoid dying of a preventable illness; the awareness of the frustration, burn-out, stress, and despair of hospital-based counterparts. And yet. This doctor isn't required by lack of resources to do triage. He just doesn't want to invest in treating people and be forced to watch their miserable, preventable deaths. I understand. But it's dangerous when doctors pick and choose whom they treat. Yes, barring medical contraindications, refusing covid vaccinations is generally a mistake. But being wrong isn't a reason to deny health care.

Ardern has - as she says - the advantage of being last. Working with less information, countries scrambling earlier to cope with new variants will inevitably make more mistakes. At the Atlantic, Howard Markel argues that we need to stop looking back to 1918 for clues to handling this one.

It's certainly true that the 1918 model has led us astray in significant ways, chiefly consequences of confusing covid with flu. In the UK, that confusion led the government to focus on washing hands and cleaning surfaces and ignore ventilation, a mistake it still hasn't fully rectified 18 months later. In the US, "it's a mild flu" is many people's excuse for refusing masks, vaccines, and other cautions. The 1918 example was, however, valuable as a warning of how devastating a pandemic can be without modern tools to control it. Even with today's larger population, 100 million deaths is too significant to ignore. For them, masks, ventilation, and lockdowns were the only really available tools. For us, they bought time for science to create better ones - vaccines. What we lack, however, is societal and political trust (whether or not you blame the Internet) and the will to spread manufacturing across the world. In 1918, the future, post-pandemic and post-war, was a "roaring" decade of celebration. Our post-pandemic future is more pandemics unless we pay attention to public health and building pandemic resistance, especially as climate change brings new microbes into direct contact with humans.

Markel is a professor at the University of Michigan, and his uncomfortable message is this: we are in uncharted territory. No wonder we cling to the idea that the pandemic of 2020-present is kinda-sorta 1918: without that precedent we are facing conditions of radical uncertainty. Be kind.


Illustrations: New Zealand prime minister Jacinda Ardern campaigning in 2017 (Brigitte Neuschwander-Kasselordner, via Wikimedia).


August 13, 2021

Legacy

The first months of the pandemic saw a burst of energetic discussion about how to make it an opportunity to invest in redressing inequalities and rebuilding decaying systems - public health, education, workers' rights. This always reminded me of the great French film director François Truffaut, who, in his role as the director of the movie-within-the-movie in Day for Night, said, "Before starting to shoot, I hope to make a fine film. After the problems begin, I lower my ambition and just hope to finish it." It seemed more likely that if the pandemic went on long enough - back then the journalist Laurie Garrett was predicting a best case of three years - early enthusiasm for profound change would drain away to leave most people just wishing for something they could recognize as "normal". Drinks at the pub!

We forget what "normal" was like. London today seems busy. But with still no tourists, it's probably a tenth as crowded as in August 2019.

Eighteen months (so far) has been long enough to make new habits driven by pandemic-related fears, if not necessity, begin to stick. As it turns out, the pandemic's new normal is really not the abrupt but temporary severance of lockdown, which brought with it fears of top-down government-driven damage to social equity and privacy: covid legislation, immunity passports, and access to vaccines. Instead, the dangerous "new normal" is the new habits building up from the bottom. If Garrett was right, and we are at best halfway through this, these are likely to become entrenched. Some are healthy: a friend has abruptly realized that his grandmother's fanaticism about opening windows stemmed from living through the 1918 Spanish flu pandemic. Others...not so much.

One of the first non-human casualties of the pandemic has been cash, though the loss is unevenly spread. This week, a friend needed more than five minutes to painfully single-finger-type masses of detail into a pub's app, the only available option for ordering and paying for a drink. I see the convenience for the pub's owner, who can eliminate the costs of cash (while assuming the costs of credit cards and technological intermediation) and maybe thin the staff, but it's no benefit to a customer who'd rather enjoy the unaccustomed sunshine and chat with a friend. "They're all like this now," my friend said gloomily. Not where I live, fortunately.

Anti-cash campaigners have long insisted that cash is dirty and spreads disease; but, as we've known for a year, covid rarely spreads through surfaces, and (as Dave Birch has been generous enough to note) a recent paper finds that cash is sometimes cleaner. But still: try to dislodge the apps.

A couple of weeks ago, Erin Woo at the New York Times highlighted cash-free moves. In New York City, QR codes have taken over in restaurants and stores as contact-free menus and ordering systems. In the UK, QR codes mostly appear as part of the Test and Trace contact tracing app; the idea is you check in when you enter any space, be it restaurant, cinema, or (ludicrously) botanic garden, and you'll be notified if it turns out it was filled with covid-infected people when you were there.

Whatever the purpose, the result is tight links between offline and online behavior. Pre-pandemic, these were growing slowly and insidiously; now they're growing like an invasive weed at a time when few of us can object. The UK ones may fall into disuse alongside the app itself. But Woo cites Bloomberg: half of all US full-service restaurant operators have adopted QR-code menus since the pandemic began.

The pandemic has also helped entrench workplace monitoring. By September 2020, Alex Hern was reporting at the Guardian that companies were ramping up their surveillance of workers in their homes, using daily mandatory videoconferences, digital timecards in the form of cloud logins, and forced participation on Slack and other channels.

Meanwhile at NBC News, Olivia Solon reports that Teleperformance, one of the world's largest call center companies, to which companies like Uber, Apple, and Amazon outsource customer service, has inserted clauses in its employment contracts requiring workers to accept in-home cameras that surveil them, their surroundings, and family members under 18. Solon reports that the anger over this is enough to get these workers thinking about unionizing. Teleperformance is global; it's trying this same gambit in other countries.

Nearer to home, all along, there's been a lot of speculation about whether anyone would ever again accept commuting daily. This week, the Guardian reports that only 18% of workers have gone back to their offices since UK prime minister Boris Johnson ended all official restrictions on July 19. Granted, it won't be clear for some time whether this is new habit or simply caution in the face of the fact that Britain's daily covid case numbers are still 25 times what they were a year ago. In the US, Google is suggesting it will cut pay for staff who resist returning to the office, on the basis that their cost of living is less. Without knowing the full financial position, doesn't it sound like Google is saving money twice?

All these examples suggest that what were temporary accommodations are hardening into "the way things are". Undoing them is a whole new set of items for last year's post-pandemic to-do list.


Illustrations: Graphic showing the structure of QR codes (via Wikimedia).


August 6, 2021

Privacy-preserving mass surveillance

Every time it seems like digital rights activists need to stop quoting George Orwell so much, stuff like this happens.

In an abrupt turnaround, on Thursday Apple announced the next stage in the decades-long battle over strong cryptography: after years of resisting law enforcement demands, the company is U-turning to backdoor its cryptography to scan personal devices and cloud stores for child abuse images. EFF sums up the problem nicely: "even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor". Or, more simply, a hole is a hole. Most Orwellian moment: Nicholas Weaver framing it on Lawfare as "privacy-sensitive mass surveillance".
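The core mechanism being objected to - matching content on your device against a vendor-supplied database - can be sketched in its simplest conceptual form. This is a deliberately simplified illustration: Apple's actual system uses perceptual hashes (NeuralHash) and cryptographic protocols such as private set intersection, not the plain SHA-256 set lookup shown here, and the names below are invented for the sketch:

```python
# Toy sketch of client-side hash matching. Real deployments use perceptual
# hashing (robust to resizing/re-encoding) and protocols that hide the
# blocklist from the device; plain SHA-256 is used here only to show the shape.
import hashlib

BLOCKLIST = {
    # In a real system this opaque set is supplied and updated by the vendor,
    # which is exactly why it can be repurposed by whoever controls it.
    hashlib.sha256(b"known-bad-content").hexdigest(),
}

def scan(blobs: list[bytes]) -> list[int]:
    """Return indices of blobs whose hash appears in the blocklist."""
    return [
        i for i, blob in enumerate(blobs)
        if hashlib.sha256(blob).hexdigest() in BLOCKLIST
    ]

print(scan([b"holiday photo", b"known-bad-content"]))  # [1]
```

The design point the critics make is visible even in the toy version: the matching logic is neutral about *what* is in the blocklist, so the same hole serves any future owner's purposes.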

Smartphones, particularly Apple phones, have never really been *our* devices in the way that early personal computers were, because the supplying company has always been able to change the phone's software from afar without permission. Apple's move makes this reality explicit.

The bigger question is: why? Apple hasn't said. But the pressure has been mounting on all the technology companies in the last few years, as an increasing number of governments have been demanding the right of access to encrypted material. As Amie Stepanovich notes on Twitter, another factor may be the "online harms" agenda that began in the UK but has since spread to New Zealand, Canada, and others. The UK's Online Safety bill is already (controversially) in progress, as Ross Anderson predicted in 2018. Child exploitation is a terrible thing; this is still a dangerous policy.

Meanwhile, 2021 is seeing some of the AI hype of the last ten years crash into reality. Two examples: health and autonomous vehicles. At MIT Technology Review, Will Douglas Heaven notes the general failure of AI tools in the pandemic. Several research studies - the British Medical Journal, Nature, and the Turing Institute (PDF) - find that none of the hundreds of algorithms were of any clinical use and some were actively harmful. The biggest problem appears to have been poor-quality training datasets, leading the AI to identify the wrong thing, miss important features, or appear deceptively accurate. Finally, even IBM is admitting that Watson, its Jeopardy! champion, has not become a successful AI medical diagnostician. Medicine is art as well as science; who knew? (Doctors and nurses, obviously.)

As for autonomous vehicles, at Wired Andrew Kersley reports that Amazon is abandoning its drone delivery business. The last year has seen considerable consolidation among entrants in the market for self-driving cars, as the time and resources it will take to achieve them continue to expand. Google's Waymo is nonetheless arguing that the UK should not cap the number of self-driving cars on public roads and the UK-grown Oxbotica is proposing a code of practice for deployment. However, as Christian Wolmar predicted in 2018, the cars are not here. Even some Tesla insiders admit that.

The AI that has "succeeded" - in the narrow sense of being deployed, not in any broader sense - has been the (Orwellian) surveillance and control side of AI - the robots that screen job applications, the automated facial recognition, the AI-driven border controls. The EU, which invests in this stuff, is now proposing AI regulations; if drafted to respect human rights, they could be globally significant.

However, we will also have to ensure the rules aren't abused against us. Also this week, Facebook blocked the tool a group of New York University social scientists were using to study the company's ad targeting, along with the researchers' personal accounts. The "user privacy" excuse: Cambridge Analytica. The 2015 scandal around CA's scraping a bunch of personal data via an app users voluntarily downloaded eventually cost Facebook $5 billion in its 2019 settlement with the US Federal Trade Commission that also required it to ensure this sort of thing didn't happen again. The NYU researchers' Ad Observatory was collecting advertising data via a browser extension users opted to install. They were, Facebook says, scraping data. Potato, potahto!

People who aren't Facebook's lawyers see the two situations as entirely different. CA was building voter profiles to study how to manipulate them. The Ad Observatory was deliberately avoiding collecting personal data; instead, they were collecting displayed ads in order to study their political impact and identify who pays for them. Potato, *tomahto*.

One reason for the universal skepticism is that this move has companions - Facebook has also limited journalists' access to CrowdTangle, a data tool that helped establish that far-right news content generates higher numbers of interactions than other types and suffers no penalty for being full of misinformation. In addition, at the Guardian, Chris McGreal finds that InfluenceMap reports that fossil fuel companies are using Facebook ads to promote oil and gas use as part of remediating climate change (have some clean coal).

Facebook's response has been to claim it's committed to transparency and blame the FTC. The FTC was not amused: "Had you honored your commitment to contact us in advance, we would have pointed out that the consent decree does not bar Facebook from creating exceptions for good-faith research in the public interest." The FTC knows Orwellian fiction when it sees it.


Illustrations: Orwell's house on Portobello Road, complete with CCTV camera.
