
September 25, 2020

The zero on the phone

Among the minor casualties of the pandemic was the planned appearance of a Swiss prototype robot at this year's We Robot, the ninth year of this unique conference that crosses engineering, technology policy, and law to identify future conflicts and pre-emptively suggest solutions. The result was to leave the robots considered by this virtual We Robot remarkably (appropriately) abstract.

We Robot was founded to get a jump on the conflicts that robots will bring to law and policy, in part so that we don't repeat the Internet experience of rehearsing the same arguments for decades on end. This year's event pre-empted the Internet experience in a new way: many authors drew on the failed optimism and cooperation of the 1990s to begin defining ways to ensure that robotics and AI do not follow the same path. Where at the beginning we were all eager to embrace robots, this year robots and their disembodied AIs are things being done *to* us.

In the one slight exception to this rule, Hallie Siegel's exploration of senior citizens' attitudes towards new technologies found that the seniors she studies are pragmatic: concerned about their privacy and autonomy, and interested only in technologies that provide benefits they actually need.

Jason Millar and Elizabeth Gray drew directly on the Internet experience by comparing network neutrality to the issues surrounding the mapping software that controls turn-by-turn navigation systems in a discussion of "mobility shaping". Should navigation services be common carriers, as telephone lines are? The idea appeals to me, if only because the potential for physical control of where our vehicles are allowed to go seems so clear.

The theme of exploitation was particularly visible in the two papers on Africa. In the first, Arthur Gwagwa (Strathmore University, Nairobi), Erika Kraemer-Mbula, Nagla Rizk, Isaac Rutenberg, and Jeremy de Beer warn that the combination of foreign capital and local resources is likely to reproduce the power structures of previous forms of colonialism, an argument also seen recently in a paper by Abeba Birhane. Women in particular, who run the majority of start-ups in some African countries, may be ignored, and the authors suggest that a GDPR-like rule awarding individuals control over their own data could be crucial in ensuring that value is created for, rather than extracted from, Africa.

In the second, Laura Foster (Indiana University), Bram Van Wiele, and Tobias Schönwetter extracted a database of press stories about AI in Africa from LexisNexis, and found the familiar set of claims for new technology: happy, value-neutral disruption, yay! The failure of most of these articles to consider gender and race, they observed, doesn't make the emerging picture neutral, but serves to reinforce the default of the straight, white male.

One way we push back against AI/robot control is the "human in the loop" to whom the final decision is delegated. This human has featured in every We Robot conference, most notably in 2016 as Madeleine Elish's moral crumple zone. In his paper, Liam McCoy argues for the importance of meaningful control, because the middle ground, where the human is expected to solve the most complex situations where AI fails, but without support or authority, is truly dangerous. The middle ground may be profitable; at UK IGF a few weeks ago, Gus Hosein noted that automating dispute resolution is what's made GAFA rich. But in the higher stakes of cyber-physical systems, the human you summon by pushing zero has to be able to make a difference.

Two papers presented different methods of putting a human in the loop: Silvia de Conca's "human-centered legal design", which sought to give autonomous agents a duty of care as a way of filling the gap in liability that presently exists, and Cynthia Khoo's interest in vulnerable communities who are harmed by behavior that emerges from the combination of business models, platform scale, human nature, and algorithm design. Often, Khoo has found in investigating this idea, the potential harm was in fact known and simply ignored; how much can and should be foreseen when system parts interact in unexpected ways is a rising issue.

Several papers explored previously unnoticed vectors for bias and control. Sentiment analysis, last seen being called "the snake oil of 2011", and its successor, emotion analysis, which I first saw explored in the 1990s by Rosalind Picard at MIT, are creeping into AI systems. Some are particularly dubious: aggression detection systems and emotion recognition cameras.

Emily McBain-Ashfield and Jason Millar are the first I'm aware of to study how stereotyping gets into these systems. Yes, it's in the data - but the problem lies in the process of analyzing and tagging it. The authors found three methods of doing this: manual (human, slow), dictionary-based using seed words (automated), and crowdsourced (see also Mary L. Gray and Siddharth Suri's 2019 book, Ghost Work). All have problems: automated seed-word tagging makes notoriously crude mistakes, and the participants in crowdsourcing may come from very different linguistic and cultural contexts.
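To see how the automated method goes wrong, here is a minimal sketch of dictionary-based tagging from seed words; the seed lists, scoring, and examples are invented for illustration and are not drawn from the paper:

```python
import re

# Hypothetical seed lists; real systems expand these into large lexicons,
# but inherit the same flaw: words are scored with no sense of context,
# negation, sarcasm, or dialect.
POSITIVE_SEEDS = {"good", "happy", "love", "great"}
NEGATIVE_SEEDS = {"bad", "sad", "hate", "terrible"}

def tag_sentiment(text: str) -> str:
    words = re.findall(r"[a-z']+", text.lower())
    score = (sum(w in POSITIVE_SEEDS for w in words)
             - sum(w in NEGATIVE_SEEDS for w in words))
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(tag_sentiment("I love this"))                # positive
print(tag_sentiment("this is not good"))           # positive - negation invisible
print(tag_sentiment("that performance was sick"))  # neutral - slang the seeds don't know
```

Whatever cultural or linguistic bias is baked into the seed words propagates silently through every label the system produces - one of the routes for stereotyping the authors describe.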

The discussant for this paper, Osonde Osoba, sounded appalled: "By having these AI models of emotion out in the wild in commercial products we are essentially sanctioning the unregulated experimentation on humans and their emotional processes without oversight or control."

Remedies have to contend, however, with the legacy infrastructure. Alice Xiang discovered a conflict between traditional anti-discrimination law, which bars decision-making based on a set of protected classes, and the technical methods of mitigating algorithmic bias, which typically need to use those same protected attributes in order to detect and correct for bias. "If we're not careful," she said, "the vast majority of approaches proposed in machine learning literature might actually be illegal if they are ever tested in court."

We Robot 2020 was the first to be held outside the US, and chairs Florian Martin-Bariteau, Jason Millar, and Katie Szilagyi set out to widen its international character and diversity. When the pandemic hit, the exceptional geographic spread of authors and discussants made it infeasible to ask everyone to pretend they were in Ottawa's time zone. The conference therefore recorded the authors' and discussants' conversations as if live - which means that you, too, can experience the originals. Just follow the links. We Robot events not already linked here: 2013; 2015; 2016 workshop; 2017; 2018 workshop and conference; 2019 workshop and conference.


Illustrations: Our robot avatars attend the conference for us on the We Robot 2020 poster.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 28, 2020

Through the mousehole

It's been obvious for a long time that if you want to study a thoroughly dysfunctional security system you could hardly do better than doping control in sports. Anti-doping has it all: perverse incentives, wrong assumptions, conflicts of interest, and highly motivated opponents. If you doubt this premise, consider: none of the highest-profile doping cases were caught by the anti-doping system. Lance Armstrong (2010) was outed by a combination of dogged journalistic reporting by David Walsh and admissions by his former teammate Floyd Landis; systemic Russian doping (2014) was uncovered by journalist Hajo Seppelt, who has also broadcast investigations of China, Kenya, Germany, and weightlifting; BALCO (2002) was exposed by a coach who sent samples to the UCLA anti-doping lab; and Willy Voet (1998), soigneur to the Festina cycling team, was busted by French Customs.

I bring this up - again - because two insider tales of the Russian scandal have just been published. The first, The Russian Affair, by David Walsh, tells the story of Vitaly and Yuliya Stepanov, who provided Seppelt with material for The Secrets of Doping: How Russia Makes Its Winners (2014); the second, The Rodchenkov Affair, is a first-person account of the Russian system by Grigory Rodchenkov, from 2006 to 2015 the director of Moscow's testing lab. Together or separately, these books explain the Russian context that helped foster its particular doping culture. They also show an anti-doping system that isn't fit for purpose.

The Russian Affair is as much the story of the Stepanovs' marriage as of their contrasting and complementary views of the doping system. Vitaly was an idealistic young recruit at the Russian Anti-Doping Agency; Yuliya Rusanova was an aspiring athlete willing to do anything to escape the desperate unhappiness and poverty of her native area, Kursk. While she lectured him about not understanding "the real world", he continued hopefully writing letters to contacts at the World Anti-Doping Agency describing the violations he was seeing. Yuliya came to see the exploitation of a system that protects winners but lets others test positive to make the system look functional. Under Vitaly's guidance, she recorded the revealing conversations that Seppelt's documentary featured. Rodchenkov makes a cameo appearance; the Stepanovs believed he was paid to protect specific athletes from positive tests.

In the vastly more entertaining The Rodchenkov Affair, Rodchenkov denies receiving payment, calling Yuliya a "has-been" he'd never met. Instead, Rodchenkov describes developing new methods of detecting performance-enhancing substances, then finding methods to beat those same tests. If the nearest analogue to the Stepanovs' marriage as Walsh describes it is George and Kellyanne Conway, Rodchenkov's story is straight out of Philip K. Dick's A Scanner Darkly, in which an undercover narcotics agent is assigned to spy on himself.

Russia has advantages for dopers. For example, its enormous land mass allows athletes to sequester themselves in training camps so remote they are out of range for testers. More important may be the pervasive sense of resignation that Vitaly Stepanov describes as his boss slashes WADA's 80 English-language pages of anti-doping protocols to ten in Russian translation because various aspects are "not possible to do in Russia". Rodchenkov, meanwhile, plans the Sochi anti-doping lab that the McLaren report later made famous for swapping positive samples for pre-frozen clean ones through a specially built "mousehole" operated by the FSB.

If you view this whole thing as a security system, it's clear that WADA's threat model was too simple, something like "athletes dope". Even in 1988, when Ben Johnson tested positive at the Seoul Olympics, it was obvious that everyone's interests depended on not catching star athletes. International sports depend on their stars - as do their families, coaches, support staff, event promoters, governments, fans, and even other athletes, who know the star attractions make their own careers possible. Anti-doping agencies must thread their way through this thicket.

In Rodchenkov's description, WADA appears inept, even without its failure to recognize this ecosystem. In one passage, Rodchenkov writes about the double-blind samples the IOC planted from time to time to test the lab: "Those DBs were easily detectable because they contained ridiculous compounds...which were never seen in doping control routine analysis." In another, he says: "[WADA] also assumed that all accredited laboratories were similarly competent, which was not the case. Some WADA-accredited laboratories were just sloppy, and would reach out to other countries' laboratories when they had to process quality control samples to gain re-accreditation."

Flaws are always easy to find once you know they're there. But WADA was founded in 1999. Just six years earlier, the opening of the Stasi records had exposed the comprehensive East German state doping system. The possibility of state involvement should have been high on the threat list from the beginning, as should the role of coaches and doctors who guide successive athletes to success.

It's hard to believe this system can be successfully reformed. Incentives to dope will always be with us, just as it would be impossible to eliminate all incentives to break into computer systems. Rodchenkov, who frequently references Orwell's 1984, insists that athletes dope because otherwise their bodies cannot cope with the necessary training, which he contends is more physically damaging than doping. This much is clear: a system that insists on autonomy while failing to fulfill its most basic mission is wrong. Small wonder that Rodchenkov concludes that sport will never be clean.


Illustrations: Grigory Rodchenkov and Bryan Fogel in Fogel's documentary, Icarus.


August 21, 2020

The end of choice

At the Congressional hearings a few weeks ago, all four CEOs who appeared - Mark Zuckerberg (Facebook), Jeff Bezos (Amazon), Sundar Pichai (Google), and Tim Cook (Apple) - said essentially the same thing in their opening statements: they have lots of competitors, they have enabled millions of people to build small businesses on their platforms, and they do not have monopoly power. The first of these is partly true, the second is true, and the third...well, it depends which country you're talking about, how you look at it, and what you think they're competing for. In some countries outside the US, for example, Facebook *is* the Internet because of its Free Basics program.

In the weeks since: Google still intends to buy Fitbit, which for $2.1 billion would give it access to a huge pile of health-data-that's-not-categorized-as-health data; both the US and the EU are investigating.

In California, an appeals court has found that Amazon can be liable for defective products sold by third-party sellers.

Meanwhile, Apple, which this week became the first company in history to hit a $2 trillion market cap, deleted Epic's hugely popular game Fortnite from the App Store because its latest version breaks Apple's rules by allowing players to bypass the Apple payment system (and 30% commission) to pay Epic directly for in-game purchases. In response, Epic has filed suit - and, writes Matt Stoller, if a company with Epic's clout can't force Apple to negotiate terms, who can? Stoller describes the Apple-Epic suit as certainly about money but even more about "the right way to run an economy". Stoller goes on to find this thread running through other current disputes, and believes this kind of debate leads to real change.

At Stratechery, Ben Thompson argues that the Democrats didn't prove their case. Most interesting of the responses to the hearings, though, is an essay by Benedict Evans, who argues that breaking up the platforms will achieve nothing. Instead, he says, citing relevant efforts by the EU and UK competition authorities, better to dig into how the platforms operate and write rules to limit the potential for abuse. I like this idea, in part because it is genuinely difficult to see how break-ups would work. However, the key issue is enforcement: the EU made not merging databases a condition of Facebook's acquisition of WhatsApp - and three years later Facebook decided to do it anyway. The resulting fine of €110 million was less than 1% of the $19 billion purchase price.

In 1998, when the Evil Borg of Tech was Microsoft, it, too, was the subject of antitrust actions. Echoing the 1984 breakup of AT&T, people speculated about creating "Baby Bills", either by splitting the company between operating systems and productivity software or by splitting it into clones and letting them compete with each other. Instead, in 2004 the EU ordered Microsoft to unbundle its media player and, in 2009, Internet Explorer, to avoid new fines. The company changed, but so did the world around it: the web, online services, free software, smartphones, and social media all made Microsoft less significant. Since 2010, the landscape has changed again. As the legal scholar Lina Khan wrote in 2017, two guys in a garage can no longer knock off the current crop by creating the next big new technology.

Today's expanding hybrid cyber-physical systems will entrench choices none of us made into infrastructure none of us can avoid. In 2017, for example, San Diego began installing "smart" streetlights intended to do all sorts of good things: drive down energy costs, monitor air pollution, point out empty parking spaces, and so on. The city also thought it might derive some extra income from allowing third parties to run apps on its streetlight network. Instead, as Tekla S. Perry reported at IEEE Spectrum in January, to date the system's sole use has been to provide video footage to law enforcement, which has taken advantage to solve serious crimes but also to investigate vandalism and illegal dumping.

In the UK, private developers and police have been rolling out automated facial recognition without notifying the public; this week, in a case brought by Liberty, the UK Court of Appeal ruled that its use breaches privacy rights and data protection and equality laws. This morning, I see that, undeterred, Lincolnshire Police will trial a facial recognition system that is supposed to be able to detect people's moods.

The issue of monopoly power is important. But even if we find a way to ensure fair competition we won't have solved a bigger problem that is taking shape: individuals increasingly have no choice about whether to participate in the world these companies are building. For decades we have had no choice about being credit-scored. Three years ago, despite the fatuous comments of senior politicians, it was obvious that the only people who can opt out of using the Internet are those who are economically inactive or highly privileged; last year journalist Kashmir Hill proved the difficulty of doing without GAFA. The pandemic response is making opting out either antisocial, a health risk, or both. And increasingly, going out of your house means being captured on video and analyzed whether you like it or not. No amount of controlling individual technology companies will solve this loss of agency. That is up to us.

Illustrations: Orwell's house at 22 Portobello Road, London, complete with CCTV camera.


August 14, 2020

Revenge of the browser wars

This week, the Mozilla Foundation announced major changes. As is the new norm these days, Mozilla is responding to a problem that existed BCV (before coronavirus) but has been exposed, accelerated, and compounded by the pandemic. But the response sounds grim: approximately a quarter of the workforce to be laid off and a warning that the company needs to find new business models. Just a couple of numbers explain the backdrop: according to Statcounter, Firefox's second-position share of desktop/laptop browser usage has dropped to 8.61%, behind Chrome at 69.55%. On mobile and tablets, where the iPhone's Safari takes a large bite out of Chrome's share, Firefox doesn't even crack 1%. You might try to trumpify those percentages by suggesting it's a smaller share but a larger user population, but unfortunately no; at CNet, Stephen Shankland reports that usage is shrinking in raw numbers, too, down to 210 million monthly users from 300 million in 2017.

Yes, I am one of those users.

In its 2018 annual report and 2018 financial statement (PDF), Mozilla explains that most of its annual income - $430 million - comes from royalty deals with search engines, which pay Firefox to make them the default (users can change this at will). The default varies across countries: Baidu (China), Yandex (Russia, Belarus, Kazakhstan, Turkey, and Ukraine), and Google everywhere else, including the US and Canada. It derives a relatively small amount - $20 million or so in total - of additional income from subscriptions, advertising, donations, and dividends and interest on the investments where it's parked its capital.

The pandemic has of course messed up everyone's financial projections. In the end, though, the underlying problem is that long-term drop in users; fewer users must eventually generate fewer search queries on which to collect royalties. Presumably this lies behind Mozilla's acknowledgment that it needs to find new ways to support itself - which, the announcement also makes clear, it has so far struggled to do.

The problem for the rest of us is that the Internet needs Firefox - or if not Firefox itself, another open source browser with sufficient clout to keep the commercial browsers and their owners honest. At the moment, Mozilla and Firefox are the only ones in a position to lead that effort, and it's hard to imagine a viable replacement.

As so often, the roots of the present situation go back to 1995, when - no Google then, and Apple in its pre-Jobs-return state - the browser kings were Microsoft's Internet Explorer and Netscape Navigator, both seeking world wide web domination. Netscape's 1995 IPO is widely considered the kickoff for the dot-com boom. By 1999, Microsoft was winning, and then-high-flying AOL was buying Netscape. It was all too easy to imagine both building out proprietary protocols that only their browsers could read, dividing the net up into incompatible walled gardens. The first versions of what became Firefox were, literally, built out of a fork of Netscape whose source code was released before the AOL acquisition.

The players have changed and the commercial web has grown explosively, but the danger of slowly turning the web into a proprietary system has not. Statcounter has Google (Chrome) and Apple (Safari) as the two most significant players, followed by Samsung Internet (on mobile) and Microsoft's Edge (on desktop), with a long tail of others including Opera (which pioneered many now-common features), Vivaldi (built by the Opera team after Telenor sold it to a Chinese consortium), and Brave, which markets itself as a privacy browser. All these browsers have their devoted fans, but they are only viable because websites observe open standards. If Mozilla can't find a way to reverse Firefox's user base shrinkage, web access will be dominated by two of the giant companies that two weeks ago were called in to the US Congress to answer questions about monopoly power. Browsers are a chokepoint they can control. I'd love to say the hearings might have given them pause, but two weeks later Google is still buying Fitbit, Apple and Google have removed Fortnite from their app stores for violating their in-app payment rules, and Facebook has launched TikTok clone Instagram Reels.

There is, at the moment, no suggestion that either Google or Apple wants to abuse its dominance in browser usage. If they're smart, they'll remember the many benefits of the standards-based approach that built the web. They may also remember that in 2009 the threat of EU fines led Microsoft to unbundle its Internet Explorer browser from Windows.

The difficulty of finding a viable business model for a piece of software that millions of people use is one of the hidden costs of the Internet as we know it. No one has ever been able to persuade large numbers of users to pay for a web browser; Opera tried in the late 1990s, and wound up switching first to advertising sponsorship and then, like Mozilla, to a contract with Google.

Today, Catalin Cimpanu reports at ZDNet that Google and Mozilla will extend their deal until 2023, providing Mozilla with perhaps $400 million to $500 million a year. Assuming it goes through as planned, it's a reprieve - but it's not a solution - as Mozilla, fortunately, seems to know.

Illustrations: Netscape 1.0, in 1994 (via Wikimedia).


August 7, 2020

The big four

"Companies aren't bad just because they're big," Mark Zuckerberg told the US Congress ten days ago, though he failed to suggest aspirational counterexamples. Of course, the point isn't *that* a company is big - but *how*.

July 28, 2020 saw Zuckerberg, Jeff Bezos, Tim Cook, and Sundar Pichai lined up to face the House Judiciary committee in a hearing on Online Platforms and Market Power. As so often these days - and as Julia Angwin writes at The Markup - Democrats and Republicans (excepting Kelly Armstrong, R-ND) conducted different hearings. Both were essentially hostile. Democrats plus Armstrong asked investigative journalism-style questions about company practices, citing detailed historical examples: unfair competition, abuse of a dominant position (Apple, Amazon), editorial manipulation (Facebook, Google), past acquisitions, third-party cookies (Google), targeted advertising, content moderation, hate speech, Russian interference in the 2016 election (Facebook), smart speakers as home hubs (Amazon), counterfeit products (Amazon), and so on for five and a half hours. Each of the four, but particularly Cook, spent a fair bit of time waiting through other people's questions. The overall response: this stuff is *hard*; we're doing a *lot*; we have lots of competition - while their questioners fretted at the loss of every second of their limited time. It must be years since any of these guys has been so frequently and peremptorily interrupted mid-waffle: "Yes or no?"

The Markup kept a tally of "I'll get back to you on that": Bezos edged out Zuckerberg by a hair. (Not entirely fair, since Cook had many fewer chances to play.)

At one point, Pramila Jayapal (D-WA) explained to Bezos that the point of the committee's work was to ensure that more companies like these four could be created. (Maybe start by blocking Google from buying Fitbit.) She was particularly impressive asking about multi-sided markets and revenue sharing, and also pushed Zuckerberg to quickly implement the recommendations in Facebook's recent civil rights audit (PDF). But will her desired focus be reflected in the final report, or will it get derailed by arguments over political bias?

Aggrieved Republicans pushed hard on their claim that social media stifles conservative voices, perhaps not achieving the effect they hoped. Jim Sensenbrenner (R-WI) asked Zuckerberg why Donald Trump Jr.'s account was suspended (for sharing a bizarre video full of misinformation about the coronavirus). Zuckerberg had to tell him that was Twitter, although Facebook did remove that same video. Greg Steube (R-FL) demanded of Pichai why Google sorted his campaign emails into his parents' spam folder: "This appears to only be happening to conservative Republicans." (The Markup has found this is non-partisan sorting of "marketing" email, and Val Demings (D-FL) noted it happens to her.) Steube also claimed that soon after the hearing was agreed, conservative websites had jumped back up out of obscurity in Google's search results. Why was that? While Pichai struggled to answer, someone quipped on Twitter, "This is everyone trying to explain the Internet to their parents."

Jim Jordan (R-OH), whose career aspiration is apparently Court Jester, opened with: "Big Tech is out to get conservatives - that's not a suspicion, not a hunch, it's a fact." He reeled off a list of incidents and dates: the removal of right-wing news website Breitbart, donations from Google employees to then-presidential-candidate Hillary Clinton in 2016, and Twitter removing posts from Donald Trump calling for violence against protesters. He claimed he'd been "shadowbanned" when Twitter (still not present) demoted his tweets to make them less visible, adding that he had tried to call Twitter CEO Jack Dorsey as "our" witness. Was Google going to tailor its features to help Joe Biden in the upcoming election? "It's against our core values," said Pichai. Jordan pounced: "But you did it in 2016." He had emails.

Matt Gaetz (R-FL) also seemed offended that - as an American company - Google had withdrawn from the Department of Defense's Project Maven, and asked Pichai to promise the company would not withdraw from cooperating with law enforcement, accusing the company of "bigoted, anti-police policies". Gaetz was also disturbed by Google's technical center and collaboration on AI in China - a complaint seemingly pioneered by Peter Thiel.

Steube also found time to take a swipe at the EU: "It's no secret that Europe seems to have an agenda of attacking large, successful US tech companies, yet Europe's approach to regulation in general, and antitrust in particular, seems to have been much less successful than America's approach. America is a remarkable nursery for market innovation and entrepreneurship in pursuit of the American Dream." The irony of saying this while investigating the resulting monopoly power appeared lost on him.

In their opening statements, all four CEOs had embraced only-in-America. At last week's gikii, Chris Marsden countered with a list of technology inventions by Europeans: the Linux kernel (Finland); the Opera browser (Norway); Skype (Estonia); the chip designer ARM (UK); the Raspberry Pi (UK); the VLC media player (France); and an obscure technology called the World Wide Web (UK, working in Switzerland). "Social good," Marsden concluded, "rather than unicorns". Some of those - Skype, ARM, Opera - were certainly sold off to other parts of the world. But all of the big four have benefited from at least one of them.


Illustrations: Jeff Bezos, Mark Zuckerberg, Sundar Pichai, and Tim Cook are sworn in via Webex.


July 10, 2020

Trading digital rights

Until this week I hadn't fully appreciated the number of ways Brexiting UK is trapped between the conflicting demands of major international powers of the size it imagines itself still to be. On the question of whether to allow Huawei to participate in building the UK's 5G network, the UK is caught between the US and China. On conditions of digital trade - especially data protection - the UK is trapped between the US and the EU, with Northern Ireland most likely to feel the effects. This was spelled out on Tuesday in a panel on digital trade and trade agreements convened by the Open Rights Group.

ORG has been tracking the US-UK trade negotiations and their effect on the UK's continued data protection adequacy under the General Data Protection Regulation. As discussed here before, the basic problem with respect to privacy is that outside the state of California, the US has only sector-specific (mainly health, credit scoring, and video rentals) privacy laws, while the EU regards privacy as a fundamental human right, and for 25 years data protection has been an essential part of implementing that right.

In 2018, when the General Data Protection Regulation came into force, it automatically became part of British law. On exiting the EU at the end of January, the UK replaced it with equivalent national legislation. Four months ago, Boris Johnson said the UK intends to develop its own policies. This is risky; according to Oliver Patel and Nathan Lea at UCL, 75% of the UK's data flows are with the EU (PDF). Deviation from GDPR will mean the UK needs the EU to issue an adequacy ruling confirming that the UK's data protection framework is compatible. The UK's data retention and surveillance policies may make obtaining that adequacy decision difficult; as Anna Fielder pointed out in Tuesday's discussion, this didn't arise before because national security measures are the prerogative of EU member states. The alternatives - standard contractual clauses and binding corporate rules - are more expensive to operate, are limited to the organization that uses them, and are being challenged in the European Court of Justice.

So the UK faces a quandary: does it remain compatible with the EU, or choose the dangerous path of deviation in order to please its new best friend, the US? The US, says Public Citizen's Burcu Kilic, wants unimpeded data flows and prohibitions on requirements for data localization and disclosure of source code and algorithms (as proposals for regulating AI might mandate).

It is easy to see these issues purely in terms of national alliances. The bigger issue for Kilic - and for others such as the Transatlantic Consumer Dialogue - is the inclusion of these issues in trade agreements at all, a problem we've seen before with intellectual property provisions. Even when the negotiations aren't secret, which they generally are, international agreements are relatively inflexible instruments, changeable only via the kinds of international processes that created them. The result is to severely curtail the ability of national governments and legislatures to make changes - and the ability of civil society to participate. In the past, most notably with respect to intellectual property rights, corporate interests' habit of shopping their desired policies around from country to country until one bit, and then using that leverage to push the others to "harmonize", has been called "policy laundering". This is a new and updated version, in which you bypass all that pesky, time-consuming democracy nonsense: getting your desired policies into a trade agreement gets you two - or more - countries for the price of one.

In the discussion, Javier Ruiz called it "forum shifting" and noted that the latest example is intermediary liability, which is included in the US-Mexico-Canada agreement that replaced NAFTA. This is happening just as countries - including the US - are responding to longstanding problems of abuse on online platforms by considering how to regulate them: in the US, the debate is whether and how to amend S230 of the Communications Decency Act, which offers a shield against intermediary liability; in the UK, it's the online harms bill and the age-appropriate design code.

Every country matters in this game. Kilic noted that the US is also in the process of negotiating a trade deal with Kenya that will also include digital trade and intellectual property - small in and of itself, but potentially the model for other African deals - and for whatever deal Kenya eventually makes with the UK.

Kilic traces the current plans to the Trans-Pacific Partnership, which included the US during the Obama administration and which attracted public anger over provisions for investor-state dispute settlement. On assuming the presidency, Trump withdrew, leaving the other countries to recreate it as the Comprehensive and Progressive Agreement for Trans-Pacific Partnership, which was formally signed in March 2018. There has been some discussion of the idea that a newly independent Britain could join it, but it's complicated. What the US wanted in TPP, Kilic said, offers a clear guide to what it wants in trade agreements with the UK and everywhere else - and the more countries enter into these agreements, the harder it becomes to protect digital rights. "In trade world, trade always comes first."


Illustrations: Medieval trade routes (from The Story of Mankind, 1921).


June 12, 2020

Getting out the vote

"If voting changed anything, they'd abolish it," the maverick British left-wing politician Ken Livingstone wrote in 1987.

In 2020, the strategy appears to be to lecture people about how they should vote if they want to change things, and then make sure they can't. After this week's denial-of-service attack on Georgia voters and widespread documentation of voter suppression tactics, there should be no more arguments about whether voter suppression is a problem.

Until a 2008 Computers, Freedom, and Privacy tutorial on "e-deceptive campaign practices", organized by Lillie Coney, I had no idea how much effort was put into disenfranchising eligible voters. The tutorial focused on the many ways new technology - the pre-social media Internet - was being adapted to the very old work of suppressing the votes of those who might hold undesired opinions. The images from the 2018 mid-term elections and from this week in Georgia tell their own story.

In a presentation last week, Rebecca Mercuri noted that there are two types of fraud surrounding elections. Voter fraud - individuals voting when they are not entitled to, the stuff proponents of voter ID requirements get upset about - is vanishingly rare. Election fraud, where one group or another tries to game the election in its favor, is and has been common throughout history, and there are many techniques. Election fraud is the big thing to keep your eye on - and electronic voting is a perfect vector for it. Paper ballots can be reexamined, recounted, and can't easily be altered without trace. Yes, they can be stolen or spoiled, but it's hard to do at scale because the boxes of ballots are big, heavy, and not easily vanished. Scale is, however, what computers were designed for, and just about every computer security expert agrees that computers and general elections do not mix. Even in a small, digitally literate country like Estonia, a study found enormous vulnerabilities.

Mercuri, along with longtime security expert Peter Neumann, was offering an update on the technical side of voting. Mercuri is a longstanding expert in this area; in 2000, she defended her PhD thesis, the first serious study of the security problems of electronic voting, 11 days before Bush v. Gore burst into the headlines. TL;DR: electronic voting can't be secured.

In the 20 years since, the vast preponderance of computer security experts have continued to agree with her. Naturally, people keep trying to find wiggle room, as if some new technology will change the math; besides election systems vendors, there are well-meaning folks with worthwhile goals, such as improving access for visually impaired people, ensuring access for widely scattered memberships like unions, or motivating younger people to vote.

Even apart from voter suppression tactics, US election systems continue to be a fragmented mess. People keep finding new ways to hack into them; in 2017, Bloomberg reported that Russia had hacked into voting systems in 39 US states before the 2016 presidential election and targeted election systems in all 50. Defcon has added a voting machine hacking village, where, in 2018, an 11-year-old hacked into a replica of the Florida state voting website in under ten minutes. In 2019, Defcon hackers were able to buy a bunch of voting machines and election systems on eBay - and cracked every single one for the Washington Post. The only sensible response: use paper.

Mercuri has long advocated voter-verified paper ballots (including absentee and mail-in ballots) as the official votes that can be recounted or audited as needed. The complexity and size of US elections, however, mean counting will remain electronic.

In Congressional testimony, Matt Blaze, a professor at Georgetown University, has made three recommendations (PDF): immediately dump all remaining paperless direct-recording electronic voting machines; provide resources, infrastructure, and training to local and state election officials to help them defend their systems against attacks; and conduct risk-limiting audits after every election to detect software failures and attacks. RLAs, proposed in a 2012 paper by Mark Lindeman and Philip B. Stark (PDF), involve counting a statistically significant random sample of ballots and checking the results against the machine count. The proposal has a fair amount of support, including from the Electronic Frontier Foundation.

Mercuri has doubts; she argues that election administrators don't understand the math that determines how many ballots to count in these audits, and thinks the method will fail to catch "dispersed fraud" - that is, a few votes changed across many precincts rather than large clumps of votes changed in a few places. She is undeniably right when she says that RLAs are intended to avoid counting the full set of ballots; proponents see that as a *good* thing - faster, cheaper, and just as accurate. As a result, some states - Michigan, Colorado (PDF) - are beginning to embrace them. My guess is there will be many mistakes in implementation, and resulting legal contests, until everyone either finds a standard for best practice or decides RLAs are too complicated to make work.
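A toy simulation makes both the appeal of RLAs and Mercuri's worry concrete. This is only the outer shape of an audit - all the numbers are hypothetical, and real audits use the Lindeman-Stark statistical machinery to derive sample sizes and stopping rules from the reported margin and a chosen risk limit:

```python
import random

def audit(paper, machine, sample_size, max_discrepancies=0):
    """Hand-check a random sample of paper ballots against machine records.

    Returns True if the machine count passes, False if the audit
    should escalate (ultimately toward a full hand count).
    """
    sample = random.sample(range(len(paper)), sample_size)
    discrepancies = sum(paper[i] != machine[i] for i in sample)
    return discrepancies <= max_discrepancies

# 10,000 ballots; "dispersed fraud" flips 50 of them, spread at random.
paper = ["A"] * 6000 + ["B"] * 4000
machine = list(paper)
for i in random.sample(range(len(machine)), 50):
    machine[i] = "B" if machine[i] == "A" else "A"

# Sampling 200 ballots is fast and cheap - but with only 0.5% of records
# altered, the sample contains no flipped ballot roughly a third of the time.
trials = 1000
escalated = sum(not audit(paper, machine, 200) for _ in range(trials))
print(f"audit escalated in {escalated}/{trials} trials")  # roughly 630/1000
```

The sketch shows why dispersed changes are harder for a small sample to see than large clumps would be; a real RLA answers this by tying the sample size to the margin and escalating automatically, which is exactly the math Mercuri worries administrators won't get right.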

Even more important, however, is whether RLAs can successfully underpin public confidence in election integrity. Without that, we've got nothing.

Illustrations: Hanging chad, during the 2000 Bush versus Gore vote.


May 29, 2020

Tweeted

Anyone who's ever run an online forum has at some point grappled with a prolific poster who deliberately spreads division, takes over every thread of conversation, and aims for outraged attention. When your forum is a few hundred people, one alcohol-soaked obsessive bent on suggesting that anyone arguing with him should have their shoes filled with cement before being dropped into the nearest river is enormously disruptive, but the decision you make about whether to ban, admonish, or delete their postings matters only to you and your forum members. When you are a public company, your forum is several hundred million people, and the poster is a world leader...oy.

Some US Democrats have been calling Donald Trump's outrage this week over having two tweets labeled with a fact-check an attempt to distract us all from the terrible death toll of the pandemic under his watch. While this may be true, it's also true that the tweets Trump is so fiercely defending form part of a sustained effort to spread misinformation that effectively acts as voter suppression for the upcoming November election. In the 12 hours since I wrote this column, Trump has signed an Executive Order to "prevent online censorship", and Twitter has hidden, for "glorifying violence", Trump tweets suggesting shooting protesters in Minneapolis. It's clear this situation will escalate over the coming week. Twitter has a difficult balance to maintain: it's important not to hide the US president's thoughts from the public, but it's equally important to hold the US president to the same standards that apply to everyone else. Of course he feels unfairly picked on.

Rewind to Tuesday. Twitter applied its recently-updated rules regarding election integrity by marking two of Donald Trump's tweets. The tweets claimed that conducting the November presidential election via postal ballots would inevitably mean electoral fraud. Trump, who moved his legal residence to Florida last year, voted by mail in the last election. So did I. Twitter added a small, blue line to the bottom of each tweet: "! Get the facts about mail-in ballots". The link leads to numerous articles debunking Trump's claim. At OneZero, Will Oremus explains Twitter's decision making process. By Wednesday, Trump was threatening to "shut them down" and sign an Executive Order on Thursday.

By Thursday morning, a leaked draft of the proposed executive order had surfaced, and Daphne Keller had color-coded it to show which bits matter. In a fact-check for Vox of what power Trump actually has, Shirin Ghaffary quotes a tweet from Laurence Tribe, who calls Trump's threat "legally illiterate". Unlike Facebook, Twitter doesn't accept political ads that Trump can threaten to withdraw, and unlike Facebook and Google, Twitter is too small for an antitrust action. Plus, Trump is addicted to it. At the Washington Post, Tribe adds that Trump himself *is* violating the First Amendment by continuing to block people who criticize his views, a direct violation of a 2019 court order.

What Trump *can* do - and what he appears to intend - is push the FTC and Congress to tinker with Section 230 of the Communications Decency Act (1996), which protects online platforms from liability for third-party postings spreading lies and defamation. S230 is widely credited with having helped create the giant Internet businesses we have today; without liability protection, it's generally believed that everything from web comment boards to big social media platforms would become non-viable.

On Twitter, US Senator Ron Wyden (D-OR), one of S230's authors, explains what the law does and does not do. At the New York Times, Peter Baker and Daisuke Wakabayashi argue, I think correctly, that the person a move to weaken S230 would hurt most is...Trump himself. Last month, the Washington Post put the count of Trump's "false or misleading claims" while in office at 18,000 - and the rate has grown over time. Probably most of them have been published on Twitter.

As the lawyer Carrie A. Goldberg points out on Twitter, there are two very different sets of issues surrounding S230. The victims she represents cannot sue the platforms where they met the serial rapists who preyed on them, or the platforms that continue to tolerate the revenge porn their exes have posted. Compare that very real damage to the victimhood conservatives are claiming: that the social media platforms are biased against them and disproportionately censor their posts. Goldberg wants access to justice for the victims she represents, who are genuinely harmed, and warns against altering S230 for purposes such as "to protect the right to spread misinformation, conspiracy theory, and misinformation".

However, while Goldberg's focus on her own clients is understandable, Trump's desire to tweet unimpeded about mail-in ballots or shooting protesters is not trivial. We are going to need to separate the issue of how and whether S230 should be updated from Trump's personal behavior and his clearly escalating war with the social medium that helped raise him from joke to viable presidential candidate. The S230 question and how it's handled in Congress is important. Calling out Trump when he flouts clearly stated rules is important. Trump's attempt to wield his power for a personal grudge is important. Trump versus Twitter, which unfortunately is much easier to write about, is a sideshow.


Illustrations: Drunk parrot in a Putney garden (by Simon Bisson; used by permission).


March 20, 2020

The beginning of the world as we don't know it

Oddly, the most immediately frightening message of my week was the one from the World Future Society, subject line "URGENT MESSAGE - NOT A DRILL". The text began, "The World Future Society over its 60 years has been preparing for a moment of crisis like this..."

The message caused immediate flashbacks to every post-disaster TV show and movie, from The Leftovers (in which 2% of the world's population mysteriously vanishes) to The Last Man on Earth (in which everyone who isn't in the main cast has died of a virus). In my case, it also unfortunately recalled the very detailed scenarios posted in the late 1990s to the comp.software.year-2000 Usenet newsgroup, in which survivalists were certain that the Millennium Bug would cause the collapse of society. In one scenario I recall, that collapse was supposed to begin with the banks failing, pass through food riots and cities burning, and end with four-fifths of the world's population dead: the end of the world as we know it (TEOTWAWKI). So what I "heard" in the World Future Society's tone was that the "preppers", who built bunkers and stored sacks of beans, rice, dried meat, and guns, were finally right and this was their chance to prove it.

Naturally, they meant no such thing. What they *did* mean was that futurists have long thought about the impact of various types of existential risks, and that what they want is for as many people as possible to join their effort to 1) protect local government and health authorities, 2) "co-create back-up plans for advanced collaboration in case of societal collapse", and 3) collaborate on possible better futures post-pandemic. Number two still brings those flashbacks, but I like the first goal very much, and the third is on many people's minds. If you want to see more, it's here.

It was one of the notable aspects of the early Internet that everyone looked at what appeared to be a green field for development and sought to fashion it in their own desired image. Some people got what they wanted: China, for example, defying Western pundits who claimed it was impossible, successfully built a controlled national intranet. Facebook, though it came along much later, is - through zero-rating deals with local telcos for its Free Basics - basically all the Internet people know in countries like Ghana and the Philippines, a phenomenon Global Voices calls "digital colonialism". Something like that mine-to-shape thinking is visible here.

I don't think WFS meant to be scary; what they were saying is in fact what a lot of others are saying, which is that when we start to rebuild after the crisis we have a chance - and a need - to do things differently. At Wired, epidemiologist Larry Brilliant tells Steven Levy he hopes the crisis will "cause us to reexamine what has caused the fractional division we have in [the US]".

At Singularity University's virtual summit on COVID-19 this week, similar optimism was on display (some of it probably unrealistic, like James Ehrlich's land-intensive sustainable villages). More usefully, Jamie Metzl compared the present moment to 1941, when US president Franklin Delano Roosevelt began to imagine, in the Atlantic Charter, how the world might be reshaped once the war ended. Today, Metzl said, "We are the beneficiaries of that process." Therefore, like FDR, we should start now to think about how we want to shape our upcoming, different geopolitical and technological future. Like net.wars last week and John Naughton at the Guardian, Metzl is worried that the emergency powers we grant today will be hard to dislodge later. Opportunism is open to all.

I would guess that the people who think it's better to bail out businesses than to support struggling people fear the same permanence for the emergency support measures being passed in multiple countries. One of the most surreal aspects of a surreal time is that in the space of a few weeks, actions that a month ago were considered too radical to live are suddenly happening: universal basic income, grounding something like 80% of aviation, even support for *some* limited free health care and paid sick leave in the US.

The crisis is also exposing a profound shift in national capabilities. China could build hospitals in ten days; the US, which used to be able to do that sort of thing, is instead the object of charity from Chinese billionaire Alibaba founder Jack Ma, who sent over half a million test kits and 1 million face masks.

Meanwhile, all of us, with a few billionaire exceptions, are turning to the governments we held in so little regard a few months ago to lead, provide support, and solve problems. Libertarians who want to tear governments down and replace all their functions with free-market interests are exposed as a luxury none of us can afford. Not that we ever could; read Paulina Borsook's 1996 Mother Jones article Cyberselfish if you doubt this.

"It will change almost everything going forward," New York State governor Andrew Cuomo said of the current crisis yesterday. Cuomo, who is emerging as one of the best leaders the US has in an emergency, and his counterparts are undoubtedly too busy trying to manage the present to plan what that future might be like. That is up to us to think about while we're sequestered in our homes.


Illustrations: A local magnolia tree, because it *is* spring.


March 12, 2020

Privacy matters

Sometime last week, Laurie Garrett, the Pulitzer Prize-winning author of The Coming Plague, proposed a thought experiment to her interviewer on MSNBC. She had been describing the lockdown procedures in place in China, and mulling how much more limited the actions available to the US to mitigate the spread are. Imagine, she said (more or less), the police out on the interstate pulling over a truck driver "with his gun rack", demanding a swab, running a test, and then and there ordering the driver to abandon the truck and putting him in isolation.

Um...even without the gun rack detail...

The 1980s AIDS crisis may have been the first time my generation became aware of the tension between privacy and epidemiology. Understanding what was causing the then-unknown "gay cancer" involved tracing contacts, asking intimate questions, and, once it was better understood, telling patients to contact their former and current sexual partners. At a time when many gay men were still closeted, this often meant painful conversations with wives as well as ex-lovers. (Cue a well-known joke from 1983: "What's the hardest part of having AIDS? Trying to convince your wife you're Haitian.")

The descriptions emerging of how China is working to contain the virus indicate a level of surveillance that - for now - is still unthinkable in the West. In a Hangzhou project, for example, citizens are required to install the Alipay Health Code app, which assigns them a traffic-light code based on their recent contacts and movements - which in turn determines which public and private spaces they're allowed to enter. Paul Mozur, who co-wrote that piece for the New York Times with Raymond Zhong and Aaron Krolik, has posted on Twitter video clips of how this works on the ground, while Ryutaro Uchiyama marvels at Singapore's command and open publication of highly detailed data. This is a level of control that severely frightened people, even in the West, might accept temporarily or in specific circumstances - we do, after all, accept being data-scanned and physically scanned as part of the price of flying. I have no difficulty imagining we might accept barriers and screening before entering nursing homes or hospital wards, but under what conditions would the citizens of democratic societies accept being stopped randomly on the street and having our phones scanned for location and personal contact histories?

The Chinese system has automated just such a system. Quite reasonably, at the Guardian, Lily Kuo wonders if it will be made permanent, essentially hijacking this virus outbreak to implement a much deeper system of social control than existed before. Along with all the other risks of this outbreak - deaths, widespread illness, overwhelmed hospitals and medical staff, widespread economic damage, and the mental and emotional stress of isolation, loss, and lockdown - there is a genuine risk that "the new normal" that emerges post-crisis will have vastly more surveillance embedded in it.
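It takes remarkably little code to automate that kind of gate. The sketch below is entirely hypothetical - invented rules and thresholds, since Alipay's actual algorithm is not public - but it shows the shape of the mechanism the Times piece describes:

```python
from dataclasses import dataclass

# Hypothetical rules and thresholds, for illustration only; the real
# Alipay Health Code algorithm is opaque. The point is the mechanism's
# shape: a status derived from surveillance data, checked at every door.

@dataclass
class Citizen:
    contact_with_confirmed_case: bool
    visited_outbreak_area: bool
    days_since_last_flag: int

def health_code(c: Citizen) -> str:
    if c.contact_with_confirmed_case:
        return "red"     # quarantine; barred from public spaces
    if c.visited_outbreak_area and c.days_since_last_flag < 14:
        return "yellow"  # restricted access
    return "green"       # checkpoints wave you through

def may_enter(space_tolerates: str, code: str) -> bool:
    # A checkpoint scan reduces to a single comparison.
    rank = {"green": 0, "yellow": 1, "red": 2}
    return rank[code] <= rank[space_tolerates]

print(health_code(Citizen(False, True, 5)))  # yellow
print(may_enter("green", "yellow"))          # False - turned away at the door
```

Swap one predicate and the same checkpoint enforces something else entirely - which is the permanence worry in miniature.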

Not everyone may think this is bad. On Twitter, Stewart Baker, whose long-held opposition to "warrant-proof" encryption we noted last week, suggested it was time for him to revive his "privacy kills" series. What set him off was a New York Times piece about a Washington-based lab that was not allowed to test the swabs it had collected from flu patients for coronavirus, on the basis that the patients would have to give consent for the change of use. Yes, the constraint sounds stupid and, given the situation, was clearly dangerous. But it would be more reasonable to say that either *this* interpretation or *this* set of rules needs to be changed than to conclude unilaterally that "privacy is bad". Making an exemption for epidemics and public health emergencies is a pretty easy fix that doesn't require up-ending all patient confidentiality on a permanent basis. The populations of even the most democratic, individualistic countries are capable of understanding the temporary need for extreme measures in a crisis. Even the famously national ID-shy UK accepted identity papers during wartime (and then rejected them after the war ended (PDF)).

The irony is that lack of privacy kills, too. At The Atlantic, Zeynep Tufekci argues that extreme surveillance and suppression of freedom of expression paradoxically result in what she calls "authoritarian blindness": a system designed to suppress information can't find out what's really going on. At The Bulwark, Robert Tracinski applies Tufekci's analysis to Donald Trump's habit of labeling anything he doesn't like "fake news" and blaming any events he doesn't like on the "deep state", and concludes that this, too, engenders widespread and dangerous distrust. It's just as hard for a government to know what's really happening when the leader doesn't want to know as when the leader doesn't want anyone *else* to know.

At this point in most countries it's early stages, and as both the virus and fear of it spread, people will be willing to consent to any measure that they believe will keep them and their loved ones safe. But, as Access Now agrees, there will come a day when this is past and we begin again to think about other issues. When that day comes, it will be important to remember that privacy is one of the tools needed to protect public health.


Illustrations: Alipay Health Code in action (press photo).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 6, 2020

Transitive rage

"Something has changed," a privacy campaigner friend commented last fall, observing that it had become noticeably harder to get politicians to understand and accept the reasons why strong encryption is a necessary technology to protect privacy, security, and, more generally, freedom. This particular fight had been going on since the 1990s, but some political balance had shifted. Mathematical reality of course remains the same. Except in Australia.

At the end of January, Bloomberg published a leaked draft of the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act (EARN IT), backed by US Senators Lindsey Graham (R-SC) and Richard Blumenthal (D-CT). In its analysis, the Center for Democracy and Technology finds the bill authorizes a new government commission, led by the US attorney general, to regulate online speech and, potentially, ban end-to-end encryption. At Lawfare, Stewart Baker, a veteran opponent of strong cryptography, dissents, seeing the bill as combating child exploitation by weakening the legal liability protection afforded by Section 230. Could the attorney general mandate that encryption never qualifies as "best practice"? Yes, even Baker admits, but he still thinks the concerns voiced by CDT and EFF are overblown.

In our real present, our actual attorney general, William Barr, believes "warrant-proof encryption" is dangerous. His office is actively campaigning in favor of exactly the outcome CDT and EFF fear.

Last fall, my friend connected the "change" to recent press coverage of the online spread of child abuse imagery. Several - such as Michael H. Keller and Gabriel J.X. Dance's November story - specifically connected encryption to child exploitation, complaining that Internet companies fail to use existing tools, and that Facebook's plans to encrypt Messenger, "the main source of the imagery", will "vastly limit detection".

What has definitely changed is *how* encryption will be weakened. The 1990s idea was key escrow, a scheme under which individuals using encryption software would deposit copies of their private keys with a trusted third party. After years of opposition, the rise of ecommerce and its concomitant need to secure in-transit financial details eventually led the UK government to drop key escrow before the passage of the Regulation of Investigatory Powers Act (2000), which closed that chapter of the crypto debates. RIPA and its current successor, the Investigatory Powers Act (2016), require individuals to decrypt information or disclose keys to government representatives. There have been three prosecutions.

In 2013, we learned from Edward Snowden's revelations that the security services had not accepted defeat but had gone dark, deliberately weakening standards. The result: the Internet engineering community began the work of hardening the Internet as much as they could.

In those intervening years, though, outside of a few very limited cases - SSL, used to secure web transactions - very few individuals actually used encryption. Email and messaging remained largely open. The hardening exercise Snowden set off eventually included companies like Facebook, which turned on end-to-end encryption for all of WhatsApp in 2016, overnight turning 1 billion people into crypto users and making real the long-ago dream of the crypto nerds of being lost in the noise. If 1 billion people use messaging and only a few hundred use encryption, the encryption itself is a flag that draws attention. If 1 billion people use encrypted messaging, those few hundred are indistinguishable.
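The arithmetic behind being "lost in the noise" is worth making explicit. A toy calculation - the user counts below are illustrative assumptions, not measurements - shows why encryption stops being a useful selector for an eavesdropper once everyone has it:

    # Toy anonymity-set arithmetic: how useful is "uses encryption" as a
    # flag for an eavesdropper? All user counts are illustrative.

    def flagged_fraction(total_users, encrypted_users):
        """Fraction of users an eavesdropper can single out merely
        for sending encrypted traffic."""
        return encrypted_users / total_users

    BILLION = 1_000_000_000

    # Before WhatsApp: a few hundred hobbyists encrypt by hand.
    before = flagged_fraction(BILLION, 500)
    # After WhatsApp turns on end-to-end encryption for everyone:
    after = flagged_fraction(BILLION, BILLION)

    print(f"before: {before:.7%} of users stand out")       # 0.0000500%
    print(f"after:  {after:.0%} - the flag means nothing")  # 100%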

In June 2018, at the 20th birthday of the Foundation for Information Policy Research, Ross Anderson predicted that the battle over encryption would move to device hacking. The reasoning is simple: if they can't read the data in transit because of end-to-end encryption, they will work to access it at the point of consumption, since it will be cleartext at that point. Anderson is likely still to be right - the IPA includes provisions allowing the security services to engage in "bulk equipment interference", which means, less politely, "hacking".

At the same time, however, it seems clear that those governments that are in a position to push back at the technology companies now figure that a backdoor in the few giant services almost everyone uses brings back the good old days when GCHQ could just put in a call to BT. Game the big services, and the weirdos who use Signal and other non-mainstream services will stick out again.

At Stanford's Center for Internet and Society, Riana Pfefferkorn believes the DoJ is opportunistically exploiting the techlash much the way the security services rushed through historically and politically unacceptable surveillance provisions in the first few shocked months after the 9/11 attacks. Pfefferkorn calls it "transitive rage": Congresspeople are already mad at the technology companies for spreading false news, exploiting personal data, and not paying taxes, so encryption is another thing to be mad about - and to legislate against. The IPA and Australia's Assistance and Access Act are suddenly models. Plus, as UN Special Rapporteur David Kaye writes in his book Speech Police: The Global Struggle to Govern the Internet, "Governments see that company power and are jealous of it, as they should be."

Pfefferkorn goes on to point out the inconsistency of allowing transitive rage to dictate banning secure encryption, which protects user privacy, sometimes against the same companies Congress is mad at. We'll let Alec Muffett have the last word, reminding us that tomorrow's children's freedom is also worth protecting.


Illustrations: GCHQ's Bude listening post, at dawn (by wizzlewick at Wikimedia, CC3.0).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


February 14, 2020

Pushy algorithms

One consequence of the last three and a half years of British politics, which saw everything sucked into the Bermuda Triangle of Brexit debates, is that things that appeared to have fallen off the back of the government's agenda are beginning to reemerge like so many sacked government ministers hearing of an impending cabinet reshuffle and hoping for reinstatement.

One such is age verification, which was enshrined in the Digital Economy Act (2017) and last seen being dropped to wait for the online harms bill.

A Westminster Forum seminar on protecting children online, held shortly before the UK's December 2019 general election, reflected that uncertainty. "At one stage it looked as if we were going to lead the world," Paul Herbert lamented before predicting it would be back "sooner or later".

The expectation for this legislation was set last spring, when the government released the Online Harms white paper. The idea was that a duty of care should be imposed on online platforms, effectively defined as any business-owned website that hosts "user-generated content or user interactions, for example through comments, forums, or video sharing". Clearly they meant to target everyone's current scapegoat, the big social media platforms, but "comments" is broad enough to include any ecommerce site that accepts user reviews. A second difficulty is the variety of harms they're concerned about: radicalization, suicide, self-harm, bullying. They can't all have the same solution even if, like one bereaved father, you blame "pushy algorithms".

The consultation exercise closed in July, and this week the government released its response. The main points:

- There will be plentiful safeguards to protect freedom of expression, including distinguishing between illegal content and content that's legal but harmful; the new rules will also require platforms to publish and transparently enforce their own rules, with mechanisms for redress. Child abuse and exploitation and terrorist speech will have the highest priority for removal.

- The regulator of choice will be Ofcom, the agency that already oversees broadcasting and the telecommunications industry. (Previously, enforcing age verification was going to be pushed to the British Board of Film Classification.)

- The government is still considering what liability may be imposed on senior management of businesses that fall under the scope of the law, which it believes is less than 5% of British businesses.

- Companies are expected to use tools to prevent children from accessing age-inappropriate content "and protect them from other harms" - including "age assurance and age verification technologies". The response adds, "This would achieve our objective of protecting children from online pornography, and would also fulfill the aims of the Digital Economy Act."

There are some obvious problems. The privacy aspects of the mechanisms proposed for age verification remain disturbing. The government's 5% estimate of businesses that will be affected is almost certainly a wild underestimate. (Is a Patreon page with comments the responsibility of the person or business that owns it, or of Patreon itself?) At the Guardian, Alex Hern explains the impact on businesses. The nastiest tabloid journalism is not within scope.

On Twitter, technology lawyer Neil Brown identifies four fallacies in the white paper: the "Wild West web"; that privately operated computer systems are public spaces; that those operating public spaces owe their users a duty of care; and that the offline world is safe by default. The bigger issue, as a commenter points out, is that the privately operated computer systems the UK government seeks to regulate are foreign-owned. The paper suggests enforcement could include punishing company executives personally and ordering UK ISPs to block non-compliant sites.

More interesting and much less discussed is the push for "age-appropriate design" as a method of harm reduction. This approach was proposed by Lorna Woods and Will Perrin in January 2019. At the Westminster eForum, Woods explained, "It is looking at the design of the platforms and the services, not necessarily about ensuring you've got the latest generation of AI that can identify nasty comments and take it down."

It's impossible not to sympathize with her argument that the costs of "move fast and break things" are imposed on the rest of society. However, when she started talking about doing risk assessments for nascent products and services, I could only think she's never been close to software developers, who've known for decades that from the instant software goes out into the hands of users they will use it in ways no one ever imagined. So it's hard to see how it will work, though last year the ICO proposed a code of practice.

The online harms bill also has to be seen in the context of all the rest of the monitoring that is being directed at children in the name of keeping them - and the rest of us - safe. DefendDigital.me has done extensive work to highlight the impact of such programs as Prevent, which requires schools and libraries to monitor children's use of the Internet to watch for signs of radicalization, and the more than 20 databases that collect details of every aspect of children's educational lives. Last month, one of these - the Learning Records Service - was caught granting betting companies access to personal data about 28 million children. DefendDigital.me has called for an Educational Rights Act. This idea could be usefully expanded to include children's online rights more broadly.


Illustrations: Time magazine's 1995 "Cyberporn" cover, which marked the first children-Internet panic.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 24, 2020

The inevitability narrative

"We could create a new blueprint," Woody Hartzog said in a rare moment of hope on Wednesday at this year's Computers, Privacy, and Data Protection, in a panel on facial recognition. He went on to stress the need to move beyond the privacy model of the last two decades: get consent, roll out technology. Not necessarily in that order.

A few minutes earlier, he had said, "I think facial recognition is the most dangerous surveillance technology ever invented - so attractive to governments and industry to deploy in many ways and so ripe for abuse, and the mechanisms we have so weak to confront the harms it poses that the only way to mitigate the harms is to ban it."

This week, a leaked draft white paper revealed that the EU is considering, as one of five options, banning the use of facial recognition in public places. In general, the EU has been pouring money into AI research, largely in pursuit of economic opportunity: if the EU doesn't develop its own AI technology, the argument goes, Europe will have to buy it from China or the United States. Who wants to be sandwiched between those two?

This level of investment is not available to most of the world's countries, as Julia Powles elsewhere pointed out with respect to AI more generally. Her country, Australia, is destined to be a "technology importer and data exporter", no matter how the three-pronged race comes out. "The promises of AI are unproven, and the risks are clear," she said. "The real reason we need to regulate is that it imposes a dramatic acceleration on the conditions of the unrestrained digital extractive economy." In other words, the companies behind AI will have even greater capacity to grind us up as dinosaur bones and use the results to manipulate us to their advantage.

At this event last year there was a general recognition that, less than a year after the passage of the general data protection regulation, it wasn't going to be an adequate approach to the growth of tracking through the physical world. This year, the conference is awash in AI to a truly extraordinary extent. Literally dozens of sessions: if it's not AI in policing, it's AI and data protection, ethics, human rights, algorithmic fairness, or AI embedded in autonomous vehicles. Hartzog's panel was one of at least half a dozen on facial recognition, which is AI plus biometrics plus CCTV and other cameras. As interesting are the omissions: in two full days I have yet to hear anything about smart speakers or Amazon Ring doorbells, both proliferating wildly in the soon-to-be non-EU UK.

These technologies are landing on us shockingly fast. This time last year, automated facial recognition wasn't even on the map. It blew up just last May, when Big Brother Watch pushed the issue into everyone's consciousness by launching a campaign to stop the police from using what is still a highly flawed technology. But we can't lean too heavily on the ridiculous - 98%! - inaccuracy of its real-world trials, because as it becomes more accurate it will become even more dangerous to anyone on the wrong list. Here, it has become clear that it's being rapidly followed by "emotional recognition", a build-out of technology pioneered 25 years ago at MIT by Rosalind Picard under the rubric "affective computing".
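It's worth pausing on what a figure like that actually means, because it's a base-rate problem, not simple sloppiness: scan enough innocent faces and even a good algorithm returns mostly wrong matches. A back-of-the-envelope sketch - all numbers here are invented for illustration, not taken from the trials:

    # Base-rate arithmetic: why crowd-scanning facial recognition yields
    # mostly false alerts. All numbers are illustrative assumptions.

    faces_scanned = 100_000      # people passing the cameras
    on_watchlist = 50            # genuinely wanted people in the crowd

    true_positive_rate = 0.90    # assumed: spots 90% of wanted faces
    false_positive_rate = 0.001  # assumed: 0.1% of innocents match

    true_alerts = on_watchlist * true_positive_rate
    false_alerts = (faces_scanned - on_watchlist) * false_positive_rate

    share_wrong = false_alerts / (true_alerts + false_alerts)
    print(f"{share_wrong:.0%} of alerts point at innocent people")  # ~69%

Even this generously accurate hypothetical system gets most of its matches wrong, which is why improving the algorithm shifts the danger rather than removing it.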

"Is it enough to ban facial recognition?" a questioner asked. "Or should we ban cameras?"

Probably everyone here is carrying at least two cameras (pause to count: two on phone, one on laptop).

Everyone here is also conscious that last week, Kashmir Hill broke the story that the previously unknown, Peter Thiel-backed company Clearview AI had scraped 3 billion facial images off social media and other sites to create a database that enables its law enforcement customers to grab a single photo and get back matches from dozens of online sites. As Hill reminds us, companies like Facebook have been able to do this since 2011, though at the time - just eight and a half years ago! - this was technology that Google (though not Facebook) thought was "too creepy" to implement.

In the 2013 paper A Theory of Creepy, Omer Tene and Jules Polonetsky cite three kinds of "creepy" that apply to new technologies or new uses: it breaks traditional social norms; it shows the disconnect between the norms of engineers and those of the rest of society; or applicable norms don't exist yet. AI often breaks all three. Automated, pervasive facial recognition certainly does.

And so it seems legitimate to ask: do we really want to live in a world where it's impossible to go anywhere without being followed? "We didn't ban dangerous drugs or cars" has been a recurrent rebuttal. No, but as various speakers reminded us, we did constrain them to become much safer. (And we did ban some drugs.) We should resist, Hartzog suggested, "the inevitability narrative".

Instead, the reality is that, as Lokke Moerel put it, "We have this kind of AI because this is the technology and expertise we have."

One panel pointed us at the AI universal guidelines, and encouraged us to sign. We need that - and so much more.


Illustrations: Orwell's house at 22 Portobello Road, London, complete with CCTV camera.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 29, 2019

Open season

With no ado, here's the money quote:

The [US Trade Representative] team is keen to move into the formal phase of negotiations. Ahead of the publication of UK negotiating objectives, there [is] now little that we will be able to achieve in further pre-negotiation engagement. USTR officials noted continued pressure from their political leadership to pursue an FTA [free trade agreement] and a desire to be fully prepared for the launch of negotiations after the end of October. They envisage a high cadence negotiation - with rounds every 6 weeks - but it was interesting that my opposite number thought that there would remain a political and resource commitment to a UK negotiation even if it were thought that the chances of completing negotiations in a Trump first term were low. He felt that being able to point to advanced negotiations with the UK was viewed as having political advantages for the President going in to the 2020 elections. USTR were also clear that the UK-EU situation would be determinative: there would be all to play for in a No Deal situation but UK commitment to the Customs Union and Single Market would make a UK-U.S. FTA a non-starter.

This quote appears on page two of one of the six leaked reports that UK Labour leader Jeremy Corbyn flourished at a press conference this week. The reports summarize the US-UK Trade and Investment Working Group's efforts to negotiate a free trade agreement between the US and post-Brexit Britain (if and when). The quote dates to mid-July 2019; to recap, Boris Johnson became prime minister on July 24 swearing the UK would exit the EU on October 31.

Three key points jump out:

- Donald Trump thinks a deal with Britain will help him win re-election next year. This is not a selling point to most people in Britain.

- The US negotiators condition the agreement on a no-deal Brexit - the most damaging option for the UK and European economies. Despite the last Parliament's efforts, this could still happen because two cliff edges still loom: the revised January 31 exit date, and December 2020, when the transition period is due to end (and which Johnson swears he won't extend). Whose interests is Johnson prioritizing here?

- Wednesday's YouGov model poll predicts that Johnson will win a "comfortable" majority, suggesting that the cliff edge remains a serious threat.

At Open Democracy, Nick Dearden sums up the worst damage. Among other things, it shows the revival of some of the most-disliked provisions in the abandoned Transatlantic Trade and Investment Partnership treaty, most notably investor-state dispute settlement (ISDS), which grants corporations the right to sue, in secret tribunals, governments that pass laws they oppose. As Dearden writes, these documents make clear that "taking back control" means "giving the US control". The Trade Justice Movement's predictions from earlier this year seem accurate enough.

On Twitter, UK Trade Forum co-founder David Henig has posted a thread explaining why adopting a US-first trade policy will be disastrous for British farmers and manufacturers.

Global Justice's analysis highlights both the power imbalance, and the US's demands for free rein. It's also clear that Johnson can say the NHS is not on the table, Trump can say the opposite, and both can be telling some value of truth, because the focus is on pharmaceutical pricing and patent extension. An unscrupulous government filled with short-term profiteers might figure that they'll be gone by the time the costs become clear.

For net.wars, this is all background and outside our area of expertise. The picture is equally alarming for digital rights. In 1999, Simon Davies predicted that data protection would become a trade war between the US and EU. Even a partial reading of these documents suggests that now, 20 years on, may be the moment. Data protection is a hinge, in that you might, at some expense, manage varying food standards for different trading regions, but data regimes want to be unitary. The UK can either align with the EU and GDPR, which enshrines privacy and data protection as human rights, or with the US and its technology giants. This goes double if Max Schrems, whose legal action brought down the Safe Harbor agreement, wins his NOYB case against Privacy Shield. Choose the EU and GDPR, and the US likely walks, as the February 2019 summary of negotiation objectives (PDF) makes plain. That document is also clear that the US wants to bar the UK from mandating local data storage, restricting cross-border data flows, imposing customs duties on digital products, requiring the disclosure of computer code or algorithms, and holding online platforms liable for third-party content. Many of these run opposite to the EU's general direction of travel.

The other hinge issue is the absolute US ban on mentioning climate change. The EU just declared a climate emergency and set out an action list.

The UK cannot hope to play both sides. It's hard to overstress how much worse a position these negotiations seem to offer the UK, which *is* a full partner within the EU but will always be viewed by the US as a lesser entity.

Illustrations: A large bird attacking a stag (Hendrik Hondius, 1610; from LA County Museum of Art, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 8, 2019

Burn rate

One of my favorite moments in the 1996 sitcom 3rd Rock from the Sun was when Dick (John Lithgow), the high commander of the aliens' mission to Earth, marveled at humans' ability to live every day as though they didn't know they were going to die. For everyone but Woody Allen and the terminally ill, that denial is useful: it allows us to get up every day and do things like watch silly sitcoms without being overwhelmed by the sense of doom.

In other contexts, the denial of existential limits is less helpful: being aware of the limits of capital reminds us to use it wisely. During those 3rd Rock years, I was baffled by the recklessly rapid adoption of the Internet for serious stuff - banking, hospital systems - apparently without recognizing that the Internet was still a somewhat experimental network and lacked the service level agreements and robust engineering provided by the legacy telephone networks. During Silicon Valley's 2007 to 2009 bout of climate change concern, it was an exercise in cognitive dissonance to watch CEOs explain the green values they were imposing on themselves and their families while simultaneously touting their companies' products and services, which required greater dependence on electronics, power grids, and always-on connections. At an event on nanotechnology in medicine, it was striking that the presenting researchers never mentioned power use. The mounting consciousness of the climate crisis has proceeded in a separate silo from the one in which the "there's an app for that" industries have gone on designing a lifestyle of total technological dependence, apparently on the basis that electrical power is a constant and the Internet is never interrupted. (Tell that to my broadband during those missing six hours last Thursday.)

The last few weeks in California have shown that we need to completely rethink this dependence. At The Verge, Nicole Wetsman examines the fragility of American hospital systems. Many do have generators, but few have thought-out plans for managing during a blackout. As she writes, hospitals may be overwhelmed by unexpected influxes of patients from nursing homes that never mentioned the hospital was their fallback plan, and by local residents searching for somewhere to charge their phones. And, Wetsman notes, electronic patient records bring hard choices: do you spend your limited amount of power on keeping the medicines cold, or do you keep the computer system running?

Right now, with paper records still so recent, staff may be able to dust off their old habits and revert, but ten years hence that won't be true. British Airways' 2017 holiday weekend IT collapse at Heathrow provides a great example of what happens when there is (apparently) no plan and less experience.

At the Atlantic, Alexis Madrigal warns that California's blackouts and wildfires are samples of our future; the toxic "technical debt" of accumulated underinvestment in American infrastructure is being exposed by the abruptly increased weight of climate change. How does it happen that the fifth largest economy in the world has millions of people with no electric power? The answer, Madrigal (and others) writes, is the diversion to shareholders' dividends of capital that should have been spent improving the grid and burying power lines. Add higher temperatures, less rainfall, and exceptional drought, and here's your choice: power outages or fires?

Someone like me, with a relatively simple life, a lot of paper records, sufficient resources, and a support network of friends and shopkeepers, can manage. Someone on a zero-hours contract, whose life and work depend on their phone, who can't cook, and doesn't know how to navigate the world of people if they can't check the website to find out why the water is out...can't. In these crises we always hear about the sick and the elderly, but I also worry about the 20-somethings whose lives are predicated on the Internet always being there because it always has been.

A forgotten aspect is the loss of social infrastructure, as Aditya Chakrabortty writes in the Guardian. Everyone notes that since online retail has bitten great chunks off Britain's high streets, stores have closed and hub businesses like banks have departed. Chakrabortty points out that this is only half of the depredation in those towns: the last ten years of Conservative austerity have sliced away social support systems such as youth clubs and libraries. Those social systems are the caulk that gives resilience in times of stress, and they are vanishing.

Both pieces ought to be taken as a serious warning about the many kinds of capital we are burning through, especially when read in conjunction with Matt Stoller's contention that the "millennial lifestyle" is ending. "If you wake up on a Casper mattress, work out with a Peloton before breakfast, Uber to your desk at a WeWork, order DoorDash for lunch, take a Lyft home, and get dinner through Postmates, you've interacted with seven companies that will collectively lose nearly $14 billion this year," he observes. He could have added Netflix, whose 2019 burn rate is $3 billion. And, he continues, WeWork's travails are making venture capitalists and bond markets remember that losing money, long-term, is not a good bet, particularly when interest rates start to rise.

So: climate crisis, brittle systems, and unsustainable lifestyles. We are burning through every kind of capital at pace.

Illustrations: California wildfire, 2008.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 18, 2019

I never paid for it in my life

So Jaron Lanier is back, arguing that we should be paid for our data. He was last seen in net.wars two years back, arguing that if people had started by charging for email we would not now be the battery fuel for "behavior modification empires". In a 2018 TED talk, he continued that we should pay for Facebook and Google in order to "fix the Internet".

Lanier's latest disquisition goes like this: the big companies are making billions from our data. We should have some of it. That way lies human dignity and the feeling that our lives are meaningful. And fixing Facebook!

The first problem is that fixing Facebook is not the same as fixing the Internet, a distinction Lanier surely understands. The Internet is a telecommunications network; Facebook is a business. You can profoundly change a business by changing who pays for its services and how, but changing a telecommunications network that underpins millions of organizations and billions of people in hundreds of countries is a wholly different proposition. If you mean, as Lanier seems to, that what you want to change is people's belief that content on the Internet should be free, then what you want to "fix" is the people, not the network. And "fixing" people at scale is insanely hard. Just ask health professionals or teachers. We'd need new incentives.

Paying for our data is not one of those incentives. Instead of encouraging people to think more carefully about privacy, being paid to post to Facebook would encourage people to indiscriminately upload more data. It would add payment intermediaries to today's merry band of people profiting from our online activities, thereby creating a whole new class of metadata for law enforcement to claim it must be able to access.

A bigger issue is that even economists struggle to understand how to price data; as Diane Coyle asked last year, "Does data age like fish or like wine?" Google's recent announcement that it would allow users to set their browser histories to auto-delete after three or 12 months has been met by the response that such data isn't worth much three months on, though the privacy damage may still be incalculable. We already do have a class of people - "influencers" - who get paid for their social media postings, and as Chris Stokel-Walker portrays some of their lives, it ain't fun. Basically, while paying us all for our postings would put a serious dent into the revenues of companies like Google and Facebook, it would also turn our hobbies into jobs.

So a significant issue is that we would be selling our data with no concept of its true value or what we were actually selling to companies that at least know how much they can make from it. Financial experts call this "information asymmetry". Even if you assume that Lanier's proposed "MID" intermediaries that would broker such sales will rapidly amass sufficient understanding to reverse that, the reality remains that we can't know what we're selling. No one happily posting their kids' photos to Flickr 14 years ago thought that in 2014 Yahoo, which owned the site from 2005 to 2017, was going to scrape the photos into a database and offer it to researchers to train their AI systems that would then be used to track protesters, spy on the public, and help China surveil its Uighur population.

Which leads to this question: what fire sales might a struggling company with significant "data assets" consider? Lanier's argument is entirely US-centric: data as commodity. This kind of thinking has already led Google to pay homeless people in Atlanta to scan their faces in order to create a more diverse training dataset (a valid goal, but oh, the execution).

In a paywalled paper for Harvard Business Review, Lanier apparently argues that he instead views data as labor. That view, he claims, opens the way to collective bargaining via "data labor unions" and mass strikes.

Lanier's examples, however, are all drawn from active data creation: uploading and tagging photos, writing postings. Yet much of the data the technology companies trade in is stuff we unconsciously create - "data exhaust" - as we go through our online lives: trails of web browsing histories, payment records, mouse movements. At Tech Liberation, Will Rinehart critiques Lanier's estimates, both the amount (Lanier suggests a four-person household could gain $20,000 a year) and the failure to consider the differences between and interactions among the three classes of volunteered, observed, and inferred data. It's the inferences that Facebook and Google really get paid for. I'd also add the difference between data we can opt to emit (I don't *have* to type postings directly into Facebook knowing the company is saving every character) and data we have no choice about (passport information to airlines, tax data to governments). The difference matters: you can revise, rethink, or take back a posting; you have no idea what your unconscious mouse movements reveal and no ability to edit them. You cannot know what you have sold.

Outside the US, the growing consensus is that data protection is a fundamental human right. There's an analogy to be made here between bodily integrity and personal integrity more broadly. Even in the US, you can't sell your kidney. Isn't your data just as intimate a part of you?


Illustrations: Jaron Lanier in 2017 with Luke Robert Mason (photo by Eva Pascoe).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 11, 2019

The China syndrome

About five years ago, a friend commented that despite the early belief - promulgated by, among others, then-US president Bill Clinton and vice-president Al Gore - that the Internet would spread democracy around the world, so far the opposite seemed to be the case. I suggested perhaps it's like the rising sea level, where local results don't give the full picture.

Much longer ago, I remember wondering how Americans would react when large parts of the Internet were in Chinese. My friend shrugged. Why should they care? They don't have to read them.

This week's news suggests we may both have been wrong. The reality, as the veteran technology journalist Charles Arthur suggested in the Wednesday and Thursday editions of his weekday news digest, The Overspill, is that the Hong Kong protests are exposing and enabling the collision between China's censorship controls and Western standards for free speech, aided by companies anxious to access the Chinese market. We may have thought we were exporting the First Amendment, but it doesn't apply to non-government entities.

It's only relatively recently that it's become generally acknowledged that governments can harness the Internet themselves. In 2008, the New York Times thought there was a significant domestic backlash against China's censors; by 2018, the Times was admitting China's success, first in walling off its own edited version of the Internet, and second in building rival giant technology companies and speeding past the US in areas such as AI, smartphone payments, and media creation.

So, this week. On Saturday, Demos researcher Carl Miller documented an ongoing edit war at Wikipedia: 1,600 "tendentious" edits across 22 articles on topics such as Taiwan, Tiananmen Square, and the Dalai Lama to "systematically correct what [officials and academics from within China] argue are serious anti-Chinese biases endemic across Wikipedia".

On Sunday, the general manager of the Houston Rockets, an American professional basketball team, withdrew a tweet supporting the Hong Kong protesters after it caused an outcry in China. Who knew China was the largest international market for the National Basketball Association? On Tuesday, China responded that it wouldn't show NBA pre-season games, and Chinese fans may boycott the games scheduled for Shanghai. The NBA commissioner eventually released a statement saying the organization would not regulate what players or managers say. The Americanness of basketball: restored.

Also on Tuesday, Activision Blizzard suspended Chung Ng Wai, a professional player of the company's digital card game Hearthstone, after he expressed support for the Hong Kong protesters in an official post-win interview; the company also fired the interviewers. Chung's suspension is set to last for a year, and includes forfeiting his thousands of dollars of 2019 prize money. A group of the company's employees walked out in protest, and the gamer backlash against the company was such that the moderators briefly took the Blizzard subreddit private in order to control the flood of angry posts (it was reopened within a day). By Wednesday, EU-based Hearthstone gamers were beginning to consider mounting a denial-of-service attack against Blizzard by sending so many subject access requests under the General Data Protection Regulation that complying with the legal requirement to fulfill them would swamp the company's resources.

On Wednesday, numerous media outlets reported that in its latest iOS update Apple has removed the Taiwan flag emoji from the keyboard for users who have set their location to Hong Kong or Macau - you can still use the emoji, but the procedure for doing so is more elaborate. (We will save the rant about the uselessness of these unreadable blobs for another time.)

More seriously, also on Wednesday, the New York Times reported that Apple has withdrawn the HKmap.live app that Hong Kong protesters were using to track police, after China's state media accused the company of protecting the protesters.

Local versus global is a long-standing variety of net.war, dating back to the 1994 Amateur Action bulletin board case. At Stratechery, Ben Thompson discusses the China-US cultural clash, with particular reference to TikTok, the first Chinese app to reach a global market; a couple of weeks ago, the Guardian revealed the app's censorship policies.

Thompson argues that, "Attempts by China to leverage market access into self-censorship by U.S. companies should also be treated as trade violations that are subject to retaliation." Maybe. But American companies can't win at this game.

In her recent book, The Big Nine, Amy Webb discusses China's AI advantage as it pours resources and, above all, data into becoming the world leader via Baidu, Alibaba, and Tencent, which have grown to rival Google, Amazon, and Facebook without ever needing to leave home. Beyond that, China has been spreading its influence by funding telecommunications infrastructure: the Belt and Road Initiative has projects in 152 countries. In this, China is taking advantage of the present US administration's inward turn and worldwide loss of trust.

After reviewing the NBA's ultimate decision, Thompson writes, "I am increasingly convinced this is the point every company dealing with China will reach: what matters more, money or values?" The answer will always be money; whose values count will depend on which market they can least afford to alienate. This week is just a coincidental concatenation of early skirmishes; just wait for the Internet of Things.

Illustrations: The Great Wall of China (by Hao Wei, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 20, 2019

Jumping the shark

This week, the Wall Street Journal claimed that Amazon has begun ordering item search results according to their profitability for the company. (The story is summarized at Ars Technica, for non-WSJ subscribers.) Amazon has called the story "not factually accurate", though, unsurprisingly, it declined to explain its algorithm's inner workings.

My reaction: "Well, that's a jump the shark moment."

Of course we know that every business seeks to optimize profits. Supermarkets - doubtless including Amazon's Whole Foods - choose the products to place at the ends of aisles and at cash registers only partly because those are the ones that tempt customers to make impulse buys but also because the product manufacturers pay them to do so. Both halves of that motivation have to be there. But Amazon's business and reputation are built on being fiercely devoted to putting customers first. So what makes this story different is the - perhaps only very slight - change in the weighting given to customer welfare.

In this, Amazon is following a time-honored Silicon Valley tradition (despite being based 800 miles north, in Seattle). In 2017, the EU fined Google $2.7 billion for favoring its own services in its shopping search results.

Obviously, Amazon has done and is doing far worse things. Just a few days earlier, the company announced changes that will remove health benefits for nearly 2,000 part-time employees at Whole Foods. It seems capriciously cruel: the richest man in the world, who last year told Business Insider he couldn't think of anything to spend his money on other than space travel, is willing to actively harm (given the US health system) some of the most vulnerable people who work for him. Even if he can't see it himself, you'd think the company's PR department would.

And that's just the latest in the catalogue. The company's warehouse workers regularly tell horror stories about their grueling jobs - and have for years. It will pay no US federal taxes this year for the second year in a row.

Whether or not it's true, one reason the story is so plausible is that increasingly we have no idea how businesses make their money. We *assume* we know that Coca-Cola's primary business is selling soft drinks, airlines' is selling seats on planes, and Spotify's is the sort of combination of subscriptions and advertising that has sustained many different media for a century. But not so fast: in 2017, Bloomberg reported that actually airlines make more money selling miles than they do from selling seats. Maybe the miles can't exist without the seats, but motives go where the money is, so this business reality must have consequences. Spotify, it turns out, has been building itself up into the third-largest player in digital advertising, collaborating with the PR and advertising holding company WPP to mine the billions of data points collected daily from its users' playlists and giving advertisers a new meaning for the term "mood music".

In the simplest mental model, we might expect Amazon to profit more from items it sells itself than from those sold on its platform by marketplace sellers. In fact, Amazon noted in its 2008 annual report (PDF, see p32) that its profits were about the same either way. This year, however, the EU opened an investigation into whether the company is taking advantage of the data it collects about third-party sales to identify profitable products it can cherry-pick and make for itself. No one, Lina Khan wrote in 2017 in a discussion of the modern failings of US antitrust enforcement, valued the data Amazon collects from smaller sellers' transactions, not even in those annual reports. Revenue-neutral, indeed.

In fact, Amazon's biggest source of profits is not its retail division, whose profitability even The Motley Fool can't work out. Amazon's biggest profit center is Amazon Web Services; *Netflix* was built on it. It may in fact be the case that the cloud business enables Amazon to act as an increasingly rapacious predator feasting on the rest of retail, a business model familiar from Uber (though it's far from the only one).

So Spotify is a music service in the same sense that Adobe and Oracle are software companies. Probably none of their original business plans focused on data exploitation, and their "pivot" (or bait and switch) into data passes us by while Facebook and Google get all the stick. Amazon may be the most problematic; it is, as Kashmir Hill discovered earlier this year, hard to do without Google but impossible to excise Amazon from your life. Finding alternatives for retail can still be done with enough diligence, but opting out of every business that depends on its cloud services can't be done.

Amazon was doing very well at escaping the negative scrutiny accruing to Facebook, Uber, and Google, all while becoming arguably the bigger threat, in part because we think of it as a nice company that sends us things. But if its retail customers are becoming just fungible piles of data to be optimized, that's a systemic failure the company can't reverse by restoring 2,000 people's health benefits, or paying taxes, or getting its owner to say, oh, yeah, space travel...what was I thinking?


Illustrations: Great white shark (via Sharkcrew at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 30, 2019

The Fregoli delusion

In biology, a monoculture is a bad thing. If there's only one type of banana, a fungus can wipe out the entire species instead of, as now, just the most popular one. If every restaurant depends on Yelp to find its customers, Yelp's decision to replace their phone numbers with ones under its own control is a serious threat. And if, as we wrote here some years ago, everyone buys everything from Amazon, gets all their entertainment from Netflix, and gets all their mapping, email, and web browsing from Google, what difference does it make that you're iconoclastically running Ubuntu underneath?

The same should be true in the culture of software development. It ought to be obvious that a monoculture is as dangerous there as on a farm. Because: new ideas, robustness, and innovation all come from mixing. Plenty of business books even say this. It's why research divisions create public spaces, so people from different disciplines will cross-fertilize. It's why people and large businesses live in cities.

And yet, as the journalist Emily Chang documents in her 2018 book Brotopia: Breaking Up the Boys' Club of Silicon Valley, Silicon Valley technology companies have deliberately spent the last couple of decades progressively narrowing their culture. To a large extent, she blames the spreading influence of the Paypal Mafia. At Paypal's founding, she writes, this group, which includes Palantir founder Peter Thiel, LinkedIn founder Reid Hoffman, and Tesla supremo Elon Musk, adopted the basic principle that to make a startup lean, fast-moving, and efficient you needed a team who thought alike. Paypal's success and the diaspora of its early alumni disseminated a culture in which hiring people like you was a *strategy*. This is what #MeToo and fights for equality are up against.

Businesses are as prone to believing superstitions as any other group of people, and unicorn successes are unpredictable enough to fuel weird beliefs, especially in an already-insular place like Silicon Valley. Yet, Chang finds much earlier roots. In the mid-1960s, System Development Corporation hired psychologists William Cannon and Dallis Perry to create a profile to help it to identify recruits who would enjoy the new profession of computer programming. They interviewed 1,378 mostly male programmers, and found this common factor: "They don't like people." And so the idea that "antisocial" was a qualification was born, spreading outwards through increasingly popular "personality tests" and, because of the cultural differences in the way girls and boys are socialized, gradually and systematically excluding women.

Chang's focus is broad, surveying the landscape of companies and practices. For personal inside experiences, you might try Ellen Pao's Reset: My Fight for Inclusion and Lasting Change, which documents her experiences at Kleiner Perkins, which led her to bring a lawsuit, and at Reddit, where she was pilloried for trying to reduce some of the system's toxicity. Or, for a broader range, try Lean Out, a collection of personal stories edited by Elissa Shevinsky.

Chang finds that even Google, which began with an aggressive policy of hiring female engineers - netting it technology leaders Susan Wojcicki, CEO of YouTube; Marissa Mayer, who went on to try to rescue Yahoo; and Sheryl Sandberg, now COO of Facebook - failed in the long term. Today its male-female ratio is average for Silicon Valley. She cites Slack as a notable exception; founder Stewart Butterfield set out to build a different kind of workplace.

In that sense, Slack may be the opposite of Facebook. In Zucked: Waking Up to the Facebook Catastrophe, Roger McNamee tells the mea culpa story of his early mentorship of Mark Zuckerberg and the company's slow pivot into posing problems he believes are truly dangerous. What's interesting to read in tandem with Chang's book is his story of the way Silicon Valley hiring changed. Until around 2000, hiring rewarded skill and experience; the limitations on memory, storage, and processing power meant companies needed trained and experienced engineers. Facebook, however, came along at the moment when those limitations had vanished and the dot-com bust had finished playing out. Suddenly, products could be built and scaled up much faster; open source libraries and the arrival of cloud suppliers meant they could be developed by less experienced, less skilled, *younger*, much *cheaper* people; and products could be free, paid for by advertising. Couple this with 20 years of Reagan deregulation and the influence, which he also cites, of the Paypal Mafia, and you have the recipe for today's discontents. McNamee writes that he is unsure what the solution is; his best effort at the moment appears to be advising the Center for Humane Technology, led by former Google design ethicist Tristan Harris.

These books go a long way toward explaining the world Caroline Criado-Perez describes in 2019's Invisible Women: Data Bias in a World Designed for Men. Her discussion is not limited to Silicon Valley - crash test dummies, medical drugs and practices, and workplace design all appear - but her main point applies. If you think of one type of human as "default normal", you wind up with a world that's dangerous for everyone else.

You end up, as she doesn't say, with a monoculture as destructive to the world of ideas as those fungi are to Cavendish bananas. What Zucked and Brotopia explain is how we got there.


Illustrations: Still from Anomalisa (2015).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 9, 2019

Collision course

The walk from my house to the tube station has changed very little in 30 years. The houses and their front gardens look more or less the same, although at least two have been massively remodeled on the inside. More change is visible around the tube station, where shops have changed hands as their owners retired. The old fruit and vegetable shop now sells wine; the weird old shop that sold crystals and carved stones is now a chain drug store. One of the hardware stores is a (very good) restaurant and the other was subsumed into the locally-owned health food store. And so on.

In the tube station itself, the open platforms have been enclosed with ticket barriers and the second generation of machines has closed down the ticket office. It's imaginable that, had the ID card proposed in the early 2000s made it through to adoption, the experience of buying a ticket and getting on the tube could be quite different. Perhaps instead of an Oyster card or credit card tap, we'd be tapping in and out using a plastic ID smart card that would both ensure that only I could use my free tube pass and ensure that all my local travel could be tracked and tied to me. For our safety, of course - as we would doubtless be reminded via repetitive public announcements like the propaganda we hear every day about the watching eye of CCTV.

Of course, tracking still goes on via Oyster cards, credit cards, and, now, wifi, although I do believe Transport for London when it says its goal is to better understand traffic flows through stations in order to improve service. However, what new, more intrusive functions TfL may choose - or be forced - to add later will likely be invisible to us until an expert outsider closely studies the system.

In his recently published memoir, the veteran campaigner and Privacy International founder Simon Davies tells the stories of the ID cards he helped to kill: in Australia, in New Zealand, in Thailand, and, of course, in the UK. What strikes me now, though, is that what seemed like a win nine years ago, when the incoming Conservative-Liberal Democrat alliance killed the ID card, is gradually losing its force. (This is very similar to the early 1990s First Crypto Wars "win" against key escrow; the people who wanted it have simply found ways to bypass public and expert objections.)

As we wrote at the time, the ID card itself was always a brightly colored decoy. To be sure, those pushing the ID card played on British wartime associations, swearing blind that no one would ever be required to carry the card or forced to produce it. This was an important gambit because to much of the population at the time being forced to carry and show ID was the end of the freedoms two world wars were fought to protect. But it was always obvious to those who were watching technological development that what mattered was the database, because identity checks would be carried out online, on the spot, via wireless connections and handheld computers. All that was needed was a way of capturing a biometric that could be sent into the cloud to be checked. Facial recognition fits perfectly into that gap: no one has to ask you for papers - or a fingerprint, iris scan, or DNA sample. So even without the ID card we *are* now moving stealthily into the exact situation that would have prevailed if we had adopted it. Increasing numbers of police forces and governments are deploying facial recognition - South Wales, London, LA, India, and, notoriously, China - as Big Brother Watch has been documenting for the UK. There are many more remotely observable behaviors to be pressed into service, enhanced by AI, as the ACLU's Jay Stanley warns.
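To make that architecture concrete, here is a minimal sketch in Python of an identity check as nothing more than a lookup against a central database of stored biometric templates; the names, vectors, and threshold are all invented for illustration, not any real system's design:

    # Hypothetical sketch: an identity check as a nearest-neighbor lookup
    # against stored biometric templates. All data here is invented.
    import math

    # The central database: person -> stored face "embedding" (toy vectors).
    TEMPLATES = {
        "alice": [0.9, 0.1, 0.3],
        "bob":   [0.2, 0.8, 0.5],
    }

    def cosine(a, b):
        """Similarity between two vectors (1.0 = identical direction)."""
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    def identify(embedding, threshold=0.95):
        """Return the best-matching identity, or None. No one asks for
        papers; the camera and the database do all the work."""
        best_id, best_score = None, 0.0
        for person, stored in TEMPLATES.items():
            score = cosine(embedding, stored)
            if score > best_score:
                best_id, best_score = person, score
        return best_id if best_score >= threshold else None

    print(identify([0.88, 0.12, 0.31]))  # -> 'alice'

The point of the sketch is where the power sits: change the database, or the threshold, and every camera connected to it changes behavior at once.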

The threat now of these systems is that they are wildly inaccurate and discriminatory. The future threat of these systems is that they will become accurate and discriminatory, allowing much more precise targeting that may even come to seem reasonable *because* it only affects the bad people.

This train of thought occurred to me because this week Statewatch released a leaked document indicating that most of the EU would like to expand airline-style passenger data collection to trains and even roads. As Daniel Boffey explains at the Guardian (and as Edward Hasbrouck has long documented), the passenger name records (PNRs) airlines create for every journey include as many as 42 pieces of information: name, address, payment card details, itinerary, fellow travelers... This is information that gets mined in order to decide whether you're allowed to fly. So what this document suggests is that many EU countries would like to turn *all* international travel into a permission-based system.

What is astonishing about all of this is the timing. One of the key privacy-related objections to building mass surveillance systems is that you do not know who may be in a position to operate them in future or what their motivations will be. So at the very moment that many democratic countries are fretting about the rise of populism and the spread of extremism, those same democratic countries are proposing to put in place a system that extremists who get into power can operate in anti-democratic ways. How can they possibly not see this as a serious systemic risk?


Illustrations: The light of the oncoming train (via Andrew Gray at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 26, 2019

Hypothetical risks

Great Hack - data connections.png"The problem isn't privacy," the cryptography pioneer Whitfield Diffie said recently. "It's corporate malfeasance."

This is obviously right. Viewed that way, when data profiteers claim that "privacy is no longer a social norm", as Facebook CEO Mark Zuckerberg did in 2010, the correct response is not to argue about privacy settings or plead with users to think again, but to find out if they've broken the law.

Diffie was not, but could have been, talking specifically about Facebook, which has blown up the news this week. The first case grabbed most of the headlines: the US Federal Trade Commission fined the company $5 billion. As critics complained, the fine was insignificant to a company whose Q2 2019 revenues were $16.9 billion and whose quarterly profits are approximately equal to the fine. Medium-term, such fines have done little to dent Facebook's share prices. Longer-term, as the cases continue to mount up...we'll see. Also this week, the US Department of Justice launched an antitrust investigation into Apple, Amazon, Alphabet (Google), and Facebook.

The FTC fine and ongoing restrictions have been a long time coming; EPIC executive director Marc Rotenberg has been arguing ever since the Cambridge Analytica scandal broke that Facebook had violated the terms of its 2011 settlement with the FTC.

If you needed background, this was also the week when Netflix released the documentary, The Great Hack, in which directors Karim Amer and Jehane Noujaim investigate the role Cambridge Analytica and Facebook played in the 2016 EU referendum and US presidential election votes. The documentary focuses primarily on three people: David Carroll, who mounted a legal action against Facebook to obtain his data; Brittany Kaiser, a director of Cambridge Analytica who testified against the company; and Carole Cadwalladr, who broke the story. In his review at the Guardian, Peter Bradshaw notes that Carroll's experience shows it's harder to get your "voter profile" out of Facebook than from the Stasi, as per Timothy Garton Ash. (Also worth viewing: the 2006 movie The Lives of Others.)

Cadwalladr asks, in her own piece about The Great Hack and in her 2019 TED talk, whether we can ever have free and fair elections again. It's a difficult question to answer because although it's clear from all these reports that the winning side of both the US and UK 2016 votes used Facebook and Cambridge Analytica's services, unless we can rerun these elections in a stack of alternative universes we can never pinpoint how much difference those services made. In a clip taken from the 2018 hearings on fake news, Damian Collins (Conservative, Folkestone and Hythe), the chair of the Digital, Culture, Media, and Sport Committee, asks Chris Wylie, a whistleblower who worked for Cambridge Analytica, that same question (The Great Hack, 00:25:51). Wylie's response: "When you're caught doping in the Olympics, there's not a debate about how much illegal drug you took or, well, he probably would have come in first, or, well, he only took half the amount, or - doesn't matter. If you're caught cheating, you lose your medal. Right? Because if we allow cheating in our democratic process, what about next time? What about the time after that? Right? You shouldn't win by cheating."

Later in the film (1:08:00), Kaiser, testifying to DCMS, sums up the problem this way: "The sole worth of Google and Facebook is the fact that they own and possess and hold and use the personal data from people all around the world." In this statement, she unknowingly confirms the prediction made by the veteran Australian privacy advocate Roger Clarke, who commented in a 2009 interview about his 2004 paper, Very Black "Little Black Books", warning about social networks and privacy: "The only logical business model is the value of consumers' data."

What he got wrong, he says now, was that he failed to appreciate the importance of micro-pricing, highlighted in 1999 by the economist Hal Varian. In his 2017 paper on the digital surveillance economy, Clarke explains the connection: large data profiles enable marketers to gauge the precise point at which buyers begin to resist and pitch their pricing just below it. With goods and services, this approach allows sellers to extract greater overall revenue from the market than pre-set pricing would; with politics, you're talking about a shift from public sector transparency to private sector black-box manipulation. Or, as someone puts it in The Great Hack, a "full-service propaganda machine". Load, aim at "persuadables", and set running.
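Clarke's point about micro-pricing is easy to see with a toy model; in this sketch (every number invented for illustration), a seller who can estimate each buyer's limit captures nearly the whole surplus, where the best single posted price leaves most of it on the table:

    # Toy model of micro-pricing; all numbers are invented for illustration.
    # Each buyer has a private maximum price; a rich data profile lets the
    # seller estimate that limit and pitch the price just below it.

    willingness_to_pay = [4.00, 7.50, 10.00, 12.00, 20.00]  # one entry per buyer

    def flat_revenue(price, wtp):
        """One posted price: only buyers whose limit meets it will buy."""
        return price * sum(1 for w in wtp if w >= price)

    best_flat = max(flat_revenue(p, willingness_to_pay) for p in willingness_to_pay)

    # Personalized pricing: charge each buyer 99% of their estimated limit.
    personalized = sum(0.99 * w for w in willingness_to_pay)

    print(best_flat)     # 30.0 -- the most any single posted price can extract
    print(personalized)  # ~52.97 -- nearly the whole surplus, buyer by buyer

Substitute "vote" for "purchase" and the same machinery prices each persuadable voter's tipping point.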

Less noticed than either of these is the Securities and Exchange Commission settlement with Facebook, also announced this week. While the fine is relatively modest - a mere $100 million - the SEC has nailed the company's conflicting statements. On Twitter, Jason Kint has helpfully highlighted the SEC's statements laying out the case that Facebook knew in 2016 that it had sold Cambridge Analytica some of the data underlying the 30 million personality profiles CA had compiled - and then "misled" both the US Congress and its own investors. Besides the fine, the SEC has permanently enjoined Facebook from further violations of the laws it broke in continuing to refer to actual risks as "hypothetical". The mills of trust have been grinding exceeding slow; they may yet grind exceeding small.


Illustrations: Data connections in The Great Hack.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 28, 2019

Failure to cooperate

sweat-nottage.jpgIn her Pulitzer Prize-winning 2015 play, Sweat, playing nightly in London's West End until mid-July, Lynn Nottage explores class and racial tensions in the impoverished, post-industrial town of Reading, PA. In scenes alternating between 2000 and 2008, she traces the personal-level effects of twin economic crashes, corporate outsourcing decisions, and tribalism: friends become opposing disputants; small disagreements become violent; and the prize for "winning" shrinks to scraps. Them who has, gets; and from them who have little, it is taken.

Throughout, you wish the characters would recognize their real enemies: the company whose steel tubing factory has employed them for decades, their short-sighted union, and a system that structurally short-changes them. The pain of the workers when they are locked out is that of an unwilling divorce, abruptly imposed.

The play's older characters, who would be in their mid-60s today, are of the age to have been taught that jobs were for life. They were promised pensions and could look forward to wage increases at a steady and predictable pace. None are wealthy, but in 2000 they are financially stable enough to plan vacations, and their children see summer jobs as a viable means of paying for college and climbing into a better future. The future, however, lies in the Spanish-language leaflets the company is distributing to frustrated immigrants the union has refused to admit and who will work for a quarter the price. Come 2008, the local bar is run by one of those immigrants, who of necessity caters to incoming hipsters. Next time you read an angry piece attacking Baby Boomers for wrecking the world, remember that it's a big demographic and only some were the destructors. *Some* Baby Boomers were born wreckage, some achieved it, and some had it thrust upon them.

We leave the characters there in 2008: hopeless, angry, and alienated. Nottage, who has a history of researching working class lives and the loss of heavy industry, does not go on to explore the inner workings of the "digital poorhouse" they're moving into. The phrase comes from Virginia Eubanks' 2018 book, Automating Inequality, which we unfortunately missed reviewing before now. If Nottage had pursued that line, she might have found what Eubanks finds: a punitive, intrusive, judgmental, and hostile benefits system. Those devastated factory workers must surely have done something wrong to deserve their plight.

Eubanks presents three case studies. In the first, struggling Indiana families navigate the state's new automated welfare system, a $1.3 billion, ten-year privatization effort led by IBM. Soon after its 2006 launch, it began sending tens of thousands of families notices of refusal on this Kafkaesque basis: "Failure to cooperate". Indiana eventually canceled IBM's contract, and the two have been suing each other ever since. Not represented in court is, as Eubanks says, the incalculable price paid in the lives of the humans the system spat out.

In the second, "coordinated entry" matches homeless Angelenos to available resources in order of vulnerability. The idea was that standardizing the intake process across all possible entryways would help the city reduce waste and become more efficient while reducing the numbers on Skid Row. The result, Eubanks finds, is an unpredictable system that mysteriously helps some and not others, and that ultimately fails to solve the underlying structural problem: there isn't enough affordable housing.
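The mechanism is simple enough to sketch; with invented scores and counts (nothing here reflects the real Los Angeles system), it's clear why ranking helps some and not others while leaving the queue intact:

    # Minimal sketch of coordinated-entry-style matching; scores and
    # counts are invented. Rank by vulnerability, hand out what exists.

    applicants = {"A": 9.1, "B": 8.7, "C": 8.7, "D": 6.2, "E": 5.9}
    available_units = 2   # the structural constraint no ranking can fix

    ranked = sorted(applicants, key=applicants.get, reverse=True)
    housed, waiting = ranked[:available_units], ranked[available_units:]

    print(housed)   # ['A', 'B'] -- a different tie-break would house C instead
    print(waiting)  # everyone else stays on the list, however standardized the intake

However the scoring is tuned, three of the five stay waiting; the algorithm only decides which three.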

In the third, a Pennsylvania predictive system is intended to identify children at risk of abuse. Such systems are proliferating widely and controversially for varying purposes, and all raise concerns about fairness and transparency: custody decisions (Durham, England), gang membership and gun crime (Chicago and London), and identifying children who might be at risk (British local councils). All these systems gather and retain, perhaps permanently, huge amounts of highly intimate data about each family. The result in Pennsylvania was to deter families from asking for the help they're actually entitled to, lest they become targets to be watched. Some future day, those same records may resurface when a hostile neighbor files a minor complaint, or haunt their now-grown children when raising their own children.

All these systems, Eubanks writes, could be designed to optimize access to benefits instead of optimizing for efficiency or detecting fraud. I'm less sanguine. In prior art, Danielle Citron has written about the difficulties of translating human law accurately into programming code, and the essayist Ellen Ullman warned in 1996 that even those with the best intentions eventually surrender to computer system imperatives of improving data quality, linking databases, and cross-checking, the bedrock of surveillance.

Eubanks repeatedly writes that middle class people would never put up with this level of intrusion. They may have no choice. As Sweat highlights, many people's options are shrinking. Refusal is only possible for those who can afford to pay for help privately, an option increasingly reserved for a privileged few. Poor people, Eubanks is frequently told, are the experimental models for surveillance that will eventually be applied to all of us.

In her 2016 book Weapons of Math Destruction, Cathy O'Neil argued that algorithmic systems can be designed for fairness. Eubanks' analysis suggests that view is overly optimistic: the underlying morality dates back centuries. Digitization has, however, exacerbated its effects, as Eubanks concludes. County poorhouse inmates at least had the community of shared experience. Its digital successor squashes and separates, leaving each individual to drink alone in that Reading bar.


Illustrations: Sweat's London production poster.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 17, 2019

Genomics snake oil

DNA_Double_Helix_by_NHGRI-NIH-PD.jpgIn 2011, as part of an investigation she conducted into the possible genetic origins of the streak of depression that ran through her family, the Danish neurobiologist Lone Frank had her genome sequenced and interviewed many participants in the newly-opening field of genomics that followed the first complete sequencing of the human genome. In her resulting book, My Beautiful Genome, she commented on the "Wild West" developing around retail genetic testing being offered to consumers over the web. Absurd claims such as using DNA testing to find your perfect mate or direct your child's education abounded.

This week, at an event organized by Breaking the Frame, New Zealand researcher Andelka M. Phillips presented the results of her ongoing study of the same landscape. The testing is just as unreliable, the claims even more absurd - choose your diet according to your DNA! find out what your superpower is! - and the number of companies she's collected has reached 289 while the cost of the tests has shrunk and the size of the databases has ballooned. Some of this stuff makes astrology look good.

To be perfectly clear: it's not, or not necessarily, the gene sequencing itself that's the problem. To be sure, the best lab cannot produce a reading that represents reality from poor-quality samples. And many samples are indeed poor, especially those snatched from bed sheets or excavated from garbage cans to send to sites promising surreptitious testing (I have verified these exist, but I refuse to link to them) to those who want to check whether their partner is unfaithful or whether their child is in fact a blood relative. But essentially, for health tests at least, everyone is using more or less the same technology for sequencing.

More crucial is the interpretation and analysis, as Helen Wallace, the executive director of GeneWatch UK, pointed out. For example, companies differ in how they identify geographical regions and frame populations, and in the makeup of their databases of reference contributions. This is how a pair of identical Canadian twins got varying and non-matching test results from five companies, one Ashkenazi Jew got six different ancestry reports, and, according to one study, up to 40% of DNA results from consumer genetic tests are false positives. As I type, the UK Parliament is conducting an inquiry into commercial genomics.
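That false-positive figure is less mysterious than it sounds. A quick Bayes calculation - with invented round numbers, not any company's published rates - shows how a test that is individually accurate still produces mostly-wrong positive calls when the variant it hunts is rare:

    # Invented-numbers Bayes sketch: why an accurate-sounding test can
    # still yield mostly false positives when the target variant is rare.

    prevalence  = 0.001   # 1 in 1,000 people actually carry the variant
    sensitivity = 0.99    # carriers correctly flagged 99% of the time
    specificity = 0.999   # non-carriers correctly cleared 99.9% of the time

    true_pos  = prevalence * sensitivity              # carriers flagged
    false_pos = (1 - prevalence) * (1 - specificity)  # non-carriers flagged anyway

    share_false = false_pos / (true_pos + false_pos)
    print(f"{share_false:.0%} of positive calls are wrong")  # ~50% here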

Phillips makes the data available to anyone who wants to explore it. Meanwhile, so far she's examined the terms of service and privacy policies of 71 companies, and finds them filled with technology company-speak, not medical information. They do not explain these services' technical limitations or the risks involved. Yet it's so easy to think of disastrous scenarios: this week, an American gay couple reported that their second child's birthright citizenship is being denied under new State Department rules. A false DNA test could make a child stateless.

Breaking the Frame's organizer, Dave King, believes that a subtle consequence of the ancestry tests - the things everyone was quoting in 2018 that tell you that you're 13% German, 1% Somali, and whatever else - is to reinforce the essentially racist notion that "Germanness" has a biological basis. He also particularly disliked the services claiming they can identify children's talents; these claim, as Phillips highlighted, that testing can save parents money they might otherwise waste on impossible dreams. That way lies Gattaca and generations of children who don't get to explore their own abilities because they've already been written off.

Even more disturbing questions surround what happens with these large databases of perfect identifiers. In the UK, last October the Department of Health and Social Care announced its ambition to sequence 5 million genomes. Included was the plan to begin in 2019 offering whole genome sequencing to all seriously ill children and adults with specific rare diseases or hard-to-treat cancers as part of their care. In other words, the most desperate people are being asked first, a prospect Phil Booth, coordinator of medConfidential, finds disquieting. As so much of this is still research, not medical care, he said, like the late, despised care.data, it "blurs the line around what is your data, and between what the NHS was and what some would like it to be". Exploitation of the nation's medical records as raw material for commercial purposes is not what anyone thought they were signing up for. And once you have that giant database of perfect identifiers...there's the Home Office, which has already been caught using NHS records to hunt illegal immigrants and DNA-testing immigrants.

So Booth asked this: why now? Genetic sequencing is 20 years old, and to date it has yet to come close to producing the benefits predicted for it. We do not have personalized medicine, or, except in a very few cases (such as a percentage of breast cancers), drugs tailored to genetic makeup. "Why not wait until it's a better bet?" he asked. Instead of spending billions today - billions that, as an audience member pointed out, would produce better health more widely if spent on improving the environment, nutrition, and water - the proposal is to spend them on a technology that may still not be producing results 20 years from now. Why not wait, say, ten years and see if it's still worth doing?


Illustrations: DNA double helix (via Wikimedia)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 12, 2019

The Algernon problem

charly-movie-image.jpgLast week we noted that it may be a sign of a maturing robotics industry that it's possible to have companies specializing in something as small as fingertips for a robot hand. This week, the workshop day kicking off this year's We Robot conference provides a different reason to think the same thing: more and more disciplines are finding their way to this cross-the-streams event. This year, joining engineers, computer scientists, lawyers, and the odd philosopher are sociologists, economists, and activists.

The result is oddly like a meeting of the Research Institute in the Science of Cyber Security, where a large part of the point from the beginning has been that human factors and economics are as important to good security as technical knowledge. This was particularly true in the face-off between the economist Rob Seamans and the sociologist Beth Bechky, which pitted quantitative "things we can count" against qualitative "study the social structures" thinking. The range of disciplines needed to think about what used to be "computer" security keeps growing as the ways we use computers become more complex; robots are computer systems whose mechanical manifestations interact with humans. This broadening has to happen.

One sign is a change in language. Madeleine Elish, currently in the news for her newly published 2016 We Robot paper, Moral Crumple Zones, said she's trying to replace the term "deploying" with "integrating" for arriving technologies. "They are integrated into systems," she explained, "and when you say 'integrate' it implies into what, with whom, and where." By contrast, "deployment" is military-speak, devoid of context. I like this idea, since by 2015, it was clear from a machine learning conference at the Royal Society that many had begun seeing robots as partners rather than replacements.

Later, three Japanese academics - the independent researcher Hideyuki Matsumi, Takayuki Kato, and Fumio Shimpo - tried to explain why Japanese people like robots so much - more, it seems, than "we" do (whoever "we" are). They suggested three theories: the influence of TV and manga; the influence of the mainstream Shinto religion, which sees a spirit in everything; and the Japanese government strategy to make the country a robotics powerhouse. The latter has produced a 356-page guideline for research and development.

"Japanese people don't like to draw distinctions and place clear lines," Shinto said. "We think of AI as a friend, not an enemy, and we want to blur the lines." Shimpo had just said that even though he has two actual dogs he wants an Aibo. Kato dissented: "I personally don't like robots."

The MIT researcher Kate Darling, who studies human responses to robots, pointed to studies finding that autistic kids respond well to robots. "One theory is that they're social, but not too social." An experiment that placed these robots in homes for 30 days last summer had "stellar results". But: when the robots were removed at the end of the experiment, follow-up studies found that the kids were losing the skills the robots had brought them. The story evokes the 1959 Daniel Keyes story Flowers for Algernon, but then you have to ask: what were the skills? Did they matter to the children or just to the researchers, and how is "success" defined?

The opportunities anthropomorphization opens for manipulation are an issue everywhere. Woody Hartzog called the tendency to believe what the machine says "automation bias", but that understates the range of motivations: you may believe the machine because you like it, because it's manipulated you, or because you're working in a government benefits agency where you can't be sure you won't get fired if you defy the machine's decision. Would that everyone could see Bill Smart and Cindy Grimm follow up their presentation from last year to show: AI is just software; it doesn't "know" things; and it's the complexity that gets you. Smart hates the term "autonomous" for robots "because in robots it means deterministic software running on a computer. It's teleoperation via computer code."

This is the "fancy hammer" school of thinking about robots, and it can be quite valuable. Kevin Bankston soon demonstrated this: "Science fiction has trained us to worry about Skynet instead of housing discrimination, and expect individual saviors rather than communities working together to deal with community problems." AI is not taking our jobs; capitalists are using AI to take our jobs - a very different problem. As long as we see robots and AI as autonomous, we miss that instead they ares agents carrying out others' plans. This is a larger example of a pervasive problem with smartphones, social media sites, and platforms generally: they are designed to push us to forget the data-collecting, self-interested, manipulative behemoth behind them.

Returning to Elish's comment, we are one of the things robots integrate with. At the moment, this is taking the form of making random people research subjects: the pedestrian killed in Arizona by a supposedly self-driving car, the hapless prisoners whose parole is decided by it's-just-software, the people caught by the Metropolitan Police's staggeringly flawed facial recognition, the homeless people who feel threatened by security robots, the Caltrain passengers sharing a platform with an officious delivery robot. Did any of us ask to be experimented on?


Illustrations: Cliff Robertson in Charly, the movie version of "Flowers for Algernon".

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 15, 2019

Schrödinger's Brexit

Parliament_Clock_Westminster-wikimedia.jpg

"What's it like over there now?" American friends keep asking as the clock ticks down to midnight on March 29. Even American TV seems unusually interested: last week's Full Frontal with Samantha Bee had Amy Hoggart explain in detail; John Oliver made it a centerpiece two weeks ago, and US news outlets are giving it as much attention as if it were a US story. They're even - so cute! - trying to pronounce "Taoiseach". Everyone seems fascinated by the spectacle of the supposedly stoic, intellectual British holding meaningless "meaningful" votes and avoiding making any decisions that could cause anyone to lose face. So this is what it's like to live through a future line in the history books: other countries fret on your behalf while you're trying to get lunch.

In 14 days, Britain will either still be a member of the European Union or it won't. It will have a deal describing the future relationship or it won't. Ireland will be rediscovering civil war or it won't. In two months, we will be voting in the European Parliamentary elections as if nothing has happened, or we won't. All possible outcomes lead to protests in Parliament Square.

No one expects Britain to become Venezuela. But no one knows what will happen, either. We were more confident approaching Y2K. At least then you knew that thousands of people had put years of hard work into remediating the most important software that could fail. Here...in January, returning from CPDP and flowing seamlessly via Eurostar from Brussels to London, my exit into St Pancras station raised the question: is this the last time this journey will be so simple? Next trip, will there be Customs channels and visa checks? Where will they put them? There's no space.

A lot of the rhetoric both at the time of the 2016 vote and since has been around taking back control and sovereignty. That's not the Britain I remember from the 1970s, when the sense of a country recovering from the loss of its empire was palpable, middle class people had pay-as-you-go electric and gas meters, and the owner of a Glasgow fruit and vegetable shop stared at me when I asked for fresh garlic. In 1974, a British friend visiting an ordinary US town remarked, "You can tell there's a lot more money around in this country." And another, newly expatriate and struggling: "But at least we're eating real meat here." This is the pre-EU Britain I remember.

"I've worked for them, and I know how corrupt they are," a 70-something computer scientist said to me of the EU recently. She would, she said, "man the barriers" if withdrawal did not go through. We got interrupted before I could ask if she thought we were safer in the hands of the Parliament whose incompetence she had also just furiously condemned.

The country remains profoundly in disagreement. There may be as many definitions of "Brexit" as there are Leave voters. But the last three years have brought everyone together on one thing: no matter how they voted, where they're from, which party they support, or where they get their news, everyone thinks the political class has disgraced itself. Casually-met strangers laugh in disbelief at MPs' inability to put country before party or self-interest or say things like "It's sickening". Even Wednesday's hair's-breadth vote taking No Deal off the table is absurd: the clock inexorably ticks toward exiting the EU with nothing unless someone takes positive action, either by revoking Article 50, or by asking for an extension, or by signing a deal. But action can get you killed politically. I've never cared for Theresa May, but she's prime minister because no one else was willing to take this on.

NB for the confused: in the UK "tabling a motion" means to put it up for discussion; in the US it means to drop it.

Quietly, people are making just-in-case preparations. One friend scheduled a doctor's appointment to ensure that he'd have in hand six months' worth of the medications he depends on. Others stockpile EU-sourced food items that may be scarce or massively more expensive. Anyone who can is applying for a passport from an EU country; many friends are scrambling to research their Irish grandparents and assemble documentation. So the people in the best position are the recent descendants of immigrants who would not now be welcome. It is unfair and ironic, and everyone knows it. A critical underlying issue, Danny Dorling and Sally Tomlinson write in their excellent and eye-opening Rule Britannia: Brexit and the End of Empire, is an education system that stresses the UK's "glorious" imperial past. Within the EU, they write, UK MEPs supply much of the extreme right, and the EU may be better off - more moderate, less prone to populism - without the UK, while British people may achieve a better understanding of their undistinguished place in the world. Ouch.

The EU has never seemed irrelevant to digital rights activists. Computers, freedom, and privacy (that is, "net.wars") shows the importance of the EU in our time, when the US refuses to regulate and the Internet is challenging national jurisdiction. International collaboration matters.

Just as I wrote that, Parliament finally voted to take the smallest possible action and ask the EU for a two-month extension. Schrödinger needs a bigger box.

Illustrations: "Big Ben" (Aldaron, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 8, 2019

Pivot

parliament-whereszuck.jpgWould you buy a used social media platform from this man?

"As I think about the future of the internet, I believe a privacy-focused communications platform will become even more important than today's open platform," Mark Zuckerberg wrote this week at the Facebook blog, also summarized at the Guardian.

Zuckerberg goes on to compare Facebook and Instagram to "the digital equivalent of a town square".

So many errors, so little time. Neither Facebook nor Instagram is open. "Open information," Rufus Pollock explained last year in The Open Revolution, "...can be universally and freely used, built upon, and shared." By contrast, "In a Closed world information is exclusively 'owned' and controlled, its attendant wealth and power more and more concentrated".

The alphabet is open. I do not need a license from the Oxford English Dictionary to form words. The web is open (because Tim Berners-Lee made it so). One of the first social media, Usenet, is open. Particularly in the early 1990s, Usenet really was the Internet's town square.

*Facebook* is *closed*.

Sure, anyone can post - but only in the ways that Facebook permits. Running apps requires Facebook's authorization, and if Facebook makes changes, SOL. Had Zuckerberg said - as some have paraphrased him - "town hall", he'd still be wrong, but less so: even smaller town halls have metal detectors and guards to control what happens inside. However, they're publicly owned. Under the structure Zuckerberg devised when it went public, even the shareholders have little control over Facebook's business decisions.

So, now: this week Zuckerberg announced a seeming change of direction for the service. Slate, the Guardian, and the Washington Post all find skepticism among privacy advocates that Facebook can change in any fundamental way, and they wonder about the impact on Facebook's business model of the shift to focusing on secure private messaging instead of the more public newsfeed. Facebook's former chief security officer Alex Stamos calls the announcement a "judo move" that both removes the privacy complaints (Facebook now can't read what you say to your friends) and allows the site to say that complaints about circulating fake news and terrorist content are outside its control (Facebook now can't read what you say to your friends *and* doesn't keep the data).

But here's the thing. Facebook is still proposing to unify the WhatsApp, Instagram, and Facebook user databases. Zuckerberg's stated intention is to build a single unified secure messaging system. In fact, as Alex Hern writes at the Guardian, that's the one concrete action Zuckerberg has committed to, and it was announced back in January, to immediate privacy queries from the EU.

The point that can't be stressed enough is that although Facebook is trading away the ability to look at the content of what people post, it will retain oversight of all the traffic data. We have known for decades that metadata is even more revealing than content; I remember the late Caspar Bowden explaining the issues in detail in 1999. Even if Facebook's promise to vape the messages includes keeping no copies for itself (a stretch, given that we found out in 2013 that the company keeps every character you type), it will still be able to keep its insights into the connections between people and the conclusions it draws from them. Or, as Hern also writes, Zuckerberg "is offering privacy on Facebook, but not necessarily privacy from Facebook".
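A toy example (invented records, not any real system's logs) shows how much falls out of bare traffic data - just sender, recipient, and hour, with no content at all:

    # Hypothetical sketch: inference from traffic data alone. Each record
    # is (sender, recipient, hour-of-day); there is no message content.

    from collections import Counter

    log = [
        ("ann", "bo", 23), ("ann", "bo", 23), ("ann", "bo", 0),
        ("ann", "doc", 9), ("ann", "helpline", 2),
    ]

    pairs = Counter((s, r) for s, r, _ in log)
    late_night = [(s, r) for s, r, h in log if h >= 22 or h <= 5]

    print(pairs.most_common(1))  # [(('ann', 'bo'), 3)] -- ann's closest contact
    print(late_night)            # a 2 a.m. message to a helpline needs no content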

Siva Vaidhyanathan, author of Antisocial Media, seems to be the first to get this, and to point out that Facebook's supposed "pivot" is really just a decision to become more dominant, like China's WeChat. WeChat thoroughly dominates Chinese life: it provides messaging, payments, and a de facto identity system. This is where Vaidhyanathan believes Facebook wants to go, and if encrypting messages means it can't compete in China...well, WeChat already owns that market anyway. Let Google get the bad press.

Facebook is making a tradeoff. The merged database will give it the ability to inspect redundancy - are these two people connected on all three services or just one? - and therefore far greater certainty about which contacts really matter and to whom. The social graph that emerges from this exercise will be smaller because duplicates will have been merged, but far more accurate. The "pivot" does, however, look like it might enable Facebook to wriggle out from under some of its numerous problems - uh, "challenges". The calls for regulation and content moderation focus on the newsfeed. "We have no way to see the content people write privately to each other" ends both discussions, quite possibly along with any liability Facebook might have if the EU's copyright reform package passes with Article 11 (the "link tax") intact.
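Here is a minimal sketch of that redundancy check (invented data, nothing to do with Facebook's actual systems): merge each service's contact graph and count how many services confirm each connection:

    # Hypothetical sketch: merging contact graphs across three services
    # and scoring each connection by how many services confirm it.

    from collections import Counter

    facebook  = {("ann", "bo"), ("ann", "cy")}
    instagram = {("ann", "bo"), ("bo", "cy")}
    whatsapp  = {("ann", "bo"), ("ann", "cy")}

    weights = Counter()
    for graph in (facebook, instagram, whatsapp):
        for pair in graph:
            weights[tuple(sorted(pair))] += 1

    for pair, count in weights.most_common():
        print(pair, count)  # ('ann', 'bo') 3 / ('ann', 'cy') 2 / ('bo', 'cy') 1

The merged graph holds three edges where the separate graphs held six entries, but each surviving edge now carries a confidence score.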

Even calls that the company should be broken up - appropriate enough, since the EU only approved Facebook's acquisition of WhatsApp when the company swore that merging the two databases was technically impossible - may founder against a unified database. Plus, as we know from this week's revelations, the politicians calling for regulation depend on it for re-election, and in private they accommodate it, as Carole Cadwalladr and Duncan Campbell write at the Guardian and Bill Goodwin writes at Computer Weekly.

Overall, then, no real change.


Illustrations: The international Parliamentary committee, with Mark Zuckerberg's empty seat.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 28, 2019

Systemic infection

2001-hal.png"Can you keep a record of every key someone enters?"

This question brought author and essayist Ellen Ullman up short when she was still working as a software engineer and it was posed to her circa 1996. "Yes, there are ways to do that," she replied after a stunned pause.

In her 1997 book Close to the Machine, Ullman describes the incident as "the first time I saw a system infect its owner". After a little gentle probing, her questioner, the owner of a small insurance agency, explained that now that he had installed a new computer system he could find out what his assistant, who had worked for him for 26 years and had picked up his children from school when they were small, did all day. "The way I look at it," he explained, "I've just spent all this money on a system, and now I get to use it the way I'd like to."

Ullman appeared to have dissuaded this particular business owner on this particular occasion, but she went on to observe that over the years she saw the same pattern repeated many times. Sooner or later, someone always realizes that the systems they have commissioned for benign purposes can be turned to making checks and finding out things they couldn't know before. "There is something...in the formal logic of programs and data, that recreates the world in its own image," she concludes.

I was reminded of this recently when I saw a report at The Register that the US state of New Jersey, along with two dozen others, may soon require any contractor working on a contract worth more than $100,000 to install keylogging software to ensure that they're actually working all the hours - one imagines that eventually, it will be minutes - they bill for. Veteran reporter Thomas Claburn goes on to note that the text of the bill was provided by TransparentBusiness, a maker of remote work management software, itself part of a growing trend.

Speaking as a taxpayer, I can see the point of ensuring that governments are getting full value for our money. But speaking as a freelance writer who occasionally has had to work on projects where I'm paid by the hour or day (a situation I've always tried to avoid by agreeing a rate for the whole job), the distrust inherent in such a system seems poisonous. Why are we hiring people we can't trust? Most of us who have taken on the risks of self-employment do so because one of the benefits is autonomy and a certain freedom from bosses. And now we're talking about the kind of intensive monitoring that in the past has been reserved for full-time employees - and that none of them have liked much either.

One of the first sectors already fighting its way through this kind of transition is trucking. In 2014, Cornell sociologist Karen Levy published the results of three years of research into the arrival of electronic monitoring in truckers' cabs as a response to safety concerns. For truckers, whose cabs are literally their part-time homes, electronic monitoring is highly intrusive; effectively, the trucking company is installing a camera and other sensors not just in their office but also in their living room and bedroom. Instead of using electronics to try to change unsafe practices, she argues, alter the economic incentives. In particular, she finds that the necessity of making a living at low per-mile rates pushes truckers to squeeze the unavoidable hours of unpaid work - waiting for loading and unloading, for example - into their statutory hours of "rest".

The result sounds like it would be familiar to Uber drivers or modern warehouse workers, even if Amazon never deploys the wristbands it patented in 2016. In an interview published this week, Data & Society researcher Alex Rosenblat outlines the results of a four-year study of ride-hail drivers across the US and Canada. Forget the rhetoric that these drivers are entrepreneurs, she writes; they have a boss, and it's the company's algorithm, which dictates their on-the-job behavior and withholds the data they need to make informed decisions.

If we do nothing, this may be the future of all work. In a discussion last week, University of Leicester associate professor Phoebe Moore located "quantified work" at the intersection of two trends: first, the health-oriented quantified-self movement, and second, the succeeding waves of workplace management from industrialization through time and motion study, scientific management, and today's organizational culture, where, as Moore put it, we're supposed to "love our jobs and identify with our employer". The first of these has led to "wellness" programs that, particularly in the US, helped grant employers access to vastly more detailed personal data about their employees than has ever been available to them before.

Quantification, the combination of the two trends, Moore warns at Medium, will alter the workplace's social values by tending to pit workers against each other, racetrack style. Vendors now claim predictive power for AI: which prospective employees fit which jobs, or when staff may be about to quit or take sick leave. One can, as Moore does, easily imagine that, despite the improvements AI can bring, the AI-quantified workplace will be intensely worker-hostile. The infection continues to spread.


Illustrations: HAL, from 2001: A Space Odyssey (1968).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.