" /> net.wars: June 2018 Archives


June 28, 2018

Divergence

Last September, Open Markets Institute director of legal policy Lina M. Khan published a lengthy law review article discussing two related topics: American courts' narrowing of antitrust law to focus on pricing and profits rather than on promoting competition and balancing market power, and the application of that approach to Amazon in particular. The US's present conception of antitrust law, she writes - and we see in action - provides no lens through which to curb the power of data-driven platforms. If cheaper is always better, what can possibly be wrong with free?

This week, the US Supreme Court provided another look at this sort of reasoning when it issued its opinion in Ohio v. American Express. The short version, as Henry Farrell explains on Twitter: SCOTUS sided with American Express, and in doing so made it harder to bring antitrust action against the big technology companies in the US *and* widened the gulf between the EU and US approaches to such things.

As Erik Hovenkamp explains in a paper analyzing the case (PDF), credit card transactions, like online platforms, are multi-sided markets. That is, they connect two distinguishable groups of customers whose relationships with the platform are markedly different. For Facebook and Google, these groups are consumers and advertisers; for credit card companies they're consumers and merchants. The intermediary platforms make their money by taking a small slice of each transaction. Credit card companies' slice is a percentage of the value of the transaction, paid directly by the merchant and indirectly by card holders through fees and interest; social media's slice is the revenue from the advertising it can display when you interact with your friends. Historically, American Express has charged merchants a higher commission than other cards, money the merchant can't recoup by selectively raising prices. Network effects - the fact that the more people use these platforms, the more useful they are to users - mean all of them benefit hugely from scale.

Ohio v. American Express began in 2010, when the US Department of Justice, eventually joined by 17 states, filed a civil antitrust suit against American Express, Visa, and Mastercard. At issue were their "anti-steering" clauses, merchant contract provisions that barred merchants from steering customers toward cheaper (to the merchant) forms of payment.

Visa and Mastercard settled and removed the language. In 2015, the District Court ruled in favor of the DoJ. American Express then won in the 2nd Circuit Appeals Court in 2016; 11 of the states appealed. Now, SCOTUS has upheld the circuit court, and the precedent it sets, Beth Farmer writes at SCOTUSblog, suggests that plaintiffs in future antitrust cases covering two-sided markets will have to show that both sides have suffered harm in order to succeed. Applied to Facebook, this judgment would appear to say that harm to users (the loss of privacy) or to society at large (gamed elections) wouldn't count if no advertisers were harmed.

Farrell goes on to note the EU's very different tack, exemplified by last year's fine against Google for abusing its market dominance. Americans also underestimate the importance of Max Schrems's case against Google, Instagram, WhatsApp, and Facebook, launched the day the General Data Protection Regulation came into force. For 20 years, American companies have tried to cut a deal with European data protection law, but, as Simon Davies warned in 1999, this is no more feasible than Europeans trying to cut a deal with the US 1st Amendment.

Schrems's case is that the all-or-nothing approach ("give us your data or go away") does not constitute the meaningful consent the law requires. In its new report, Deceived by Design, the Norwegian Consumer Council finds plenty of evidence to back up this contention. After studying the privacy settings provided by Windows 10, Facebook, and Google, the NCC argues that the latter two in particular deliberately make it easier for users to accept the most intrusive options than to opt out of them; they also use "nudge" techniques to stress the benefits of accepting the intrusive defaults.

"This is not privacy by default," the authors conclude after showing that opting into Facebook's most open privacy setting requires only a single click, while opting out requires foraging through "Manage your privacy settings". These are dark patterns - nudges intended to mislead users into doing things they wouldn't normally choose. In advising users to turn on photo tagging, for example, Facebook implies that choosing otherwise will harm the visually impaired using screenreaders, a technique the Dark Patterns website calls confirmshaming. Five NGOs have written to EU Data Protection Board chair Andrea Jelinek to highlight the report and asking her to investigate further.

The US has nothing comparable to GDPR on which to base regulatory action, even if it were inclined to take any, and the American Express case makes clear that it has little interest in applying antitrust law to curb market power. For now, the EU is the only region or government large enough to push back and make it stick. The initial response from some American companies - ghosting everyone in the EU - is unlikely to be sufficient. It's hard to see a reconciliation of these diverging approaches any time soon.

Illustrations: Silicon Valleyopoly game, 1996.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 22, 2018

Humans

One of the problems in writing about privacy over the last nearly 30 years is that it's easy for many people to see it as a trivial concern compared with what's going on in the world: terrorist attacks, economic crashes, and the rise of extremism. To many, the case for increasing surveillance "for your safety" is a reasonable one.

I've never believed the claim that people - young or old - don't care about their privacy. People do care about their privacy, but, as previously noted, it's complicated. The biggest area of agreement is money: hardly anyone publishes the details of their finances unless forced. But beyond that, people have different values about what is private, and who should know it. For some women, saying openly they've had abortions is an essential political statement to normalize a procedure and a choice that is under threat. For others, it's too personal to disclose.

The factors involved vary: personality, past experience, how we've been treated, circumstances. It is easy for those of us who were born into economic prosperity and have lived in sectors of society where the governments in our lifetimes have treated us benignly to underestimate the network externalities of the decisions we make.

In February 2016, when the UK's Investigatory Powers Act (2016) was still a mere bill under discussion, I wrote this:

This column has long argued that whenever we consider granting the State increased surveillance powers we should imagine life down the road if those powers are available to a government less benign than the present one. Now, two US 2016 presidential primaries in, we can say it thusly: what if the man wielding the Investigatory Powers Bill is Donald Trump?

Much of the rest of that net.wars focused on the UK bill and some aspects of the data protection laws. However, it also included this:

Finally, Privacy International found "thematic warrants" hiding in paragraph 212 of the explanatory notes and referenced in clauses 13(2) and 83 of the draft bill. PI calls this a Home Office attempt to disguise these as "targeted surveillance". They're so vaguely defined - people or equipment "who share a common purpose who carry on, or may carry on, a particular activity" - that they could include my tennis club. PI notes that such provisions contravene a long tradition of UK law that has prohibited general warrants, and directly conflict with recent rulings by the European Court of Human Rights.

It's hard to guess who Trump would turn this against first: Muslims, Mexicans, or Clintons.

The events of the last year and a half - parents and children torn apart at the border; the Border Patrol operating an 11-hour stop-and-demand-citizenship checkpoint on I-95 in Maine, legal under the 1953 rule that the "border" is a 100-mile swath in which the Fourth Amendment is suspended; and, well, you read the news - suggest the question was entirely fair.

Now, you could argue that universal and better identification could stop this sort of thing by providing the facility to establish quickly and unambiguously who has rights. You could even argue that up-ending the innocent-until-proven-guilty principle (being required to show papers on demand presumes that you have no right to be where you are until you prove you do) is worth it (although you'd still have to fight an angry hive of constitutional lawyers). I believe you'd be wrong on both counts. Identification is never universal; there are always those who lack the necessary resources to acquire it. The groups that wind up disenfranchised by such rules are the most vulnerable members of the groups that are suffering now. It won't even deter those who profit from spreading hate - and yes, I am looking at the Daily Mail - from continuing to do so; they will merely target another group. The American experience already shows this. Despite being a nation of immigrants, Americans are taught that their own rights matter more than other people's; and as Hua Hsu writes in a New Yorker review of Nancy Isenberg's recent book, White Trash, that same view is turned daily on the "lower" parts of the US's classist and racist hierarchy.

I have come to believe that there is a causative link between violating people's human rights and the anti-privacy values of surveillance and control. The more horribly we treat people and the less we offer them trust, the more reason we have to think that they and their successors will want revenge - guilt and the expectation of punishment operating on a nation-state scale. The logic then dictates that they must be watched even more closely. The last 20 years of increasing inequality have caused suspicion to burst the banks of "the usual suspects". "Privacy" is an inadequate word to convey all this, but it's the one we have.

A few weeks ago, I reminded a friend of the long-running mantra that if you have nothing to hide you have nothing to fear. "I don't see it that way at all," he said. "I see it as, I have nothing to hide, so why are you looking at me?"


Illustrations: 'Holy Mary full of grace, punch that devil in the face', book of hours ('The De Brailes Hours'), Oxford, ca. 1240, BL Add 49999, fol. 40v (via Discarding Images).


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 15, 2018

Old thinking

"New technology is doing very old work," Center for Media Justice executive director Malkia Cyril told Computers, Freedom, and Privacy in 2015, reminding the mostly white, middle-class attendees that the level of surveillance that was newly outraging them has long been a fact of life for African-American and Muslim communities in the US.

Last week, at the Personal Democracy Forum, Matt Mitchell, the founder of Crypto Harlem, made a similar point in discussing menacing cities (start at 1:09:00), his term for what are more often called "smart cities". A particularly apt example was his discussion of Google's plans for Toronto's waterfront. The company's mock-up shows an old woman crossing the street very slowly while reactive traffic lights delay turning green to give her time to get across. Now, my own reaction is to think that all the kids and self-important people in the neighborhood will rapidly figure out there's a good game in blocking the cars from ever getting through. Mitchell's is to see immediately that the light controls can see people and watch their movements. He is certainly right, and he would rightly be hard to convince that the data will only be used for everyone's good.

Much has already been published about bias in what technology companies call predictive policing and what Stop LAPD Spying Coalition leader Hamid Khan, at that same CFP in 2015, called "speculative policing". Mitchell provided background that, as he said, is invisible to those who don't live near US public housing apartment blocks. I have never seen, as he has, police vans parked outside the houses on my street in order to shoot lights into the windows at night on the basis that darkness fosters crime. "You will never see it if you don't walk through these neighborhoods." It is dangerous to assume it will never happen to you.

"Smart" - see Bruce Sterling's completely correct rant about this terminology - cities would embed all this inside the infrastructure and make the assumptions behind it and its operations largely invisible. The analogy that occurs to mind is those elevators you encounter in "smart buildings" where the buttons to choose your floor are all on the outside. It optimizes life for the elevators, but for a human rider it's unnerving to find no controls except buttons to open the doors or sound an alarm. Basically, agency ends at the elevator door. The depressing likelihood is that this is how smart cities will be built, too. Cue Sterling: "The language of Smart City is always Global Business English, no matter what town you're in."

"Whose security?" Royal Holloway professor Lizzie Coles-Kemp often asks. Like Mitchell, she works with underserved communities - in her case, in Britain, with families separated by prison, and the long-term unemployed. For so many, "security" means a hostile system designed as if they are attackers, not users. There is a welcome trend of academics and activists working along these lines: David Corsar is working with communities near Aberdeen to understand what building trust into the Internet of Things means to them; at Bucknell University in Pennsylvania Darakhshan Mir is leading a project on community participation in privacy decisions. Mitchell himself works to help vulnerable communities protect themselves against surveillance.

Technological change is generally sold to us as advances: more efficient, safer, fairer, and cheaper. Deployment offers the chance to reimagine processes. Yet so often the underpinning thinking goes unexamined. In a tiny example at this week's EEMA conference, a speaker listed attributes banks use to digitally identify customers. Why is gender among them, asked one woman. The speaker replied that the system was GDPR-compliant. Not the point: why use an attribute that for an increasing number of people is fluid? (My own theory is that in the past, formalities meant staff needed gender in order to know how to address you in a letter, and now everyone collects it because they always have.)

Bigger examples have long been provided by David Alexander, the co-founder of Mydex, a community interest company that has been working on personal data stores for the last decade-plus. In behavior Alexander has dubbed "organizational narcissism", services claim to be "user-centric" when they're really "producer-centric". Despite the UK government's embrace of "digital", for example, it still retains the familiar structure in which it is the central repository of data and power, while we fill out its forms to suit its needs. At EEMA, Microsoft's architect of identity, Kim Cameron, was also talking about moving control into our hands. Microsoft (and Cameron) has been working on this for more than 15 years, first as CardSpace (canceled 2011), then as U-Prove (silent since 2014). Another push seems to be imminent, but it's a slow, hard road to up-end the entrenched situation. What "disruption" is this if the underlying structures remain the same?

Back to Mitchell, who had just discussed the normalization of whiteness as the default when black people are omitted from machine learning training datasets: "Every time someone tells you about an amazing piece of technology, you have to remember there's an equally amazing horrible thing inside of it and if we don't train ourselves to think this way we're going to end up in a bad future."


Illustrations: "Old woman" crosses the street in Sidewalk Labs' plans for Toronto's waterfront.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 8, 2018

Block that metaphor

My favourite new term from this year's Privacy Law Scholars conference is "dishonest anthropomorphism". The term appeared in a draft paper written by Brenda Leung and Evan Selinger as part of a proposal for its opposite, "honest anthropomorphism". The authors' goal was to suggest a taxonomy that could be incorporated into privacy by design theory and practice, so that as household robots are developed and deployed they are less likely to do us harm. Not necessarily individual "harm" as in Isaac Asimov's Laws of Robotics, which tended to see robots as autonomous rather than as projections of their manufacturers into our personal space, thereby glossing over this more intentional and diffuse kind of deception. Pause to imagine that Facebook goes into making robots and you can see what we're talking about here.

"Dishonest anthropomorphism" derives from an earlier paper, Averting Robot Eyes by Margo Kaminski, Matthew Rueben, Bill Smart, and Cindy Grimm, which proposes "honest anthropomorphism" as a desirable principle in trying to protect people from the privacy problems inherent in admitting a robot, even something as limited as a Roomba, into your home. (At least three of these authors are regular attendees at We Robot since its inception in 2012.) That paper categorizes three types of privacy issues that robots bring: data privacy, boundary management, and social/relational.

The data privacy issues are substantial. A mobile phone or smart speaker may listen to or film you, but it has to stay where you put it (as Smart has memorably put it, "My iPad can't stab me in my bed"). Add movement and processing, and you have a roving spy that can collect myriad kinds of data to assemble an intimate picture of your home and its occupants. "Boundary management" refers to capabilities humans may not realize their robots have and therefore don't know to protect themselves against - thermal sensors that can see through walls, for example, or eyes that observe us even when the robot is apparently looking elsewhere (hence the title).

"Social/relational" refers to the our social and cultural expectations of the beings around us. In the authors' examples, unscrupulous designers can take advantage of our inclination to apply our expectations of other humans to entice us into disclosing more than we would if we truly understood the situation. A robot that mimics human expressions that we understand through our own muscle memory may be highly deceptive, inadvertently or intentionally. Robots may also be given the capability of identifying micro-reactions we can't control but that we're used to assuming go unnoticed.

A different session - discussing research by Marijn Sax, Natali Helberger, and Nadine Bol - provided a worked example, albeit one without the full robot component: mobile health apps. Most of these are obviously aimed at encouraging behavioral change - walk 10,000 steps, lose weight, do yoga. What the authors argue is that they are more aimed at effecting economic change than at encouraging health, an aspect often obscured from users. Quite apart from the wrongness of using an app marketed to improve your health as a vector for potentially unrelated commercial interests, the health framing itself may be questionable. For example, the famed 10,000 steps some apps push you to take daily has no evidential basis in medicine: the number was likely picked as a Japanese marketing term in the 1960s. These apps may also be quite rigid; in one case that came up during the discussion, an injured nurse found she couldn't adapt the app to help her follow her doctor's orders to stay off her feet. In other words, they optimize one thing, which may or may not have anything to do with health or even health's vaguer cousin, "wellness".

Returning to dishonest anthropomorphism, one suggestion was to focus on abuse rather than dishonesty; there are already laws that bar unfair practices and deception. After all, the entire discipline of user design is aimed at nudging users into certain behaviors and discouraging others. With more complex systems, even if the aim is to make the user feel good it's not simple: the same user will react differently to the same choice at different times. Deciding which points to single out in order to calculate benefit is as difficult as trying to decide where to begin and end a movie story, which the screenwriter William Goldman has likened to deciding where to cut a piece of string. The use of metaphor was harmless when we were talking about desktops and filing cabinets; much less so when we're talking about a robot cat that closely emulates a biological cat and leads us into the false belief that we can understand it in the same way.

Deception is becoming the theme of the year, perhaps partly inspired by Facebook and Cambridge Analytica. It should be a good thing. It's already clear that neither the European data protection approach nor the US consumer protection approach will be sufficient in itself to protect privacy against the incoming waves of the Internet of Things, big data, smart infrastructure, robots, and AI. As the threats to privacy expand, the field itself must grow in new directions. What made these discussions interesting is that they're trying to figure out which ones.

Illustrations: Recreation of oldest known robot design (from the Ancient Greek Technology exhibition)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 1, 2018

The three IPs

Against last Friday's date, history will record two major European events. The first, as previously noted, is the entry into force of the General Data Protection Regulation, which is currently inspiring a number of US news sites to block Europeans. The second is the amazing Irish landslide vote to repeal the 8th amendment to the country's constitution, which barred legislators from legalizing abortion. The vote led the MEP Luke Ming Flanagan to comment, "I always knew voters were not conservative - they're just a bit complicated."

"A bit complicated" sums up nicely most people's views on privacy; it captures perfectly the cognitive dissonance of someone posting on Facebook that they're worried about their privacy. As Merlin Erroll commented, terrorist incidents help governments claim that giving them enough information will protect you. Countries whose short-term memories include human rights abuses set their balance point differently.

The occasion for these reflections was the 20th birthday of the Foundation for Information Policy Research. FIPR head Ross Anderson noted on Tuesday that FIPR isn't a campaigning organization, "But we provide the ammunition for those who are."

Led by the late Caspar Bowden, FIPR was most visibly activist in the late 1990s lead-up to the passage of the now-replaced Regulation of Investigatory Powers Act (2000). FIPR in general and Bowden in particular were instrumental in making the final legislation less dangerous than it could have been. Since then, FIPR helped spawn the 15-year-old European Digital Rights and UK health data privacy advocate medConfidential.

Many speakers noted how little the debates have changed, particularly regarding encryption and surveillance. In the case of encryption, this is partly because mathematical proofs are eternal, and partly because, as Yes, Minister co-writer Antony Jay said in 2015, large organizations such as governments always seek to impose control. "They don't see it as anything other than good government, but actually it's control government, which is what they want." The only change, as Anderson pointed out, is that because today's end-to-end connections are encrypted, the push for access has moved to people's phones.

Other perennials include secondary uses of medical data, which Anderson debated in 1996 with the British Medical Association. Among significant new challenges, Anderson, like many others, noted the problems of safety and sustainability. The need to patch devices that can kill you changes our ideas about the consequences of hacking. How do you patch a car over 20 years? he asked. One might add: how do you stop a botnet of pancreatic implants without killing the patients?

We've noted here before that built infrastructure tends to attract more of the same. Today, said Duncan Campbell, 25% of global internet traffic transits the UK; Bude, Cornwall remains the critical node for US-EU data links, as in the days of the telegraph. As Campbell said, the UK's traditional position makes it perfectly placed to conduct global surveillance.

One of the most notable changes in 20 years: there were no fewer than two speakers whose open presence would once have been unthinkable - Ian Levy, the technical director of the National Cyber Security Centre, the defensive arm of GCHQ, and Anthony Finkelstein, the government's chief scientific advisor for national security. You wouldn't have seen them even ten years ago, when GCHQ was deploying its Mastering the Internet plan, known to us courtesy of Edward Snowden. Levy made a plea to get away from the angels-versus-demons school of debate.

"The three horsemen, all with the initials 'IP' - intellectual property, Internet Protocol, and investigatory powers - bind us in a crystal lattice," said Bill Thompson. The essential difficulty he was getting at is that it's not that organizations like Google DeepMind and others have done bad things, but that we can't be sure they haven't. Being trustworthy, said medConfidential's Sam Smith, doesn't mean you never have to check the infrastructure but that people *can* check it if they want to.

What happens next is the hard question. Onora O'Neill suggested that our shiny, new GDPR won't work, because it's premised on the no-longer-valid idea that personal and non-personal data are distinguishable. Within a decade, she said, new approaches will be needed. Today, consent is already largely a façade; true consent requires understanding and agreement.

She is absolutely right. Even today's "smart" speakers pose a challenge: where should my Alexa-enabled host post the privacy policy? Is crossing their threshold consent? What does consent even mean in a world where sensors are everywhere and how the data will be used, and by whom, may be murky? Many of the laws built up over the last 20 years will have to be rethought, particularly as connected medical devices pose new challenges.

One of the other significant changes will be the influx of new and numerous stakeholders whose ideas about what the internet is are very different from those of the parties who have shaped it to date. The mobile world, for example, vastly outnumbers us; the Internet of Things is being developed by Asian manufacturers from a very different culture.

It will get much harder from here, I concluded. O'Neill was not content with that: it's not enough, she said, to point out problems. We must propose at least the bare bones of solutions.


Illustrations: 1891 map of telegraph lines (via Wikimedia)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.