
July 21, 2017

The long arm of the law

It's a little over three years since the European Court of Justice held in Google Spain v AEPD and Mario Costeja González that in some circumstances Google must honor requests to deindex search hits based on a person's name. In data protection terms, the court ruled that Google is a data controller, and as such subject to the EU's data protection laws. At the time, the decision outraged many American internet experts, who saw (and still see) it as an unacceptable abrogation of free speech. European privacy advocates were more likely to see it as a court balancing two fundamental rights - the right to privacy and the right to freedom of expression - against each other and finding a compromise. Some also argued that the decision redressed the imbalance of power between individuals and a large corporation, which led them to ask in puzzlement: isn't self-reinvention an American tradition?

In these cases the main search engine - which means, uniquely, Google, because it has a 91% share of the European search engine market - sits right in the crosshairs. (As Bas van der Beld writes at Search Engine Land, Europe really needs competition.) Because of that level of dominance, what Google's algorithm chooses to display, and in what order, has a profound effect not only on businesses but on individuals.

Once the court ruled that Google met the legal definition of a data controller, to a non-lawyer the rest appears to follow. That key decision is, nonetheless, controversial: the court's own advocate-general, Niilo Jääskinen, had advised otherwise, a recommendation the court chose to ignore. However, part of the advocate-general's argument rested on the fact that the right to be forgotten was not, at the time, part of the law. As of May 25, 2018, it will be.

It is unhelpful to talk about this in black-and-white terms as censorship. Unlike the UK's current relentless pursuit of restrictions on access to various types of material, the court did not rule that the underlying material should be removed, blocked, or made subject to other types of access restrictions. There is also no prohibition on having the pages in question pop up in response to other types of searches - that is, searches not on the person's name. It's also unhelpful to paint the situation as one that helps wealthy criminals and corrupt politicians hide evidence of their misdeeds: Google has rejected nearly 60% of the requests it has received, on grounds that include the public interest. That said, transparency will continue to be crucial to ensuring that the system isn't abused in that way.

After the inevitable teething problems while Google struggled with a rush of pent-up demand, things went somewhat quiet, although the right to be forgotten did get some airplay as an element of the General Data Protection Regulation, which was passed last year. This year, however, the French data protection watchdog, the Commission Nationale de l'Informatique et des Libertés (CNIL), kicked the issue back into the rotation. Google has been removing these hits - as the link above shows, it has deindexed 43.2% of the requests it has received - only from the search results seen by visitors based in the EU; CNIL told Google they must be removed from all its sites worldwide. Google appealed, and this week, as Ars Technica's Kelly Fiveash writes, it was announced that the case will be heard by the European Court of Justice, which is expected to decide whether such results should be delisted globally, on a country-by-country basis, or across the EU.

Each of these options poses problems. Deindexing search results country-by-country, or even across the EU, is easily circumvented. Deindexing them globally raises the question of jurisdictional boundaries, previously seen most notably in the area of surveillance legislation and access to data on foreign servers. Like the issue of who gets to see whose data on distant servers, the question of how companies obey deindexing rulings is just one in a long series of demarcation disputes that will extend through most of our lifetimes as governments squabble over how far across the world their authority extends.

A second big issue - which Jääskinen raised in his opinion - is devolving responsibility for deciding what to remove into the hands of private companies. The problems with this prospect also feature in the UK discussions about getting social media companies to "do more" to remove extremist material, hate speech, and other undesirable content. Obviously the alternative, in which the government makes these decisions, is even worse. Jääskinen also suggested that the volume of requests would become unmanageable. Experience over the last three years indicates otherwise: Google's transparency report, linked above, shows a spike at the beginning followed by a dramatic drop-off and a relatively flat trajectory thereafter.

Costeja was a messy and controversial decision. The ECJ's decision to hear this case gives it a chance to review and revise its thinking. However, it will not be able to solve the fundamental problems: the power struggle between global data services and national governments and the direct clash between European fundamental privacy rights and the US's First Amendment. Most likely, it will contain something to offend everyone.

Illustrations: European Court of Justice (Cédric Puisney); Niilo Jääskinen.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 14, 2017

The harvest

In 1957, the BBC's flagship current affairs program, Panorama, broadcast a story about that year's extraordinarily bountiful spaghetti harvest, attributed to the "virtual disappearance of the spaghetti weevil" (it says here in Wikipedia). It was, of course, an April 1 hoax, and apparently up there with the 1938 War of the Worlds radio broadcast if it's still being pulled out in 2017 as a pertinent precursor to a knotty modern problem, as Baroness Patience Wheatcroft did yesterday at a Westminster Forum discussion of fake news (PDF). In any case, it appears that national unfamiliarity with that foreign muck called pasta meant that many people believed it and called in asking how to grow their own spaghetti trees.

Parts of the discussion proceeded along familiar lines. Some things pretty much everyone agreed on. Such as: fake news is not new. Skeptics have been fighting this stuff for years. There has long been much more money in publishing stories promoting miracles than there ever will be in debunking them. Even if belief in spaghetti trees has died in the face of greater familiarity with the product, hoaxes are perennially hard to kill. In 1862 Mark Twain found that out, and in the 1980s so did science fiction critic David Langford.

Everyone also converged on a consistent meaning of "fake news", even though really it's a spectrum whose boundaries are as smudged as Wimbledon's baselines this week. People publish stories that aren't true for all kinds of reasons - satire, parody, public education, journalistic incompetence - but the ones everyone is exercised about are stories that are intentionally false and are distributed for political or financial gain. The discussion left a slight gap there, in that doing so just for the lulz doesn't have a fully political purpose and yet is a very likely scenario. But close enough.

Skeptics' experience shows that every strategy you adopt for identifying genuine information will be emulated by others seeking to promote its opposite: you have scientists, they have scientists. We know this from the history of Big Tobacco and Big Oil. This week, Google was accused of funding research favorable to its interests in copyright, antitrust law, privacy, and information security, a report Google calls misleading.

Similar problems apply to the item everyone thought had to form part of the solution: teach digital literacy. Many suggested it should form part of the primary school curriculum, and sure, let's go for it, but human beings teach these things. Given that political polarization has reached the point where Fox News viewers and New York Times readers cannot agree on even the most basic of facts about, say, climate change or American health care, what principles do you give kids by which to determine whom to believe? What does a creationist teach kids about judging science stories? Wikipedia ought to be the teacher's friend because its talk pages lay out in detail how every page was built and curated; instead, for years many have told kids to avoid "unreliable" Wikipedia in favor of using a search engine to find better information. The result: they trust Google without understanding how it works. A more subtle problem of provenance was raised by Matt Tee, the CEO of the Independent Press Standards Organisation, who said that on social media platforms, particularly Facebook, all news stories look alike, no matter where they're from. More startling was the claim by Adblock Plus's Laura Sophie Dornheim that ad blockers can help by interfering with the business model of clickbait farms. To an audience seeking solutions, but to whom the loss of advertising revenue was an important part of the problem, she was a disturbing bit of precipitate.

Inevitably there was discussion of regulation. Leaving aside whether these companies are platforms, publishers, or some kind of hybrid, the significant gap in this and most other discussions is the how. The image in our minds matters; for the foreseeable future this won't be solved by computers. Instead, as Annalee Newitz recently reported in Ars Technica, the world's social media content raters are humans, many of them in countries like India, where Adrian Chen and Ciaran Cassidy followed a two-week rating training course, and the Philippines. Observes an unidentified higher-up, "You definitely need man behind the machines."

This is what efforts to control fake news - a vastly more complex problem - will also look like. GAFAT et al. may be forced to hire expensive journalists and scholars to figure out what the rules for identifying fake news should be, but ultimately those rules will be put into practice by an army of subcontractors far removed from the "us" who are being protected from it. There are bound to be unintended consequences.

Fake news is yet another way that our traditional democratic values are under threat. Even small terrorist attacks have provided justification for putting into place a vast surveillance framework that's chipped away at our values of privacy and the right to travel freely. Everyone yesterday was conscious of the threat to freedom of expression that attempts to disappear fake news may represent. But, like computer security, fake news is an arms race: those intent on financial gain and political disruption will attempt to turn every new system to their advantage. Computer scientists cannot solve today's security problems without consulting many other disciplines; the same will prove true of the journalists, media professionals, and scholars who are fretting about our very human tendency to go "Ooh, shiny!" at entertaining lies while putting off reading sober truths.


Illustrations: Spaghetti harvest; wheat weevil (Sarefo).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.