
Duty of care

Anyone who is surprised that Google scans incoming and outgoing email hasn't been paying attention. That is the service's raison d'être: crunch user data, sell ads. You may think of Google as a search engine (or a map service or an email service) and its researchers may have lofty ambitions to change the world, but, folks, the spade is an ad agency. So is Facebook.

The news that emerged this week, however, gave pause even to people who understood this as early as 2004, when Gmail first put out its Beta flag and seductively waved 1GB of storage. To wit: a Gmail user was arrested when an automated scan noted the arrival in his inbox of a child sexual abuse image. This is the 2014 equivalent of what happened to Gary Glitter in 1997: he handed in a PC for repair and got back an arrest warrant (and ultimately a conviction) when PC World's staff saw the contents of his hard drive. US federal law requires services to report suspected child sexual abuse when instances are found - but not to proactively scan for them.

So, the question: is active scrutiny of users' private data looking for crimes properly Google's job? Or any other company's? It quickly emerged that the technology used to identify the images was invented by Microsoft and donated to the (US) National Center for Missing and Exploited Children. It relies on calculating a mathematical hash for each image in a user's account and comparing it to the entries in a database of known images that have been ruled illegal by experts.
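In outline, that kind of matching is simple. Here is a minimal sketch of the general idea in Python; the function names and the known-hash set are illustrative, and SHA-256 stands in for the real fingerprinting step, since Microsoft's actual system uses a robust perceptual hash that survives resizing and re-encoding rather than a cryptographic hash that only matches byte-identical files.

    import hashlib

    # Hypothetical stand-in for the expert-curated database of hashes of
    # images already ruled illegal. In practice this list is maintained by
    # organizations such as NCMEC, not assembled by the email provider.
    KNOWN_ILLEGAL_HASHES = {
        # "e3b0c44298fc1c149afbf4c8996fb924...",  # example placeholder entry
    }

    def image_fingerprint(image_bytes: bytes) -> str:
        # SHA-256 used purely for illustration; the deployed systems use a
        # perceptual ("robust") hash so that trivially altered copies still match.
        return hashlib.sha256(image_bytes).hexdigest()

    def matches_known_image(attachment_bytes: bytes) -> bool:
        # Compare the attachment's fingerprint against the known-image database.
        return image_fingerprint(attachment_bytes) in KNOWN_ILLEGAL_HASHES

The point of the design is that the provider never has to "look at" the image in any human sense: it only checks whether a fingerprint appears in a pre-vetted list.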

Child sexual abuse images are the one type of material about which there is near-universal agreement. They are illegal in almost all countries, and deciding what constitutes such an image is comparatively clear-cut. The database of hashes has presumably been assembled labor-intensively, much the way that the Internet Watch Foundation does it. That is, manually, based on reports from the public, after examination by experts. Many of the fears that are being expressed about Gmail's scanning are the same ones heard in 1996, when the IWF was first proposed. Eighteen years later, there have been only a very few cases (see Richard Clayton's paper discussing them (PDF)) in which an IWF decision has become known and controversial.

The surprise is that Google has chosen to be proactive. Throughout the history of the Internet, most service providers have persistently argued that sheer volume means they cannot realistically police user-generated content. Since the days of Scientology versus the Net, the accepted rule has been "notice and takedown". Google resisted years of complaints from rights holders before finally agreeing in 2012 to demote torrent sites. More recently, in Google v. Spain before the European Court of Justice, Google argued that its activities do not amount to data processing; elsewhere it has claimed its search results are the equivalent of editorial judgments and protected by the First Amendment.

Both the ContentID system that Google operates on YouTube and the scanning system we've just learned about are part of the rise of automated policing, which I suppose began with speed cameras. The issues with ContentID are well-known: when someone complains, take it down. If no one objects, do nothing more. Usually, the difficulty of getting something taken off the blocked list is not crucial; occasionally - such as during the 2012 Democratic National Convention - it causes real, immediate damage. Less obvious is the potential for malicious abuse.

Cut to the mid-1990s, when Usenet was still the biggest game in town. An email message arrived in my inbox one day saying that based on my known interest (huh?) it was offering me the opportunity to buy some kind of child pornography from a named person at a specified Brooklyn street address. The address and phone number looked real; there may even have been some prices mentioned. I thought it was unusual for spam, and wondered whether the guy mentioned in it was a real person. I dismissed it as weird spam. When I mentioned it to a gay friend with a government job, you could practically hear the blood drain from his face over the phone. He was *terrified* such a thing would land in *his* inbox and it would be believed. And he'd be fired. And other terrible things would happen to him.

The scenario seemed far-fetched at the time, but less so today. Given the number of data breaches and hacked email accounts, it would not be difficult for the appropriately skilled to take an innocent individual out of action by loading up their account with the identifiably wrong sort of images. There may well be solutions to that - for example, scanning only images people send and not the ones they receive - but you can only solve the problem if you know the system exists. Which, until this week, we didn't.

On Sunday, at Wikimania, I'm moderating a panel on democratic media. The organizers likely had in mind citizen journalism and freedom of expression. The scanning discovery casts the assignment in a new light: shouldn't part of democracy be discussing how far we want our media companies to act as policemen? What is their duty of care? A question for the panel.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

