" /> net.wars: August 2014 Archives


August 29, 2014

Shared space

What difference does the Internet make? This is the modern policy maker's equivalent of "To be, or not to be?" This question has underlain so many net.wars as politicians and activists have wrangled over whether and how the same laws should apply online as offline. Transposing offline law to the cyberworld is fraught with approximately the same dilemmas as transposing a novel to film. What do you keep? What do you leave out? What whole chapter can be conveyed in a single shot? In some cases it's obvious: consumer protection for purchases looks about the same. But the impact of changing connections and the democratization of worldwide distribution? Frightened people whose formerly safe, familiar world is slipping out of control often fail to make rational decisions.

This week's inaugural VOX-Pol conference kept circling around this question. Funded under the EU's FP7, the organizing group is meant to be an "academic research network focused on researching the prevalence, contours, functions, and impacts of Violent Online Political Extremism and responses to it". Attendees included researchers from a wide variety of disciplines, from computer science to social science. If any group was missing, I'd say it was computer security practitioners and researchers, whose on-the-ground experience studying cyberattacks and investigating the criminal underground this group could helpfully emulate.

Some help could also perhaps be provided by journalists with investigative experience. In considering SOCMINT, for example - social media intelligence - people wondered how far to go in interacting with the extremists being studied. Are fake profiles OK? And can you be sure whether you're studying them...or they're studying us? The most impressive presentation on this sort of topic came from Aaron Zelin who, among other things, runs a Web-based clearinghouse for jihadi primary source material.

It's not clear that what Zelin does would be legal, or even possible in the UK. The "lone wolf" theory holds that someone alone in his house can be radicalized simply by accessing Web-based material; if you believe that, the obvious response is to block the dangerous material. Which, TJ McIntyre explained, is exactly what the UK does, unknown to most of its population.

McIntyre knows because he spent three years filing freedom of information requests to find out. So now we know: approximately 1,000 full URLs are blocked under this program, based on criteria derived from Sections 57 and 58 of the 2000 Terrorism Act and Sections 1 and 2 of the 2006 Terrorism Act. The system is "voluntary" - or rather, voluntary for ISPs, not voluntary for their subscribers. McIntyre's FOI requests have turned up no impact assessment or study of liability for wrongful blocking, and no review of compliance with the 1998 Human Rights Act. The scheme also seems to contradict the Council of Europe's clear statement that filtering must be necessary and transparent.

This is, as Michael Jablonski commented on Twitter yesterday, one of very few conferences that begins by explaining the etiquette for showing gruesome images. Probably more frightening, though, were the presentations laying out the spread - and even mainstreaming - of interlinked extremist groups across the world. Many of Hungary's and Italy's extremist networks host their domains in the US, where the First Amendment ensures their material is not illegal.

This is why the First Amendment can be hard to love: defending free speech inevitably means defending speech you despise. Repeating that "The best answer to bad speech is more, better speech" is not always consoling. Trying to change the minds of the already committed is frustrating and thankless. But Jihadi Trending (PDF), a report produced by the Quilliam Foundation, which describes itself as "the world's first counter-extremism think tank", reminds us that's not the point. Released a few months ago and a fount of good sense, the report carries a foreword in which Nick Cohen writes: "The true goal of debate, however, is not to change the minds of your opponents, but the minds of the watching audience."

Among the report's conclusions:
- The vast majority of radicalized individuals make contact first through offline socialization.
- Negative measures - censorship and filtering - are ineffective and potentially counter-productive.
- There are not enough positive measures - the "better speech" above - to challenge extremist ideologies.
- Better ideas are to improve digital literacy and critical consumption skills and debunk propaganda.

So: what difference does the Internet make? It lets extremists use Twitter to tell each other what they had for breakfast. It lets them use YouTube to post videos of their cats. It lets them connect to others with similar views on Facebook, on Web forums, in chat rooms, virtual worlds, and dating sites, and run tabloid news sites that draw in large audiences. Just like everyone else, in fact. And, like the rest of us, they do not own the infrastructure.

The best answer came late on the second day, when someone commented that in the physical world neo-Nazi groups do not hang out with street gangs; extreme right hate groups don't go to the same conferences as jihadis; and Guantanamo detainees don't share the same physical space with white supremacists or teach each other tactics. "But they will online."


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


August 22, 2014

Last of the summer whine

I thought David Cameron was supposed to be harmlessly on vacation. Instead, he's (again) busy protecting the innocent public from the evil stuff out there on the Internet. His latest idea is to rate online music videos to bring them into line with content bought offline. I guess I get it: music videos aren't porn, so they won't necessarily be stopped by the porn filters that apparently hardly anyone is enabling, so they need to be reined in separately. Beginning in October, YouTube, Vevo, and the British Board of Film Classification, along with the Big Three music companies (Sony, Universal, and Warner Music), will collaborate on a three-month pilot scheme. Judging by quotes the British Phonographic Industry gave the Guardian, the music companies are gung-ho. We will come back to why shortly.

Anti-censorship campaigners generally argue that parents should be the arbiters for their children and that what they need is better tools and information. So on the face of it, that's what the government is suggesting. There are known problems with the BBFC system, even for movies, where ratings are mandatory: submissions are expensive, and must be sought separately for public screening and home video distribution. Ratings do provide parents with, if not information, an easy rule to apply that kids can understand. Ratings are also, of course, information that teens exploit themselves. An "18" rating will be a target for younger teens, just like R-rated movies (restricted to over-17s) in the US.

But online ratings systems only work if the world's millions of content producers are willing to rate their content. Previous efforts, such as 2002's ICRA scheme, failed because most people wouldn't bother.

The people who will, however, are the big rights holders, partly because they represent large targets for governments wanting to crack down, but mostly because they can afford the expense of getting things rated - and they know that independent and amateur competitors can't. So the big downside to the ratings scheme is that it presents a route by which independent competition can be eliminated. The big losers are likely to be teens seeking a wider audience for the videos they make with and for each other.

What Britain really needs is the music video equivalent of the US's Movie Mom, whose film reviews are deliberately designed to help parents understand what they might find objectionable or thought-provoking for their kids.

***

Elsewhere, there has been discussion of the General Medical Council, which is considering applying tougher sanctions to doctors who have made mistakes and forcing them to apologize.

Medical omerta seems to transcend national cultures. In the US, fear of litigation keeps many medical practitioners from ever coming clean to patients and openly claiming responsibility. In the UK, the circle-the-wagons culture seems to have grown up with the National Health Service, based on the notions that doctors are far too busy to have time for explanations and that in any case the "simple" people (and, my God, women!) they treated didn't have the education necessary to understand anyway. So requiring doctors - and still more, judging from the stories of egregious medical cover-ups that regularly appear in the pages of Private Eye, hospitals - to *explain* what happened seems long overdue.

But forcing them to apologize?

First of all, as any cursory glance through your own experience and SorryWatch will tell you, a forced apology is no apology at all. I remember, at eight, being forced to apologize to the school vice-principal for something she had misheard, despite my protestations that what she was upset about was not what I had actually said. I had to kiss her disgusting powdered cheek and smell her perfume, and what that taught me was not to be more polite, or at least careful, in future but to avoid women in heavy makeup at all costs. Probably everyone has a memory like that, and whether or not you were guilty of the infraction, being forced to apologize for it changes nothing. You go off resentful, and the apology's recipient still feels aggrieved because they know it was all just for show: apology theater.

In medical cases, even a truly heartfelt apology can't be very satisfying. The speaker's clear suffering from apocalyptic guilt in no way lessens the likelihood that a statement like "I'm sorry I accidentally cut off the wrong leg" will provoke a response like, "Great. I'm glad *you* feel better. Now, how are you going to get me to work every day?"

Besides the obvious compensation for damage to their lives (if it's even possible), what people want when bad things happen, especially when they are part of a persistent pattern, is to know that the behavior is going to *change*, and permanently. "We know we did this terrible thing to you, and as a result we have put in this system/adopted this practice/fired this doctor/spearheaded this research to ensure that it will not happen again, and we would like to invite you to come see what we've done" is better recompense than any apology. Because we all know where, in the end, apologies come from: the PR department. And then everyone but the damaged person goes on as if nothing has happened.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Stories about the border wars between cyberspace and real life are posted throughout the week at the net.wars Pinboard - or follow on Twitter.


August 15, 2014

Robots without software

"Will we be retired - or unemployed?" Chris Phoenix asked in 2007. Last week, a Pew Research study on AI, robotics, and the future of jobs wondered the same thing and collated answers given by 1,896 experts. About half thought these technologies would displace more jobs than they create by 2025; the other half thought not. The pessimists project an increase in existing economic gaps and resulting social unrest. The optimists think new jobs will soak up the strain. We may have better data soon: the National Academies of Science is beginning a study.

Still: why wait for data when opinions are already available? On Wednesday, I was part of a Voice of Russia debate, with Kathleen Richardson, who studies robots and ethics, as well as the use of robots to assist autistic children and adults; Anders Sandberg, a research fellow at Oxford University's Future of Humanity Institute; and Nick Bostrom, the institute's director. This was Bostrom's and my second outing here: two years ago, it was killer robots and the founding of the Centre for Existential Risk. Some kind of progress, there: now the robots are not going to kill us, they're just going to take our jobs.

First, what is a robot? Classically, most of us think of Isaac Asimov's android detective Daneel Olivaw or such popular movie figures as C-3PO or the Terminator. Bostrom calls intelligence, rather than mobility or form factor, the defining characteristic. Yes: we're biologically hardwired to think of things that are mobile, like animals (which might be predators), as smart, and things that are stationary, like plants (which can't harm you unless you get close), as dumb. Carried into the world of electronics and fancy software, this isn't a good guide: modern washing machines have more intelligence than Roombas, and the "smartest" systems with the greatest impact on our lives are pure software. Sure, by that measure, these are all robots. Sandberg said things are called "AI" until they start working, when we switch to calling them "automation". He has a point.

Sometime in the last couple of years, I recall a piece about MOOCs - massive open online courses - that discussed the way good-enough technologies emerge in fields where excellence has traditionally been considered vital. First, these technologies begin by serving the underserved - the people who don't have access to university degrees - and then eat away at the middle. The high end of the industry, in this case top universities, tends to survive, often by coopting the technology (like edX). We've seen this with phone calls: first Skype replaced expensive international and long-distance calling, then the legacy telephone companies started turning their networks into voice over IP.

Google Translate is a better example. It provides rough, good-enough translation in a lot of situations where no one would hire a translator (pub arguments, for example). By now, it's probably replaced human translators wherever getting the gist is adequate. But contracts, legislation, and diplomacy require a level of precision and detailed certainty that machine translation can't approach. Those who can afford it or whose needs are too complex hire people - and that, it seems to me, is what the digital divide of the automation age will look like in fields where automation can be done cheaply. It may hit hardest in the countries to which work is now being outsourced: see Foxconn. The safest jobs are in fields where automation is expensive and/or difficult *and* outsourcing to "robots without software" elsewhere in the world is impossible. Learn plumbing, or auto mechanics.

Or, just possibly journalism. All media are suffering from a mix of changing business models, vastly increased competition from free services (our version of outsourcing), and vastly increased competition for consumers' time and attention. Journalism itself is happening all over the place, sort of proving the point: much of the kind of investigative journalism that newspapers and magazines used to do is now funded by NGOs, who are, in turn, typically funded by foundations and others who can still afford it. There are narrow areas where automation succeeds - such as quarterly earnings stories and the results of high school football games. But we're a long way from robots that can do the creative stuff: digging, interviews, analysis, shaping stories. Sandberg also thought robots wouldn't make much headway in professions such as nursing, where human contact is vital. Vendors who sell assistive robots for the most vulnerable people will claim that these will make better care affordable to a wide range of people. Richardson argued, however, that what's needed there is not robots but social change. We do have a choice.

But there's a final element missing from this discussion: Google Translate was created by mining the millions of Web pages that had already been human-translated and analyzing them statistically to create a system that can guess at the meaning of a word or phrase based on the words that commonly surround it. As language inevitably changes, the robots are going to need people - us - to feed them the new stuff. Welcome to your new job as a Mechanical Turk.
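
Since the paragraph turns on how that statistical guessing works, here is a deliberately tiny sketch of the idea: choosing between two possible translations of an ambiguous word by counting which context words appear alongside each translation in previously human-translated text. The four-line "corpus", the word "bank", and the French glosses are all invented for illustration; this is not Google's system, just the intuition the column describes.

```python
from collections import defaultdict

# Toy "parallel corpus": English phrases containing the ambiguous word "bank"
# paired with human translations. In reality this would be millions of aligned
# sentences mined from already-translated Web pages.
corpus = [
    ("bank of the river", "rive du fleuve"),
    ("bank of the river", "rive de la riviere"),
    ("bank account", "compte bancaire"),
    ("open a bank account", "ouvrir un compte bancaire"),
]

# Count how often each translation of "bank" co-occurs with each context word.
context_counts = defaultdict(lambda: defaultdict(int))
for source, target in corpus:
    translation = "rive" if "rive" in target else "bancaire"
    for word in set(source.split()) - {"bank"}:
        context_counts[word][translation] += 1

def guess_translation(sentence: str) -> str:
    """Pick the translation of 'bank' whose known contexts best match the sentence."""
    scores = defaultdict(int)
    for word in sentence.split():
        for translation, count in context_counts.get(word, {}).items():
            scores[translation] += count
    return max(scores, key=scores.get) if scores else "bancaire"

print(guess_translation("the bank of the wide river"))  # -> rive
print(guess_translation("my bank account is empty"))    # -> bancaire
```

The point of the sketch is the dependency the column ends on: the counts only exist because humans produced the translations in the first place, and they go stale as the language moves on.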


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


August 9, 2014

Duty of care

Anyone who is surprised that Google scans incoming and outgoing email hasn't been paying attention. That is the service's raison d'ĂȘtre: crunch user data, sell ads. You may think of Google as a search engine (or a map service or an email service) and its researchers may have lofty ambitions to change the world, but, folks, the spade is an ad agency. So is Facebook.

The news that emerged this week, however, gave pause even to people who understood this as early as 2004, when Gmail first put out its Beta flag and seductively waved 1GB of storage. To wit: a Gmail user was arrested when an automated scan noted the arrival in his inbox of a child sexual abuse image. This is the 2014 equivalent of what happened to Gary Glitter in 1997: he handed in a PC for repair and got back an arrest warrant (and ultimately a conviction) when PC World's staff saw the contents of his hard drive. US federal law requires services to report suspected child sexual abuse when instances are found - but not to proactively scan for them.

So, the question: is active scrutiny of users' private data looking for crimes properly Google's job? Or any other company's? It quickly emerged that the technology used to identify the images was invented by Microsoft and donated to the (US) National Center for Missing and Exploited Children; it relies on calculating a mathematical hash for each image in users' accounts and comparing it to the entries in a database of known images that have been ruled illegal by experts.
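
For concreteness, here is a minimal sketch of the matching step just described: hash each attachment and check the result against a set of known hashes. Everything below is hypothetical illustration, not the deployed system; Microsoft's donated technology (PhotoDNA) uses a robust perceptual hash so that resized or re-encoded copies still match, whereas the ordinary cryptographic hash used here only catches exact copies.

```python
import hashlib
from pathlib import Path

# Hypothetical stand-in for the database of hashes of known, expert-verified
# illegal images. The hash value here is made up.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def file_hash(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def flag_attachments(paths: list[Path]) -> list[Path]:
    """Return only those attachments whose hashes appear in the database."""
    return [p for p in paths if file_hash(p) in KNOWN_HASHES]

# Hypothetical usage: anything flagged would then go to the reporting step.
# matches = flag_attachments([Path("attachment1.jpg"), Path("attachment2.png")])
```

The design point is that this is purely a membership test: it can only flag images already in the database, so who assembles and reviews that database matters as much as the code.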

Child sexual abuse images are the one type of material about which there is near-universal agreement. It's illegal in almost all countries, and deciding what constitutes such an image is comparatively clear-cut. The database of hashes has presumably been assembled labor-intensively, much the way that the Internet Watch Foundation does it. That is, manually, based on reports from the public, after examination by experts. Many of the fears that are being expressed about Gmail's scanning are the same ones heard in 1996, when the IWF was first proposed. Eighteen years later, there have been only a very few cases (see Richard Clayton's paper discussing them (PDF)) where an IWF decision has become known and controversial.

The surprise is that Google has chosen to be proactive. Throughout the history of the Internet, most service providers persistently argued that sheer volume means they cannot realistically police user-generated content. Since the days of Scientology versus the Net, the accepted rule has been "notice and takedown". Google resisted years of complaints from rights holders before finally agreeing in 2012 to demote torrent sites. More recently, in Google v. Spain, Google has argued in the European Court of Justice that its activities do not amount to data processing; elsewhere it has claimed its search results are the equivalent of editorial judgments and protected by the First Amendment.

Both the ContentID system that Google operates on YouTube and the scanning system we've just learned about are part of the rise of automated policing, which I suppose began with speed cameras. The issues with ContentID are well-known: when someone complains, take it down. If no one objects, do nothing more. Usually, the difficulty of getting something taken off the blocked list is not crucial; occasionally - such as during the 2012 Democratic National Convention - it causes real, immediate damage. Less obvious is the potential for malicious abuse.

Cut to the mid-1990s, when Usenet was still the biggest game in town. An email message arrived in my inbox one day saying that based on my known interest (huh?) it was offering me the opportunity to buy some kind of child pornography from a named person at a specified Brooklyn street address. The address and phone number looked real; there may even have been some prices mentioned. I thought it was unusual for spam, and wondered whether the guy mentioned in it was a real person. I dismissed it as weird spam. When I mentioned it to a gay friend with a government job, you could practically hear the blood drain from his face over the phone. He was *terrified* such a thing would land in *his* inbox and it would be believed. And he'd be fired. And other terrible things would happen to him.

The scenario seemed far-fetched at the time, but less so today. Given the number of data breaches and hacked email accounts, it would not be difficult for the appropriately skilled to take an innocent individual out of action by loading up their account with the identifiably wrong sort of images. There may well be solutions to that - for example, scanning only images people send and not the ones they receive - but you can only solve the problem if you know the system exists. Which, until this week, we didn't.

On Sunday, at Wikimania, I'm moderating a panel on democratic media. The organizers likely had in mind citizen journalism and freedom of expression. The scanning discovery casts the assignment in a new light: shouldn't part of democracy be discussing how far we want our media companies to act as policemen? What is their duty of care? A question for the panel.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


August 1, 2014

Testing times

Many years ago, Ray Hyman, a psychology professor at the University of Oregon and one of the 26 founders of the Committee for Skeptical Inquiry, dabbled in reading palms. There are, as I've heard him say in describing the experience, benefits to picking this particular line in psychic claims, because hands give you all sorts of helpful clues. First, there's the Sherlock Holmes act of noting calluses and other physical indicators of profession, class, health, and marital state. Even more helpfully, when people like what you're telling them they tend to push their hands toward you; when they don't, they pull away, helpfully guiding what line to take. However he did it, people were generally very positive about the accuracy of Hyman's readings.

A lot of people would pat themselves on the back and call themselves great palm readers. Hyman, who may have been a born skeptic, took a different tack: he started telling people the *opposite* of what he thought he saw in their palms. And he still got the same enthusiastic responses. What this told him, and tells us, is that the assessments had nothing to do with the readings and everything to do with the people's level of belief in palm readers and the personality Hyman projected. His was an absolutely valid experiment to conduct in the name of science.

So to this week, when the news emerged that OKCupid deliberately ran three experiments on its users. The first involved a seven-hour window in which the service removed the pictures from a blind dating app it had available at the time. In the second, the service hid either pictures or text to establish how much people relied on pictures in choosing dates. In the third, the service told pairs of users who the computer said were poorly matched that they were exceptionally well matched, and vice versa. This last study is like what Hyman did: it tests the algorithm to see if it has any validity or whether its apparent success is all down to users' desire to believe it works.

Founder Christian Rudder's conclusion: "OKCupid definitely works, but that's not the whole story. And if you have to choose only one or the other, the mere myth of compatibility works just as well as the truth." In which case, what he's actually proved is that OKCupid's algorithms are about as good as chance. There's a logical reason: users all want the service to work.
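
For readers who want to see the shape of that inference, here is a toy simulation of the comparison the third experiment allows: if poorly matched pairs who are told they're a great match behave almost like genuinely well-matched pairs, then the displayed score, not the algorithm, is doing the work. Every number below is invented; nothing here reflects OKCupid's actual data, code, or effect sizes.

```python
import random

random.seed(0)

# Invented weights: how much the displayed score (belief) and the true
# compatibility (signal) each contribute to a pair continuing to talk.
BELIEF_WEIGHT = 0.5
SIGNAL_WEIGHT = 0.1

def keeps_talking(displayed_score: float, true_score: float) -> bool:
    """One simulated pair: do they keep the conversation going?"""
    p = BELIEF_WEIGHT * displayed_score + SIGNAL_WEIGHT * true_score
    return random.random() < p

def rate(displayed_score: float, true_score: float, n: int = 200_000) -> float:
    """Fraction of n simulated pairs that keep talking."""
    return sum(keeps_talking(displayed_score, true_score) for _ in range(n)) / n

print("bad match, told the truth: ", rate(displayed_score=0.2, true_score=0.2))
print("bad match, told 90% match: ", rate(displayed_score=0.9, true_score=0.2))
print("good match, told the truth:", rate(displayed_score=0.9, true_score=0.9))
# If the second rate comes out close to the third, belief in the score, not the
# matching algorithm, is driving behaviour -- the pattern Rudder described.
```

Tuning the two weights changes the gap between the last two numbers; Rudder's claim amounts to saying that in the real data the gap is small.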

OKCupid's experiments seem to me legitimate in a way that Facebook's experiments with manipulating newsfeeds were not. Even though most people's Facebook friends lists include plenty of marginal acquaintances, manipulating the newsfeed interferes with real emotions and pre-existing relationships. By contrast, OKCupid was testing the effectiveness of its service, and followed up by sending users the real compatibility scores. The New York Times reports that some users were modestly dismayed, but even before the test had low expectations of such sites. Realistically, that's the point: surely no one believes that testing OKCupid's algorithms has destroyed their one true path out of loneliness.

Articles like the one written by Milo Yiannopoulos at Business Insider miss the point. He suggests that the FTC might view the test as "unfair and deceptive behavior" and paints a sad picture of the desperately lonely person who trusts the compatibility scores, takes a chance, goes on a date, and has a terrible time. Yes, that's lost time. But even OKCupid's best match doesn't guarantee anything different. In fact, what the test proved is that their compatibility scores provide very little useful guidance to which correspondents might actually be worth the trouble to meet. It's not often you find a company willing to expose the threadbare nature of its own business model.

More helpfully, at ThinkProgress, Lauren C. Williams notes the trusting way people submit data to dating and other sites without recognizing that the companies' goals are quite different from their own. The incompatibility of motives is more obvious in the Facebook case: users want better but protected contact with the people who matter in their lives, while Facebook wants to mine their social graphs to sell advertising. OKCupid, now owned by Match.com, also mines user data to sell advertising, but pays users back with potential relationship matches - not limited to dates. What the test should tell users is that they're not getting what they think they're paying for; few sites admit that.

Having said all that in defense of OKCupid's tests, caveats remain. We don't know how many tests the company did, what the stated goals were, or whether they selectively published the results (though granted, these are not the results you'd expect them to cherry-pick). If you're going to experiment, you might as well do it right: publishing the information needed for others to independently replicate the results is the way to go, even though that would mean also publishing the algorithms it uses to match people.

But if users are going to leave the service over this, it shouldn't be because of the experiment itself. It should be because what the experiment shows is that the service's ballyhooed matching algorithm is nonsense. So you're paying with your data for a crapshoot. Why is that worth paying for?


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.