" /> net.wars: August 2015 Archives


August 28, 2015

Running with the devil

The Conservatives' election manifesto included a pledge to require websites displaying adult content to implement age verification. This week, I attended a meeting organised by the Digital Policy Alliance (whose own website unfortunately appears blank to me) convened to study the details in which the devil always lurks: how it might be done. The somewhat government-approved DPA's plan is to generate a document to be handed over to the BSI to use as the basis for a standard websites can implement (which means raising the funds to pay the BSI for drafting). In charge of proceedings was the cross-bench peer, the Earl of Erroll, who has a long record of sanity on IT-related issues.

The background: multiple parties are pushing for age verification besides Prime Minister David Cameron. In November, Parliament quietly passed the Audiovisual Media Services Regulations 2014 (PDF), which came into force on December 1, 2014. These rules require anyone providing on-demand video that would be refused an over-18 classification to ensure that it cannot be viewed by those under 18. Sex and Censorship has a critical analysis. This week, the Authority for Television on Demand (ATVOD) charged 21 adult websites with violations of sections 11 and 14 of these rules. ATVOD is also the source of the 2014 For Adults Only? (PDF) study of young people's access to pornography (though its methodology has been criticised).

Also a precipitating factor is the Online Safety Bill, sponsored by Baroness Howe of Idlicote, her fourth attempt in four sessions. Howe believes identifying adult content and providing a mechanism for dealing with overblocking claims should be jobs for Ofcom. The growing pressure is reminiscent of 1996, when the Internet Watch Foundation was created: "regulate or be regulated".

As noted at this week's meeting, age checks happen in retail all the time: in movie theaters, in shops selling cigarettes and alcohol (and, in Colorado, marijuana), at car rental offices, and in libraries. In many of these cases, online age verification is beside the point because the buyer has to acquire the good or service in person, but in many others it's not: gambling, gaming, and so on. At this week's meeting, the representatives of the adult content industry indicated they'd actually be glad to have a way to filter out underage visitors, who produce no revenue and cost bandwidth. Age classification already applies to music videos.

The first thought in such cases always seems to be to confuse verifying an attribute - in this case, age, which the group resisted expanding upon - with verifying identity. This was the most positive part of this week's meeting: there was general agreement that what's needed is a lightweight system that avoids the privacy and security issues surrounding collecting personal data. If I can prove I'm over 18 you don't need to know my identity and you don't need to store anything more than "18=yes". Creating such a system involves solving two technical problems. First: devising a way of implementing the check in a form that websites can automate. Second: tying that verification to a single specific person so only they can use it. The second problem is far harder than the first. Big brothers buy little brothers beers in the offline world; parents, a 2011 survey found, have helped millions of under-13s to lie about their age so they can use Facebook. No amount of technology or government fiat can force parents to implement what others have decided are "responsible" rules.
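The "prove the attribute, store only 18=yes" idea can be illustrated with a toy token scheme. Everything here (the field names, the HMAC construction, the ten-minute expiry) is my own illustration, not anything the DPA proposed; a real system would likely use public-key signatures so the website never holds a signing secret. The point is simply that the website learns, and stores, only one bit.

```python
# Toy sketch of attribute-only age verification: a trusted checker signs
# an "over 18: yes/no" claim, and a website verifies the signature and
# expiry without ever learning who the visitor is.
import hmac, hashlib, json, time, secrets

SERVICE_KEY = secrets.token_bytes(32)  # shared secret (illustrative only)

def issue_age_token(over_18: bool) -> str:
    """Issuer: attest to the attribute alone, never to an identity."""
    claim = json.dumps({"over18": over_18, "exp": int(time.time()) + 600})
    sig = hmac.new(SERVICE_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim + "." + sig

def verify_age_token(token: str) -> bool:
    """Website: check signature and expiry; nothing else need be stored."""
    claim, _, sig = token.rpartition(".")
    expected = hmac.new(SERVICE_KEY, claim.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged claim
    data = json.loads(claim)
    return data["over18"] and data["exp"] > time.time()

token = issue_age_token(True)
print(verify_age_token(token))  # True
```

Note that this solves only the first of the two problems above: nothing in such a token stops it being handed to a younger sibling, which is the second, harder problem.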

LSE researcher Sonia Livingstone, who often tries to calm down moral panics (and for whose Parenting for a Digital Future blog I write), has found generally that what both children and parents want is tools and skills to manage their own online environment, recommending a "complex solution for a complex problem". She advocates using multiple approaches: legislation, self-regulation, community norms, education (adding attention to pornography, coercion, and consent issues to the sex education curriculum), empowerment, and welfare intervention where needed.

Within those bounds, can age checks work? It depends on what you think "work" means. Very few pornographic websites are based in the UK. The only obvious way to force a non-UK website to comply is to block it - or, as someone facetiously (I hope) suggested, throw the owners in jail if they step onto UK soil. In pushing for age checks, proponents may not care how they are implemented, making it hard to ensure that they don't turn into excuses for the usual data-driven suspects to corner the market as intermediaries, scooping up even more data about everyone and holding smaller UK businesses to ransom. The makeup of the meeting's membership was not promising in this regard: there was not enough representation from consumer protection, small business, academic research, or civil liberties groups. For them, it's a conundrum: if they oppose it as censorship, do they refuse to participate on principle and hope it fails, or do they join to ensure the result is as least-worst as possible?

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 21, 2015

Unreal humans

Over there, the BBC's Bill Thompson has a glowing review of the TV series Humans. The obvious irony in the title is serendipitously expanded by the simultaneous airing in the US of Mr Robot: Humans is about robots and Mr Robot is about humans. Sort of.

For the uninitiated, Humans is a British remake of the Swedish series Äkta Människor. In an alternative present, humanoid androids - "Synths" - have spread throughout society. Synths have brilliant green eyes and slightly stiff movements. There has been much progress since the earliest models, which are "recycled" for parts, brains wiped.

The Synth business is well-developed, with stores, service departments, and second-hand outlets. The national social care service deploys Synths as home carers - part nurse, part companion, part jailer. As the story opens, Joe Hawkins (Tom Goodman-Hill), a harassed father with three kids and a traveling wife, buys a black-market reprogrammed Synth named Anita (Gemma Chan) at a knockdown price. The perfectly shaped Anita's perfect impersonation of a perfect housekeeper and nanny exposes the family's inner troubles, which of course rebound on...well, it or her? That is the question.

No such series is complete without exceptions, and this one is no...exception. Part of the action follows the travails of a small band of Synths that are secretly endowed with human-level consciousness. It is these Synths that Thompson defended, arguing that programming Isaac Asimov's Three Laws of Robotics into positronic brains is morally indefensible, equivalent to shackling a slave or caging a gorilla. Substitute something real for "positronic". Either way, I don't think it's a fair comparison.

However, as I went on to argue in a comment, possibly wrongly, there's a better reason not to implement Asimov's First Law: it seems technically impossible. Granted, I'm not a mathematician who can produce an elegant proof, but intuitively...

Alan Turing and his successors long ago showed that it is impossible to create an algorithm that can tell whether a given computer program will halt - that is, complete. In writing my comment, I had a dim idea that somehow that led to making Asimov's Laws non-computable. This much I know: there is a class of computer problems that can provably be solved in a reasonable amount of time ("polynomial time", meaning that the amount of time needed to solve the problem doesn't expand outrageously as you increase the size of the input to the algorithm). That class is called P. There is a second class of problems for which there is no known way to find an answer in that reasonable amount of time but for which a possible solution can be *verified* quickly; that class is NP. The big question: does P equal NP? That is, are the two sets identical? In 2013, Elementary (Season 2, episode 2, "Solve for X") incorporated this open problem into a murder mystery: brave stuff for prime-time TV.
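That asymmetry between finding and checking can be made concrete with subset sum, one of the classic NP-complete problems. This is my own illustration, not anything from the column: checking a proposed answer takes time proportional to the size of the input, while the only obvious way to find one tries up to 2^n candidate subsets.

```python
# Subset sum: verifying a proposed solution is fast (the NP part);
# the naive search for a solution is exponential in the input size.
from itertools import combinations

def verify(nums, subset, target):
    """Polynomial-time check: is subset really drawn from nums,
    and does it sum to target?"""
    pool = list(nums)
    for x in subset:
        if x not in pool:
            return False
        pool.remove(x)
    return sum(subset) == target

def search(nums, target):
    """Brute force: tries subsets of every size, up to 2^n in all."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 34, 4, 12, 5, 2]
sol = search(nums, 9)           # finds a subset summing to 9
print(sol, verify(nums, sol, 9))
```

If P did equal NP, a method as fast as `verify` would exist for `search` too; nobody has found one, and nobody has proved it impossible.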

The class of problems known as "NP-complete" are provably not solvable in a reasonable amount of time. In 1999, Rebecca Mercuri proved that securing electronic voting was one of them. But securing a voting machine is vastly simpler than incorporating all the variables flying at an Asimov-constrained Synth trying to decide what action is appropriate to solve the danger facing this human at this millisecond. How many decision trees must a Synth walk down? How far ahead should it look? How does it decide whose interests take priority? How do you keep it from getting so tangled up in variables that it hangs and through inaction allows a human to come to harm?

I don't think it can be done. But even if it could...granted that it's a sign of my inner biological supremacist, or what an io9 discussion of the unworkability of the Three Laws calls "substrate chauvinism", comparing Asimov's Laws to shackles and cages applied to the breakable bodies housing biological intelligences is getting carried away by the story. Great news for writers Sam Vincent and Jonathan Brackley, and the very human actors who evoked such emotion while embodying machines.

The real issue is that humans can anthropomorphize anything. We become attached to all sorts of inanimate objects that are incapable of returning our affection but that we surround with memories. The show's George (William Hurt) is profoundly attached to his outmoded Synth, Odi, precisely because Odi stores all the memories his aging mind has lost. Anthropomorphism is a recurring theme at We Robot (see here for 2015, 2013, and 2012), as well as in Kate Darling's work on this subject: she cites a case in which a colonel called off a mine-locating exercise because he thought the defusing robot's hobbling on its two remaining legs was "inhumane". Issues stemming from projection of this kind will be with us far sooner than anything like the show's androids will be. Let's focus on that, first.



August 14, 2015

Why can't we just...

The last couple of weeks I seem to have heard "Why can't we just use...?" a lot. Typically, this has been about some collaborative venture where there's a question of what technology to use as the supporting medium. Very often, the question ends with "Google Docs", although a couple of times it's been "WordPress", "Dropbox", and "Facebook".

Partly, this is a reflection of the fact that those particular companies have done a stellar job of making their stuff appealing enough to attract millions (or billions) of users. Partly, it's a reflection of the dominance they have built by that means. But partly it's a question of what people are used to and what their friends want them to use, and the reasons aren't always logical. I remember, for example, an early 1990s complaint that more programs should use the same keyboard shortcuts as WordStar because "it's so intuitive". Ha! It's only intuitive if your first word processor used those shortcuts. It's a classic example of the kind of marketing used by cigarette companies and the Catholic church: get them young, and they're yours for life.

Most of the Google Docs examples are people who think collaborating on Google is easier than using Microsoft Word, LibreOffice, or any other word processor that would allow the collaborators to keep their data offline under their own control. One case was notable, however, because the alternative being suggested was an existing wiki of surpassing simplicity: click "edit" button, type text, click "save".

"Can't we just use Google Docs?"

The person speaking had, it transpired, never gotten annoyed at a misspelling, incorrect statement, or missing bit of punctuation on Wikipedia and decided to rectify it. Those big edit buttons all over the place? He'd never clicked on one.

"My teachers told me Wikipedia was unreliable," he explained. As Danah Boyd finds in interviews for her book It's Complicated, many teachers do say that and tell their students instead to search for more reliable references. As a result their students often erroneously conclude that Google is trustworthy. Whereas, Boyd correctly says, the inner workings of Google's search algorithm are a black box, but Wikipedia's Talk pages and change histories, while filled with sometimes unpleasant contention, are quite possibly the best teaching aid that's even been invented for demonstrating how knowledge is collaboratively created and curated. Generationally, that fits: the speaker in question is 25. So much for "digital natives", an idea I've always resisted for just this reason: today's younger generation take Facebook, Google, Wikipedia, the web, online messaging, smartphones, and various social media for granted. They don't *see* the internet because they've always been soaking in it. But, I pleaded, we're a *privacy group*. We can't post our private stuff to *Google*! Take a chance. Click on the wiki edit button. It won't bite!

A Digital Native ought to approach an unknown bit of internet-related technology with confidence because they know they can figure out how to use this bit from the knowledge they've gained operating other bits. Certainly, someone who's grown up playing computer games and using apps can leverage that experience for new games and apps - but not in the more generalized way of 20 years ago, when the internet's bare skeleton was still showing. We usually cast technophobia as fear of the new, but many also fear old technology just as much if it's unfamiliar.

The WordPress discussion was a little different. Why, someone asked at a meeting, was the group paying a developer to run the website? "Why can't we just do it in [something like] WordPress?" The question displays two assumptions: one, that a mainstream tool can meet all requirements; two, that using it to run a professional-level, part-public, part-private website with a membership database backend must be cheaper because personal WordPress sites are really cheap. The reality, of course, is that changing underlying technology has its own costs and wouldn't of itself solve the site's acute need for substantially improved navigation. Asking it suggests that WordPress is the 2015 version of the old "my partner's 14-year-old nephew can build us a website for practically nothing". Which usually meant later paying substantial sums to a professional to scrape the content and build something business-class.

It's possible my frustrations are a version of the wall Ellen Ullman describes in her still-invaluable 1997 book of essays, Close to the Machine: Technophilia and Its Discontents, that in time hits every software engineer: the moment when they just want to use technology that's tried, true, proven, and debugged, and can no longer stand to go on learning the latest gewgaw that's obsessing venture capitalists. That, she writes, is when your career in software effectively ends. So: after nearly 25 years online, I'm asking what's wrong with reverting to older stuff that doesn't make us dependent on outside parties, that with a little tinkering will do what we need? It's not *hard*. It's just unfamiliar to folks whose first introduction to the internet was FaGooDropPress, and who now think that's The Way Things Ought to Be. Which means that decades hence, long after the original sites themselves have massively changed their character, we'll still be running into clones of how they worked circa 2015, showing their creators' age as accurately as their dated pop culture references.



August 7, 2015

The sharing economy

The news this week that Google Glass is to make a comeback as a business tool prompted this stray thought: Glass was inevitably doomed as a consumer item because it doesn't share well. Oh, certainly, you can take a photo and email it to a friend or post it online, but what you can't do is show what you're seeing to your friends in real time. Imagine a bunch of teens gathered behind the head of one of them peering into Glass's field of view the way they do around each others' phones. Can't be done.

Granted, the early version of Glass had other problems, briefly summarized as bugs, battery life, and budget. There's no reason why Glass shouldn't wind up being a successful product in the right market sector, and it's by no means unprecedented for the first version of something genuinely new, which Glass is, to find an entirely different market and set of uses than those its creator first imagined.

Viewed this way, it's possible to see individualism as part of Google's essential nature. All its successful services - search engine, maps, Gmail, Android, the Chrome browser, potentially self-driving cars - are about an individual and their machine. Glass fits right in. The search engine, browser, and mapping all ultimately date to the desktop era. As an operating system, Android is something an individual chooses, and then uses apps to implement sharing. Cars especially are American icons of individualism. It's notable that Europeans talk about autocars ("autonomous cars") in terms of improved road-sharing and lowered energy costs through platooning, while Americans talk more about the personal benefits of not having to spend so much time focused on driving in their daily commute. In the 1950s through 1970s, teens coming of age often derived their sense of personal independence from owning a car; today, research suggests that the younger generation has their sense of personal freedom tied up instead in their phones.

Related to this, alongside the Glass revival came the news that Google is separating out its services so that people using YouTube or Gmail no longer need to sign in with a Google+ ID. The result is myriad media writing up the service's likely end and positing reasons why anyone might have thought it was successful. Fortune magazine suggests that the result is to make it more likely that Google will buy Twitter. (I hope not, although I can see the twisted logic that if Twitter is in trouble it *should* be bought by a company that can't make social networking work.)

To me, this is a positive step, if only because it seems obvious that someone who wants to use YouTube shouldn't be forced to register for some other service they're not interested in, any more than they should be required to use their real names for everything. Making them do so is exactly the kind of leveraging of dominance in one market to build a presence in another that antitrust law is supposed to prevent. I suspect Google+ will linger much longer than the obit-writers think - although Google has become known lately for abruptly executing services it doesn't want to support any more, no matter how much users protest (see: Reader, Video, many others), so who really knows?

At Mashable, Seth Fiegerman has a lengthy inside look at Google+ as an effort to compete with Facebook. The story he tells sounds like the kind of obsession sports figures sometimes get when a particular opponent makes big enough waves. Rarely, however, do either athletes or technology companies succeed by playing their opponents' games. IBM, despite trying, couldn't beat all its many PC company competitors; Microsoft couldn't beat Apple at innovative design; and Google, no matter how hard it tries, can't beat Facebook at building the social graph. Facebook, meanwhile, seems to have its own expansion plans in mind: this week the company filed for a patent allowing lenders to determine users' creditworthiness by looking at their friends list - clear potential for digital redlining.

This all still leaves Google with the problem it was trying to solve. As financial pundits point out every quarter, although the company's revenues keep climbing, its ads' cost-per-click keeps declining. Ads are still its only source of revenues; it competes for those ads with Facebook and countless others; Apple's upcoming native ad blocking on iOS, as Charles Arthur writes, will pose a giant new problem. Even if Google fights back in kind, at some point we will hit - may have already - peak advertising. It is simply not possible for advertising on its own to support everything people want it to support. Google is still growing massively, but it still needs - and knows it needs - that elusive second product.
