" /> net.wars: July 2013 Archives


July 27, 2013

Immoral panic

"The Internet perceives censorship as damage, and routes around it," John Gilmore famously said. I've quoted that aphorism as often as anybody, but it's only really true if by "Internet" you mean "the collection of people who use the Internet". Otherwise, you're giving a mass of computers, cables, and software programs both sentience and agency.

On his home page, where Gilmore has the quote pegged to an article that appeared in Time in 1993, he provides context that people usually forget:

In its original form, it meant that the Usenet software (which moves messages around in discussion newsgroups) was resistant to censorship because, if a node drops certain messages because it doesn't like their subject, the messages find their way past that node anyway by some other route. This is also a reference to the packet-routing protocols that the Internet uses to direct packets around any broken wires or fiber connections or routers. (They don't redirect around selective censorship, but they do recover if an entire node is shut down to censor it.)

The meaning of the phrase has grown through the years. Internet users have proven it time after time, by personally and publicly replicating information that is threatened with destruction or censorship. If you now consider the Net to be not only the wires and machines, but the people and their social structures who use the machines, it is more true than ever.
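Gilmore's routing point is easy to see in miniature. The sketch below is purely illustrative - the nodes, links, and breadth-first search are invented for this example, and real Internet routing uses protocols such as BGP and OSPF rather than anything this simple - but it shows a toy network finding a path, then finding another when a node is taken down:

```python
from collections import deque

# Toy network: node names and links are invented for illustration.
LINKS = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}

def find_route(links, start, goal, down=frozenset()):
    """Breadth-first search for a path, skipping nodes that are down."""
    if start in down or goal in down:
        return None
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen and nxt not in down:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_route(LINKS, "A", "E"))              # a path via B or C
print(find_route(LINKS, "A", "E", down={"B"}))  # ['A', 'C', 'D', 'E']
```

Knock out D, though, and E becomes unreachable - which is exactly Gilmore's parenthetical caveat: the network recovers from a dead node only when another route exists.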

The problem is that censorship - especially in the form of a blunt instrument aimed at a class of material rather than a specific piece - can also cause lots of damage, most of which will hit targets other than those intended. People who can route around it will; those who cannot will have to live in Scunthorpe.

When, on Monday, UK Prime Minister David Cameron announced his clutch of anti-pornography measures, including nationwide filtering by default, the creation of a blacklist of search engine terms, and the outlawing of possession of "extreme" pornography, for some of us it felt like 1996 all over again. As net.wars noted in 2011, the Internet is not television; there is no easy button to push. It's also not a newspaper, where the editor could at any time stop publishing topless photos on page 3 - a campaign that Cameron has refused to back.

The problems with Cameron's proposals are quickly summarized. First, under the rubric of protecting children he conflates removing illegal material (child abuse images) and blocking material that is legal, however distasteful. Second, a blacklist of "abhorrent" search engine terms will inevitably set off an arms race (blacklist installed; pedophiles coin new code words), increasing the likelihood of innocent users "stumbling upon" such material (as happened in the mid-1990s with newsgroups). In any case, search engines are the wrong targets here. Average Internet users are still learning to care enough about protecting their privacy to avoid being tracked, but the tiny minority of obsessives who wish to share pictures of child abuse have already learned. Finally, if consumers are not opting for filtering on their broadband it's not because of a dastardly plot by ISPs. Let's be clear: this is not just about blocking pornography, and not just because, historically, filtering has always led to overblocking. The list of options revealed to the Open Rights Group shows clearly that filtering will not be limited to pornography. The level of safety implied by that list is one that technical experts will tell you ISPs can't deliver - and a false sense of safety carries its own risks.
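Overblocking is old enough to have a name - the Scunthorpe problem - and a toy sketch shows why it is inherent to keyword filtering. The blocklist terms and test strings below are invented for illustration:

```python
# A deliberately naive substring filter -- the kind of blunt instrument
# that has caused overblocking since the mid-1990s.
BLOCKLIST = {"sex", "breast"}

def naive_filter(text: str) -> bool:
    """Return True if the text would be blocked."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

naive_filter("adult sex site")          # True: the intended block
naive_filter("Essex County Council")    # True: overblocked
naive_filter("breast cancer helpline")  # True: overblocked
naive_filter("s3x pics")                # False: trivially evaded
```

The last two lines are the whole argument in miniature: the filter catches county councils and cancer charities while a trivial misspelling sails straight through.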

None of this is new. For the most complete takedown see Lilian Edwards at Pangloss. As Edwards notes, even police experts known for their long-standing interest in protecting children, like Jim Gamble, have poured scorn on the proposals. Charles Arthur at the Guardian outlines the effort Internet companies put into removing and denying access to child abuse images and studies the much more complex issues around legal material. I also had a lengthy discussion with the BBC World Service (starts about 40 minutes in), where my host seemed to be genuinely struggling to understand how to protect his children in the age of the Internet.

In 1996, when the social medium du jour was Usenet, everyone blamed ISPs for making the wrong sort of newsgroups available, and ISPs that opposed censorship on principle became targets of media attacks. The result was the formation of the Internet Watch Foundation. Today, helped along by public distaste for large corporations that pay little or no British tax, the same kinds of accusations are being levelled at the search engines, primarily Google. Accidentally matching people with things they were not looking for is not how Google makes its money, nor is linking to material that brings the company into disrepute.

Cameron may not get the votes he wants out of this either, no matter how happy he's made the Daily Mail right now. As Charles Arthur also writes, porn is popular. Maybe, as a commenter to that article wrote, more popular than Cameron. Especially with families whose benefits are being cut.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted irregularly during the week at the net.wars Pinboard - or follow on Twitter.

July 19, 2013

Automate and chill

Is it an invasion of privacy if your intimate communications are stored and scanned by an automated system but not seen by a human? Here I want to nail down why the answer is yes.

This question - and much else - came up at Wednesday evening's debate between the QC Geoffrey Robinson, the investigative journalist Duncan Campbell, and David Omand, former head of GCHQ and security and intelligence coordinator for the Cabinet Office (2002 to 2005). We were in Chatham House, home of the Rule that keeps backroom deals between the "great and good" secret, though the event was not held under it: we were encouraged to tweet (except that there was no wifi and many mobile phones didn't work in that basement... but whatever).

The person who brought it up on Wednesday was David Omand in response to a questioner who suggested that given today's tools "man's fallible temptation to delve" might take over, nullifying any rules created to regulate access to the collected piles of data.

"We could almost advance the argument that we're safer with computers because they're not conscious," Omand said. "They don't actually read the stuff. If there were a human - or a thousand humans - reading it they might be tempted." In the Guardian, last month he similarly wrote:

This involves computers searching through a mass of material, of course, and that might include your and my emails and data on our web traffic, but it is only the legally requested material that ever gets seen by a human being. These computers are not conscious beings: they will only select that which they are lawfully programmed to select. To describe this process as monitoring all our communications or "the surveillance state" or a "snooper's charter" is wholly misleading and a perverse reading of the situation.

So Omand's contention is that computers can't be tempted to break rules because they don't get curious and can't be bribed ("Free electricity"), blackmailed ("I'll tell other machines you're having an affair with that Galaxy Note II"), or socially engineered ("Nude pics of Anna Robotova - click here!"). He is also claiming that the piles of data do not matter until or unless human assessment is involved - and apparently assuming that such involvement will always be legal.

The first obvious response is the most general: clearly Europe fundamentally disagrees or it wouldn't have put such effort into enshrining the data protection principles into law. That law does not distinguish between machine and human access; it assumes that all processing, no matter how automated, has a human ultimately in charge.

The claim that automatic scanning is less invasive is a justification, not a fact. The focus on content is a decoy, easily accepted because most people readily imagine the unpleasant consequences if something they've just written were read by the wrong person: the note asking for help fixing your boss's mistake; the explicit note to a lover; financial figures; 250,000 diplomatic cables... It is much harder to feel that same punch to the gut with respect to analyzing metadata - yet the consequences of the latter may be much worse. One explicit email can be explained; the fact that your mobile phones are seen together at the same hotel every Wednesday afternoon can change your life. The ACLU's Jay Stanley uses the term "reverberations" to express the fact that the privacy issue is less about who or what sees the data than about what happens to you as a result. As he writes, knowing we are being watched - by whatever - chills our behavior.
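That hotel example is worth making concrete. The records below are entirely hypothetical and contain no content at all - only which phone was where, when - yet a dozen lines of code surface the pattern:

```python
from collections import Counter
from itertools import combinations

# Hypothetical location-metadata records: (phone, place, time slot).
# No message content anywhere -- just who was where, when.
records = [
    ("phone_A", "hotel_x", "wed1-14:00"),
    ("phone_B", "hotel_x", "wed1-14:00"),
    ("phone_A", "hotel_x", "wed2-14:00"),
    ("phone_B", "hotel_x", "wed2-14:00"),
    ("phone_C", "office", "wed1-14:00"),
]

# Group phones seen at the same place in the same time slot.
sightings = {}
for phone, place, when in records:
    sightings.setdefault((place, when), []).append(phone)

# Count repeated co-locations for every pair of phones.
pairs = Counter()
for phones in sightings.values():
    for pair in combinations(sorted(phones), 2):
        pairs[pair] += 1

print(pairs.most_common(1))  # [(('phone_A', 'phone_B'), 2)]
```

No human read anything, and no machine understood anything - but the output is exactly the kind of inference that can change your life.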

Given the present limitations of natural language processing and artificial intelligence, machines still suck at content analysis. Traffic data, however, is perfect for machines. So when Obama - or Omand - says "no human is reading the content", they're glossing over where the real game is: Big Data, today's industry buzzword, is built on exactly this kind of automated analysis. This tactic of diversion is very like the UK government's repeated insistence in the mid-2000s that the ID card was no threat because citizens would not be forced to *carry* it. As campaigners understood, the real game was the underlying database.

As long as you have humans in the loop deciding what queries are run, the "temptation to delve" still applies - remember, that perfectly functioning, omniscient, black-box, tamper-proof Machine on Person of Interest is fictional. The human using and programming the machine will always be a target for bribery, blackmail, or deception and the machine or its databases can be hacked, corrupted, or bug-ridden.

And: you'd better hope there are humans in the loop because otherwise you've got machines making intimate decisions about people's lives. It is no consolation that no human has read your email if automated processing adds your name to the no-fly list, falsely accuses you, wrecks your credit score, or decides you're fit for work and denies your benefits claim. The bad news is that too many of those humans blindly trust the machine, as Danielle Citron and others have established, because it's a safe way not to get fired.

Ultimately, behind every great machine stands a human with a finger on the panic button. It's sophistry to pretend otherwise.


July 13, 2013

Take back the Net

"They stole our revolution. Now we're stealing it back," wrote Danny O'Brien and Dave Green every week in Need to Know (Now) for five years in the 1990s.

Now we may well wish they really had. In the years since, the Net has become increasingly centralized: email (Gmail, Hotmail, major ISPs), social connections (Facebook, Twitter, Google+), content provision (Amazon, Apple, Netflix), ecommerce (eBay, Amazon), and mobile access software (Google, Apple, Microsoft). The Internet started with the idea of empowering the "little guy", but the choices we make increasingly favor whales.

The usual objection to this situation is that a world of monopoly or oligopoly suppliers is inevitably more expensive for consumers. But a less often-considered problem is that centralization enables the kind of mass surveillance that Snowden's revelations have described over the last few weeks.

There are several kinds of responses to Bruce Schneier's Internet surveillance state. Legal challenges are proliferating. Privacy International and Liberty have both filed complaints against the UK government in the Investigatory Powers Tribunal. Within the US, EPIC is challenging the NSA and FBI in the Supreme Court. Germany may investigate, and two French groups, the French Human Rights League and the International Federation for Human Rights, have filed a complaint in a Paris civil court hoping to push French prosecutors to do likewise. Doubtless there will be plenty more to come.

In the meantime, as others have pointed out, European Internet companies have a market opportunity if they can offer ways to protect and keep user data under the aegis of EU law. If these revelations had to come at all they couldn't have come at a better time in terms of the debate over data protection reform.

But what about the technical possibilities? The Internet is still open enough for us collectively to be able to change tack: let's make all that wiretapping a whole lot harder.

"Disintermediation" was one of the buzzwords of the early 1990s. The Net was going to eliminate middlemen by allowing us all to deal with each other directly. Even at the time I thought what would happen instead was the rise of a new set of intermediaries, some of which would be familiar names (banks, legacy telcos, Hollywood studios) and some newcomers (Verisign, Paypal, Amazon, eBay, Apple). The dangers of relying too much on a few large service providers were understood. Technical risks included the problems of central points of failure (for example, ICANN), stagnation, and closure to new opportunities if the Internet's technologies hardened into a static form the way the world's legacy telephone networks had. The broader risk most commonly discussed was censorship. A big concern was the expected death of small ISPs as dial-up gave way to broadband. Today, the landscape is dominated by many fewer, much larger ISPs whose fixed connections are far more trackable and controllable. We thought a lot about encryption as a protector of privacy and, I now think, not enough about the unprecedented potential for endemic wiretapping that would be enabled by an increasingly centralized Internet.

The closure of computers and other devices that Jonathan Zittrain warned about in The Future of the Internet - and How to Stop It and Cory Doctorow has called the war on general-purpose computing is only one piece of the problem that confronts us now. It will not matter - or at least, it will not matter enough - if your computer is general purpose if everything you do on it is intermediated by a third party whose goals have nothing to do with yours. If you buy all your ebooks from Amazon, stream all your TV and movie watching from Netflix, do all your music listening through iTunes, and manage all your communications through Gmail and Facebook, how much difference does it make that you're running Linux? General-purpose computing only gives you access to alternatives if the alternatives still exist; and a small handful of suppliers will provide intimate pictures of large swathes of the population.

Today's younger generation, burdened by debt at a young age and imagining themselves overwhelmed by their Baby Boomer parents' extravagant hoards of possessions, seem willing to contemplate lives with less ownership and more subscription or rental, without fully thinking through the level of control this approach grants to third parties, who will happily charge you for every extra time you watch a movie or read a book, and retain the data to track your inner intellectual life.

Yet we keep making these choices. For example: everyone wants all their devices to sync up neatly so their data is always available. But why must it be via someone else's cloud? We have the technology to support private clouds for families, individuals, groups of friends, or clubs, just as we have the technology to make it reasonably straightforward for moderately sophisticated home users and small businesses to run their own mail servers. The history of file-sharing has shown the way a particular technology can be progressively decentralized to defeat attempts at control and monitoring. Surely what was worth doing to gain access to recorded music would be worth doing to protect digital rights.


July 5, 2013

Repealing the Internet

One of this week's weaker moments was Robert Samuelson's op-ed in the Washington Post wishing he could "repeal the Internet" because the technology's many upsides are, in his view, outweighed by the downsides of various forms of cyber attacks.

A colleague of Samuelson's does a necessary and sufficient job of dissecting the piece, pointing out the real threats, many of which either predate, or have nothing to do with, the Internet. It remains for me to take issue with his language: what led everyone to adopt the Internet was not a proclamation or a law, so "repeal" isn't even semantically possible. Samuelson means "recall", which is about as logical as suggesting that the smart thing to do with someone who uses his literacy to write silly things would be to stuff him back into his mother's womb. But the likelihood is that he doesn't really mean any of it; this is the journalism of provocation in search of attention - that is, hits and advertising clicks.

It's still true, however, that much of the process of developing the technologies that enable the Internet was at least somewhat subversive. The truth of this was on show on Monday night, when Google's London plex substation hosted an evening of British computing history, a perfectly formed answer to all those Americans who think the US invented the Internet. The evening featured Vint Cerf, Roger Scantlebury, Peter Wilkinson, and Peter Kirstein. In the 1960s and 1970s, when all this stuff was being invented, Scantlebury and Wilkinson were at the National Physical Laboratory alongside Donald Davies, the man who gave us data packets. Cerf, now at Google, was at Stanford; Kirstein was and is at UCL, which was the site of the first ARPAnet node outside the United States.

The evening also featured a short documentary on the first computer really built to run a business, Lyons tea shops' LEO (I reviewed the book about this system, by Georgina Ferry, 10 years ago for New Scientist). Plus, presentations from the director of the National Museum of Computing at Bletchley Park and the brains behind a new British computer history exhibit going up at the Science Museum some time next year. It's excellent to see all this work being recognized; as Scantlebury and Wilkinson pointed out recently in the Guardian, the success of the Web and the attention consequently paid to its inventor has tended to overshadow the early networking efforts without which...

Mostly, though, this was a vastly entertaining bunch of guys reminiscing. Many of their stories revolved around all the subterfuges the various nascent networking efforts had to adopt in order to get around one or another set of rules. And so many of them, serving so many different agendas. UNIX was developed at Bell Labs and then given away because at the time AT&T was barred from entering the computer market. The first non-US ARPA node was at UCL in part because NPL was not allowed to engage in wide-area networking (so instead, it built a fancy local area network and experimented on that). And BT stuck with analog switching for so long because of unemployment: the analog switches were built in Liverpool and Nottingham.

One of the more salient questions asked of Cerf was what he would do differently if he'd had the benefit of hindsight. Would he build in better security? Content control? Charging?

Now, the received wisdom is that the Internet pioneers were so idealistically focused on sharing information across a network used by a small, homogeneous population that they didn't see security as an issue. Cerf's response was to the contrary: yes, they would have liked to have incorporated security, but the necessary technology didn't exist yet. TCP/IP was being finalized in 1974, and Whit Diffie and Martin Hellman didn't publish their groundbreaking paper proposing public key cryptography until 1976. Of course, Cerf added, we now know that GCHQ had already had that same idea, so the technology existed - but was classified. What an expensive narrow miss: if ever there were a case where government secrecy cost billions, that would be it. At the time, 30 years after World War II, encryption was still seen as a military technology. Twenty years later, this is what the early 1990s crypto wars were about. You can see the thinking for yourself in a recently declassified NSA internal newsletter (PDF), whose best entertainment is its smugly dismissive trip report on the 1992 Eurocrypt conference (some of the panelists the writer dubs "philosophers", whose presentations he ignored in favor of reading material he found more interesting, have since won Turing Awards).
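For the curious, the 1976 construction Diffie and Hellman published is remarkably small. Here it is sketched with toy numbers - the values are invented for illustration, and real deployments use prime moduli hundreds of digits long:

```python
# Toy Diffie-Hellman key exchange. Only p, g, and the two computed
# values A and B ever cross the wire; the secrets a and b stay home.
p, g = 23, 5          # public prime modulus and generator

a = 6                 # Alice's secret exponent
b = 15                # Bob's secret exponent

A = pow(g, a, p)      # Alice sends 5^6 mod 23 = 8
B = pow(g, b, p)      # Bob sends 5^15 mod 23 = 19

shared_alice = pow(B, a, p)   # (g^b)^a mod p
shared_bob   = pow(A, b, p)   # (g^a)^b mod p
assert shared_alice == shared_bob
print(shared_alice)   # 2 -- the shared secret an eavesdropper can't easily derive
```

Two lines of modular arithmetic, published in 1976 - and, as Cerf noted, already sitting classified in a GCHQ filing cabinet while TCP/IP was being finalized without it.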

Cerf would also have chosen a larger address space (128-bit instead of 32-bit). On charging, the germ of today's battles over network neutrality: "I think we got it right. Everybody pays for access and then does what they want. I love that model."
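The gap between those two address-space choices is easy to check:

```python
# 32-bit IPv4 addresses versus the 128-bit space Cerf wishes he'd chosen.
ipv4 = 2 ** 32
ipv6 = 2 ** 128

print(ipv4)          # 4294967296 -- about 4.3 billion, long since spoken for
print(ipv6 // ipv4)  # 2**96: the factor a 128-bit space buys you
```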

The design of the Internet is very like the US Constitution, which one might have celebrated yesterday: both are subject to interpretation by later users. Either can be exploited for good or ill. The big thing to do in both cases if we don't like where they've landed us is to use their principles to make better decisions going forward.
