" /> net.wars: April 2013 Archives


April 27, 2013

Namesakes

The domain name wars are back: Brazil and Peru are objecting to Amazon.com's application to control .amazon. This sort of dispute has been going on for at least 20 years, and always, in my view, for the same reason: there is no consensus about what the domain name system is *for*. Various paradigms have been suggested over the last 30 years - directory, list of trademarks, geographic guide, free-for-all. There are cases to be made for all these ideas, by consumer advocates, lawyers, governments, or engineers (respectively), but the most current answer seems to be "a way for ICANN to make money". Set up in 1998, ICANN is the Internet Corporation for Assigned Names and Numbers, the organization in charge of allocating Internet names.

I have a lot of sympathy with Peru's and Brazil's claim: "Amazon" is a lot older than Jeff Bezos. On the other hand, they already have the country codes .pe and .br respectively; it's not like they'll fall off the Internet without .amazon as well. But the lack of consensus about the purpose of the DNS (other than to ease human navigation) means these conflicts are inevitable. No clear structure determines who gets precedence: countries or companies, geographical regions or brand names.

The brief background: traffic is routed around the Internet by computers that use numbers - Internet Protocol addresses. To make things easier for humans, in 1983 Paul Mockapetris created the Domain Name System, which lets your computer ask a server to translate the name you request into the correct number. Domain names are hierarchical and read right to left in order of increasing specificity: google.com takes you to Google's main page, news.google.com takes you to the news servers at Google. The rightmost piece (.com in google.com) is known as the top-level domain, and these come in two types: generic (.com, .edu) and country code (.us, .uk). The intention was that national enterprises would register under their country code, and only multinational organizations would register under the handful of generic top-level domains (gTLDs): .edu (educational institutions), .org (non-profits), .mil (the US military), .net (staff of ISPs), and, of course, .com (commercial organizations).
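
The lookup itself is a one-liner in most languages. A minimal sketch in Python (standard library only; the addresses printed will vary, since big sites answer from many numbers):

    import socket

    # The DNS in action: a human-friendly name in, a routable number out.
    print(socket.gethostbyname("google.com"))        # e.g. 142.250.187.142

    # The hierarchy reads right to left: the root delegates .com, .com
    # delegates google.com, and Google's own servers answer for the rest.
    print(socket.gethostbyname("news.google.com"))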

Blame the users for the fact that things did not work out that way. As things shook out, everyone wanted to be in .com, the gTLD that Mockapetris had originally opposed creating. As the early rush online grew into the dot-com boom of the late 1990s, people began complaining that all the "good names" were taken. By this they meant that the *meaningful* names in .com were taken. By 1997, plans were afoot to create more gTLDs and add support for international - that is, non-ASCII - alphabets.

Since then, a relatively small number of new gTLDs have been created with, it seems to me, relatively little effect. As of March 2013, there were more than three times as many registrations in .com as in the next seven gTLDs put together. It's also notable that the top three continue to be the oldies: .com, .net, and .org. What isn't so easily calculated is the percentage of registrations that are defensive: IBM, for example, is registered in .biz, .info, and .org (plus, I'm sure, many country code domains as well), all of which divert to its main site at ibm.com.

In 2011, ICANN announced it would create as many as 1,000 new gTLDs via an application process that starts with a $185,000 fee and that closed to new entries in March 2012. Vetting the 1,900 applications ICANN received is understandably slow.

The absurdity of the whole situation is that what really matters are the numbers. You can, as the instructions for bypassing the UK court-ordered block on accessing The Pirate Bay show, get to a site using only its number (assuming you know it). Censorship efforts so far have focused on blocking access to a given site by altering the DNS server's response to requests for its name - a man-in-the-middle attack, basically. Yet few understand the numbers; it's the names that have meanings people care about.
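
To make that concrete, here is a minimal sketch (Python standard library; the address below is a reserved documentation placeholder, not any real site's) of fetching a page by number alone, with the name supplied only as an HTTP header for the benefit of servers that host many sites on one address:

    import http.client

    # Connect straight to the number, bypassing the DNS entirely.
    # 203.0.113.5 is a reserved documentation address, not a real server.
    conn = http.client.HTTPConnection("203.0.113.5", 80, timeout=10)

    # Shared hosting still needs the name - but only as a header.
    conn.request("GET", "/", headers={"Host": "example.org"})
    response = conn.getresponse()
    print(response.status, response.reason)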

Personally, I've never been convinced that the new gTLDs answer any real need. More, they revive and intensify all the old conflicts and confusions: under the old system you can have amazon.pe, amazon.us, and amazon.com and the "amazon" in each case can mean something different. Under the new one, only one usage can win. Kieren McCarthy, who has covered ICANN in greater detail than any other journalist I'm aware of and who even worked for the organization for a time, has raised a more frightening issue: that the Governmental Advisory Committee has demanded "safeguards" for the new gTLDs that, if implemented, could mean government-ordered content restrictions.

From the day ICANN was created, this potential for the organization to engage in censorship has been a frequently-voiced concern. So far, it has stuck to a narrow technical remit. But this year is seeing many more concerted efforts: the British Prime Minister is pitching for clean public wifi, and Eric Schmidt is imagining an Internet carved up by censorship into national regions. Whatever the DNS is for, it shouldn't be for implementing censorship. As Evgeny Morozov would despise me for saying: it would break the Internet as we know it.




April 19, 2013

Not just another 14-year-old basement Tweeter

It is, as they say, a free country. Which means that as a grown-up I understand that on occasion Congress (or Parliament) is going to pass laws that I seriously disagree with. What I don't appreciate is insults from the people who pass these laws.

In the mid-1990s, in a famous incident at the Computers, Freedom, and Privacy conference, which at the time was a hotbed of opponents to key escrow and defenders of the wide deployment of strong cryptography (privacy, security, authentication), the lawyer Stewart Baker told those assembled that the only opposition to key escrow was coming from "people who couldn't go to Woodstock because they had to stay home and finish their math homework". (To be fair to Baker, despite the insult, he's ever since been a very good sport about coming to CFP and being insulted right back by people who violently disagree with him. So, wash.)

In 2001, when Britain began the process of passing what became the Anti-Terrorism, Crime, and Security Act, the then Home Secretary, Jack Straw, called those who opposed key escrow in the mid-1990s "naïve", and suggested that the 9/11 attacks ought to be making us think better of said opposition. Now I'm told that the bill's sponsor, Mike Rogers (R-MI), stood up in Congress urging the passage of the Cyber Intelligence Sharing and Protection Act (CISPA) and said that "if you're a 14-year-old tweeter in the basement" you don't understand why Congress needs to pass it.

For those who've lost track, CISPA is the US bill that seeks to carve out a large exception to privacy law for "cybersecurity". Its provisions would give companies new rights to monitor their users and share the resulting data with government. The government needs no warrant, and the legislation grants the companies immunity from lawsuits. It creates, in other words, the surveillance-industrial complex that people used to think only paranoids thought could happen.

Let's say up front that 14-year-olds aren't what they used to be. Aaron Swartz won the ArsDigita Prize at 13, and at 14 he was part of the group that wrote the first version of the RSS standard. I don't care where kids like that tweet from; they're worth listening to. More to the point, of course, the opposition spans dozens of civil liberties groups, none of which are staffed by 14-year-olds.

I am 59, so, yes, I was too young for Woodstock. I tweet from wherever I happen to be but have no basement. I was opposed to key escrow, and am opposed to CISPA. I recognize - how could I not? - the very real threats we all face from terrorism. But there are much bigger risks - driving cars, going out in public where there are flu viruses - that we all take every day without thought. We are at much lower risk from terrorism, but are more frightened of it because a) it's new and b) it seems more out of our control.

On Wednesday, Bruce Schneier smartly rewrote and posted his "refuse to be terrorized" essay (others' related thoughts are linked from that posting). The gist: don't panic and pass bad laws that stick. Refuse to be terrorized by going about life as normal. The ongoing disruption to public life, the raised anxiety, the man-centuries of bag and personal searches, the economic uncertainty, the vast sums spent on surveillance infrastructure - these are the real goals of terrorist activity. When, yesterday, Farhad Manjoo argued in Slate that what we need is more security cameras, he was reacting as intended.

On Thursday, the non-14-year-olds in the House of Representatives passed CISPA. If each incident keeps ratcheting up the fear-driven deployment of surveillance technology, the US may become as different from what we think of as American democratic values as the McCarthy era was. Instead of loyalty oaths and blacklists this would mean pervasive automated control.

It's unfortunately hard to explain to frightened people why it's wrong to hope that sacrificing core values of liberty and democracy, privacy and personal autonomy will buy safety, especially with so little public understanding of statistics. Mass surveillance *will certainly* make you less free, but it only *may* pay off in increased safety - and that pay-off will be highly unevenly distributed. You are never going to be able to lock down all possible targets.
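
The statistics point deserves numbers. A back-of-envelope illustration (the figures are mine, invented purely for the arithmetic, not drawn from any real system) of why screening everyone for very rare events mostly flags the innocent:

    # Invented figures, chosen only to show the base-rate arithmetic.
    population = 300_000_000     # roughly the US
    bad_actors = 3_000           # a deliberately generous guess
    accuracy = 0.99              # far better than any real flagging system

    true_hits = bad_actors * accuracy
    false_hits = (population - bad_actors) * (1 - accuracy)

    print(f"innocents flagged: {false_hits:,.0f}")    # ~3,000,000
    print(f"bad actors flagged: {true_hits:,.0f}")    # ~2,970
    # Roughly a thousand false alarms for every real hit.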

Last year, when the hated bills du jour were SOPA and PIPA, technology companies and digital rights activists were on the same side. The evil genius of CISPA is that it divides us: digital rights activists oppose it, particularly the warrantless data sharing. But because the bill relieves data-driven companies like Facebook and Google of the worry about being sued by angry customers, their motivation to join activists in opposing CISPA is limited. Privacy policies, contrary to what we'd like to believe, exist to protect companies' corporate asses, not us. This is likely to be a much tougher fight, even though CISPA's similar passage through the House last year was followed by rejection in the Senate. Worse, the drafters of future legislation now know that Internet companies will abandon digital rights rhetoric in favor of their business interests: they can be bought.




April 12, 2013

Cautiously apocalyptic

"We've been waiting for societal readiness," Ian Danforth says, at the end of his list of factors that have kept us waiting for robots. The head of the pet-like prototype machine on the table next him nods.

Danforth is demonstrating the only actual robot at this year's We Robot, which is otherwise mostly lawyers scoping out the legal challenges robots will bring. Danforth's video clip has it rolling around a little to chase a ball: I imagine quickly becoming bored (yes, yes, Aibo).

Yet Danforth confidently predicts that in six months "incredible, unexpected new robots you want in your home" will be available; in a year thousands of homes will have them; in two years tens of thousands; and five years will produce the first "true AI". We are less than a mile away from the research lab where John McCarthy labored for 50 years; in 1956 he thought it would take six months.

Danforth's ideas tap into a particular trend, which Ken Goldberg at UC Berkeley calls "cloud robotics". Today's networked computational power means that you can launch a cute pet robot into the market with rather limited abilities and let it improve in the field via the cloud. People enjoy teaching pets tricks and find it endearing when they fail; why shouldn't this apply to robots? It all coalesces for me when someone asks what kind of data this cute little pet will be collecting, especially in conjunction with other recent events. Answer: video, audio, accelerometer readings, and geolocation from an attached GPS unit, all sent to a central server, from where the data can be shared back out again so that my robot suddenly knows a trick yours has learned. Someone's actually implementing Rupert Sheldrake's morphic resonance.
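
A minimal sketch of that loop - every name, endpoint, and payload here is invented for illustration, with no claim about Danforth's actual design:

    import json
    import urllib.request

    SERVER = "https://robots.example.com/api"   # hypothetical endpoint

    def upload_telemetry(robot_id, sensors):
        """Ship video/audio/accelerometer/GPS readings to the central server."""
        body = json.dumps({"robot": robot_id, "sensors": sensors}).encode()
        request = urllib.request.Request(
            SERVER + "/telemetry", data=body,
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(request)

    def download_tricks(last_sync):
        """Fetch tricks other people's robots have learned since we last asked."""
        with urllib.request.urlopen(f"{SERVER}/tricks?since={last_sync}") as resp:
            return json.load(resp)   # one robot's lesson, every robot's trick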

Danforth claims the data will not be looked at by humans. Not impressed: as the ACLU's Jay Stanley has pointed out, what matters is less whether data is examined by humans or read by machines than the way the resulting decisions reverberate through the rest of our lives. Later, Danforth tells me the stream will be encrypted in transit to and from the server, and he hopes that if law enforcement issues a subpoena he'll be able to say he has no data to show them. Now, why does that make me want to say CALEA and communications data bill?

The notion of the robot as intimate data collection device came up at the first We Robot last year, among the many other things lawyers worry about, like liability; but this is less hypothetical. It shows that Charlie Stross was right when he argued, in his talk on the future of Moore's Law, that computational power is yesterday's future, just as increasing transportation speeds were the future of the first half of the 20th century. Today's future is rapidly emerging as data (his meditations on the implications of bandwidth included lifelogging): big data, open data, algorithmic decision-making. Asimov did not, if I remember correctly, consider this aspect of robotics. His robots fought through individual behavioral tangles brought upon them by the Three Laws, but they did not collaborate across vast data networks, and they did not have to decide whether disclosing their intimate knowledge of you to a hostile interrogator would cause you sufficient harm that they should harm the interrogator, or self-destruct, rather than answer.

Julie Martin saw this as a possibly hopeful thing: "Robotics cases may force people to look at things they should be looking at," she said. "It shouldn't be different because it's robotics." She meant that the world is now full of data collection technologies we shouldn't be taking so casually, and robots provide an opportunity to make that visible enough to engage people in stopping it. In response, Ian Kerr commented that Ryan Calo has made similar comments about drones in the past - that they would spark a chance to have and win a privacy debate that should have already taken place - "but I'm cautiously apocalyptic about that now".

One of Martin's examples was Tesla's recent spat with the New York Times, which showed how much data cars can collect about their drivers. Unfortunately, if past discussions are any guide, in a world of CCTV cameras, wiretap-ready telephone services and ISPs, online profiling, and audit trails, "why should robotics be any different?" will be the line used to justify the invasion of our most private settings. Cue Bill Steele's 1970s song The Walls Have Ears.

At this point an evil thought occurs: you sell a cute robot people will fall in love with. You include the kind of subscription service common in software, where you push updates and improvements to the robot automatically. Or, in the way of today's world, you offer those services free, contingent on my agreeing to data sharing. When I fail to resubscribe or refuse to provide data, all that stops. With an Internet service, the site stops giving me personalized service (search results, targeted ads). A pet robot would seem to stop loving me back. This seems to me a chilling but perfectly plausible business model and not at all what we imagine when we long for a robot to do the housecleaning.
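
Reduced to its logic, the model fits in a few lines - entirely hypothetical, of course; no vendor has shipped this, as far as I know:

    # The imagined business model: affection as a subscription feature.
    def affection_level(subscribed, sharing_data):
        if subscribed or sharing_data:
            return "wags, purrs, learns new tricks"
        return "stops loving you back"

    print(affection_level(subscribed=False, sharing_data=False))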




April 5, 2013

Only forget

The so-called "right to be forgotten", proposed as part of the EU's data protection reform package, is a particularly contentious idea. Located at the nexus where privacy, freedom of expression, and freedom of association collide, it impinges on other rights we care about in all directions.

At a Westminster eForum on the data protection reforms (PDF) a couple of weeks ago, the deputy commissioner David Smith, from Britain's Information Commissioner's Office, commented that he thinks RTBF "may have unintended serious impact by setting users' expectations too high for what can realistically be achieved". Today, media stories are reporting that the UK wants to opt out, for just those reasons. As the Telegraph explains matters, currently individuals have the right to object to wrong information; what's proposed is to extend that right and put the onus on companies to comply. At stake for non-compliance (with the various new rules, not just RTBF): up to 2 percent of a company's global income.

Smith thought it would be better to characterize it as a "right to object", and that, seen in that light, it could be a good thing: "I can say I object and ask you to stop, and you will have to stop unless you can come up with valid reasons." Such a right would, he said, reverse the burden of proof in existing processes, rather like shifting from opt-out to opt-in. This seems reasonable to me, in part because no matter what the law says it's impossible to be sure that every copy of anything has been removed from the Internet.

Framed Smith's way, the proposals seem more rational: there is real harm at stake here. Why, for example, should someone applying for a job be at a disadvantage because the first few pages of a Google search on their name are full of abusive postings over something they once said about Harry Potter? Whether or not they did anything wrong, today's cover-your-corporate-ass attitudes mean the employer will move on to a less controversial candidate. It can take years to push that kind of thing down enough pages that Human Resources will lose patience before finding it. Similarly, with no obligation to check, update, or correct information, sites are free to spread as much misinformation as they like. These are issues that have been well covered by privacy scholars such as Daniel Solove and, in his 2009 book, Delete, Viktor Mayer-Schönberger.

Anyone with a modicum of technical knowledge about the Internet will say it's not reasonable to expect ISPs, search engines, hosting sites, and social networks to police the steady flood of data that's being posted. Granted. Much impetus behind RTBF came from the discovery, largely due to research by Cambridge's Joseph Bonneau, that deleting material from social networks is often purely cosmetic: the material is not served up to the public but continues to reside on their servers. Bonneau published his research in 2009; it was August 2012 before Facebook changed the system so that deleted means deleted. The hunger of (especially) advertising-supported companies for Big Data means there is a genuine need for this intuitively correct right to be backed by law. The alternative risks rampant deception, only revealed occasionally when researchers like Bonneau are motivated to dig and publish.
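
The distinction Bonneau uncovered is easy to see in code. A toy sketch (making no claim about any particular network's real schema) of the gap between what users think "delete" means and what many services actually do:

    # Cosmetic ("soft") deletion versus real deletion.
    posts = {42: {"photo": "embarrassing.jpg", "hidden": False}}

    def soft_delete(post_id):
        posts[post_id]["hidden"] = True   # no longer served to the public...
        # ...but still sitting on the company's servers, available for mining.

    def hard_delete(post_id):
        del posts[post_id]                # actually gone

    soft_delete(42)
    print(posts)   # the "deleted" photo is still right there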

Other difficulties concern the nature of social networks. If a circle of friends regularly post pictures of themselves and each other, large holes are torn in the social fabric if one demands that all material related to them be deleted. What are the boundaries of RTBF in a time when people, like networks, are losing their defined perimeters?

Naturally, the biggest push-back against RTBF is coming from the US, home of the biggest advertising-driven companies. In the Stanford Law Review, Jeffrey Rosen analyzes the legal roots of this cultural disjuncture, tracing it to variations in the way past convictions are handled in criminal law. In the US, convictions remain published indefinitely, while in Europe, after a certain amount of time has passed, they are "spent" - essentially, forgotten. In 2009, Wikipedia was at the center of exactly this cultural clash.

Like Lauren Weinstein, who sees RTBF as a modern reenactment of the memory hole in Orwell's 1984, Rosen concludes that RTBF necessarily means chilling freedom of expression and implementing mass censorship. Peter Fleischer, Google's global privacy counsel, doesn't love it either: by January 2012 he was complaining that, their expectations raised, individuals were asking for links to legally posted information about them to be deleted from search listings, perhaps in reference to the workings of a relevant Spanish law. Fleischer and the EU went on to argue about whether search engines should be responsible for such deletions. Last week saw the launch of a US coalition that claims it wants to find a balance between protecting privacy and ensuring the free flow of information.

The labyrinthine process of legislation in the EU makes it hard to parse the thousands of amendments and myriad committee votes to get a sense of how this particular debate will translate into law. You would hope that the length and complexity of the process would result in a nuanced outcome.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Stories about the border wars between cyberspace and real life are posted throughout the week at the net.wars Pinboard - or follow on Twitter.