" /> net.wars: April 2012 Archives


April 28, 2012

Interview with Lawrence Lessig

This interview was originally intended for a different publication; I only discovered recently that it hadn't run. Lessig and I spoke in late January, while the fate of the Research Works Act was still unknown (it's since been killed).

"This will be the grossest money election we've seen since Nixon," says the law professor Lawrence Lessig, looking ahead to the US Presidential election in November. "As John McCain said, this kind of spending level is certain to inspire a kind of scandal. What's needed is scandals."

It's not that Lessig wants electoral disaster; it's that scandals are what he thinks it might take to wake Americans up to the co-option of the country's political system. The key is the vast, escalating sums of money politicians need to stay in the game. In his latest book, Republic, Lost, Lessig charts this: in 1982 aggregate campaign spending for all House and Senate candidates was $343 million; in 2008 it was $1.8 billion. Another big bump upward is expected this year: the McCain quote he references was in response to the 2010 Supreme Court decision in Citizens United legalising Super-PACs. These can raise unlimited campaign funds as long as they have no official contact with the candidates. But as Lessig details in Republic, Lost, money-hungry politicians don't need things spelled out.

Anyone campaigning against the seemingly endless stream of anti-open Internet, pro-copyright-tightening policies and legislation in the US, EU, and UK - think the recent protests against the US's Stop Online Piracy Act (SOPA) and Protect IP Act (PIPA) and the controversy over the Digital Economy Act and the just-signed Anti-Counterfeiting Trade Agreement (ACTA) treaty - has experienced the blinkered conviction among many politicians that there is only one point of view on these issues. Years of trying to teach them otherwise helped convince Lessig that it was vital to get at the root cause, at least in the US: the constant, relentless need to raise escalating sums of money to fund their election campaigns.

"The anti-open access bill is such a great example of the money story," he says, referring to the Research Works Act (H.R. 3699), which would bar government agencies from mandating that the results of publicly funded research be made accessible to the public. The target is the National Institutes of Health, which adopted such a policy in 2008; the backers are journal publishers.

"It was introduced by a Democrat from New York and a Republican from California and the single most important thing explaining what they're doing is the money. Forty percent of the contributions that Elsevier and its senior executives have made have gone to this one Democrat." There is also, he adds, "a lot to be done to document the way money is blocking community broadband projects".

Lessig, a constitutional scholar, came to public attention in 1998, when he briefly served as a special master in Microsoft's antitrust case. In 1999, he wrote the frequently cited book Code and Other Laws of Cyberspace, following up by founding Creative Commons to provide a simple way to license work on the Internet. In 2002, he argued Eldred v. Ashcroft against copyright term extension in front of the Supreme Court, a loss that still haunts him. Several books later - The Future of Ideas, Free Culture, and Remix - in 2008, at the Emerging Technology conference, he changed course into his present direction, "coding against corruption". The discovery that he was writing a book about corruption led Harvard to invite him to run the Edmond J. Safra Foundation Center for Ethics, where he fosters RootStrikers, a network of activists.

Of the Harvard centre, he says, "It's a bigger project than just being focused on Congress. It's a pretty general frame for thinking about corruption and trying to think in many different contexts." Given the amount of energy and research, "I hope we will be able to demonstrate something useful for people trying to remedy it." And yet, as he admits, although corruption - and similar copyright policies - can be found everywhere, his book and research are resolutely limited to the US: "I don't know enough about different political environments."

Lessig sees his own role as a purveyor of ideas rather than an activist.

"A division of labour is sensible," he says. "Others are better at organising and creating a movement." For similar reasons, despite a brief flirtation with the notion in early 2008, he rules out running for office.

"It's very hard to be a reformer with idealistic ideas about how the system should change while trying to be part of the system," he says. "You have to raise money to be part of the system and engage in the behaviour you're trying to attack."

Getting others - distinguished non-politicians - to run on a platform of campaign finance reform is one of four strategies he proposes for reclaiming the republic for the people.

"I've had a bunch of people contact me about becoming super-candidates, but I don't have the infrastructure to support them. We're talking about how to build that infrastructure." Lessig is about to publish a short book mapping out strategy; later this year he will update incorporating contributions made on a related wiki.

The failure of Obama, a colleague at the University of Chicago in the mid-1990s, to fulfil his campaign promises in this area is a significant disappointment.

"I thought he had a chance to correct it and the fact that he seemed not to pay attention to it at all made me despair," he says.

Discussion is also growing around the most radical of the four proposals, a constitutional convention under Article V to force through an amendment; to make it happen, 34 state legislatures would have to apply.

"The hard problem is how you motivate a political movement that could actually be strong enough to respond to this corruption," he says. "I'm doing everything I can to try to do that. We'll see if I can succeed. That's the objective."


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series and one of other interviews.


April 24, 2012

A really fancy hammer with a gun

Is a robot more like a hammer, a monkey, or the Harley-Davidson on which he rode into town? Or try this one: what if the police program your really cute, funny robot butler (Tony Danza? Scarlett Johansson?) to ask you a question whose answer will incriminate you (and which it then relays)? Is that a violation of the Fourth Amendment (protection against search and seizure) or the Fifth Amendment (you cannot be required to incriminate yourself)? Is it more like flipping a drug dealer or tampering with property? Forget science fiction, philosophy, and your inner biological supremacist; this is the sort of legal question that will be defined in the coming decade.

Making a start on this was the goal of last weekend's We Robot conference at the University of Miami Law School, organized by respected cyberlaw thinker Michael Froomkin. Robots are set to be a transformative technology, he argued to open proceedings, and cyberlaw began too late. Perhaps robotlaw is still a green enough field that we can get it right from the beginning. Engineers! Lawyers! Cross the streams!

What's the difference between a robot and a disembodied artificial intelligence? William Smart (Washington University, St Louis) summed it up nicely: "My iPad can't stab me in my bed." No: and as intimate as you may become with your iPad, you're unlikely to feel the same anthropomorphic betrayal you likely would if the knife were being brandished by that robot butler above, which runs your life while behaving impeccably like it's your best friend. Smart sounds unsusceptible. "They're always going to be tools," he said. "Even if they are sophisticated and autonomous, they are always going to be toasters. I'm wary of thinking in any terms other than a really, really fancy hammer."

Traditionally, we think of machines as predictable because they respond the same way to the same input, time after time. But Smart, working with Neil Richards (Washington University, St Louis), points out that sensors are sensitive to distinctions analog humans can't make. A half-degree difference in temperature or a tiny change in lighting are different conditions to a robot. To us, their behaviour will just look capricious, fostering the anthropomorphic response that wrongly attributes to them the moral agency necessary for guilt under the law: the "Android Fallacy".

Smart and I may be outliers. The recent Big Bang Theory episode in which the can't-talk-to-women Rajesh, entranced with Siri, dates his iPhone is hilarious because in Raj's confusion we recognize our own ability to have "relationships" with almost anything by projecting human capacities such as cognition, intent, and emotions. You could call it a design flaw (if humans had a designer), and a powerful one: people send real wedding presents to TV characters, name Liquid Robotics' Wave Gliders, and characterize sending a six-legged land mine-defusing robot that's lost a leg or two to continue work as "cruel". (Kate Darling, MIT Media Lab).

What if our rampant affection for these really fancy hammers leads us to want to give them rights? Darling asked. Or, asked Sinziana Gutiu (University of Ottawa), will sex robots like Roxxxy teach us wrong expectations of humans? (When the discussion briefly compared sex robots to pets, a Twitterer quipped, "If robots are pets is sex with them bestiality?")

Few are likely to fall in love with the avatars in the automated immigration kiosks proposed at the University of Arizona (Kristen Thomasen, University of Ottawa) with two screens, one with a robointerrogator and the other flashing images and measuring responses. Automated law enforcement, already with us in nascent form, raises a different set of issues (Lisa Shay). Historically, enforcement has never been perfect; laws only have to be "good enough" to achieve their objective, whether that's slowing traffic or preventing murder. These systems pose the same problem as electronic voting: how do we audit their decisions? In military applications, disclosure may tip off the enemy, as Woodrow Hartzog (Samford University) noted. Yet here - and especially in medicine, where liability will be a huge issue - our traditional legal structures decide whom to punish by retracing the reasoning that led to the eventual decision. But even today's systems are already too complex.

When Hartzog asks if anyone really knows how Google or a smartphone tracks us, it reminds me of a recent conversation with Ross Anderson, the Cambridge University security engineer. In 50 years, he said, we have gone from a world whose machines could all be understood by a bright ten-year-old with access to a good library to a world with far greater access to information but full of machines whose inner workings are beyond a single person's understanding. And so: what does due process look like when only seven people understand algorithms that have consequences for the fates of millions of people? Bad enough to have the equivalent of a portable airport scanner looking for guns in New York City; what about house arrest because your butler caught you admiring Timothy Olyphant's gun on Justified?

"We got privacy wrong the last 15 years." Froomkin exclaimed, putting that together. "Without a strong 'home as a fortress right' we risk a privacy future with an interrogator-avatar-kiosk from hell in every home."

The problem with robots isn't robots. The problem is us. As usual, Pogo had it right.




April 20, 2012

A nation of suspects

Bad policies are like counterfeit money: they never quite go away and they ensnare the innocent.

It's about a month since the coalition government admitted its plans for the Communications Capabilities Development Programme, and while most details are still unknown it's pretty clear that it's the Interception Modernisation Programme Redux. The goal is the same: to collect and monitor the nation's communications data. What's changed since 2009 is the scope, which takes in vast quantities of data that have never been kept before, and the conditions of storage, which site the data at ISPs rather than GCHQ.

The security policy analyst Susan Landau has written in various places about the fundamental threat to security created by opening a hole (aka, a back door) for law enforcement. A hole is a hole, no matter who it's for, and once it's there it can be used by people it's not intended for. You'd think this would be blatantly obvious. It's like requiring everyone to give the police a copy of the key to their house and/or car.

The justification for all this is modernization: restoring to law enforcement the abilities it had before the Internet came along and made a mess of everything. (How soon they forget the world of anonymous telephone booths, cash payments, postal mail, and open-air meetings.) The obvious next stage under this logic - if all our online communications are recorded for prospective future study in case we become criminals - would be to demand the same powers over our offline lives. Yesterday, at the ninth Scrambling for Safety event, Liberty doyenne Shami Chakrabarti raised just this point, asking if the Home Office's goal is to eliminate all unwatchable spaces, online and off. (I note an amusing side effect: such a policy could effectively nationalize the pornography industry.)

CCDP, the MP David Davis said yesterday, would turn us all into "a nation of suspects". Quite so. Though how long anyone will be free to point this out is another question. In 2002, the activist John Gilmore was tossed off a British Airways plane for wearing a small button that said "Suspected Terrorist".

There are all sorts of things wrong with building a system to implement surveillance as standard; Ross Anderson liveblogged the many that were made yesterday. Paul Bernal also has three good blog pieces on the politics of privacy, the infeasibility of securing the data, and why the government keeps getting these policies so spectacularly wrong.

The net.wars back catalogue adds that dataveillance doesn't work, and that the amounts of data already available about all of us have a chilling effect.

Here, I'd like to pursue an analogy to drug testing in sports. The University of Aberystwyth's Mark Burnley did a good presentation last week for London Skeptics in the Pub; you can hear it at The Pod Delusion.

When it comes to drug testing, athletes are presumed guilty. They must repeatedly prove their innocence by passing drug tests that examine their urine and blood for any of a lengthy list of substances and methods that are banned under rules set down by the World Anti-Doping Agency and that cover all Olympic sports. The result is an arms race between athletes (or more properly, Burnley argues, their coaches and doctors) and the testing authorities. When steroids became detectable they were replaced with new substances and techniques: EPO, insulin, designer steroids, HGH, latterly microdosing. It's the stupid athletes who get caught.

All, doping or not, submit to substantial invasions of privacy. Under the "whereabouts" rule, they must identify an hour every day where they can be found; they are held responsible for every substance found in their bodies; missed tests add up to failed tests. You may lack sympathy on the basis that a) they choose to be professional athletes, b) they are rewarded with lots of money and glamor, and c) they're all cheaters anyway. To counter: for many their choice of specialization was made very young; many athletes who live under these rules are largely invisible and struggling to break even; and the system is failing to catch the cheaters who matter. (That said, what *is* interesting is the exercise of keeping athletes' submitted samples and testing them again years later in the light of improved technical knowledge.) Meanwhile, the huge sums of money in the sports business make it worthwhile to fund research into new, less detectable techniques and the morality plays that surround athletes who do get caught could hardly be bettered as a method for convincing kids that doping is what you need to do to become a winner.

In any event, the big point Burnley made is that the big doping cases have been broken primarily through traditional policing methods. In tennis, Wayne Odesnik did not get caught by doping tests; he got caught by Australian Customs, who found syringes and eight vials of HGH in his suitcase. Similarly, despite what understandably risk-averse politicians would like to believe under the influence of the security services, unfettered data collection will make plenty of ordinary people's lives miserable - but crime will route around it.



April 13, 2012

The people perimeter

People with jobs are used to a sharp division between their working lives and their private lives. Even in these times, when everyone carries a mobile phone and may be on call at any moment, they still tend to believe that what they say to their friends is no concern of their employer's. (Freelances tend not to have these divisions; to a much larger extent we have always been "in public" most of the time.)

These divisions were always less in small towns, where teachers or clergy had little latitude, and where even lesser folk would be well advised to leave town before doing anything they wouldn't want discussed in detail. Then came social media, which turns everywhere into a small town and where even if you behave impeccably details about you and your employer may be exposed without your knowledge.

That's all a roundabout way of leading to yesterday's London Tea camp, where the subject of discussion was developing guidelines for social media use by civil servants.

Civil servants! The supposedly faceless functionaries who, certainly at the senior levels, are probably still primarily understood by most people through the fictional constructs of TV shows like Yes, Minister and The Thick of It. All of the 50 or 60 people from across government who attended yesterday have Twitter IDs; they're on Facebook and Foursquare, and probably a few dozen other things that would horrify Sir Humphrey. And that's as it should be: the people administering the nation's benefits, transport, education, and health absolutely should live like the people they're trying to serve. That's how you get services that work for us rather than against us.

The problem with social media is the same as their benefit: they're public in a new and different way. Even if you never identify your employer, Foursquare or the geotagging on Twitter or Facebook checks you in at a postcode that's indelibly identified with the very large government building where your department is the sole occupant. Or a passerby photographs you in front of it and Facebook helpfully tags your photograph with your real name, which then pops up in outside searches. Or you say something to someone you know who tells someone else who posts it online for yet another person to identify and finally the whole thing comes back and bites you in the ass. Even if your Tweets are clearly personal, and even if your page says, "These are just my personal opinions and do not reflect those of my employer", the fact of where you can be deduced to work risks turning anything connected to you into something an - let's call it - excitable journalist can make into a scandal. Context is king.

What's new about this is the uncontrollable exposure of this context. Any Old Net Curmudgeon will tell you that the simple fact of people being caught online doing things their employers don't like goes back to the dawn of online services. Even now I'm sure someone dedicated could find appalling behavior in the Usenet archives by someone who is, 25 years on, a highly respected member of society. But Usenet was a minority pastime; Facebook, Twitter et al are mainstream.

Lots has been written by and about employers in this situation: they may suffer reputational damage, legal liability, or a breach that endangers their commercial secrets. Not enough has been written about individuals struggling to cope with sudden, unwanted exposure. Don't we have the right to private lives? someone asked yesterday. What they are experiencing is the same loss of border control that security engineers are trying to cope with. They call it "deperimeterization", because security used to mean securing the perimeter of your network and now security means coping with its loss. Wireless, remote access for workers at home, personal devices such as mobile phones, and links to supplier and partner networks have all blown holes in it.

There is no clear perimeter any more for networks - or individuals, either. Trying to secure one by dictating behavior, whether by education, leadership by example, or written guidelines, is inevitably doomed. There is, however, a very valid reason to have these things: to create a general understanding between employer and employee. It should be clear to all sides what you can and cannot get fired for.

In 2003, Danny O'Brien nailed a lot of this when he wrote about the loss of what he called the "private-intermediate sphere". In that vanishing country, things were private without being secret. You could have a conversation in a pub with strangers walking by and be confident that it would reach only the audience present at the time and that it would not unexpectedly be replayed or published later (see also Dan Harmon and Chevy Chase's voicemail). Instead, he wrote, the Net is binary: secret or public, no middle ground.

What's at stake here is really not private life, but *social* life. It's the addition of the online component to our social lives that has torn holes in our personal perimeters.

"We'll learn a kind of tolerance for the private conversation that is not aimed at us, and that overreacting to that tone will be a sign of social naivete," O'Brien predicted. Maybe. For now, hard cases make bad law (and not much better guidelines) *First* cases are almost always hard cases.




April 6, 2012

I spy

"Men seldom make passes | At girls who wear glasses," Dorothy Parker incorrectly observed in 1937. (How would she know? She didn't wear any). You have to wonder what she could have made of Google Goggles which, despite the marketing-friendly alliterative name, are neither a product (yet) nor a new idea.

I first experienced the world according to a heads-up display in 1997, during a three-day conference on wearable computing at MIT. The eyes-on demonstration was a game of pool with the headset augmenting my visual field with overlays showing cuing angles. (Could be the next level of Olympic testing: checking athletes for contraband contact lenses and earpieces for those in sports where coaching is not allowed.)

At that conference, a lot of ideas were discussed and demonstrated: temperature-controlling T-shirts, garments that could send back details of a fallen soldier's condition, and so on. Much in evidence were folks like Thad Starner, who scanned my business card and handed it back to me and whose friends commented on the way he'd shift his eyes to his email mid-conversation, and Steve Mann, who turned himself into a cyborg experiment as long ago as the 1980s. Checking their respective Web pages, I see that Mann hasn't updated the evolution of wearables graphic since the late 1990s, by which time the headset looked like an ordinary pair of sunglasses; in 2002, when airport security forced him to divest his gear, he had trouble adjusting to life without it. Starner is on leave to work at...Project Glass, the home of Google Goggles.

The problem when a technological dream spans decades is that between conception and prototype things change. In 1997, that conference seemed to think wearable computing - keyboards embroidered in conductive thread, garments made of cloth woven from copper-covered strands, souped-up eyeglasses, communications-enabled watches, and shoes providing power from the energy generated in walking - was surely a decade or less away.

The assumptions were not particularly contentious. People wear wrist watches and jewelry, right? So they'll wear things with the same fashion consciousness, but functional. Like, it measures and displays your heart rhythms (a woman danced wearing a light-flashing pendant that sped up with her heart rate), or your moods (high-tech mood rings), or acts as the controller for your personal area network.

Today, a lot of people don't *wear* wrist watches any more.

For the wearables guys, it's good progress. The functionality that required 12 pounds of machinery draped about your person - I see from my pieces linked above and my contemporaneous notes that the rig I tried felt like wearing a very heavy, inflexible sandwich board - is now an iPhone or Android phone. Even my old Palm Centro comes close. As Jack Schofield writes in the Guardian, the headset is really all that's left that we don't have. And Google has a lot of competition.

What interests me is this: let's say these things do take off in a big way. What then? Where will the information come from to display on those headsets? Who will be the gatekeepers? If we - some of us - want to see every building decorated with outsized female nudes, will we have to opt in for porn?

My speculation here is surely not going to be futuristic enough, because like most people I'm locked into current trends. But let's say that glasses bolt onto the mobile/Internet ecologies we have in place. It is easy to imagine that, if augmented reality glasses do take off, they will be an important gateway to the next generation of information services. Because if all the glasses are is a different way of viewing your mobile phone, then they're essentially today's earpieces - surely not sufficient motivation for people with good vision to wear glasses. So, will Apple glasses require an iTunes account and an iOS device to gain access to a choice of overlays to turn on and off that you receive from the iTunes store in real time? Similarly, Google/Android/Android marketplace. And Microsoft/Windows Mobile/Bing or something. And whoever.

So my questions are things like: will the hardware and software be interoperable? Will the dedicated augmented reality consumer need to have several pairs? Will it be like, "Today I'm going mountain climbing. I've subscribed to the Ordnance Survey premium service and they have their own proprietary glasses, so I'll need those. And then I need the Google set with the GPS enhancement to get me there in the car and find a decent restaurant afterwards." And then your kids are like, "No, the restaurants are crap on Google. Take the Facebook pair, so we can ask our friends." (Well, not Facebook, because the kids will be saying, "Facebook is for *old* people." Some cool, new replacement that adds gaming.)

What's that you say? These things are going to collapse in price so everyone can afford 12 pairs? Not sure. Prescription glasses just go on getting more expensive. I blame the involvement of fashion designers branding frames, but the fact is that people are fussy about what they wear on their faces.

In short, will augmented reality - overlays on the real world - be a new commons or a series of proprietary, necessarily limited, world views?

