
October 12, 2012

My identity, my self

Last week, the media were full of the story that the UK government was going to start accepting Facebook logons for authentication. This week, in several presentations at the RSA Conference, representatives of the Government Digital Service begged to differ: the list of companies that have applied to become identity providers (IDPs) will be published at the end of this month, and until then they are not confirming the presence or absence of any particular company. According to several of the spokesfolks manning the stall and giving presentations, the press simply assumed that seeing social media companies among the categories of organization that might potentially offer identity authentication meant Facebook. We won't know for another few weeks who has actually applied.

So I can mercifully skip the rant that hooking a Facebook account to the authentication system you use for government services is a horrible idea in both directions. What they're actually saying is, what if you could choose among identification services offered by the Post Office, your bank, your mobile network operator (especially for the younger generation), your ISP, and personal data store services like Mydex or small, local businesses whose owners are known to you personally? All of these sounded possible based on this week's presentations.

The key, of course, is what standards the government chooses to create for IDPs and which organizations decide they can meet those criteria and offer a service. Those are the details the devil is in: during the 1990s battles over deploying strong cryptography, the government wanted copies of everyone's cryptography keys to be held in escrow by a Trusted Third Party. At the time, the frontrunners were banks: the government certainly trusted those, and imagined that we did, too. The strength of the disquiet over that proposal took them by surprise. Then came 2008. Those discussions still matter, however; someone with a long memory raised the specter of Part I of the Electronic Communications Act 2000, modified in 2005, as relevant here.

It was this historical memory that made some of us so dubious in 2010, when the US came out with proposals rather similar to the UK's present ones, the National Strategy for Trusted Identities in Cyberspace (NSTIC). Ross Anderson saw it as a sort of horror-movie sequel. On Wednesday, however, Jeremy Grant, the senior executive advisor for identity management at the US National Institute of Standards and Technology (NIST), the agency charged with overseeing the development of NSTIC, sounded a lot more reassuring.

Between then and now came both US and UK attempts to establish some form of national ID card. In the US, it was "Real ID", focused on the state authorities that issue driver's licenses. In the UK, it was the national ID card and accompanying database. In both countries the proposals got howled down. In the UK especially, the combination of an escalating budget, a poor record with large government IT projects, a change of government, and a desperate need to save money killed it in 2010.

Hence the new approach in both countries. From what the GDS representatives - David Rennie (head of proposition at the Cabinet Office), Steven Dunn (lead architect of the Identity Assurance Programme; Twitter: @cuica), Mike Pegman (security architect at the Department for Work and Pensions, expected to be the first user service; Twitter: @mikepegman), and others manning the GDS stall - said, the plan is much more like the structure that privacy advocates and cryptographers have been pushing for 20 years: systems that give users choice about who they trust to authenticate them for a given role and that share no more data than necessary. The notion that this might actually happen is shocking - but welcome.

None of which means we shouldn't be asking questions. We need to understand clearly the various envisioned levels of authentication. In practice, will those asking for identity assurance request the minimum they need, or always go for the maximum they could get? For example, a bar only needs relatively low-level assurance that you are old enough to drink; but will bars prefer to ask for full identification? What will the costs be; who pays them, and under what circumstances?
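To make the assurance-levels point concrete, here is a toy sketch in Python - invented names and records, no real IDP protocol - of the difference between answering the bar's actual question and handing over full identification:

    from datetime import date

    # Toy identity record held by an IDP; all values invented for illustration.
    RECORD = {"name": "Alice Example", "dob": date(1990, 5, 1),
              "address": "1 High Street", "passport_no": "XX123456"}

    def assert_over_18(record):
        """Minimal disclosure: answer the verifier's actual question, nothing else."""
        today = date.today()
        age = today.year - record["dob"].year - (
            (today.month, today.day) < (record["dob"].month, record["dob"].day))
        return {"over_18": age >= 18}

    def assert_full_identity(record):
        """Maximal disclosure: everything the verifier could possibly ask for."""
        return dict(record)

    print(assert_over_18(RECORD))        # {'over_18': True} - all a bar needs
    print(assert_full_identity(RECORD))  # name, birth date, address, passport...

The worry in the paragraph above is simply that, absent rules or price pressure, verifiers will default to the second function.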

Especially, we need to know the detail of the standards that organizations must meet to be accepted as IDPs - in particular, what kinds of organization they exclude. The GDS as presently constituted - composed, as William Heath commented last year, of all the smart, digitally experienced people you *would* hire to reinvent government services for the digital world if you had the choice - seems to have its heart in the right place. Their proposals as outlined - conforming, as Pegman explained happily, to Kim Cameron's seven laws of identity - pay considerable homage to the idea that no one party should have all the details of any given transaction. But the surveillance-happy type of government that legislates for data retention and CCDP might also at some point think, hey, shouldn't we be requiring IDPs to retain all data (requests for authentication, and so on) so we can inspect it should we deem it necessary? We certainly want to be very careful not to build a system that could support such intimate secret surveillance - the fundamental objection all along to key escrow.



September 21, 2012

This is not (just) about Google

We had previously glossed over the news, in February, that Google had circumvented the third-party cookie blocking in Apple's Safari Web browser, used on both its desktop and mobile machines. For various reasons, tracker blocking is itself a divisive issue, pitting those who favour user control over privacy against those who ask exactly how people plan to pay for all that free content if not through advertising. But there was little disagreement about this: Google goofed badly in overriding users' clearly expressed preferences. Google promptly disabled the code, but the public damage was done - and probably made worse by the company's initial response.

In August, the US Federal Trade Commission fined Google $22.5 million for that little escapade. Pocket change, you might say, and compared to Google's $43.6 billion in 2011 revenues you'd be right. As the LSE's Edgar Whitley pointed out on Monday, a sufficiently large company can also view such a fine strategically: paying might be cheaper than fixing the problem. I'm less sure: fines have a way of going up a lot if national regulators believe a company is deliberately and repeatedly flouting their authority. And to any of the humans reviewing the fine - neither Page nor Brin grew up particularly wealthy, and I doubt Google pays its lawyers more than six figures - I'd bet $22.5 million still seems pretty much like real money.

On Monday, Simon Davies, the founder and former director of Privacy International, convened a meeting at the LSE to discuss this incident and its eventual impact. This was when it became clear that whatever you think about Google in particular, or online behavioral advertising in general, the questions it raises will apply widely to the increasing numbers of highly complex computer systems in all sectors. How does an organization manage complex code? What systems need to be in place to ensure that code does what it's supposed to do, no less - and no more? How do we make these systems accountable? And to whom?

The story in brief: Stanford PhD student Jonathan Mayer studies the intersection of technology and privacy, not by writing thoughtful papers about the law but empirically, by examining what companies do, how they do it, and to how many millions of people.

"This space can inherently be measured," he said on Monday. "There are wide-open policy questions that can be significantly informed by empirical measurements." So, for example, he'll look at things like what opt-out cookies actually do (not much of benefit to users, sadly), what kinds of tracking mechanisms are actually in use and by whom, and how information is being shared between various parties. As part of this, Mayer got interested in identifying the companies placing cookies in Safari; the research methodology involved buying ads that included codes enabling him to measure the cookies in place. It was this work that uncovered Google's bypassage of Safari's Do Not Track flag, which has been enabled by default since 2004. Mayer found cookies from four companies, two of which he puts down to copied and pasted circumvention code and two of which - Google and Vibrant - he were deliberate. He believes that the likely purpose of the bypass was to enable social synchronizing features (such as Google+'s "+1" button); fixing one bit of coded policy broke another.

This wasn't much consolation to Whitley, however: where are the quality controls? "It's scary when they don't really tell you that's exactly what they have chosen to do as explicitly corporate policy. Or you have a bunch of uncontrolled programmers running around in a large corporation providing software for millions of users. That's also scary."

And this is where, for me, the issue at hand jumped from the parochial to the global. In the early days of the personal computer or of the Internet, it didn't matter so much if there were software bugs and insecurities, because everything based on them was new and understood to be experimental enough that there were always backup systems. Now we're in the computing equivalent of the intermediate period in a pilot's career, which is said to be the more dangerous time: that between having flown enough to think you know it all, and having flown enough to know you never will. (John F. Kennedy, Jr, was in that window when he crashed.)

Programmers are rarely brought into these kinds of discussions, yet they are the people at the coalface who must transpose laws, regulations, and policies written in human language into the logical precision of computer code. As Danielle Citron explains in a long and important 2007 paper, Technological Due Process, that process inevitably generates many errors. Her paper focuses primarily on several large, automated benefits systems (two of them built by EDS) where the consequences of the errors may be denying the most needy and vulnerable members of society the benefits the law intends them to receive.

As the LSE's Chrisanthi Avgerou said, these issues apply across the board, in major corporations like Google, but also in government, financial services, and so on. "It's extremely important to be able to understand how they make these decisions." Just saying, "Trust us" - especially in an industry full of as many software holes as we've seen in the last 30 years - really isn't enough.




September 14, 2012

What did you learn in school today?

One of the more astonishing bits of news this week came from Big Brother Watch: 207 schools across Britain have placed 825 CCTV cameras in toilets or changing rooms. The survey included more than 2,000 schools, so what this is basically saying is that a tenth of the schools surveyed apparently saw nothing wrong in spying on their pupils in these most intimate situations. Overall, the survey found that English, Welsh, and Scottish secondary schools and academies have a total of 106,710 cameras, an average camera-to-pupil ratio of 1:38. As a computer scientist would say, this is non-trivial.

Some added background: the mid 2000s saw the growth of fingerprinting systems for managing payments in school cafeterias, checking library books in and out, and registering attendance. In 2008, the Leave Them Kids Alone campaign, set up by a concerned parent, estimated that more than 2 million UK kids had been fingerprinted, often without the consent of their parents. The Protection of Freedoms Act 2012 finally requires schools and colleges to get parental consent before collecting children's biometrics. That doesn't stop the practice but at least it establishes that these are serious decisions whose consequences need to be considered.

Meanwhile, Ruth Cousteau, the editor of the Open Rights Group's ORGzine, one of the locations where you can find net.wars every week, sends the story that a Texas school district is requiring pupils to carry RFID-enabled cards at all times while on school grounds. The really interesting element is that the goal here is primarily and unashamedly financial, imposed on the school by its district: the school gets paid per pupil per day, and if a student isn't in homeroom when the teacher takes attendance, that's a little less money to finance the school in doing its job. The RFID cards enable the school to count the pupils who are present somewhere on the grounds but not in their seats, as if they were laptops in danger of being stolen. In the Wired write-up, the school's principal seems not to see any privacy issues connected to the fact that the school can track kids anywhere on the campus. It's good for safety. And so on.

There is constant debate about what kids should be taught in schools with respect to computers, and the focus tends to be on what they are directly taught. When I covered Young Rewired State in 2011, one of the things we asked the teams I followed was about the state of computer education in their schools. Their answers: dire. Schools, apparently under the impression that their job was to train the office workforce of the previous decade, were teaching kids how to use word processors, but little or nothing about how computers work, how to program, or how to build things.

There are signs that this particular problem is beginning to be rectified. Things like the Raspberry Pi and the Arduino, coupled with open source software, are beginning to provide ways to recapture teaching in this area, essential if we are to have a next generation of computer scientists. This is all welcome stuff: teaching kids about computers by supplying them with fundamentally closed devices like iPads and Kindles is the equivalent of teaching kids sports by wheeling in a TV and playing a videotape of last Monday's US Open final between Andy Murray and Novak Djokovic.

But here's the most telling quote from that Wired article: "The kids are used to being monitored."

Yes, they are. And when they are adults, they will also be used to being monitored. I'm not quite paranoid enough to suggest that there's a large conspiracy to "soften up" the next generation (as Terri Dowty used to put it when she was running Action for the Rights of Children), but you can have the effect whether or not you have the intent. All these trends are happening in multiple locations: in the UK, for example, there were experiments in 2007 with school uniforms with embedded RFID chips (that wouldn't work in the US, where school uniforms are a rarity); in the trial, these not only tracked students' movements but pulled up data on academic performance.

These are the lessons we are teaching these kids indirectly. We tell them that putting naked photos on Facebook is a dumb idea and may come back to bite them in the future - but simultaneously we pretend to them that their electronic school records, down to the last, tiniest infraction, pose no similar risk. We tell them that plagiarism is bad and try to teach them about copyright and copying - but real life is meanwhile teaching them that a lot of news is scraped almost directly from press releases and that cheating goes on everywhere from financial markets and sports to scientific research. And although we try to tell them that security is important, we teach them by implication that it's OK to use sensitive personal data such as fingerprints and other biometrics for relatively trivial purposes, even knowing that these data's next outing may be to protect their bank accounts and validate their passports.

We should remember: what we do to them now they will do to us when we are old and feeble, and they're the ones in charge.


August 10, 2012

Wiped out

There are so many awful things in the story of what happened this week to technology journalist Matt Honan that it's hard to know where to start. The fundamental part - that through not particularly clever social engineering an outsider was able in about 20 minutes to take over and delete his Google account, take over and defame his Twitter account, and then wipe all the data on his iPhone, iPad, and MacBook - would make a fine nightmare, or maybe a movie with some of the surrealistic quality of Martin Scorsese's After Hours (1985). And all, as Honan eventually learned, because the hacker fancied an outing with his three-character Twitter ID, a threat so unexpected there's no way you'd build it into your threat model.

Honan's first problem was the thing Suw Charman-Anderson put her finger on for an Infosecurity Magazine piece I did earlier this year: locking every other part of your digital life - ecommerce accounts, financial accounts, social media accounts, password resets all over the Web - to a single email address puts you in for "a world of hurt" if that address is compromised. If you only have one email account you use for everything, given access to it, an attacker can simply request password resets all over the place - and then he has access to your accounts and you don't. There are separate problems around the fact that the information required for resets is both the kind of stuff people disclose without thinking on social networks and commonly reused. None of this requires a fancy technology fix, just smarter, broader thinking.

There are simple solutions to the email problem: don't use one email account for everything and, in the case of Gmail, use two-factor authentication. If you don't operate your own server (and maybe even if you do) it may be too complicated to create a separate address for every site you use, but it's easy enough to have a public address you use for correspondence, a private one you use for most of your site accounts, and then maybe a separate, even less well-known one for a few selected sites that you want to protect as much as you can.
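For what it's worth, Gmail-style plus-addressing makes per-site addresses cheap if your provider supports it. A minimal sketch - my own illustration, not anything Honan or the providers prescribe:

    import hashlib

    def site_alias(base, site):
        """Derive a distinct, repeatable address per site from one mailbox:
        user@example.com -> user+<tag>@example.com, all delivered to one inbox."""
        user, domain = base.split("@")
        tag = hashlib.sha256(site.encode()).hexdigest()[:8]  # short, stable tag
        return f"{user}+{tag}@{domain}"

    print(site_alias("wendy@example.com", "somestore.com"))
    # -> wendy+<8 hex chars>@example.com; the tag varies per site

The caveat: an attacker can strip everything after the "+", so for the accounts that really matter the advice above stands - use a genuinely separate, little-known address.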

Honan's second problem, however, is not so simple to fix unless an incident like this commands the attention of the companies concerned: the interaction of two companies' security practices that on their own probably seemed quite reasonable. The hacker needed just two small bits of information: Honan's address (sourced from the Whois record for his Internet domain name), and the last four digits of a credit card number. The hack to get the latter involved adding a credit card to Honan's Amazon.com account over the phone and then using that card number, in a second phone call, to add a new email address to the account. Finally, you do a password reset to the new email address, access the account, and find the last four digits of the cards on file - which Apple then accepted, along with the billing address, as sufficient evidence of identity to issue a temporary password into Honan's iCloud account.
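To see how the practices compose, here is a toy model - invented rules, emphatically not Amazon's or Apple's real systems - of services that each reveal a little and accept a little:

    # What each fictional service displays to a logged-in user:
    REVEALS = {"shop": {"card_last4"}}  # order pages show a card's last four digits

    # What each fictional service accepts as proof of identity over the phone:
    ACCEPTS = {
        "shop": {"name", "email", "billing_address"},        # enough to get in
        "cloud": {"name", "billing_address", "card_last4"},  # enough for a reset
    }

    def attack_path(known):
        """Greedy walk: unlock any service whose requirements are already known,
        harvest whatever that service reveals, and repeat."""
        owned, progress = [], True
        while progress:
            progress = False
            for svc, needed in ACCEPTS.items():
                if svc not in owned and needed <= known:
                    owned.append(svc)
                    known |= REVEALS.get(svc, set())
                    progress = True
        return owned

    # Whois supplies name and billing address; social media supplies the email.
    print(attack_path({"name", "email", "billing_address"}))
    # -> ['shop', 'cloud']: the shop leaks card_last4, which the cloud accepts

Each rule looks harmless in isolation; the vulnerability exists only in the composition.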

This is where your eyes widen. Who knew Amazon or Apple did any of those things over the phone? I can see the point of being able to add an email address; what if you're permanently locked out of the old one? But I can't see why adding a credit card was ever useful; it's not as if Amazon did telephone ordering. And really, the two successive calls should have raised a flag.

The worst part is that even if you did know, you'd likely have no way to require any additional security to block off that route to impersonators; telephone, cable, and financial companies have been securing telephone accounts with passwords for years, but ecommerce sites don't (or didn't) think of themselves as possible vectors for hacks into other services. Since the news broke, both Amazon and Apple have blocked off this phone access. But given the extraordinary number of sites we all depend on, the takeaway from this incident is that we ultimately have no clue how well any of them protect us against impersonation. How many other sites can be gamed in this way?

Ultimately, the most important thing, as Jack Schofield writes in his Guardian advice column, is not to rely on one service for everything. Honan's devastation was as complete as it was because all his devices were synched through iCloud and could be remotely wiped. Yet this is the service model that Apple has and that Microsoft and Google are driving towards. The cloud is seductive in its promises: your data is always available, on all your devices, anywhere in the world. And it's managed by professionals, who will do all the stuff you never get around to, like make backups.

But that's the point: as Honan discovered to his cost, the cloud is not a backup. If all your devices are hooked to it, it is your primary data pool, and, as Apple co-founder Steve Wozniak pointed out this week, it is out of your control. Keep your own backups, kids. Develop multiple personalities. Be careful out there.




June 15, 2012

A license to print money

"It's only a draft," Julian Huppert, the Liberal Democrat MP for Cambridge, said repeatedly yesterday. He was talking about the Draft Communications Data Bill (PDF), which was published on Wednesday. Yesterday, in a room in a Parliamentary turret, Hupper convened a meeting to discuss the draft; in attendance were a variety of Parliamentarians plus experts from civil society groups such as Privacy International, the Open Rights Group, Liberty, and Big Brother Watch. Do we want to be a nation of suspects?

The Home Office characterizes the provisions in the draft bill as vital powers to help catch criminals, save lives, and protect children. Everyone else - the Guardian, ZDNet UK, and dozens more - is calling them the "Snooper's charter".

Huppert's point is important. As with the Defamation Bill before it, publishing a draft means there will be a select committee with 12 members, discussion, comments, evidence taken, a report (by November 30, 2012), and then a rewritten bill. This draft will not be voted on in Parliament. We don't have to convince 650 MPs that the bill is wrong; it's a lot easier to talk to 12 people. This bill, as is, would never pass either House in any case, he suggested.

This is the optimistic view. The cynic might suggest that since it's been clear for something like ten years that the British security services (or perhaps their civil servants) have a recurring wet dream in which their mountain of data is the envy of other governments, they're just trying to see what they can get away with. The comprehensive provisions in the first draft set the bar, softening us up to give away far more in future versions than we otherwise would have. Psychologists call this anchoring, and while probably few outside the security services would regard the wholesale surveillance and monitoring of innocent people as normal, the crucial bit is where you set the initial bar for comparison for future drafts of the legislation. However invasive the next proposals are, it will be easy for us to lose the bearings we came in with and feel that we've successfully beaten back at least some of the intrusiveness.

But Huppert is keeping his eye on the ball: maybe we can not only get the worst stuff out of this bill but make things actually better than they are now, since the bill will amend RIPA. The Independent argues that private companies hold much more data on us overall, but that article misses the point that this bill intends to grant government access to all of it, at any time, without notice.

The big disappointment in all this, as William Heath said yesterday, is that it marks a return to the old, bad government IT ways of the past. We were just getting away from giant, failed public IT projects like the late, unlamented NHS National Programme for IT and the even more unlamented ID card towards agile, cheap public projects run by smart guys who know what they're doing. And now we're going to spend £1.8 billion of public money over ten years (draft bill, p92) building something no one much wants and that probably won't work? The draft bill claims - on what authority is unclear - that the expenditure will bring in £5 to £6 billion in revenues. From what? Are they planning to sell the data?

Or are they imagining the economic growth implied by the activity that will be necessary to build, install, maintain, and update the black boxes that will be needed by every ISP in order to comply with the law? The security consultant Alec Muffett has laid out the parameters for this SpookBox 5000: certified, tested, tamperproof, made by, say, three trusted British companies. Hundreds of them, legally required, with ongoing maintenance contracts. "A license to print money," he calls them. Nice work if you can get it, of course.

So we're talking - again - about spending huge sums of government money on a project that only a handful of people want and whose objectives could be better achieved by less intrusive means. Give police better training in computer forensics, for example, so they can retrieve the evidence they need from the devices they find when executing a search warrant.

Ultimately, the real enemy is the lack of detail in the draft bill. Using the excuse that the communications environment is changing rapidly and continuously, the notes argue that flexibility is absolutely necessary for Clause 1, the one that grants the government all the actual surveillance power, and so it's been drafted to include pretty much everything, like those contracts that claim copyright in perpetuity in all forms of media that exist now or may hereinafter be invented throughout the universe. This is dangerous because in recent years the use of statutory instruments to bypass Parliamentary debate has skyrocketed. No. Make the defenders of this bill prove every contention; make them show the evidence that makes every extra bit of intrusion necessary.




May 25, 2012

Camera obscura

There was a smoke machine running in the corner when I arrived at today's Digital Shoreditch, an afternoon considering digital identity, part of a much larger, multi-week festival. Briefly, I wondered if the organizers were making a point about privacy. Apparently not; they shut it off when the talks started.

The range of speakers served as a useful reminder that the debates we have in what I think of as the Computers, Freedom, and Privacy sector are rather narrowly framed around what we can practically build into software and services to protect privacy (and why so few people seem to care). We wrangle over what people post on Facebook (and what they shouldn't), or how much Google (or the NHS) knows about us and shares with other organizations.

But we don't get into matters of what kinds of lies we tell to protect our public image. Lindsey Clay, the managing director of Thinkbox, the marketing body for UK commercial TV, who kicked off an array of people talking about brands and marketing (though some of them in good causes), did a good, if unconscious, job of showing what privacy activists are up against: the entire mainstream of business is going the other way.

People lie in focus groups, she explained, sounding like Dr Gregory House; she showed a slide comparing actual TV viewer data from Sky to what those people said about what they watched. They claim to fast-forward; really, they watch ads and think about them. They claim to time-shift almost everything; really, they watch live. They claim to watch very little TV; really, they need to sign up for the SPOGO program Richard Pearey explained a little while later. (A tsk-tsk to Pearey: Tim Berners-Lee is a fine and eminent scientist, but he did not invent the Internet. He invented the *Web*.) For me, Clay is confusing "identity" with "image". My image claims to read widely instead of watching TV shows; my identity buys DVDs from Amazon.

Of course I find Clay's view of the Net dismaying - "TV provides the content for us to broadcast on our public identity channels," she said. This is very much the view of the world the Open Rights Group campaigns to up-end: consumers are creators, too, and surely we (consumers) have a lot more to talk about than just what was on TV last night.

Tony Fish, author of My Digital Footprint, following up shortly afterwards, presented a much more cogent view and some sound practical advice. Instead of trying to unravel the enduring conundrum of trust, identity, and privacy - which he claims dates back to before Aristotle - start by working out your own personal attitude to how you'd like your data treated.

I had a plan to talk about something similar, but Fish summed up the problem of digital identity rather nicely. No one model of privacy fits all people or all cases. The models and expectations we have take various forms - which he displayed as a nice set of Venn diagrams. Underlying that is the real model, in which we have no rights. Today, privacy is a setting and trust is the challenger. The gap between our expectations and reality is the creepiness factor.

Combine that with reading a book of William Gibson's non-fiction, and you get the reflection that the future we're living in is not at all like the one we - for some value of "we" that begins with those guys who did the actual building instead of just writing commentary about it - thought we might be building 20 years ago. At the time, we imagined that the future of digital identity would look something like mathematics, where the widespread use of crypto meant that authentication would proceed by a series of discrete transactions tailored to each role we wanted to play. A library subscriber would disclose different data from a driver stopped by a policeman, who would show a different set to the border guard checking passports. We - or more precisely, Phil Zimmermann and Carl Ellison - imagined a Web of trust, a peer-to-peer world in which we could all authenticate the people we know to each other.

Instead, partly because all the privacy stuff is so hard to use, even though it didn't have to be, we have a world where at any one time there are a handful of gatekeepers who are fighting for control of consumers and their computers in whatever the current paradigm is. In 1992, it was the desktop: Microsoft, Lotus, and Borland. In 1997, it was portals: AOL, Yahoo!, and Microsoft. In 2002, it was search: Google, Microsoft, and, well, probably still Yahoo!. Today, it's social media and the cloud: Google, Apple, and Facebook. In 2017, it will be - I don't know, something in the mobile world, presumably.

Around the time I began to sound like an anti-Facebook obsessive, an audience questioner made the smartest comment of the day: "In ten years Facebook may not exist." That's true. But most likely someone will have the data, probably the third-party brokers behind the scenes. In the fantasy future of 1992, we were our own brokers. If William Heath succeeds with personal data stores, maybe we still can be.




April 24, 2012

A really fancy hammer with a gun

Is a robot more like a hammer, a monkey, or the Harley-Davidson on which he rode into town? Or try this one: what if the police program your really cute, funny robot butler (Tony Danza? Scarlett Johansson?) to ask you a question whose answer will incriminate you (and which it then relays). Is that a violation of the Fourth Amendment (protection against search and seizure) or the Fifth Amendment (you cannot be required to incriminate yourself)? Is it more like flipping a drug dealer or tampering with property? Forget science fiction, philosophy, and your inner biological supremacist; this is the sort of legal question that will be defined in the coming decade.

Making a start on this was the goal of last weekend's We Robot conference at the University of Miami Law School, organized by respected cyberlaw thinker Michael Froomkin. Robots are set to be a transformative technology, he argued to open proceedings, and cyberlaw began too late. Perhaps robotlaw is still a green enough field that we can get it right from the beginning. Engineers! Lawyers! Cross the streams!

What's the difference between a robot and a disembodied artificial intelligence? William Smart (Washington University, St Louis) summed it up nicely: "My iPad can't stab me in my bed." No: and as intimate as you may become with your iPad you're unlikely to feel the same anthropomorphic betrayal you likely would if the knife is being brandished by that robot butler above, which runs your life while behaving impeccably like it's your best friend. Smart sounds unsusceptible. "They're always going to be tools," he said. "Even if they are sophisticated and autonomous, they are always going to be toasters. I'm wary of thinking in any terms other than a really, really fancy hammer."

Traditionally, we think of machines as predictable because they respond the same way to the same input, time after time. But Smart, working with Neil Richards (Washington University, St Louis), points out that sensors are sensitive to distinctions analog humans can't make. A half-degree difference in temperature or a tiny change in lighting are different conditions to a robot. To us, their behaviour will just look capricious, helping to foster that anthropomorphic response and leading us to wrongly attribute to robots the moral agency necessary for guilt under the law: the "Android Fallacy".

Smart and I may be outliers. The recent Big Bang Theory episode in which the can't-talk-to-women Rajesh, entranced with Siri, dates his iPhone is hilarious because in Raj's confusion we recognize our own ability to have "relationships" with almost anything by projecting human capacities such as cognition, intent, and emotions. You could call it a design flaw (if humans had a designer), and a powerful one: people send real wedding presents to TV characters, name Liquid Robotics' Wave Gliders, and characterize sending a six-legged land-mine-defusing robot that's lost a leg or two back to continue work as "cruel" (Kate Darling, MIT Media Lab).

What if our rampant affection for these really fancy hammers leads us to want to give them rights? Darling asked. Or, asked Sinziana Gutiu (University of Ottawa), will sex robots like Roxxxy teach us wrong expectations of humans? (When the discussion briefly compared sex robots to pets, a Twitterer quipped, "If robots are pets is sex with them bestiality?")

Few are likely to fall in love with the avatars in the automated immigration kiosks proposed at the University of Arizona (Kristen Thomasen, University of Ottawa), with two screens, one with a robointerrogator and the other flashing images and measuring responses. Automated law enforcement, already with us in nascent form, raises a different set of issues (Lisa Shay). Historically, enforcement has never been perfect; laws only have to be "good enough" to achieve their objective, whether that's slowing traffic or preventing murder. These systems pose the same problem as electronic voting: how do we audit their decisions? In military applications, disclosure may tip off the enemy, as Woodrow Hartzog (Samford University) noted. Yet here - and especially in medicine, where liability will be a huge issue - our traditional legal structures decide whom to punish by retracing the reasoning that led to the eventual decision. But even today's systems are already too complex for that.

When Hartzog asks if anyone really knows how Google or a smartphone tracks us, it reminds me of a recent conversation with Ross Anderson, the Cambridge University security engineer. In 50 years, he said, we have gone from a world whose machines could all be understood by a bright ten-year-old with access to a good library to a world with far greater access to information but full of machines whose inner workings are beyond a single person's understanding. And so: what does due process look like when only seven people understand algorithms that have consequences for the fates of millions of people? Bad enough to have the equivalent of a portable airport scanner looking for guns in New York City; what about house arrest because your butler caught you admiring Timothy Olyphant's gun on Justified?

"We got privacy wrong the last 15 years." Froomkin exclaimed, putting that together. "Without a strong 'home as a fortress right' we risk a privacy future with an interrogator-avatar-kiosk from hell in every home."

The problem with robots isn't robots. The problem is us. As usual, Pogo had it right.




April 13, 2012

The people perimeter

People with jobs are used to a sharp division between their working lives and their private lives. Even in these times, when everyone carries a mobile phone and may be on call at any moment, they still tend to believe that what they say to their friends is no concern of their employer's. (Freelances tend not to have these divisions; to a much larger extent we have always been "in public" most of the time.)

These divisions were always less pronounced in small towns, where teachers or clergy had little latitude, and where even less prominent folk would be well advised to leave town before doing anything they wouldn't want discussed in detail. Then came social media, which turn everywhere into a small town in which, even if you behave impeccably, details about you and your employer may be exposed without your knowledge.

That's all a roundabout way of leading to yesterday's London Tea camp, where the subject of discussion was developing guidelines for social media use by civil servants.

Civil servants! The supposedly faceless functionaries who, certainly at the senior levels, are probably still primarily understood by most people through the fictional constructs of TV shows like Yes, Minister and The Thick of It. All of the 50 or 60 people from across government who attended yesterday have Twitter IDs; they're on Facebook and Foursquare, and probably a few dozen other things that would horrify Sir Humphrey. And that's as it should be: the people administering the nation's benefits, transport, education, and health absolutely should live like the people they're trying to serve. That's how you get services that work for us rather than against us.

The problem with social media is the same as their benefit: they're public in a new and different way. Even if you never identify your employer, Foursquare or the geotagging on Twitter or Facebook checks you in at a postcode that's indelibly identified with the very large government building where your department is the sole occupant. Or a passerby photographs you in front of it and Facebook helpfully tags your photograph with your real name, which then pops up in outside searches. Or you say something to someone you know who tells someone else who posts it online for yet another person to identify and finally the whole thing comes back and bites you in the ass. Even if your Tweets are clearly personal, and even if your page says, "These are just my personal opinions and do not reflect those of my employer", the fact of where you can be deduced to work risks turning anything connected to you into something a - let's call it - excitable journalist can make into a scandal. Context is king.

What's new about this is the uncontrollable exposure of this context. Any Old Net Curmudgeon will tell you that the simple fact of people being caught online doing things their employers don't like goes back to the dawn of online services. Even now I'm sure someone dedicated could find appalling behavior in the Usenet archives by someone who is, 25 years on, a highly respected member of society. But Usenet was a minority pastime; Facebook, Twitter et al are mainstream.

Lots has been written by and about employers in this situation: they may suffer reputational damage, legal liability, or a breach that endangers their commercial secrets. Not enough has been written about individuals struggling to cope with sudden, unwanted exposure. Don't we have the right to private lives? someone asked yesterday. What they are experiencing is the same loss of border control that security engineers are trying to cope with. They call it "deperimeterization", because security used to mean securing the perimeter of your network, and now it means coping with that perimeter's loss. Wireless, remote access for workers at home, personal devices such as mobile phones, and links to supplier and partner networks have all blown holes in it.

There is no clear perimeter any more for networks - or individuals, either. Trying to secure one by dictating behavior, whether by education, leadership by example, or written guidelines, is inevitably doomed. There is, however, a very valid reason to have these things: to create a general understanding between employer and employee. It should be clear to all sides what you can and cannot get fired for.

In 2003, Danny O'Brien nailed a lot of this when he wrote about the loss of what he called the "private-intermediate sphere". In that vanishing country, things were private without being secret. You could have a conversation in a pub with strangers walking by and be confident that it would reach only the audience present at the time and that it would not unexpectedly be replayed or published later (see also Dan Harmon and Chevy Chase's voicemail). Instead, he wrote, the Net is binary: secret or public, no middle ground.

What's at stake here is really not private life, but *social* life. It's the addition of the online component to our social lives that has torn holes in our personal perimeters.

"We'll learn a kind of tolerance for the private conversation that is not aimed at us, and that overreacting to that tone will be a sign of social naivete," O'Brien predicted. Maybe. For now, hard cases make bad law (and not much better guidelines) *First* cases are almost always hard cases.




March 9, 2012

Private parts

In 1995, when the EU Data Protection Directive was passed, Facebook founder and CEO Mark Zuckerberg was 11 years old. Google was three years away from incorporation. Amazon.com was a year old and losing money fast enough to convince many onlookers that it would never be profitable; the first online banner ads were only months old. It was the year eBay and Yahoo! were founded and Netscape went public. This is how long ago it was: CompuServe was a major player in online services, AOL was just setting up its international services, and both of them were still funded by per-minute usage fees.

In other words: even when it was published there were no Internet companies whose business models depended on exploiting user data. During the years it was being drafted, only posers and rich people owned mobile phones, selling fax machines was a good business, and women were still wearing leggings the *first* time. It's impressive that the basic principles formulated then have held up well. Practice, however, has been another matter.

The discussions that led to the publication in January of a package of reforms to the data protection rules began in 2008. Discussions among data protection commissioners, said Peter Hustinx, the European Data Protection Supervisor, at Thursday's Westminster eForum on data protection and electronic privacy, produced a consensus that changes were needed, including making controllers more accountable, increasing "privacy by design", and making data protection a top-level issue for corporate governance.

These aren't necessarily the issues that first spring to mind for privacy advocates, particularly in the UK, where many have complained that the Information Commissioner's Office has failed. (It was, for example, out of step with the rest of the world with respect to Google's Street View.) Privacy International has a long history of complaints about the ICO's operation. But even the EU hasn't performed as well as citizens might hope under the present regime: PI also exposed the transfer of SWIFT financial data to the US, while Edward Hasbrouck has consistently and publicly opposed the transfer of passenger name record data from the EU to the US.

Hustinx has published a comprehensive opinion on the reform package. The details of both the package itself and the opinion require study. But among the main points are an effort to implement a single regime, the right to erasure (aka the right to be forgotten), a requirement for breach notification within 24 hours of discovery, and measures to strengthen the data protection authorities and make them more accountable.

Of course, everyone has a complaint. The UK's deputy information commissioner, David Smith, complained that the package is too prescriptive of details and focuses on paperwork rather than privacy risk. Lord McNally, Minister of State at the Ministry of Justice, complained that the proposed fines of up to 2 percent of global corporate income are disproportionate and that 24 hours is too little time. Hustinx outlined his main difficulties: that the package has gaps, most notably surrounding the transfer of telephone data to law enforcement; that fines should be discretionary and proportionate rather than compulsory; and that there remain difficulties in dealing with national and EU laws.

We used to talk about the way the Internet enabled the US to export the First Amendment. You could, similarly, see the data protection laws as the EU's effort to export privacy rules; a key element is the prohibition on transferring data to countries without similar regimes - which is why the SWIFT and PNR cases were so problematic. In 1999, for a piece that's now behind Scientific American's paywall, PI's Simon Davies predicted that US companies might find themselves unable to trade in Europe because of data flows. Big questions, therefore, revolve around the binding corporate rules, which allow companies to transfer data to third countries without equivalent data protection as long as the data stays within their corporate boundaries.

The arguments over data protection law have a lot in common with the arguments over copyright. In both cases, the goal is to find a balance of power between competing interests that keeps individuals from being squashed. Also like copyright, data protection policy is such a dry and esoteric subject that it's hard to get non-specialists engaged with it. Hard, but not impossible: copyright, unlike privacy, has never had a George Orwell to make the dangers up close and personal. Copyright law began, Lawrence Lessig argued in (I think it was) Free Culture, as a way to curb the power of publishers (although by now it has ended up greatly empowering them). Similarly, while most of us may think of data protection law as protecting us against the abuse of personal data, a voice argued from the floor yesterday that the law was originally drafted to enable free data transfers within the single market.

There is another similarity. Rightsholders and government policymakers often talk as though the population-at-large are consumers, not creators in their own right. Similarly, yesterday, Mydex's David Alexander had this objection to make: "We seem to keep forgetting that humans are not just subjects, but participants in the management of their own personal data...Why can't we be participants?"




January 27, 2012

Principle failure

The right to access, correct, and delete personal information held about you, and the right to bar data collected for one purpose from being reused for another, are basic principles of the data protection laws that have been the norm in Europe since the EU adopted the Data Protection Directive in 1995. This is the directive that is currently being updated; the European Commission's proposals seem, inevitably, to please no one. Businesses are already complaining compliance will be unworkable or too expensive (hey, fines of up to 2 percent of global income!). I'm not sure consumers should be all that happy either; I'd rather have the right to be anonymous than the right to be forgotten (which I believe will prove technically unworkable), and I'd rather have the jurisdiction for legal disputes with a company set to my country rather than theirs. Much debate lies ahead.

In the meantime, the importance of the data protection laws has been enhanced by Google's announcement this week that it will revise and consolidate the more than 60 privacy policies covering its various services "to create one beautifully simple and intuitive experience across Google". It will, the press release continues, be "Tailored for you". Not the privacy policy, of course, which is a one-size-fits-all piece of corporate lawyer ass-covering, but the services you use, which, after the fragmented data Google holds about you has been pooled into one giant liquid metal Terminator, will be transformed into so-much-more personal helpfulness. Which would sound better if 2011 hadn't seen loud warnings about the danger that personalization will disappear stuff we really need to know: see Eli Pariser's filter bubble and Jeff Chester's worries about the future of democracy.

Google is right that streamlining and consolidating its myriad privacy policies is a user-friendly thing to do. Yes, let's have a single policy we can read once and understand. We hate reading even one privacy policy, let alone 60 of them.

But the furore isn't about that, it's about the single pool of data. People do not use Google Docs in order to improve their search results; they don't put up Google+ pages and join circles in order to improve the targeting of ads on YouTube. This is everything privacy advocates worried about when Gmail was launched.

Australian privacy campaigner Roger Clarke's discussion document sets out the principles that the decision violates: no consultation; retroactive application; no opt-out.

Are we evil yet?

In his 2011 book, In the Plex, Steven Levy traces the beginnings of a shift in Google's views on how and when it implements advertising to the company's controversial purchase of the DoubleClick advertising network, which relied on cookies and tracking to create targeted ads based on Net users' browsing history. This $3.1 billion purchase was huge enough to set off anti-trust alarms. Rightly so. Levy writes, "...sometime after the process began, people at the company realized that they were going to wind up with the Internet-tracking equivalent of the Hope Diamond: an omniscient cookie that no other company could match." Between DoubleClick's dominance in display advertising on large, commercial Web sites and Google AdSense's presence on millions of smaller sites, the company could track pretty much all Web users. "No law prevented it from combining all that information into one file," Levy writes, adding that Google imposed limits, in that it didn't use blog postings, email, or search behavior in building those cookies.

Levy notes that Google spends a lot of time thinking about privacy, but quotes founder Larry Page as saying that the particular issues the public chooses to get upset about seem randomly chosen, the reaction determined most often by the first published headline about a particular product. This could well be true - or it may also be a sign that Page and Brin, like Facebook's Mark Zuckerberg and some other Silicon Valley technology company leaders, are simply out of step with the public. Maybe the reactions only seem random because Page and Brin can't identify the underlying principles.

In blending its services, the issue isn't solely privacy, but also the long-simmering complaint that Google is increasingly favoring its own services in its search results - which would be a clear anti-trust violation. There, the traditional principle is that dominance in one market (search engines) should not be leveraged to achieve dominance in another (social networking, video watching, cloud services, email).

SearchEngineLand has a great analysis of why Google's Search Plus is such a departure for the company and what it could have done had it chosen to be consistent with its historical approach to search results. Building on the "Don't Be Evil" tool built by Twitter, Facebook, and MySpace, among others, SEL demonstrates the gaps that result from Google's choices here, and also how the company could have vastly improved its service to its search customers.

What really strikes me in all this is that the answer to both the EU issues and the Google problem may be the same: the personal data store that William Heath has been proposing for three years. Data portability and interoperability, check; user control, check. But that is as far from the Web 2.0 business model as file-sharing is from that of the entertainment industry.




January 6, 2012

Only the paranoid

Yesterday's news that the Ramnit worm has harvested the login credentials of 45,000 British and French Facebook users seems to me a watershed moment for Facebook. If I were an investor, I'd wish I had already cashed out. Indications are, however, that founding CEO Mark Zuckerberg is in it for the long haul, in which case he's going to have to find a solution to a particularly intractable problem: how to protect a very large mass of users from identity fraud when his entire business is based on getting them to disclose as much information about themselves as possible.

I have long complained about Facebook's repeatedly changing privacy controls. This week, while working on a piece on identity fraud for Infosecurity, I've concluded that the fundamental problem with Facebook's privacy controls is not that they're complicated, confusing, and time-consuming to configure. The problem with Facebook's privacy controls is that they exist.

In May 2010, Zuckerberg enraged a lot of people, including me, by opining that privacy is no longer a social norm. As Judith Rauhofer has observed, the world's social norms don't change just because some rich geeks in California say so. But the 800 million people on Facebook would arguably be much safer if the service didn't promise privacy - like Twitter. Because then people wouldn't post all those intimate details about themselves: their kids' pictures, their drunken sex exploits, their incitements to protest, their porn star names, their birth dates... Or if they did, they'd know they were public.

Facebook's core privacy problem is a new twist on the problem Microsoft has: legacy users. Apple was willing to make earlier generations of its software non-functional in the shift to OS X. Microsoft's attention to supporting legacy users allows me to continue to run, on Windows 7, software that was last updated in 1997. Similarly, Facebook is trying to accommodate a wide variety of privacy expectations, from those of people who joined back when membership was limited to a few relatively constrained categories to those of people joining today, when the system is open to all.

Facebook can't reinvent itself wholesale: it is wholly and completely wrong to betray users who post information about themselves into what they are told is a semi-private space by making that space irredeemably public. The storm every time Facebook makes a privacy-related change makes that clear. What the company has done exceptionally well is to foster the illusion of a private space despite the fact that, as the Australian privacy advocate Roger Clarke observed in 2003, collecting and abusing user data is social networks' only business model.

Ramnit takes this game to a whole new level. Malware these days isn't aimed at doing cute, little things like making hard drive failure noises or sending all the letters on your screen tumbling into a heap at the bottom. No, it's aimed at draining your bank account and hijacking your identity for other types of financial exploitation.

To do this, it needs to find a way inside the circle of trust. On a computer network, that means looking for an unpatched hole in software to leverage. On the individual level, it means the malware equivalent of viral marketing: get one innocent bystander to mistakenly tell all their friends. We've watched this type of attack move through a string of vectors as humans migrate to get away from spam: from email to instant messaging to, now, social networks. The bigger Facebook gets, the bigger a target it becomes. The more information people post on Facebook - and the more their friends and friends of friends friend promiscuously - the greater the risk to each individual.

The whole situation is exacerbated by endemic poor security practices. Asking people to provide the same few bits of information for back-up questions in case they need a password reset. Imposing password rules that practically guarantee people will reuse the same few choices on all their sites. Putting all the eggs in services that are free at the point of use and paid for in unobtainable customer service (not to mention behavioral targeting and marketing) when something goes wrong. If everything is locked to one email account on a server you do not control, if your security questions could be answered by a quick glance at your Facebook Timeline and a Google search, if you bank online and use the same passwords throughout... you have a catastrophe in waiting.

I realize not everyone can run their own mail server. But you can use multiple, distinct email addresses and passwords, you can create unique answers on the reset forms, and you can limit your exposure by presuming that everything you post *is* public, whether the service admits it or not. Your goal should be to ensure that when - it's no longer safe to say "if" - some part of your online life is hacked, the damage can be contained to that one, hopefully small, piece. Relying on the privacy consciousness of friends means you can't eliminate the risk; but you can limit the consequences.
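
To make the first two suggestions concrete, here is a minimal Python sketch - the site names are invented, and this is illustration, not a recommendation of any particular tool - of generating a unique random password and a unique, meaningless reset answer for each service, so that a breach of one exposes nothing reusable elsewhere:

    import secrets
    import string

    def random_secret(length: int = 20) -> str:
        """Return a random string usable as a password or a reset answer."""
        alphabet = string.ascii_letters + string.digits
        return "".join(secrets.choice(alphabet) for _ in range(length))

    # One unique secret per site: your mother's "maiden name" at the bank
    # need not match her "maiden name" at the webmail provider.
    sites = ["bank.example", "webmail.example", "social.example"]
    vault = {site: {"password": random_secret(),
                    "reset_answer": random_secret(12)}
             for site in sites}

    for site, credentials in vault.items():
        print(site, credentials)

The catch, of course, is that the output has to be stored somewhere safe - which is the argument for a password manager.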

Facebook is facing an entirely different risk: that people, alarmed at the thought of being mugged, will flee elsewhere. It's happened before.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

December 23, 2011

Duck amuck

Back in about 1998, a couple of guys looking for funding for their start-up were asked this: How could anyone compete with Yahoo! or Altavista?

"Ten years ago, we thought we'd love Google forever," a friend said recently. Yes, we did, and now we don't.

It's a year and a bit since I began divorcing Google. Ducking the habit is harder than those "they have no lock-in" financial analysts thought when Google went public - as if habit and adaptation were small things. It's easy to switch CTRL-K in Firefox to DuckDuckGo; it's much harder to unlearn ten years of Google's "voice".

When I tell this to Gabriel Weinberg, the guy behind DDG - his recent round of funding lets him add a few people to experiment with different user interfaces and redo DDG's mobile application - he seems to understand. He started DDG, he told The Rise to the Top last year, because of the increasing amount of spam in Google's results. Frustration made him think: for many queries, wouldn't searching just del.icio.us and Wikipedia produce better results? Since his first weekend mashing that up, DuckDuckGo has evolved to include over 50 sources.

"When you type in a query there's generally a vertical search engine or data source out there that would best serve your query," he says, "and the hard problem is matching them up based on the limited words you type in." When DDG can make a good guess at identifying such a source - such as, say, the National Institutes of Health - it puts that result at the top. This is a significant hint: now, in DDG searches, I put the site name first, where on Google I put it last. Immediate improvement.

This approach gives Weinberg a new problem, a higher-order version of the Web's broken links: as companies reorganize, change, or go out of business, the APIs he relies on vanish.

Identifying the right source is harder than it sounds, because the long tail of queries requires DDG to make assumptions about what's wanted.

"The first 80 percent is easy to capture," Weinberg says. "But the long tail is pretty long."

As Ken Auletta tells it in Googled, the venture capitalist Ram Shriram advised Sergey Brin and Larry Page to sell their technology to Yahoo! or maybe Infoseek. But those companies were not interested: the thinking then was portals and keeping site visitors stuck as long as possible on the pages advertisers were paying for, while Brin and Page wanted to speed visitors away to their desired results. It was only when Shriram heard that, Auletta writes, that he realized that baby Google was disruptive technology. So I ask Weinberg: can he make a similar case for DDG?

"It's disruptive to take people more directly to the source that matters," he says. "We want to get rid of the traditional user interface for specific tasks, such as exploring topics. When you're just researching and wanting to find out about a topic there are some different approaches - kind of like clicking around Wikipedia."

Following one thing to another, without going back to a search engine...sounds like my first view of the Web in 1991. But it also sounds like some friends' notion of after-dinner entertainment, where they start with one word in the dictionary and let it lead them serendipitously from word to word and book to book. Can that strategy lead to new knowledge?

"In the last five to ten years," says Weinberg, "people have made these silos of really good information that didn't exist when the Web first started, so now there's an opportunity to take people through that information." If it's accessible, that is. "Getting access is a challenge," he admits.

There is also the frontier of unstructured data: Google searches the semi-structured Web by imposing a structure on it - its indexes. By contrast, Mike Lynch's Autonomy, which Hewlett-Packard just bought for around $11 billion, uses Bayesian logic to search unstructured data, which is what most companies have.

"We do both," says Weinberg. "We like to use structured data when possible, but a lot of stuff we process is unstructured."

Google is, of course, a moving target. For me, its algorithms and interface are moving in two distinct directions, both frustrating. The first is Wal-Mart: stuff most people want. The second is the personalized filter bubble. I neither want nor trust either. I am more like the scientists Linguamatics serves: its analytic software scans hundreds of journals to find hidden links suggesting new avenues of research.

Anyone entering a category as thoroughly dominated by a single company as search is now is constantly asked: how can you possibly compete with the market leader? Weinberg must be sick of being asked about competing with Google. And he'd be right, because it's the wrong question. The right question is: how can he build a sustainable business? He's had some sponsorship while his user numbers are relatively low (currently 7 million searches a month), and eventually he's talked about context-based advertising - while also promising little spam and no tracking. Now, that really would be disruptive.

So here's my bet. I bet that DuckDuckGo outlasts Groupon as a going concern. Merry Christmas.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


December 16, 2011

Location, location, location

In the late 1970s, I used to drive across the United States several times a year (I was a full-time folksinger), and although these were long, long days at the wheel, there were certain perks. One was the feeling that the entire country was my backyard. The other was the sense that no one in the world knew exactly where I was. It was a few days off from the pressure of other people.

I've written before that privacy is not sleeping alone under a tree but being able to do ordinary things without fear. Being alone on an interstate crossing Oklahoma wasn't to hide some nefarious activity (like learning the words to "There Ain't No Instant Replay in the Football Game of Life"). Turn off the radio and, aside from an occasional billboard, the world was quiet.

Of course, that was also a world in which making a phone call was a damned difficult thing to do, which is why professional drivers all had CB radios. Now, everyone has mobile phones, and although your nearest and dearest may not know where you are, your phone company most certainly does, and to a very fine degree of "granularity".

I imagine normal human denial is broad enough to encompass pretending you're in an unknown location while still receiving text messages - which is why this year's A Fine Balance focused on location privacy.

The travel privacy campaigner Edward Hasbrouck has often noted that travel data is particularly sensitive and revealing in ways few realize. Travel data indicates your religion (special meals), medical problems, and lifestyle habits affecting your health (choosing a smoking room in a hotel). Travel data also shows who your friends are, and how close: who do you travel with? Who do you share a hotel room with, and how often?

Location data is travel data on a steady drip of steroids. As Richard Hollis, who serves on the ISACA Government and Regulatory Advocacy Subcommittee, pointed out, location data is in fact travel data - except that instead of being detailed logging of exceptional events it's ubiquitous logging of everything you do. Soon, he said, we will not be able to opt out - and instead of travel data being a small, sequestered, unusually revealing part of our lives, all our lives will be travel data.

Location data can reveal the entire pattern of your life. Do you visit a church every Monday evening that has an AA meeting going on in the basement? Were you visiting the offices of your employer's main competitor when you were supposed to have a doctor's appointment?

Research supports this view. Some of the earliest work I'm aware of is that of Alberto Escudero-Pascual. A month-long experiment tracking the mobile phones in his department enabled him to diagram all the intra-departmental personal relationships. In a 2002 paper, he suggests how to anonymize location information (PDF). The problem: no business wants anonymization. As Hollis and others said, businesses want location data. Improved personalization depends on context, and location provides a lot of it.

Patrick Walshe, the director of privacy for the GSM Association, compared the way people care about privacy to the way they care about their health: they opt for comfort and convenience and hope for the best. They - we - don't make changes until things go wrong. This explains why privacy considerations so often fail and privacy advocates despair: guarding your privacy is like eating your vegetables, and who except a cranky person plans their meals that way?

The result is likely to be the world outlined by Dave Coplin, Microsoft UK's director of search, advertising, and online, who argued that privacy today is at the turning point that the Melissa virus represented for security when it first hit 11 years ago.

Calling it "the new battleground," he said, "This is what happens when everything is connected." Similarly, Blaine Price, a senior lecturer in computing at the Open University, had this cheering thought: as humans become part of the Internet of Things, data leakage will become almost impossible to avoid.

Network externalities mean that each additional user of a network increases its value for all the other users of that network. What about privacy externalities? I hadn't heard the phrase before, although I see it's not new (PDF). But I mean something different from what those papers do: the fact that we talk about privacy as an individual choice when it is actually a collaborative effort. A single person who says, "I don't care about my privacy" can override the pro-privacy decisions of dozens of their friends, family, and contacts. "I'm having dinner with @wendyg," someone blasts, and their open attitude to geolocation reveals mine.

In his research on tracking, Price has found that the more closely connected the trackers are, the less control they have over such decisions. I may worry that turning on a privacy block will upset my closest friend; I don't lie awake at night wondering, "Will the phone company think I'm mad at it?"

So: you want to know where I am right now? Pay no attention to the geolocated Twitterer who last night claimed to be sitting in her living room with "wendyg". That wasn't me.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 11, 2011

The sentiment of crowds

Context is king.

Say to a human, "I'll meet you at the place near the thing where we went that time," and they'll show up at the right place. That's from the 1987 movie Broadcast News: Aaron (Albert Brooks) says it; cut to Jane (Holly Hunter), awaiting him at a table.

But what if Jane were a computer and what she wanted to know from Aaron's statement was not where to meet but how Aaron felt about it? This is the challenge facing sentiment analysis.

At Wednesday's Sentiment Analysis Symposium, the key question of context came up over and over again as the biggest challenge to the industry of people who claim that they can turn Tweets, blog postings, news stories, and other mass data sources into intelligence.

So context: Jane can parse "the place", "the thing", and "that time" because she has expert knowledge of her past with Aaron. It's an extreme example, but all human writing makes assumptions about the knowledge and understanding of the reader. Humans even use those assumptions to implement privacy in a public setting: Stephen Fry could retweet Aaron's words and still only Jane would find the cafe. If Jane is a large organization seeking to understand what people are saying about it, and Aaron is 6 million people posting on Twitter, she can use sentiment analysis tools to get a numerical answer. And numbers always inspire confidence...
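
At its crudest, that numerical answer is lexicon counting: tally the positive words, subtract the negative ones. The Python sketch below is purely illustrative - the word lists are invented, and real tools layer negation handling, domain lexicons, and context models on top - but it shows both the mechanism and its blindness to context:

    POSITIVE = {"good", "great", "love", "excellent", "happy"}
    NEGATIVE = {"bad", "awful", "hate", "terrible", "angry"}

    def sentiment(text: str) -> int:
        """Score text: count of positive words minus count of negative words."""
        words = text.lower().split()
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    print(sentiment("I love this airline"))         #  1
    print(sentiment("awful delay terrible food"))   # -2
    print(sentiment("meet me at the place"))        #  0: context is invisible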

My first encounter with sentiment analysis was this summer during Young Rewired State, when a team wanted to create a mood map of the UK by comparing geolocated tweets to indices of multiple deprivation. This third annual symposium showed a rapidly engorging industry: part PR, part image consultancy, and part artificial intelligence research project.

I was drawn to it out of curiosity, but also because it all sounds slightly sinister. What do sentiment analyzers understand when I say an airline lounge at Heathrow Terminal 4 "brings out my inner Sheldon"? What is at stake is not precise meaning - humans argue over the exact meaning of even the greatest communicators - but extracting good-enough meaning from high-volume data streams written by millions of not-monkeys.

What could possibly go wrong? This was one of the day's most interesting questions, posed by the consultant Meta Brown to representatives of the Red Cross, the polling organization Harris Interactive, and Paypal. Failing to consider your data sources and the industry you're in, said the Red Cross's Banafsheh Ghassemi. Her example was the period just after Hurricane Irene, when sentiment analysis of social media found it overwhelmingly negative. "It took everyday disaster language as negative," she said. In addition, because the Red Cross's constituency is primarily older, social media are less indicative than emails and call center records. For many organizations, she added, social media tend to skew negative.

Earlier this year, Harris Interactive's Carol Haney, who has had to kill projects when they failed to produce sufficiently accurate results for the client, told a conference, "Sentiment analysis is the snake oil of 2011." Now, she said, "I believe it's still true to some extent. The customer has a commercial need for a dial pointing at a number - but that's not really what's being delivered. Over time you can see trends and significant change in sentiment, and when that happens I feel we're returning value to a customer because it's not something they received before and it's directionally accurate and giving information." But very small changes over short time scales are an unreliable basis for making decisions.

"The difficulty in social media analytics is you need a good idea of the questions you're asking to get good results," says Shlomo Argamon, whose research work seems to raise more questions than answers. Look at companies that claim to measure influence. "What is influence? How do you know you're measuring that or to what it correlates in the real world?" he asks. Even the notion that you can classify texts into positive and negative is a "huge simplifying assumption".

Argamon has been working on technology to discern from written text the gender and age - and perhaps other characteristics - of the author, a joint effort with his former PhD student Ken Bloom. When he says this, I immediately want to test him with obscure texts.

Is this stuff more or less creepy than online behavioral advertising? Paypal's Han-Sheong Lai explained that the company uses sentiment analysis to try to gauge the exact level of frustration of its biggest clients when they threaten to close their accounts. How serious are they? How much effort should the company put into dissuading them? Meanwhile, Verint's job is to analyze those "this call may be recorded" calls. Verint's tools turn speech to text and create color voiceprint maps showing the emotional high points. Click and hear the anger.

"Technology alone is not the solution," said Philip Resnik, summing up the state of the art. But, "It supports human insight in ways that were not previously possible." His talk made me ask: if humans obfuscate their data - for example, by turning off geolocation - will this industry respond by finding ways to put it all back again so the data will be more useful?

"It will be an arms race," he agrees. "Like spam."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 4, 2011

The identity layer

This week, the UK government announced a scheme - Midata - under which consumers will be able to reclaim their personal information. The same day, the Centre for the Study of Financial Innovation assembled a group of experts to ask what the business model for online identification should be - and, whatever that model is, what the government's role should be. (For background, here's the previous such discussion.)

My eventual thought was that the government's role should be to set standards; it might or might not also be an identity services provider. The government's inclination now is to push this job to the private sector. That leaves the question of how to serve those who are not commercially interesting; at the CSFI meeting the Post Office seemed the obvious contender for both pragmatic and historical reasons.

As Mike Bracken writes in the Government Digital Service blog posting linked above, the notion of private identity providers is not new. But he seems to assume that what's needed is federated identity - that is, in Wikipedia's definition, a means for linking a person's electronic identity and attributes across multiple distinct systems. What I mean is a system in which one may have many limited identities that are sufficiently interoperable that you can choose which to use at the point of entry to a given system. We already have something like this on many blogs, where commenters may be offered a choice of logging in via Google, OpenID, or simply posting a name and URL.

The government gateway circa 2000 offered a choice of sorts: getting an identity certificate required payment of £50 to, if I remember correctly, Experian or Equifax, or other companies whose interest in preserving personal privacy is hard to credit. The CSFI meeting also mentioned tScheme, an industry consortium set up to provide trust services; outside of relatively small niches it has made little impact. Similarly, fifteen years ago, the government intended, as part of implementing key escrow for strong cryptography, to create a network of trusted third parties that it would license and, by implication, control. The intention was that the TTPs should be folks that everyone trusts - like banks. Hilarious, we said *then*. Moving on.

In between then and now, the government also mooted a completely centralized identity scheme - that is, the late, unlamented ID card. Meanwhile, we've seen the growth of a set of competing American/global businesses that would all like to be *the* consumer identity gateway and that managed to steal first-mover advantage from existing financial institutions. Facebook, Google, and Paypal are the three most obvious. Microsoft had hopes, perhaps too early, when in 1999 it created Passport (now Windows Live ID). More recently, it was the home for Kim Cameron's efforts to reshape online identity via the company's now-cancelled CardSpace, and Brendon Lynch's adoption of U-Prove, based on Stefan Brands' technology. U-Prove is now being piloted in various EU-wide projects. There are probably lots of other organizations that would like to get in on such a scheme, if only because of the data and linkages a federated system would grant them. Credit card companies, for example. Some combination of mobile phone manufacturers, mobile network operators, and telcos. Various medical outfits, perhaps.

An identity layer that gives fair and reasonable access to a variety of players who jointly provide competition and consumer choice seems like a reasonable goal. But it's not clear that this is what either the UK's distastefully spelled "Midata" or the US's NSTIC (which attracted similar concerns when first announced) has in mind. What "federated identity" sounds like is the convenience of "single sign-on", which is great if you're working in a company and need to use dozens of legacy systems. When you're talking about identity verification for every type of transaction you do in your entire life, however, a single gateway is a single point of failure and, as Stephan Engberg, founder of the Danish company Priway, has often said, a single point of control. It's the Facebook cross-all-the-streams approach, embedded everywhere. Engberg points to a discussion paper inspired by two workshops he facilitated for the Danish National IT and Telecom Agency (NITA) in late 2010 that covers many of these issues.

Engberg, who describes himself as a "purist" when it comes to individual sovereignty, says the only valid privacy-protecting approach is to ensure that each time you go online on each device you start a new session that is completely isolated from all previous sessions and then have the choice of sharing whatever information you want in the transaction at hand. The EU's LinkSmart project, which Engberg was part of, created middleware to do precisely that. As sensors and RFID chips spread along with IPv6, which can give each of them its own IP address, linkages across all parts of our lives will become easier and easier, he argues.

We've seen often enough that people will choose convenience over complexity. What we don't know is what kind of technology will emerge to help us in this case. The devil, as so often, will be in the details.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 28, 2011

Crypto: the revenge

I recently had occasion to try out Gnu Privacy Guard, the Free Software Foundation's version of PGP, Phil Zimmermann's legendary Pretty Good Privacy software. It was the first time I'd encrypted an email message since about 1995, and I was both pleasantly surprised and dismayed.

First, the good. Public key cryptography is now implemented exactly the way it should have been all along: once you've installed it and generated a keypair, encrypting a message is ticking a box or picking a menu item inside your email software. Even key management is handled by a comprehensible, well-designed graphical interface. Several generations of hard work have created this and also ensured that the various versions of PGP, OpenPGP, and GPG are interoperable, so you don't have to worry about who's using what. Installation was straightforward and the documentation is good.
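
For anyone curious about what sits underneath those menu items, the command-line equivalents are brief. A minimal sketch - the recipient address is a hypothetical placeholder, and a mail plug-in runs steps like these for you:

    # Generate a keypair (interactive prompts follow):
    gpg --gen-key

    # Encrypt a file for a recipient whose public key is on your keyring,
    # producing ASCII-armored message.txt.asc suitable for pasting into email:
    gpg --armor --encrypt --recipient alice@example.com message.txt

    # Decrypt a message someone sent to you:
    gpg --decrypt message.txt.asc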

Now, the bad. That's where the usability stops. There are so many details you can get wrong that if this stuff were a form of contraception, desperate parents would be giving babies away on street corners.

Item: the subject line doesn't get encrypted. There is nothing you can do about this except put a lot of thought into devising a subject line that will compel people to read the message but that simultaneously does not reveal anything of value to anyone monitoring your email. That's a neat trick.

Item: watch out for attachments, which are easily accidentally sent in the clear; you need to encrypt them separately before bundling them into the message.

Item: while there is a nifty GPG plug-in for Thunderbird - Enigmail - Outlook, being commercial software, is less easily supported. GPG's GpgOL module works only with Outlook 2003 (SP2 and above) and 2007, and not on 64-bit Windows. The problem is that it's hard enough to get people to change *one* habit, let alone several.

Item: lacking appropriate browser plug-ins, you also have to tell them to stop using Webmail if the service they're used to won't support IMAP or POP3, because they won't be able to send encrypted mail or read what others send them over the Web.

Let's say you're running a field station in a hostile area. You can likely get users to persevere despite these points by telling them that this is their work system, for use in the field. Most people will put up with some inconvenience if they're being paid to do so and/or it's temporary and/or you scare them sufficiently. But that strategy violates one of the basic principles of crypto-culture, which is that everyone should be encrypting everything so that sensitive traffic doesn't stand out. The crypto-culture folks are of course completely right, just as they were in 1993, when the big political battles over crypto were being fought.

Item: when you connect to a public keyserver to check or download someone's key, that connection is in the clear, so anyone surveilling you can see who you intend to communicate with.

Item: you're still at risk with regard to traffic data. This is what RIPA and data retention are all about. What's more significant? Being able to read a message that says, "Can you buy milk?" or the information that the sender and receiver of that message correspond 20 times a day? Traffic data reveals the pattern of personal relationships; that's why law enforcement agencies want it. PGP/GPG won't hide that for you; instead, you'll need to set up a proxy or use Tor to mix up your traffic and also protect your Web browsing, instant messaging, and other online activities. As Tor's own people admit, it slows performance, although they're working on it (PDF).

All this says we're still a long way from a system that the mass market will use. And that's a damn shame, because we genuinely need secure communications. Like a lot of people in the mid-1990s, I'd have thought that by now encrypted communications would be the norm. And yet not only is SSL, which protects personal details in transit to ecommerce and financial services sites, the only really mass-market use, but it's in trouble. Partly, this is because of the technical issues raised in the linked article - too many certification authorities, too many points of failure - but it's also partly because hardly anyone understands how to check that a certificate is valid or knows what to do when warnings pop up that it's expired or issued for a different name. The underlying problem is that many of the people who like crypto see it as both a cool technology and a cause. For most of us, it's just more fussy software. The big advance since the mid 1990s is that at least now the *developers* will use it.

Maybe mobile phones will be the thing that makes crypto work the way it should. See, for example, Dave Birch's current thinking on the future of identity. We've been arguing about how to build an identity infrastructure for 20 years now. Crypto is clearly the mechanism. But we still haven't solved the how.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 29, 2011

Name check

How do you clean a database? The traditional way - which I still experience from time to time from journalist directories - is that some poor schnook sits in an office and calls everyone on the list, checking each detail. It's an immensely tedious job, I'm sure, but it's a living.

The new, much cheaper method is to motivate the people in the database to do it themselves. A government can pass a law and pay benefits. Amazon expects the desire to receive the goods people have paid for to be sufficient. For a social network it's a little harder, yet Facebook has managed to get 750 million users to upload varying amounts of information. Google hopes people will do the same with Google+.

The emotional connections people make on social networks obscure their basic nature as databases. When you think of them in that light, and you remember that Google's chief source of income is advertising, Google's culturally dysfunctional decision to require real names on Google+ suddenly makes some sense. For an advertising company, a fuller, cleaner database is more valuable and functional. Google's engineers most likely do not think in terms of improving the company's ability to serve tightly targeted ads - but I'd bet the company's accountants and strategists do. The justification - that online anonymity fosters bad behavior - is likely a relatively minor consideration.

Yet it's the one getting the attention, despite the fact that many people seem confused about the difference between pseudonymity, anonymity, and throwaway identity. In the reputation-based economy the Net thrives on, this difference matters.

The best-known form of pseudonymity is the stage name, essentially a form of branding for actors, musicians, writers, and artists, who may have any of a number of motives for keeping their professional lives separate from their personal lives: privacy for themselves, their work mates, or their families, or greater marketability. More subtly, if you have a part-time artistic career and a full-time day job you may not want the two to mix: will people take you seriously as an academic psychologist if they know you're also a folksinger? All of those reasons for choosing a pseudonym apply on the Net, where everything is a somewhat public performance. Given the harassment some female bloggers report, is it any wonder they might feel safer using a pseudonym?

The important characteristic of pseudonyms, which they share with "real names", is persistence. When you first encounter someone like GrrlScientist, you have no idea whether to trust her knowledge and expertise. But after more than ten years of blogging, that name is a known quantity. As GrrlScientist writes about Google's shutting down her account, it is her "real-enough" name by any reasonable standard. What's missing is the link to a portion of her identity - the name on her tax return, or the one her mother calls her. So what?

Anonymity has long been contentious on the Net; the EU has often considered whether and how to ban it. At the moment, the driving justification seems to be accountability, in the hope that we can stop people from behaving like malicious morons, the phenomenon I like to call the Benidorm syndrome.

There is no question that people write horrible things in blog and news site comments pages, conduct flame wars, and engage in cyber bullying and harassment. But that behaviour is not limited to venues where they communicate solely with strangers; every mailing list, even among workmates, has flame wars. Studies have shown that the cyber versions of bullying and harassment, like their offline counterparts, are most often perpetrated by people you know.

The more important downside of anonymity is that it enables people to hide, not their identity but their interests. Behind the shield, a company can trash its competitors and those whose work has been criticized can make their defense look more robust by pretending to be disinterested third parties.

Against that is the upside. Anonymity protects whistleblowers acting in the public interest, and protesters defying an authoritarian regime.

We have little data to balance these competing interests. One bit we do have comes from an experiment with anonymity conducted years ago on the WELL, which otherwise has insisted on verifying every subscriber throughout its history. The lesson they learned, its conferencing manager, Gail Williams, told me once, was that many people wanted anonymity for themselves - but opposed it for others. I suspect this principle has very wide applicability, and it's why the US might, say, oppose anonymity for Bradley Manning but welcome it for Egyptian protesters.

Google is already modifying the terms of what is after all still a trial service. But the underlying concern will not go away. Google has long had a way to link Gmail addresses to behavioral data collected from those using its search engine, docs, and other services. It has always had some ability to perform traffic analysis on Gmail users' communications; now it can see explicit links between those pools of data and, increasingly, tie them to offline identities. This is potentially far more powerful than anything Facebook can currently offer. And unlike government databases, it's nice and clean, and cheap to maintain.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 22, 2011

Face to face

When, six weeks or so back, Facebook implemented facial recognition without asking anyone much in advance, Tim O'Reilly expressed the opinion that it is impossible to turn back the clock and pretend that facial recognition doesn't exist or can be stopped. We need, he said, to stop trying to control the existence of these technologies and instead concentrate on controlling the uses to which collected data might be put.

Unless we're prepared to ban facial recognition technology outright, he argued, having it available in consumer-facing services is a good way to get society to face up to the way we live now. Then the real work begins: asking what new social norms we need to establish for the world as it is, rather than as it used to be.

This reminds me of the argument that we should be teaching creationism in schools in order to teach kids critical thinking: it's not the only, or even best, way to achieve the object. If the goal is public debate about technology and privacy, Facebook isn't a good choice to conduct it.

The problem with facial recognition, unlike a lot of other technologies, is that it's retroactive, like a compromised private cryptography key. Once the key is known you haven't just unlocked the few messages you're interested in but everything ever encrypted with that key. Suddenly deployed accurate facial recognition means the passers-by in holiday photographs, CCTV images, and old TV footage of demonstrations are all much more easily matched to today's tagged, identified social media sources. It's a step change, and it's happening very quickly after a long period of doesn't-work-as-hyped. So what was a low-to-moderate privacy risk five years ago is suddenly much higher risk - and one that can't be withdrawn with any confidence by deleting your account.

There's a second analogy here between what's happening with personal data and what's happening to small businesses with respect to hacking and financial crime. "That's where the money is," the bank robber Willie Sutton explained when asked why he robbed banks. But banks are well defended by large security departments. Much simpler to target weaker links, the small businesses whose money is actually being stolen. These folks do not have security departments and have not yet assimilated Benjamin Woolley's 1990s observation that cyberspace is where your money is. The democratization of financial crime has a more direct personal impact because the targets are closer to home: municipalities, local shops, churches, all more geared to protecting cash registers and collection plates than to securing computers, routers, and point-of-sale systems.

The analogy to personal data is that until relatively recently most discussions of privacy invasion similarly focused on celebrities. Today, most people can be studied as easily as famous, well-documented people if something happens to make them interesting: the democratization of celebrity. And there are real consequences. Canada, for example, is doing much more digging at the border, banning entry based on long-ago misdemeanors. We can warn today's teens that raiding a nearby school may someday limit their freedom to travel; but today's 40-somethings can't make an informed choice retroactively.

Changing this would require the US to decide at a national level to delete such data; we would have to trust them to do it; and other nations would have to agree to do the same. But the motivation is not there. Judith Rauhofer, at the online behavioral advertising workshop she organised a couple of weeks ago, addressed exactly this point when she noted that increasingly the mantra of governments bent on surveillance is, "This data exists. It would be silly not to use it."

The corollary, and the reason O'Reilly is not entirely wrong, is that governments will also say, "This *technology* exists. It would be silly not to use it." We can ban social networks from deploying new technologies, but we will still be stuck with them when it comes to governments and law enforcement. In this, government and business interests align perfectly.

So what, then? Do we stop posting anything online on the basis of the old spy motto "Never volunteer information", thereby ending our social participation? Do we ban the technology (which does nothing to stop the collection of the data)? Do we ban collecting the data (which does nothing to stop the technology)? Do we ban both and hope that all the actors are honest brokers rather than shifty folks trading our data behind our backs? What happens if thieves figure out how to use online photographs to break into systems protected by facial recognition?

One common suggestion is that social norms should change in the direction of greater tolerance. That may happen in some aspects, although Anders Sandberg has an interesting argument that transparency may in fact make people more judgmental. But if the problem of making people perfect were so easily solved we wouldn't have spent thousands of years on it with very little progress.

I don't like the answer "It's here, deal with it." I'm sure we can do better than that. But these are genuinely tough questions. The start, I think, has to be building as much user control into technology design (and its defaults) as we can. That's going to require a lot of education, especially in Silicon Valley.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 15, 2011

Dirty digging

The late, great Molly Ivins warns (in Molly Ivins Can't Say That, Can She?) about the risk to journalists of becoming "power groupies" who identify more with the people they cover than with their readers. In the culture being exposed by the escalating phone hacking scandals the opposite happened: politicians and police became "publicity groupies" who feared tabloid wrath to such an extent that they identified with the interests of press barons more than those of the constituents they are sworn to protect. I put the apparent inconsistency between politicians' former acquiescence and their current baying for blood down to Stockholm syndrome: this is what happens when you hold people hostage through fear and intimidation for a few decades. When they can break free, oh, do they want revenge.

The consequences are many and varied, and won't be entirely clear for a decade or two. But surely one casualty must have been the balanced view of copyright frequently argued for in this column. Murdoch's media interests are broad-ranging. What kind of copyright regime do you suppose he'd like?

But the desire for revenge is a really bad way to plan the future, as I said (briefly) on Monday at the Westminster Skeptics.

For one thing, it's clearly wrong to focus on News International as if Rupert Murdoch and his hired help were the only bad apples. In the 2006 report What price privacy now?, the Information Commissioner listed 30 publications caught in the illegal trade in confidential information. The News of the World was only fifth; number one, by a considerable margin, was the Daily Mail (the Observer was number nine). The ICO wanted jail sentences for those convicted of trading in data illegally, and called on private investigators' professional bodies to revoke or refuse licenses to PIs who breach the rules. Five years later, these are still good proposals.

Changing the culture of the press is another matter.

When I first began visiting Britain in the late 1970s, I found the tabloid press absolutely staggering. I began asking the people I met how the papers could do it.

"That's because *we* have a free press," I was told in multiple locations around the country. "Unlike the US." This was only a few years after The Washington Post backed Bob Woodward and Carl Bernstein's investigation of Watergate, so it was doubly baffling.

Tom Stoppard's 1978 play Night and Day explained a lot. It dropped competing British journalists into an escalating conflict in a fictitious African country. Over the course of the play, Stoppard's characters both attack and defend the tabloid culture.

"Junk journalism is the evidence of a society that has got at least one thing right, that there should be nobody with power to dictate where responsible journalism begins," says the naïve and idealistic new journalist on the block.

"The populace and the popular press. What a grubby symbiosis it is," complains the play's only female character, whose second marriage - "sex, money, and a title, and the parrots didn't harm it, either" - had been tabloid fodder.

The standards of that time now seem almost quaint. In the movie Starsuckers, the filmmaker Chris Atkins fed fabricated celebrity stories to a range of tabloids; all were published. That documentary also showed illegal methods of obtaining information in action. It appeared in 2009, right around the time the Press Complaints Commission was publishing a report concluding that "there is no evidence that the practice of phone message tapping is ongoing".

Someone on Monday asked why US newspapers are better behaved despite First Amendment protection and less constraint by onerous libel laws. My best guess is fear of lawsuits. Conversely, Time magazine argues that Britain's libel laws have encouraged illegal information gathering: publication requires indisputable evidence. I'm not completely convinced: the libel laws are not new, and economics and new media are forcing change on press culture.

A lot of dangers lurk in the calls for greater press regulation. Phone hacking is illegal. Breaking into other people's computers is illegal. Enforce those laws. Send those responsible to jail. That is likely to be a better deterrent than any regulator could manage.

It is extremely hard to devise press regulations that don't enable cover-ups. For example, on Wednesday's Newsnight, the MP Louise Mensch, a member of the DCMS committee conducting the hearings, called for a requirement that politicians disclose all meetings with the press. I get it: expose too-cosy relationships. But whistleblowers depend on confidentiality, and the last thing we want is for politicians to become as difficult to access as tennis stars, with their contact with the press limited to formal press conferences.

Two other lessons can be derived from the last couple of weeks. The first is that you cannot assume that confidential data can be protected simply by access rules. The second is the importance of alternatives to commercial, corporate journalism. Tom Watson has criticized the BBC for not taking the phone hacking allegations seriously. But it's no accident that the trust-owned Guardian was the organization willing to take on the tabloids. There's a lesson there for the US, as the FBI and others prepare to investigate Murdoch and News Corp: keep funding PBS.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 8, 2011

The grey hour

There is a fundamental conundrum that goes like this. Users want free information services on the Web. Advertisers will support those services if users will pay in personal data rather than money. Are privacy advocates spoiling a happy agreement or expressing a widely held concern that just hasn't found expression yet? Is it paternalistic and patronizing to say that the man on the Clapham omnibus doesn't understand the value of what he's giving up? Is it an expression of faith in human nature to say that on the contrary, people on the street are smart, and should be trusted to make informed choices in an area where even the experts aren't sure what the choices mean? Or does allowing advertisers free rein mean the Internet will become a highly distorted, discriminatory, immersive space where the most valuable people get the best offers in everything from health to politics?

None of those questions are straw men. The middle two are the extreme end of the industry point of view as presented at the Online Behavioral Advertising Workshop sponsored by the University of Edinburgh this week. That extreme shouldn't be ignored; Kimon Zorbas from the Internet Advertising Bureau, who voiced those views, also genuinely believes that regulating behavioral advertising is a threat to European industry. Can you prove him wrong? If you're a politician intent on reelection, hear that pitch, and can't document harm, do you dare to risk it?

At the other extreme end are the views of Jeff Chester, from the Center for Digital Democracy, who laid out his view of the future both here and at CFP a few weeks ago. If you read the reports the advertising industry produces for its prospective customers, they're full of neuroscience and eyeball tracking. Eventually, these practices will lead, he argues, to a highly discriminatory society: the most "valuable" people will get the best offers - not just in free tickets to sporting events but the best access to financial and health services. Online advertising contributed to the subprime loan crisis and the obesity crisis, he said. You want harm?

It's hard to assess the reality of Chester's argument. I trust his research into the documents advertising companies produce for their prospective customers. What isn't clear is whether the neuroscience these companies claim actually works. Certainly, one participant here says real neuroscientists heap scorn on the whole idea - and I am old enough to remember the mythology surrounding subliminal advertising.

Accordingly, the discussion here seems to me less of a single spectrum and more like a triangle, with the defenders of online behavioural advertising at one point, Chester and his neuroscience at another, and perhaps Judith Rauhofer, the workshop's organizer, at a third, with a lot of messy confusion in the middle. Upcoming laws, such as the revision of the EU ePrivacy Directive and various other regulatory efforts, will have to create some consensual order out of this triangular chaos.

The fourth episode of Joss Whedon's TV series Dollhouse, "The Gray Hour", had that week's characters locked inside a vault. They have an hour - the time it takes for the security system to reboot - to accomplish their mission of theft. Is this online behavioral advertising's grey hour, its opportunity to get ahead before we realize what's going on?

A persistent issue is definitely technology design.

One of Rauhofer's main points is that the latest mantra is, "This data exists, it would be silly not to take advantage of it." This is her answer to one of those middle positions - that we should regulate not the collection of data but only its use. Her view makes sense to me: no one can abuse data that has not been collected. What does a privacy policy mean when the company actually collecting the data and compiling profiles is completely hidden?

One help would be teaching computer science students ethics and responsible data practices. The science fiction writer Charlie Stross noted the other day that entrepreneurs in the US are on average roughly ten years younger than those in the EU; the reason is health insurance. Isn't it possible that starting up at a more mature age leads to a different approach to the social impact of what you're selling?

No one approach will solve this problem within the time we have to solve it. On the technology side, defaults matter. As the researcher Chris Soghoian notes, the "software choice architect" is rarely the software developer, and more usually the legal or marketing department. The three biggest browser manufacturers most funded by advertising not-so-mysteriously have the least privacy-friendly default settings. Advertising is becoming an arms race: first cookies, then Flash cookies, now online behavioral advertising, browser fingerprinting, geolocation, comprehensive profiling.

The law also matters. Peter Hustinx, the European Data Protection Supervisor, lecturing last night, believes the existing principles are right; they just need stronger enforcement and better application.

Consumer education would help - but for that to be effective we need far greater transparency from all these - largely American - companies.

What harm can you show has happened? Zorbas challenged. Rauhofer's reply: you do not have to prove harm when your house is bugged and constantly wiretapped. "That it's happening is the harm."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 1, 2011

Free speech, not data

Congress shall make no law...abridging the freedom of speech...

Is data mining speech? This week, in issuing its ruling in Sorrell v. IMS Health, the Supreme Court of the United States took the view that it can be. The majority (6-3) opinion struck down a Vermont law that prohibited drug companies from mining physicians' prescription data for marketing purposes. While the ruling of course has no legal effect outside the US, the primary issue in the case - the use of aggregated patient data - is being considered in many countries, including the UK, and the key technical debate is relevant everywhere.

IMS Health is a new species of medical organization: it collects aggregated medical data and mines it for client pharmaceutical companies, which use the results to shape their strategies for marketing to doctors. Vermont's goal was to save money by encouraging doctors to prescribe lower-cost generic medications. The pharmaceutical companies know, however, that marketing to doctors is effective. IMS Health accordingly sued to get the law struck down, claiming that it abrogated the company's free speech rights. NGOs from the digital - EFF and EPIC - to the not-so-digital - AARP - along with a host of medical organizations, filed amicus briefs arguing that patient information is confidential data that has never before been considered to fall within "free speech". The medical groups were concerned about the threat to trust between doctors and patients; EPIC and EFF added the more technical objection that the deidentification measures taken by IMS Health are inadequate.

At first glance, the SCOTUS ruling is pretty shocking. Why can't a state protect its population's privacy by limiting access to prescription data? How do marketers have free speech?

The court's objection - or rather, the majority opinion - was that the Vermont law is selective: it prohibits the particular use of this data for marketing but not other uses. That, to the six-judge majority, made the law censorship. The three remaining judges dissented, partly on privacy grounds, but mostly on the well-established basis that commercial speech typically enjoys a lower level of First Amendment protection than non-commercial speech.

When you are talking about traditional speech, censorship means selectively banning a type or source of content. Take Usenet in the early 1990s as an example. When spam became a problem, a group of community-minded volunteers devised cancellation practices that took note of this principle and defined spam according to the behavior involved in posting it. Deciding whether a particular posting is spam requires no subjective judgment about who posted the message or whether it is a commercial ad. Instead, postings are scored against a set of published, objective criteria: x number of copies, posted to y number of newsgroups, over z amount of time; or off-topic for that particular newsgroup; or a binary file posted to a text-only newsgroup. In the Vermont case, if you can accept the argument that data mining is speech, as SCOTUS did, then the various uses of the data are content, and therefore a law that bans only one of many possible uses, or bans use by specified parties, is censorship.
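
The canonical such criterion, if I remember it correctly, is the Breidbart Index: for each copy of a message, add the square root of the number of newsgroups that copy was crossposted to, and cancel once the total crosses a published threshold (commonly 20). A minimal Python sketch of the arithmetic:

    from math import sqrt

    def breidbart_index(copies: list[int]) -> float:
        """For each copy posted, add the square root of the number of
        newsgroups that copy was crossposted to."""
        return sum(sqrt(n) for n in copies)

    # Ten separate copies, each crossposted to nine newsgroups:
    print(breidbart_index([9] * 10))  # 30.0 - over the usual threshold of 20

Note what the formula never asks: who the poster is, or what the message says.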

The decision still seems intuitively wrong to me, as it apparently also did to the three remaining judges, who wrote a dissenting opinion that instead viewed the Vermont law as an attempt to regulate commercial activity, something that has never been covered by the First Amendment.

But note this: the concern for patient privacy that animated much of the interest in this case was only a bystander (which must surely have pleased the plaintiffs).

Obscured by this case, however, is the technical question that should be at the heart of such disputes (several other states have passed Vermont-style laws): how effectively can data be deidentified? If it can be easily reidentified and linked to specific patients, making it available for data mining ends medical privacy. If it can be effectively anonymized, then the objections go away.

At this year's Computers, Freedom, and Privacy there was some discussion of this issue; an IMS Health representative and several of the experts EPIC cited in its brief were present and disagreeing. Khaled El Emam, from the University of Ottawa, filed a brief (PDF) opposing EPIC's analysis; Latanya Sweeney, who did the seminal work in this area in the early 2000s, followed with a rebuttal. From these, my non-expert conclusion is that just as you cannot trust today's secure cryptographic system to remain unbreakable for the future as computing power continues to increase in speed and decrease in price, you cannot trust today's deidentification to remain robust against the increasing masses of data available for matching to it.
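
Sweeney's work gave this debate its standard yardstick, k-anonymity: the size of the smallest group of records sharing the same combination of quasi-identifiers such as zip code, birth date, and sex. A minimal sketch, with invented records, of how low k can sink even in a tiny table:

    from collections import Counter

    def k_anonymity(rows, quasi_identifier_columns):
        """Return k: the size of the smallest group of rows that share
        the same values in the quasi-identifier columns."""
        groups = Counter(tuple(row[i] for i in quasi_identifier_columns)
                         for row in rows)
        return min(groups.values())

    # Invented records: (zip code, birth year, sex, prescription)
    records = [
        ("05401", 1948, "F", "statin"),
        ("05401", 1948, "F", "insulin"),
        ("05401", 1973, "M", "statin"),
    ]
    print(k_anonymity(records, (0, 1, 2)))  # 1: the lone 1973 male stands out

A k of 1 means anyone who knows those three facts about a person can pick out their row - the shape of Sweeney's famous finding that zip code, birth date, and sex uniquely identify most Americans.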

But it seems the technical and privacy issues raised by the Vermont case are yet to be decided. Vermont is free to try again to frame a law that has the effect the state wants but takes a different approach. As for the future of free speech, it seems clear that it will encompass many technological artefacts still being invented - and that it will be quite a fight to keep it protecting individuals instead of, increasingly, commercial enterprises.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 14, 2011

Untrusted systems

Why does no one trust patients?

On the TV series House, the eponymous sort-of-hero has a simple answer: "Everybody lies." Because he believes this, and because no one appears able to stop him, he sends his minions to search his patients' homes hoping they will find clues to the obscure ailments he's trying to diagnose.

Today's Health Privacy Summit in Washington, DC, the zeroth day of this year's Computers, Freedom, and Privacy conference, pulled together, in the best Computers, Freedom, and Privacy tradition, speakers from all aspects of health care privacy. Yet many of them agreed on one thing: health data is complex, decisions about health data are complex, and it's demanding too much of patients to expect them to be able to navigate these complex waters. And this is in the US, where to a much larger extent than in Europe the patient is the customer. In the UK, by contrast, the customer is really the GP and the patient has far less direct control. (Just try looking up a specialist in the phone book.)

The reality is, however, as several speakers pointed out, that doctors are not going to surrender control of their data either. Both physicians and patients have an interest in medical records. Patients need to know about their care; doctors need records both for patient care and for billing and administrative purposes. But beyond these two parties are many other interests who would like access to the intimate information doctors and patients originate: insurers, researchers, marketers, governments, epidemiologists. Yet no one really trusts patients to agree to hand over their data; if they did, these decisions would be a lot simpler. But if patients can't trust their doctor's confidentiality, they will avoid seeking health care until they're in a crisis. In some situations - say, cancer - that can end their lives much sooner than is necessary.

The loss of trust, said lawyer Jim Pyles, could bring on an insurance crisis: the potential cost of electronic privacy breaches is essentially unlimited, while insurers' capacity to cover them is not. "If you cannot get insurance for these systems you cannot use them."

If this all (except for the insurance concerns) sounds familiar to UK folk, it's not surprising. As Ross Anderson pointed out, greatly to the Americans' surprise, the UK is way ahead on this particular debate. Nationalized medicine meant that discussions began in the UK as long ago as 1992.

One of Anderson's repeated points is that the notion of the electronic patient record has little to do with the day-to-day reality of patient care. Clinicians, particularly in emergency situations, want to look at the patient. As you want them to do: they might have the wrong record, but you know they haven't got the wrong patient.

"The record is not the patient," said Westley Clarke, and he was so right that this statement was repeated by several subsequent speakers.

One thing that apparently hasn't helped much is the Health Insurance Portability and Accountability Act, which one of the breakout sessions considered scrapping. Is HIPAA a failure or, as long-time Canadian privacy activist Stephanie Perrin would prefer to see it, a first step? The distinction is important: if HIPAA is seen as an expensive failure it might be scrapped and not replaced. First steps can be succeeded by further, better steps.

Perhaps the first of those should be another of Perrin's suggestions: a map of where your data goes, much like the way Barbara Garson's book Money Makes the World Go Around traced her bank deposit as it was loaned out across the world. Most of us would like to believe that what we tell our doctors remains cosily tucked away in their files. These days, not so much.

For more detail see Andy Oram's blog.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 10, 2011

The creepiness factor

"Facebook is creepy," said the person next to me in the pub on Tuesday night.

The woman across from us nodded in agreement and launched into an account of her latest foray onto the service. She had, she said, uploaded a batch of 15 photographs of herself and a friend. The system immediately tagged all of the photographs of the friend correctly. It then grouped the images of her and demanded to know, "Who is this?"

What was interesting about this particular conversation was that these people were not privacy advocates or techies; they were ordinary people just discovering their discomfort level. The sad thing is that Facebook will likely continue to get away with this sort of thing: it will say it's sorry, modify some privacy settings, and people will gradually get used to the convenience of having the system save them the work of tagging photographs.

In launching its facial recognition system, Facebook has done what many would have thought impossible: it has rolled out technology that just a few weeks ago *Google* thought was too creepy for prime time.

Wired UK has a set of instructions for turning tagging off. But underneath, the system will, I imagine, still recognize you. What records are kept of this underlying data and what mining the company may be able to do on them is, of course, not something we're told about.

Facebook has had to rein in new elements of its service so many times now - the Beacon advertising platform, the many revamps to its privacy settings - that the company's behavior is beginning to seem like a marketing strategy rather than a series of bungling missteps. The company can't be entirely privacy-deaf; it numbers among its staff the open rights advocate and former MP Richard Allan. Is it listening to its own people?

If it's a strategy it's not without antecedents. Google, for example, built its entire business without TV or print ads. Instead, every so often it would launch something so cool that everyone wanted to use it, earning more free coverage than it could ever have paid for. Is Facebook inverting this strategy by releasing projects it knows will cause widely covered controversy and then reining them back in only as far as the boundary of user complaints? These are smart people, and normally smart people learn from their own mistakes. But Zuckerberg, whose comments on online privacy have approached arrogance, is apparently justified, in that no matter what mistakes the company has made, its user base continues to grow. As long as business success is your metric, until masses of people resign in protest, he's golden. Especially when the IPO moment arrives, expected to be before April 2012.

The creepiness factor has so far done nothing to hurt its IPO prospects - which, in the absence of an actual IPO, seem to be rubbing off on the other social media companies going public. Pandora (net loss last quarter: $6.8 million) has even increased the number of shares on offer.

One thing that seems to be getting lost in the rush to buy shares - LinkedIn popped to over $100 on its first day, and has now settled back to $72 and change (for a price/earnings ratio of 1,076) - is that buying first-day shares isn't what it used to be. Even during the millennial technology bubble, buying shares at the launch of an IPO was approximately like joining a queue at midnight to buy the new Apple whizmo on the first day, even though you know you'll be able to get it cheaper and debugged in a couple of months. Anyone could have gotten much better prices on Amazon shares for some months after that first-day bonanza, for example (and either way, in the long term, you'd have profited handsomely).

Since then, however, a new game has arrived in town: private exchanges, where people who meet a few basic criteria for being able to afford the risk trade pre-IPO shares. The upshot is that even more of the best deals have already gone by the time a company goes public.

In no case is this clearer than the Groupon IPO, about which hardly anyone has anything good to say. Investors buying in would be the greater fools; a co-founder's past raises questions, and its business model is not sustainable.

Years ago, Roger Clarke predicted that the then brand-new concept of social networks would inevitably become data abusers simply because they had no other viable business model. As powerful as the temptation to do this has been while these companies have been growing, it seems clear the temptation can only become greater when they have public markets and shareholders to answer to. New technologies are going to exacerbate this: performing accurate facial recognition on user-uploaded photographs wasn't possible when the first pictures were being uploaded. What capabilities will these networks be able to deploy in the future to mine and match our data? And how much will they need to do it to keep their profits coming?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


June 3, 2011

A forgotten man and a bowl of Japanese goldfish

"I'm the forgotten man," Godfrey (William Powell) explains in the 1936 film My Man Godfrey.

Godfrey was speaking during the Great Depression, when prosperity was just around the corner ("Yes, it's been there a long time," says one of Godfrey's fellow city dump dwellers) but the reality for many people was unemployment, poverty, and a general sense that they had ceased to exist except, perhaps, as curiosities to be collected by the rich in a scavenger hunt. Today the rich in question would record their visit to the city dump in an increasingly drunken stream of Tweets and Facebook postings, and people in Nepal would be viewing photographs and video clips even if Godfrey didn't use a library computer to create his own Facebook page.

The EU's push for a right to be forgotten is a logical outgrowth of today's data protection principles, which revolve around the idea that you have rights over your data even when someone else has paid to collect it. EU law grants the right to inspect and correct the data held about us and to prevent its use in unwanted marketing. The idea that we should also have the right to delete data we ourselves have posted seems simple and fair, especially given the widely reported difficulty of leaving social networks.

But reality is complicated. Godfrey was fictional; take a real case, from Pennsylvania. A radiology trainee, wanting a reality check on whether the radiologist she was shadowing was behaving inappropriately, sought advice from her sister, also a health care worker, before reporting the incident. The sister told a co-worker about the call, who told others, and someone in that widening ripple posted the story on Facebook, from where it was reported back to the student's program director. Result: the not-on-Facebook trainee was expelled on the grounds that she had discussed a confidential issue on a cell phone. Lawsuit.

So many things had to go wrong for that story to rebound and hit that trainee in the ass. No one - except presumably the radiologist under scrutiny - did anything actually wrong, though the incident illustrates the point that information travels further and faster than people think. Preventing this kind of thing is hard. No contract can bar unrelated, third-hand gossipers from posting information that comes their way. There's nothing to invoke libel law. The worst you can say is that the sister was indiscreet and that the program administrator misunderstood and overreacted. But the key point for our purposes here is: which data belongs to whom?

Lilian Edwards has a nice analysis of the conflict between privacy and freedom of expression raised by the right to be forgotten. The comments and photographs I post seem to me to belong to me, though they may be about a dozen other people. But on a social network your circle of friends are also stakeholders in what you post; you become part of their library. Howard Rheingold, writing in his 1992 book The Virtual Community, noted the ripped and gaping fabric of conversations on The Well when early member Blair Newman deleted all his messages. Photographs and today's far more pervasive, faster-paced technology make such holes deeper and multi-dimensional. How far do we need to go in granting deletion rights?

The short history of the Net suggests that complete withdrawal is roughly impossible. In the 1980s, Usenet was thought of as an ephemeral medium. People posted in the - they thought - safe assumption that anything they wrote would expire off the world's servers in a couple of weeks. And as long as everyone read live online that was probably true. But along came offline readers and people with large hard disks and Deja News, and Usenet messages written in 1981 with no thought of any future context are a few search terms away.

"It's a mistake to only have this conversation about absolutes," said Google's Alma Whitten at the Big Tent event two weeks ago, arguing that it's impossible to delete every scrap about anyone. Whitten favors a "reasonable effort" approach and a user dashboard to enable that so users can see and control the data that's being held. But we all know the problem with market forces: it is unlikely that any of the large corporations will come up with really effective tools unless forced. For one thing, there is a cultural clash here between the EU and the US, the home of many of these companies. But more important, it's just not in their interests to enable deletion: mining that data is how those companies make a living and in return we get free stuff.

Finding the right balance between freedom of expression (my right to post about my own life) and privacy, including the right to delete, will require a mix of answers as complex as the questions: technology (such as William Heath's Mydex), community standards, and, yes, law, applied carefully. We don't want to replace Britain's chilling libel laws with a DMCA-like deletion law.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

May 20, 2011

The world we thought we lived in

If one thing is more annoying than another, it's the fantasy technology on display in so many TV shows. "Enhance that for me!" barks an investigator. And, obediently, his subordinate geek/squint/nerd pushes a button or few, a line washes over the blurry image on screen, and now he can read the maker's mark on a pill in the hand of the target subject that was captured by a distant CCTV camera. The show 24 ended for me 15 minutes into season one, episode one, when Kiefer Sutherland's Jack Bauer, trying to find his missing daughter, thrust a piece of paper at an underling and shouted, "Get me all the Internet passwords associated with that telephone number!" Um...

But time has moved on, and screenwriters are more likely to have spent their formative years online and playing computer games, and so we have arrived at The Good Wife, which gloriously wrapped up its second season on Tuesday night (in the US; in the UK the season is still winding to a close on Channel 4). The show is a lot of things: a character study of an archetypal humiliated politician's wife (Alicia Florrick, played by Julianna Margulies) who rebuilds her life after her husband's betrayal and corruption scandal; a legal drama full of moral murk and quirky judges (Carob chip?); a political drama; and, not least, a romantic comedy. The show is full of interesting, layered men and great, great women - some of them mature, powerful, sexy, brilliant women. It is also the smartest show on television when it comes to life in the time of rapid technological change.

When it was good, in its first season, Gossip Girl cleverly combined high school mean girls with the citizen reportage of TMZ to produce a world in which everyone spied on everyone else by sending tips, photos, and rumors to a Web site, which picked the most damaging moment to publish them and blast them to everyone's mobile phones.

The Good Wife goes further, exploiting the fact that most of us, especially those old enough to remember life before CCTV, go about our lives forgetting that we leave a trail everywhere. Some trails are, of course, old staples of investigative dramas: phone records, voice messages, ballistics, and the results of a good, old-fashioned break-in-and-search. But some are myth-busting.

One case (S2e15, "Silver Bullet") hinges on the difference between the compressed, digitized video copy and the original analog video footage: dropped frames change everything. A much earlier case (S1e06, "Conjugal") hinges on eyewitness testimony; despite a slightly too-pat resolution (I suspect the show, now more confident, might handle it differently), the show does a textbook job of demonstrating the flaws in human memory and their application to police line-ups. In a third case (S1e17, "Heart"), a man faces the loss of his medical insurance because of a single photograph posted to Facebook showing him smoking a cigarette. And the attempt by the disgraced husband (Peter Florrick, played by Chris Noth) to clear his own name comes down to a fancy bit of investigative work capped by camera footage from an ATM in the Cayman Islands that the litigator is barely technically able to display in court. As entertaining demonstrations and dramatizations of the stuff net.wars talks about every week and the way technology can be both good and bad - Alicia finds romance in a phone tap! - these could hardly be better. The stuffed lion speaker phone (S2e19, "Wrongful Termination") is just a very satisfying cherry topping of technically clever hilarity.

But there's yet another layer, surrounding the season two campaign mounted to get Florrick elected back into office as State's Attorney: the ways that technology undermines as well as assists today's candidates.

"Do you know what a tracker is?" Peter's campaign manager (Eli Gold, played by Alan Cumming) asks Alicia (S2e01, "Taking Control"). Answer: in this time of cellphones and YouTube, unpaid political operatives follow opposing candidates' family and friends to provoke and then publish anything that might hurt or embarrass the opponent. So now: Peter's daughter (Makenzie Vega) is captured praising his opponent and ham-fistedly trying to defend her father's transgressions ("One prostitute!"). His professor brother-in-law's (Dallas Roberts) in-class joke that the candidate hates gays is live-streamed over the Internet. Peter's son (Graham Phillips) and a manipulative girlfriend (Dreama Walker), unknown to Eli, create embarrassing, fake Facebook pages in the name of the opponent's son. Peter's biggest fan decides to (he thinks) help by posting lame YouTube videos apparently designed to alienate the very voters Eli's polls tell him to attract. (He's going to post one a week; isn't Eli lucky?) Polling is old hat, as are rumors leaked to newspaper reporters; but today's news cycle is 20 minutes and can we have a quote from the candidate? No wonder Eli spends so much time choking and throwing stuff.

All of this fits together because the underlying theme of all parts of the show is control: control of the campaign, the message, the case, the technology, the image, your life. At the beginning of season one, Alicia has lost all control over the life she had; by the end of season two, she's in charge of her new one. Was a camera watching in that elevator? I guess we'll find out next year.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

May 13, 2011

Lay down the cookie

British Web developers will be spending the next couple of weeks scrambling to meet the May 26 deadline, after which new legislation requires users to consent before a cookie can be placed on their computers. The Information Commissioner's guidelines allow a narrow exception for cookies that are "strictly necessary for a service requested by the user"; the example given is a cookie used to remember an item the user has chosen to buy so it's there when they go to check out. Won't this be fun?

Normally, net.wars comes down on the side of privacy even when it's inconvenient for companies, but in this case we're prepared to make at least a partial exception. It's always been a little difficult to understand the hatred and fear with which some people regard the cookie. Not the chocolate chip cookie, which of course we know is everything that is good, but the bits of data that reside on your computer to give Web pages the equivalent of memory. Cookies allow a server to assemble a page that remembers what you've looked at, where you've been, and which gewgaw you've put into your shopping basket. At least some of this can be done in other ways, such as using a registration scheme. But it's arguably a greater invasion of privacy to require users to form a relationship with a Web site they may only use once.

The single-site use of cookies is, or ought to be, largely uncontroversial. The more contentious usage is third-party cookies, used by advertising agencies to track users from site to site with the goal of serving up targeted, rather than generic, ads. It's this aspect of cookies that has most exercised privacy advocates, and most browsers provide the ability to block cookies - all, third-party, or none, with a provision to make exceptions.

The new rules, however, seem overly broad.

In the EU, the anti-cookie effort began in 2001 (the second-ever net.wars), seemed to go quiet, and then revived in 2009, when I called the legislation "masterfully stupid". That piece goes into some detail about the objections to the anti-cookie legislation, so we won't review that here. At the time, reader email suggested that perhaps making life unpleasant for advertisers would force browser manufacturers to design better privacy controls. 'Tis a consummation devoutly to be wished, but so far it hasn't happened, and in the meantime that legislation has come into force.

The chief difference is moving from opt-out to opt-in: users must give consent for cookies to be placed on their machines; the chief flaw is banning a technology instead of regulating undesirable actions and effects. Besides the guidelines above, the ICO refers people to All About Cookies for further information.
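As a rough illustration of what the opt-in rule means for a developer, here is a hypothetical sketch - not the ICO's specification, and the consent model is invented - of a server that sets only consented or "strictly necessary" cookies:

```python
from http.cookies import SimpleCookie

# Hypothetical sketch of opt-in cookie handling: nothing is set without
# consent except cookies "strictly necessary" for a service the user
# requested (the shopping-basket example in the ICO guidance).

STRICTLY_NECESSARY = {"basket"}

def cookie_headers(requested_cookies, user_has_consented):
    headers = []
    for name, value in requested_cookies.items():
        if user_has_consented or name in STRICTLY_NECESSARY:
            cookie = SimpleCookie()
            cookie[name] = value
            headers.append(("Set-Cookie", cookie[name].OutputString()))
        # Anything else - analytics, ad trackers - is silently dropped.
    return headers

# Without consent, only the basket cookie survives.
print(cookie_headers({"basket": "item42", "ad_tracker": "xyz"}, False))
```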

Pete Jordan, a Hull-based Web developer, notes that when you focus legislation on a particular technology, "People will find ways around it if they're ingenious enough, and if you ban cookies or make it awkward to use them, then other mechanisms will arise." Besides, he says, "A lot of day-to-day usage is to make users' experience of Web sites easier, more friendly, and more seamless. It's not life-threatening or vital, but from the user's perception it makes a difference if it disappears." Cookies, for example, are what provide the trail of "breadcrumbs" at the top of a Web page to show you the path by which you arrived at that page so you can easily go back to where you were.

"In theory, it should affect everything we do," he says of the legislation. A possible workaround may be to embed tokens in URLs, a strategy he says is difficult to manage and raises the technical barrier for Web developers.

The US, where competing anti-tracking bills are under consideration in both houses of Congress, seems to be taking a somewhat different tack in requiring Web sites to honor the choice if consumers set a "Do Not Track" flag. Expect much more public debate about the US bills than there has been in the EU or UK. See, for example, the strong insistence by What Would Google Do? author Jeff Jarvis that media sites in particular have a right to impose any terms they want in the interests of their own survival. He predicts paywalls everywhere and the collapse of media economics. I think he's wrong.

The thing is, it's not a fair contest between users and Web site owners. It's more or less impossible to browse the Web with all cookies turned off: the complaining pop-ups are just too frequent. But targeting the cookie is not the right approach. There are many other tracking technologies, invisible to consumers, that may have both good and bad effects - even Web bugs are used helpfully some of the time. (The irony is, of course, that we regulate the cookie while allowing increases in both offline and online surveillance by police and government agencies.)

Requiring companies to behave honestly and transparently toward their customers would have been a better approach for the EU; one hopes it will work better in the US.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 22, 2011

Applesauce

Modern life is full of so many moments when you see an apparently perfectly normal person doing something that not so long ago was the clear sign of a crazy person. They're walking down the street talking to themselves? They're *on the phone*. They think the inanimate objects in their lives are spying on them? They may be *right*.

Last week's net.wars ("The open zone") talked about the difficulty of finding the balance between usability, on the one hand, and giving users choice, flexibility, and control, on the other. And then, as if to prove this point, along comes Apple and the news that the iPhone has been storing users' location data, perhaps permanently.

The story emerged this week when two researchers at O'Reilly's Where 2.0 conference presented an open-source utility they'd written to allow users to look at the data the iPhone was saving. But it really began last year, when Alex Levinson discovered the stored location data as part of his research on Apple forensics. Based on his months of studying the matter, Levinson contends that it's incorrect to say that Apple is gathering this data: rather, the device gathers the data, stores it, and backs it up when you sync your phone. Of course, if you sync your phone to Apple's servers, the data is transferred to your account - and it is also migrated when you purchase a new iPhone or iPad.
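For the technically curious, contemporary reports described the file as an unencrypted SQLite database (consolidated.db) inside the phone's backups. A hedged sketch of how such a file could be read, assuming the table and column names those reports gave:

```python
import sqlite3
from datetime import datetime, timedelta

# Sketch of what the researchers' utility reportedly did: read the
# unencrypted consolidated.db copied out of an iPhone backup and list
# stored locations. Table/column names (CellLocation; timestamps in
# seconds since 2001-01-01) follow press accounts - treat as assumptions.

MAC_EPOCH = datetime(2001, 1, 1)

def dump_locations(path="consolidated.db"):
    conn = sqlite3.connect(path)
    rows = conn.execute(
        "SELECT Timestamp, Latitude, Longitude FROM CellLocation"
    )
    for ts, lat, lon in rows:
        when = MAC_EPOCH + timedelta(seconds=ts)
        print(f"{when:%Y-%m-%d %H:%M}  {lat:.4f}, {lon:.4f}")
    conn.close()
```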

So the news is not quite as bad as it first sounded: your device is spying on you, but it's not telling anybody. However: the data is held in unencrypted form and appears never to expire, and this raises a whole new set of risks about the devices that no one had really focused on until now.

A few minutes after the story broke, someone posted on Twitter that they wondered how many lawyers handling divorce cases were suddenly drafting subpoenas for copies of this file from their soon-to-be-exes' iPhones. Good question (although I'd have phrased it instead as how many script ideas the wonderful, tech-savvy writers of The Good Wife are pitching involving forensically recovered location data). That is definitely one sort of risk; another, as ZDNet's Adrian Kingsley-Hughes points out, is that the geolocation may be wildly inaccurate, creating a false picture that may still be very difficult to explain, either to a spouse or to law enforcement, who, as Declan McCullagh writes, know about and are increasingly interested in accessing this data.

There are a bunch of other obvious privacy things to say about this, and Privacy International has helpfully said them in an open letter to Steve Jobs.

"Companies need openness and procedures," PI's executive director, Simon Davies, said yesterday, comparing Apple's position today to Google's a couple of months before the WiFi data-sniffing scandal.

The reason, I suspect, that so many iPhone users feel so shocked and betrayed is that Apple's attention to the details of glossy industrial design and easy-to-understand user interfaces leads consumers to cuddle up to Apple in a way they don't to Microsoft or Google. I doubt Google will get nearly as much anger directed at it for the news that Android phones also collect location data (Android saves only the last 50 mobile masts and 200 WiFi networks). In either event, the key is transparency: when you post information on Twitter or Facebook about your location or turn on geo-tagging, you know you're doing it. In this case, the choice is not clear enough for users to understand what they've agreed to.

The question is: how best can consumers be enabled to make informed decisions? Apple's current method - putting a note saying "Beware of the leopard" at the end of a 15,200-word set of terms and conditions (which are in any case drafted by the company's lawyers to protect the company, not to serve consumers) that users agree to when they sign up for iTunes - is clearly inadequate. It's been shown over and over again that consumers hate reading privacy policies, and you have only to look at Facebook's fumbling attempts to embed these choices in a comprehensible interface to realize that the task is genuinely difficult. This is especially true because, unlike the problem of user-unfriendly systems in the early 1990s, it's not particularly in any of these companies' interests to solve this intransigent and therefore expensive problem. Make it easy for consumers to opt out and they will - hardly an appetizing proposition for companies supported in whole or in part by advertising.

The answer to the question, therefore, is going to involve a number of prongs: user interface design, regulation, contract law, and industry standards, both technical and practical. The key notion, however, is that it should be feasible - even easy - for consumers to tell what information gathering they're consenting to. The most transparent way of handling that is to make opting out the default, so that consumers must take a positive action to turn these things on.

You can say - as many have - that this particular scandal is overblown. But we're going to keep seeing dust-ups like this until industry practice changes to reflect our expectations. Apple, so sensitive to the details of industrial design that will compel people to yearn to buy its products, will have to develop equal sensitivity for privacy by design.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 11, 2011

The ten-year count

My census form arrived the other day - 32 lavender and white pages of questions about who will have been staying overnight in my house on March 27, their religions, and whether they will be cosseted with central heating and their own bedroom.

I seem to be out of step on this one, but I've always rather liked the census. It's a little like finding your name in an old phone book: I was here. Reportedly, this, Britain's 21st national census, may be the last. Cabinet Office minister Francis Maude has complained that it is inaccurate and out of date by the time it's finished, and that, at £482 million, it is expensive.

Until I read the Guardian article cited above, I had never connected the census to Thomas Malthus' 1798 prediction that the planet would run out of the resources necessary to support an ever-increasing human population. I blame the practice of separating science, history, and politics: Malthus is taught in science class, so you don't realize he was contemporaneous with the inclusion of the census in the US Constitution, which you learn about in civics class.

The census seems to be the one moment when attention really gets focused on the amount and types of data the government collects about all of us. There are complaints from all political sides that it's intrusive and that the government already has plenty of other sources.

I have - both here and elsewhere - written a great deal about privacy and the dangers of thoughtlessly surrendering information but I'm inclined to defend the census. And here's why: it's transparent. Of all the data-gathering exercises to which our lives are subject it's the only one that is. When you fill out the form you know exactly what information you are divulging, when, and to whom. Although the form threatens you with legal sanctions for not replying, it's not enforced.

And I can understand the purpose of the questions: asking the size and disposition of homes, the amount of time spent working and at what, racial and ethnic background, religious affiliation, what passports people hold and what languages they speak. These all make sense to me in the interests of creating a snapshot of modern Britain that is accurate enough for the decisions the government must make. How many teachers and doctors do we need in which areas who speak which languages? How many people still have coal fires? These are valid questions for a government to consider.

But most important, anyone can look up census data and develop some understanding of the demographics government decisions are based on.

What are the alternatives? There are certainly many collections of data for various purposes. There are the electoral rolls, which collect the names and nationalities of everyone at each address in every district. There are the council tax registers, which collect the householder's name and the number of residents at each address. Other public sector sources include the DVLA's vehicle and driver licensing data, school records, and the NHS's patient data. And of course there are many private sector sources, too: phone records, credit card records, and so on.

Here's the catch: every one of those is incomplete. Not everyone has a phone or credit card; some people are so healthy they get dropped from their doctors' registers because they haven't visited in many years; some people don't have an address; some people have five phones, some none. Most of those people are caught by the census, since it relies on counting everyone wherever they're staying on a single particular night.

Here's another catch: the generation of national statistics to determine the allocation of national resources is not among the stated purposes for which those data are gathered. That is of course fixable. But doing so might logically lead government to mandate that these agencies collect more data from us than they do now - and with more immediate penalties for not complying. Would you feel better about telling the DVLA or your local council your profession and how many hours you work? No one is punished for leaving a question blank on the census, but suppose leaving your religious affiliation blank on your passport application means not getting a passport until you've answered it?

Which leads to the final, biggest catch. Most of the data that is collected from us is in private hands or is confidential for one reason or another. Councils are pathologically averse to sharing data with the public; commercial organizations argue that their records are commercially sensitive; doctors are rightly concerned about protecting patient data. Despite the data protection laws, we often do not know what data has been collected, how it's being used, or where it's being held. And although we have the right to examine and correct our own records, we won't find it easy to determine the basis for government decisions: open season for lobbyists.

The census, by contrast, is transparent and accountable. We know what information we have divulged, we know who is responsible for it, and we can even examine the decisions it is used to support. Debate ways to make it less intrusive by all means, but do you really want to replace it with a black box?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

February 18, 2011

What is hyperbole?

This seems to have been a week for over-excitement. IBM gets an onslaught of wonderful publicity because it built a very large computer that won at the archetypal American TV game, Jeopardy. And Eben Moglen proposes the Freedom box, a more-or-less pocket ("wall wart") computer you can plug in and that will come up, configure itself, and be your Web server/blog host/social network/whatever and will put you and your data beyond the reach of, well, everyone. "You get no spying for free!" he said in his talk outlining the idea for the New York Internet Society.

Now I don't mean to suggest that these are not both exciting ideas and that making them work is/would be an impressive and fine achievement. But seriously? Is "Jeopardy champion" what you thought artificial intelligence would look like? Is a small "wall wart" box what you thought freedom would look like?

To begin with Watson and its artificial buzzer thumb. The reactions display everything that makes us human. The New York Times seems to think AI is solved, although its editors focus on our ability to anthropomorphize an electronic screen with a smooth, synthesized voice and a swirling logo. (Like HAL, R2D2, and Eliza Doolittle, its status is defined by the reactions of the surrounding humans.)

The Atlantic and Forbes come across as defensive. The LA Times asks: how scared should we be? The San Francisco Chronicle congratulates IBM for suddenly becoming a cool place for the kids to work.

If, that is, they're not busy hacking up Freedom boxes. You could, if you wanted, see the past twenty years of net.wars as a recurring struggle between centralization and distribution. The Long Tail finds value in selling obscure products to meet the eccentric needs of previously ignored niche markets; eBay's value is in aggregating all those buyers and sellers so they can find each other. The Web's usefulness depends on the diversity of its sources and content; search engines aggregate it and us so we can be matched to the stuff we actually want. Web boards distributed us according to niche topics; social networks aggregated us. And so on. As Moglen correctly says, we pay for those aggregators - and for the convenience of closed, mobile gadgets - by allowing them to spy on us.

An early, largely forgotten net.skirmish came around 1991 over the asymmetric broadband design that today is everywhere: a paved highway going to people's homes and a dirt track coming back out. The objection that this design assumed that consumers would not also be creators and producers was largely overcome by the advent of Web hosting farms. But imagine instead that symmetric connections were the norm and everyone hosted their sites and email on their own machines with complete control over who saw what.

This is Moglen's proposal: to recreate the Internet as a decentralized peer-to-peer system. And I thought immediately how much it sounded like...Usenet.

For those who missed the 1990s: invented and implemented in 1979 by three students - Tom Truscott, Jim Ellis, and Steve Bellovin - the whole point of Usenet was that it was a low-cost, decentralized way of distributing news. Once the Internet was established, it became the medium of transmission, but in the beginning computers phoned each other and transferred news files. In the early 1990s, it was the biggest game in town: it was where Linus Torvalds and Tim Berners-Lee announced their inventions of Linux and the World Wide Web.

It always seemed to me that if "they" - whoever they were going to be - seized control of the Internet we could always start over by rebuilding Usenet as a town square. And this is to some extent what Moglen is proposing: to rebuild the Net as a decentralized network of equal peers. Not really Usenet; instead a decentralized Web like the one we gave up when we all (or almost all) put our Web sites on hosting farms whose owners could be DMCA'd into taking our sites down or subpoena'd into turning over their logs. Freedom boxes are Moglen's response to "free spying with everything".

I don't think there's much doubt that the box he has in mind can be built. The Pogoplug, which offers a personal cloud and a sort of hardware social network, is most of the way there already. And Moglen's argument has merit: that if you control your Web server and the nexus of your social network law enforcement can't just make a secret phone call, they'll need a search warrant to search your home if they want to inspect your data. (On the other hand, seizing your data is as simple as impounding or smashing your wall wart.)

I can see Freedom boxes being a good solution for some situations, but like many things before it they won't scale well to the mass market because they will (like Usenet) attract abuse. In cleaning out old papers this week, I found a 1994 copy of Esther Dyson's Release 1.0 in which she demands a return to the "paradise" of the "accountable Net"; 'twill be ever thus. The problem Watson is up against is similar: it will function well, even engagingly, within the domain it was designed for. Getting it to scale will be a whole 'nother, much more complex problem.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


January 21, 2011

Fogged

The Reform Club, I read on its Web site, was founded as a counterweight to the Carlton Club, where conservatives liked to meet and plot away from public scrutiny. To most of us, it's the club where Phileas Fogg made and won his bet that he could travel around the world in 80 days, no small feat in 1872.

On Wednesday, the club played host to a load of people who don't usually talk to each other much because they come at issues of privacy from such different angles. Cityforum, the event's organizer, pulled together representatives from many parts of civil society, government security, and corporate and government researchers.

The key question: what trade-offs are people willing to make between security and privacy? Or between security and civil liberties? Or is "trade-off" the right paradigm? It was good to hear multiple people saying that the "zero-sum" attitude is losing ground to "proportionate". That is, the debate is moving on from viewing privacy and civil liberties as things we must trade away if we want to be secure to weighing the size of the threat against the size of the intrusion. It's clear to all, for example, that one thing that's disproportionate is local councils' usage of the anti-terrorism aspects of the Regulation of Investigatory Powers Act to check whether householders are putting out their garbage for collection on the wrong day.

It was when the topic of the social value of privacy was raised that it occurred to me that probably the closest model to what people really want lay in the magnificent building all around us. The gentleman's club offered a social network restricted to "the right kind of people" - that is, people enough like you that they would welcome your fellow membership and treat you as you would wish to be treated. Within the confines of the club, a member like Fogg, who spent all day every day there, would have had, I imagine, little privacy from the other members or, especially, from the club staff, whose job it was to know what his favorite drink was and where and when he liked it served. But the club afforded members considerable protection from the outside world. Pause to imagine what Facebook would be like if the interface required each would-be addition to your friends list to be proposed and seconded and incomers could be black-balled by the people already on your list.

This sort of web of trust is the structure the cryptography software PGP relies on for authentication: when you generate your public key, you are supposed to have it signed by as many people as you can. Whenever someone wants to verify the key, they can look through the list of signers for someone they themselves know and trust. The big question with such a structure is how you make managing it scale to a large population. Things are a lot easier when it's just a small, relatively homogeneous group you have to deal with. And, I suppose, when you have staff to support the entire enterprise.
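In code, the verification step is just a walk over signatures looking for a path back to someone you already trust. A toy sketch, with invented data structures far simpler than a real PGP keyring:

```python
# Toy web-of-trust check: accept a key if a short chain of signers
# leads back to a key I trust directly. Illustrative only.

signatures = {
    "alice_key": {"bob_key", "carol_key"},   # who has signed Alice's key
    "dave_key": {"mallory_key"},
}

my_trusted_keys = {"bob_key"}

def can_trust(key, max_depth=2, seen=None):
    """Accept a key if a chain of signers, at most max_depth long,
    leads back to a key I trust directly."""
    if seen is None:
        seen = set()
    if key in my_trusted_keys:
        return True
    if max_depth == 0 or key in seen:
        return False
    seen.add(key)
    return any(can_trust(s, max_depth - 1, seen)
               for s in signatures.get(key, ()))

print(can_trust("alice_key"))  # True: Bob, whom I trust, signed it
print(can_trust("dave_key"))   # False: no trusted chain
```

The scaling problem in the paragraph above shows up here directly: the graph of signatures, and the bookkeeping around it, grows much faster than any one member's circle of personal acquaintance.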

We talk a lot about the risks of posting too much information to things like Facebook, but that may not be its biggest issue. Just as traffic data can be more revealing than the content of messages, complex social linkages make it impossible to anonymize databases: who your friends are may be more revealing than your interactions with them. As governments and corporations talk more and more about making "anonymized" data available for research use, this will be an increasingly large issue. An example: a little-known incident in 2005, when the database of a month's worth of UK telephone calls was exported to the US with individuals' phone numbers hashed to "anonymize" them. An interesting technological fix comes from Microsoft in the notion of differential privacy, a system for protecting databases both against current re-identification and against attacks with external data in the future. The catch, if it is one, is that you must assign to your database a sort of query budget in advance - and when it's used up you must burn the database because it can no longer be protected.
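For the curious, the standard construction behind the query-budget idea is the Laplace mechanism: answer counting queries with noise scaled to 1/epsilon, and subtract each query's epsilon from a budget fixed in advance. A toy sketch (the interface is invented for illustration, not Microsoft's actual system):

```python
import random

# Toy sketch of a differential-privacy query budget. A counting query
# changes by at most 1 per person (sensitivity 1), so Laplace noise of
# scale 1/epsilon suffices; each query spends epsilon from a budget set
# in advance, and when the budget is gone the database must be retired.

def laplace(scale):
    # The difference of two exponentials is Laplace-distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

class PrivateDatabase:
    def __init__(self, rows, total_epsilon=1.0):
        self.rows = rows
        self.budget = total_epsilon   # assigned in advance

    def noisy_count(self, predicate, epsilon=0.5):
        if epsilon > self.budget:
            raise RuntimeError("privacy budget spent - retire the database")
        self.budget -= epsilon
        true_count = sum(1 for r in self.rows if predicate(r))
        return true_count + laplace(1.0 / epsilon)

db = PrivateDatabase([{"smoker": True}, {"smoker": False}], total_epsilon=1.0)
print(db.noisy_count(lambda r: r["smoker"]))  # noisy answer; budget now 0.5
print(db.noisy_count(lambda r: r["smoker"]))  # spends the rest
# A third query would raise: the budget is exhausted.
```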

We do know one helpful thing: what price club members are willing to pay for the services their club provides. Public opinion polls are a crude tool for measuring what privacy intrusions people will actually put up with in their daily lives. A study by Rand Europe released late last year attempted to examine such things by framing them in economic terms. The good news is they found that you'd have to pay people £19 to get them to agree to provide a DNA sample to include in their passport. The weird news is that people would pay £7 to include their fingerprints. You have to ask: what pitch could Rand possibly have made that would make this seem worth even one penny to anyone?

Hm. Fingerprints in my passport, or a walk across a beautiful, mosaic floor to a fine meal in a room with Corinthian columns, 25-foot walls of books, and a staff member who politely fails to notice that I have not quite conformed to the dress code? I know which is worth paying for if you can afford it.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

January 7, 2011

Scanning the TSA

There are, Bruce Schneier said yesterday at the Electronic Privacy Information Center mini-conference on the TSA (video should be up soon), four reasons why airport security deserves special attention, even though it directly affects a minority of the population. First: planes are a favorite terrorist target. Second: they have unique failure characteristics - that is, the plane crashes and everybody dies. Third: airlines are national symbols. Fourth: planes fly to countries where terrorists are.

There's a fifth he didn't mention but that Georgetown lawyer Pablo Molina and We Won't Fly founder James Babb did: TSAism is spreading. Random bag searches on the DC Metro and the New York subways. The TSA talking about expanding its reach to shopping malls and hotels. And something I found truly offensive, giant LED signs posted along the Maryland highways announcing that if you see anything suspicious you should call the (toll-free) number below. Do I feel safer now? No, and not just because at least one of the incendiary devices sent to Maryland state offices yesterday apparently contained a note complaining about those very signs.

Without the sign, if you saw someone heaving stones at the cars you'd call the police. With it, you peer nervously at the truck in front of you. Does that driver look trustworthy? This is, Schneier said, counter-productive because what people report under that sort of instruction is "different, not suspicious".

But the bigger flaw is cover-your-ass backward thinking. If someone tries to bomb a plane with explosives in a printer cartridge, missing a later attempt using the exact same method will get you roasted for your stupidity. And so we have a ban on flying with printer cartridges over 500g and, during December, restrictions on postal mail, something probably few people in the US even knew about.

Jim Harper, a policy scholar with the Cato Institute and a member of the Department of Homeland Security's Data Privacy and Integrity Advisory Committee, outlined even more TSA expansion. There are efforts to create mobile lie detectors that measure physiological factors like eye movements and blood pressure.

Technology, Lillie Coney observed, has become "like butter - few things are not improved if you add it."

If you're someone charged with blocking terrorist attacks you can see the appeal: no one wants to be the failure who lets a bomb onto a plane. Far, far better if it's the technology that fails. And so expensive scanners roll through the nation's airports despite the expert assessment - on this occasion, from Schneier and Ed Luttwak, a senior associate with the Center for Strategic and International Studies - that the scanners are ineffective, invasive, and dangerous. As Luttwak said, the machines pull people's attention, eyes, and brains away from the most essential part of security: watching and understanding the passengers' behavior.

"[The machine] occupies center stage, inevitably," he said, "and becomes the focus of an activity - not aviation security, but the operation of a scanner."

Equally offensive in a democracy, many speakers argued, is the TSA's secrecy and lack of accountability. Even Meera Shankar, the Indian ambassador, could not get much of a response to her complaint from the TSA, Luttwak said. "God even answered Job." The agency sent no representative to this meeting, which included Congressmen, security experts, policy scholars, lawyers, and activists.

"It's the violation of the entire basis of human rights," said the Stanford and Oxford lawyer Chip Pitts around the time that the 112th Congress was opening up with a bipartisan reading of the US Constitution. "If you are treated like cattle, you lose the ability to be an autonomous agent."

As Libertarian National Committee executive director Wes Benedict said, "When libertarians and Ralph Nader agree that a program is bad, it's time for our government to listen up."

So then, what are the alternatives to spending - so far, in the history of the Department of Homeland Security, since 2001 - $360 billion, not including the lost productivity and opportunity costs to the US's 100 million flyers?

Well, first of all, stop being weenies. The number of speakers who reminded us that the US was founded by risk-takers was remarkable. More people, Schneier noted, are killed in cars every month than died on 9/11. Nothing, Ralph Nader said, is spent on the 58,000 Americans who die in workplace accidents every year or the many thousands more who are killed by pollution or medical malpractice.

"We need a comprehensive valuation of how to deploy resources in a rational manner that will be effective, minimally invasive, efficient, and obey the Constitution and federal law," Nader said

So: dogs are better at detecting explosives than scanners. Intelligent profiling can whittle down the mass of suspects to a more manageable group than "everyone" in a giant game of airport werewolf. Instead, at the moment we have magical thinking, always protecting ourselves from the last attack.

"We're constantly preparing for the rematch," said Lillie Coney. "There is no rematch, only tomorrow and the next day." She was talking as much about Katrina and New Orleans as 9/11: there will always, she said, be some disaster, and the best help in those situations is going to come from individuals and the people around them. Be prepared: life is risky.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

December 31, 2010

Good, bad, ugly...the 2010 that was

Every year deserves its look back, and 2010 is no exception. On the good side, the younger generation beginning to enter politics is bringing with it a little more technical sense than we've had in government before. On the bad side, the year's many privacy scandals reminded us all how big a risk we take in posting as much information online as we do. The ugly...we'd have to say the scary new trends in malware. Happy New Year.

By the numbers:

$5.3 billion: the Google purchase offer that Groupon turned down. Smart? Stupid? Shopping and social networks ought to mix combustibly (and could hit local newspapers and their deal flyers), but it's a labor-intensive business. The publicity didn't hurt: Groupon has now managed to raise half a billion dollars on its own. They aren't selling anything we want to buy, but that doesn't seem to hurt Wal-Mart or McDonalds.

$497 million: the amount Harvard scientists Tyler Moore and Benjamin Edelman estimate that Google is earning from "typosquatting". Pocket change, really: Google's 2009 revenues were $23 billion. But still.

15 million (estimated): number of iPads sold since its launch in May. It took three decades of commercial failures for someone to finally launch a successful tablet computer. In its short life the iPad has been hailed and failed as the savior of print publications, and halved Best Buy's laptop sales. We still don't want one - but we're keyboard addicts, hardly its target market.

250,000: diplomatic cables channeled to Wikileaks. We mention this solely to enter The Economist's take on Bruce Sterling's take into the discussion. Wikileaks isn't at all the crypto-anarchy that physicist Timothy C. May wrote about in 1992. May's essay imagined the dark uses of encrypted secrecy; Wikileaks is, if anything, the opposite of it.

500: airport scanners deployed so far in the US, at an estimated cost of $80 million. For 2011, Obama has asked for another $88 million for the next round of installations. We'd like fewer scanners and the money instead spent on...well, almost anything else, really. Intelligence, perhaps?

65: Percentage of Americans that Pew Internet says have paid for Internet content. Yeah, yeah, including porn. We think it's at least partly good news.

58: Number of investigations (countries and US states) launched into Google's having sniffed approximately 600Gb of data from open WiFi connections, which the company admitted in May. The progress of each investigation is helpfully tallied by SearchEngineLand. Note that the UK's ICO's reaction was sufficiently weak that MPs are complaining.

24: Hours of Skype outage. Why are people writing about this as though it were the end of Skype? It was a lot more shocking when it happened to AT&T in 1990 - in those days, people only had one phone number!

5: number of years I've wished Google would eliminate useless shopping aggregator sites from its search results listings. Or at least label them and kick them to the curb.

2: Facebook privacy scandals that seem to have ebbed, leaving less behavioral change than we'd like in their wake. In January, Facebook founder and CEO Mark Zuckerberg opined that privacy is no longer a social norm; in May the company revamped its privacy settings to an uproar in response (and not for the first time). Still, the service had 400 million users at the beginning of 2010 and has more than 500 million now. Resistance requires considerable anti-social effort, though the cool people have, of course, long since fled.

1: Stuxnet worm. The first serious infrastructure virus. You knew it had to happen.

In memoriam:

- Kodachrome. The Atlantic reports that December 30, 2010 saw the last-ever delivery of Kodak's famous photographic film. As they note, the specific hues and light-handling of Kodachrome defined the look of many decades of the 20th century. Pause to admire The Atlantic's selection of the 75 best pictures they could find: digital has many wonderful qualities, but these seem to have a three-dimensional roundness you don't see much any more. Or maybe we just forget to look.

- The 3.5in floppy disk. In April, Sony announced it would stop making the 1.44MB floppy disk that defined the childhoods of today's 20-somethings. The first video clip I ever downloaded, of the exploding whale in Oregon (famous from a Web site and a Dave Barry column), required 11 floppy disks to hold it. You can see why it's gone.

- Altavista: A leaked internal memo puts Altavista on Yahoo!'s list of services due for closure. Before Google, Altavista was the best search engine by a long way; if it had focused on continuing to improve its search algorithms instead of cluttering up its front page in line with the 1995 fad for portals, it might still be. Google's overwhelming success had as much to do with its clean, fast-loading design as with its superior ability to find stuff. Altavista also pioneered online translation with its Babelfish (and don't you have to love a search engine that quotes Douglas Adams?).

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

December 24, 2010

Random acts of security

When I was in my 20s in the 1970s, I spent a lot of time criss-crossing the US by car. One of the great things about it, as I said to a friend last week, was the feeling of ownership that gave me wherever I was: waking up under the giant blue sky in Albuquerque, following the Red River from Fargo to Grand Forks, or heading down the last, dull hour of New York State Thruway to my actual home, Ithaca, NY, it was all part of my personal backyard. This, I thought many times, is my country!

The recent movie (and the earlier Walter Kirn novel) Up in the Air highlighted the fact that the world's most frequent flyers feel the same way about airports. When you've traversed the same airports so many times that you've developed a routine it's hard not to feel as smug as George Clooney's character when some disorganized person forgets to take off her watch before going through the metal detector. You, practiced and expert, slide through smoothly without missing a beat. The check-in desk staff and airline club personnel ask how you've been. You sit in your familiar seat on the plane. You even know the exact moment in the staff routine to wander back to the galley and ask for a mid-flight cup of tea.

Your enemy in this comfortable world is airport security, which introduces each flight by putting you back in your place as an interloper.

Our equivalent back then was the Canadian border, which we crossed in quite isolated places sometimes. The border highlighted a basic fact of human life: people get bored. At the border crossing between Grand Forks, ND and Winnipeg, Manitoba, for example, the guards would keep you talking until the next car hove into view. Sometimes that was one minute, sometimes 15.

We - other professional travelers and I - had a few other observations. If you give people a shiny, new toy they will use it, just for the novelty. One day when I drove through Lewiston-Queenston they had drug-sniffing dogs on hand to run through and around the cars stopped for secondary screening. Fun! I was coming back from a folk festival in a pickup truck with a camper on the back, so of course I was pulled over. Duh: what professional traveler who crosses the border 12 times a year risks having drugs in their car?

Cut to about a week ago, at Memphis airport. It was 10am on a Saturday, and the traffic approaching the security checkpoint was very thin. The whole-body image scanners - expensive, new, the latest in cover-your-ass-ness - are in theory only for secondary screening: you go through them if you alarm the metal detectors or are randomly selected.

How does that work? When there's little traffic everyone goes through the scanner. For the record, I opted out and was given an absolutely professional and courteous pat-down, in contrast to the groping reports in the media for the last month. Yes: felt around under my waistband and hairline. No: groping. You've got to love the Net's many charming inhabitants: when I posted this report to a frequent flyer forum a poster hazarded that I was probably old and ugly.

My own theory is simply that it was early in the day, and everyone was rested and fresh and hadn't been sworn at a whole lot yet. So no one was feeling stressed out or put-upon by a load of uppity, obnoxious passengers.

It seems clear, however, that if you wanted to navigate security successfully carrying items that are typically unwanted on a flight, your strategy for reducing the odds of attracting extra scrutiny would be fairly simple, although the exact opposite of what experienced (professional) travelers are in the habit of doing:

- Choose a time when it's extremely crowded. Scanners are slower than metal detectors, so the more people there are the smaller the percentage going through them. (Or study the latest in scanner-defeating explosives fashions.)

- Be average and nondescript, someone people don't notice particularly or feel disposed to harass when they're in a bad mood. Don't be a cute, hot young woman; don't be a big, fat, hulking guy; don't wear clothes that draw the eye: expensive designer fashions, underwear, Speedos, a nun's habit (who knows what that could hide? and anyway isn't prurient curiosity about what could be under there a thing?).

- Don't look rich, powerful, special, or attitudinous. The TSA is like a giant replication of Stanley Milgram's experiment. Who's the most fun to roll over? The business mogul or the guy just like you who works in a call center? The guy with the video crew spoiling for a fight, or the guy who treats you like a servant? The sexy young woman who spurned you in high school or the crabby older woman like your mean second-grade teacher? Or the wheelchair-bound or medically challenged who just plain make you uncomfortable?

- When you get in line, make sure you're behind one or more of the above eye-catching passengers.

Note to TSA: you think the terrorists can't figure this stuff out, too? The terrorist will be the last guy your agents will pick for closer scrutiny.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 26, 2010

Like, unlike

Some years back, the essayist and former software engineer Ellen Ullman wrote about the tendency of computer systems to infect their owners. The particular infection she covered in Close to the Machine: Technophilia and Its Discontents was databases. Time after time, she saw good, well-meaning people commission a database to help staff or clients, and then begin to use it to monitor those they originally intended to help. Why? Well, because they *can*.

I thought - and think - that Ullman was onto something important there, but that this facet of human nature is not limited to computers and databases. Stanley Milgram's 1961 experiments showed that humans under the influence of apparent authority will obey instructions to administer treatment that outside of such a framework they would consider abhorrent. This seems to me sufficient answer to Roger Ebert's comment that no TSA agent has yet refused to perform the "enhanced pat-down", even on a child.

It would almost be better if the people running the NHS Choices Web site had been infected with the surveillance bug, because then they would be simply wrong. Instead, the NHS is more complicatedly wrong: it has taken the weird decision that what we all want is to share with our Facebook friends the news that we have just looked at the page on gonorrhea. Or, given the well-documented privacy issues with Facebook's rapid colonization of the Web via the "Like" button, to allow Facebook to track our every move whether we're logged in or not.

I can only think of two possibilities for the reasoning behind this. One is that NHS managers have little concept of the difference between their site, intended to provide patient information and guidance, and that of a media organization needing advertising to stay afloat. It's one of the truisms of new technologies that they infiltrate the workplace through the medium of people who already use them: email, instant messaging, latterly social networks. So maybe they think that because they love Facebook the rest of us must, too. My other thought is that NHS managers think this is what we want because their grandkids have insisted they get onto Facebook, where they now occupy their off-hours hitting the "like" button and poking each other and think this means they're modern.

There's the issue Tim Berners-Lee has raised, that Facebook and other walled gardens are dividing the Net up into incompatible silos. The much worse problem, at least for public services and those of us who must use them, is the insidiously spreading assumption that if a new technology is popular it must be used no matter what the context. The effect is about as compelling as a TSA agent offering you a lollipop after your pat-down.

Most likely, the decision to deploy the "Like" button started with the simple, human desire for feedback. At some point everyone who runs a Web site wonders what parts of the site get read the most...and then by whom...and then what else they read. It's obviously the right approach if you're a media organization trying to serve your readers better. It's a ludicrously mismatched approach if you're the NHS because your raison d'être is not to be popular but to provide the public with the services they need at the most vulnerable times in their lives. Your page on rare lymphomas is not less valuable or important just because it's accessed by fewer people than the pages on STDs, nor are you actually going to derive particularly useful medical research data from finding that people who read about lymphoma also often read pages on osteoporosis. But it's easy, quick, and free to install Google Analytics or Facebook Like, and so people do it without thought.
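
To make the mechanics concrete: a button like this is typically served from the third party's own servers, so the visitor's browser - cookies and all - contacts that third party for every page carrying the widget. Here is a toy sketch in Python of what any widget provider could log on such a request; the names, URL, and values are mine, purely illustrative, not Facebook's or the NHS's actual code:

    # Hypothetical sketch: what a third-party widget host can record when a
    # browser fetches its button. The browser volunteers the Referer header
    # (the page being read) and any cookies previously set by that host.
    visits = []

    def log_widget_request(headers, cookies):
        page_visited = headers.get("Referer")   # e.g. the page on gonorrhea
        user = cookies.get("session_id")        # identifies a logged-in user
        visits.append((user, page_visited))

    # The visitor never clicks anything; merely loading the page is enough.
    log_widget_request(
        headers={"Referer": "https://www.nhs.uk/conditions/gonorrhea"},
        cookies={"session_id": "user-12345"},
    )

No clicking of "Like" required, in other words; the tracking is a side effect of how the button is delivered.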

Both of these incidents have also exposed once and for all the limited value of privacy policies. For one thing, a patient in distress is not going to take time out from bleeding to read the fine print ("when you visit pages on our site that display a Facebook Like button, Facebook will collect information about your visit") or check for open, logged-in browser windows. The NHS wants its sites to be trusted; but that means more than simply being medically accurate; it requires implementing confidentiality as well. The NHS's privacy policy is meaningless if you need to be a technical expert to exercise any choice. Similarly, who cares what the TSA's privacy policy says if the simple desire to spend Christmas with your family requires you to submit to whatever level of intimate inspection the agent on the ground that day feels like dishing out? What privacy policy makes up for being left covered in urine spilled from your roughly handled urostomy bag? Milgram moments, both.

It's at this point that we need our politicians to act in our interests, because the thinking has to change at the top level.

Meantime, if you're traveling in the US this Christmas, the ACLU and Edward Hasbrouck have handy guides to your rights. But pragmatically, if you do get patted down and really want to make your flight, it seems like your best policy is to lie back and think of the country of your choice.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 19, 2010

Power to the people

We talk often about the fact that ten years of effort - lawsuits, legislation, technology - on the part of the copyright industries has made barely a dent in the amount of material available online as unauthorized copies. We talk less about the similar situation that applies to privacy despite years of best efforts by Privacy International, Electronic Privacy Information Center, Center for Democracy and Technology, Electronic Frontier Foundation, Open Rights Group, No2ID, and newcomer Big Brother Watch. The last ten years have built Google, and Facebook, and every organization now craves large data stores of personal information that can be mined. Meanwhile, governments are complaisant, possibly because they have subpoena power. It's been a long decade.

"Information is the oil of the 1980s," wrote Thomas McPhail and Brenda McPhail in 1987 in an article discussing the politics of the International Telecommunications Union, and everyone seems to take this encomium seriously.

William Heath spent his early career founding and running Kable, a consultancy specializing in government IT, and the question he focused on a lot was how to create the ideal government for the digital era. He has been saying for many months now that there's a gathering wave of change. His idea is that the *new* new thing is technologies to give us back control and up-end the current situation in which everyone behaves as if they own all the information we give them. But it's their data only in exactly the same way that taxpayers' money belongs to the government. They call it customer relationship management; Heath calls the data we give them volunteered personal information and proposes instead vendor relationship management.

Always one to put his effort where his mouth is (Heath helped found the Open Rights Group, the Foundation for Policy Research, and the Dextrous Web as well as Kable), Heath has set up not one, but two companies. The first, Ctrl-Shift, is a research and advisory business to help organizations adjust and adapt to the power shift. The second, Mydex, is a platform now being prototyped in partnership with the Department of Work and Pensions and several UK councils (PDF). Set up as a community interest company, Mydex is asset-locked, to ensure that the company can't suddenly reverse course and betray its customers and their data.

The key element of Mydex is the personal data store, which is kept under each individual's own control. When you want to do something - renew a parking permit, change your address with a government agency, rent a car - you interact with the remote council, agency, or company via your PDS. Independent third parties verify the data you present. To rent a car, for example, you might present a token from the vehicle licensing bureau that authenticates your age and right to drive and another from your bank or credit card company verifying that you can pay for the rental. The rental company only sees the data you choose to give it.
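
For the technically curious, the shape of that exchange is easy to sketch. This is a toy illustration in Python - the names are mine, not Mydex's published design - of the one idea that matters: verifiers vouch for individual attributes, and the store releases only the attributes its owner approves for a given transaction:

    from dataclasses import dataclass

    @dataclass
    class VerifiedClaim:
        attribute: str    # e.g. "entitled_to_drive"
        value: str
        verifier: str     # e.g. "vehicle licensing bureau"

    class PersonalDataStore:
        """Held under the individual's own control."""
        def __init__(self):
            self._claims = {}

        def add_claim(self, claim):
            self._claims[claim.attribute] = claim

        def present(self, requested, approved):
            # Release only what is both asked for and owner-approved;
            # everything else in the store stays invisible.
            return [self._claims[a] for a in requested
                    if a in approved and a in self._claims]

    pds = PersonalDataStore()
    pds.add_claim(VerifiedClaim("entitled_to_drive", "yes", "licensing bureau"))
    pds.add_claim(VerifiedClaim("can_pay", "yes", "bank"))
    pds.add_claim(VerifiedClaim("home_address", "[elsewhere]", "council"))

    # The car rental company asks for two attributes; the home address,
    # though stored, never enters the exchange.
    tokens = pds.present(requested=["entitled_to_drive", "can_pay"],
                         approved=["entitled_to_drive", "can_pay"])

The real engineering - signatures, revocation, consent records - is of course harder, but the data flow above is the privacy point.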

It's Heath's argument that such a setup would preserve individual privacy and increase transparency while simultaneously saving companies and governments enormous sums of money.

"At the moment there is a huge cost of trying to clean up personal data," he says. "There are 60 to 200 organisations all trying to keep a file on you and spending money on getting it right. If you chose, you could help them." The biggest cost, however, he says, is the lack of trust on both sides. People vanish off the electoral rolls or refuse to fill out the census forms rather than hand over information to government; governments treat us all as if we were suspected criminals when all we're trying to do is claim benefits we're entitled to.

You can certainly see the potential. Ten years ago, when they were talking about "joined-up government", MPs dealing with constituent complaints favored the notion of making it possible to change your address (for example) once and have the new information propagate automatically throughout the relevant agencies. Their idea, however, was a huge, central data store; the problem for individuals (and privacy advocates) was that centralized data stores tend to be difficult to keep accurate.

"There is an oft-repeated fallacy that existing large organizations meant to serve some different purpose would also be the ideal guardians of people's personal data," Heath says. "I think a purpose-created vehicle is a better way." Give everyone a PDS, and they can have the dream of changing their address only once - but maintain control over where it propagates.

There are, as always, key questions that can't be answered at the prototype stage. First and foremost is the question of whether and how the system can be subverted. Heath's intention is that we should be able to set our own terms and conditions for their use of our data - up-ending the present situation again. We can hope - but it's not clear that companies will see it as good business to differentiate themselves on the basis of how much data they demand from us when they don't now. At the same time, governments who feel deprived of "their" data can simply pass a law and require us to submit it.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 29, 2010

Wanted: less Sir Humphrey, more shark


Seventeen MPs showed up for Thursday's Backbenchers' Committee debate on privacy and the Internet, requested by Robert Halfon (Con-Harlow). They tell me this is a sell-out crowd. The upshot: Google and every other Internet company may come to rue the day that Google sent its Street View cars around Britain. It crossed a line.

That line is this: "Either your home is your castle or it's not," as Halfon put it, talking about StreetView and an email he had received from a vastly upset woman in Cornwall whose home had been captured and posted on the Web. It's easy for Americans to forget how deep the "An Englishman's home is his castle" thing goes.

Halfon's central question: are we sleepwalking into a privatized surveillance society, and can we stop it? "If no one has any right to privacy, we will live in a Big Brother society run by private companies." StreetView, he said, "is brilliant - but they did it without permission." Of equal importance to Halfon is the curious incident of the silent Information Commissioner (unlike, apparently, his equivalents everywhere else in the world) and Google's sniffed wi-fi data. The recent announcement that the sniffed data includes contents of email messages, secure Web pages, and passwords has prompted the ICO to take another look.

The response of the ICO, Halfon said, "has been more like Sir Humphrey than a shark with teeth, which is what it should be."

Google is only one offender; Julian Huppert (LibDem-Cambridge) listed some of the other troubles, including this week's release of Firesheep, a Firefox add-on designed to demonstrate Facebook's security failings. Several speakers raised the issue of the secret BT/Phorm trials. A key issue: while half the UK's population choose to be Facebook users (!), and many more voluntarily use Google daily, no one chose to be included in StreetView; we did not ask to be its customers.

So Halfon wants two things. He wants an independent commission of inquiry convened that would include MPs with "expertise in civil liberties, the Internet, and commerce" to suggest a new legal framework that would provide a means of redress, perhaps through an Internet bill of rights. What he envisions is something that polices the behavior of Internet companies the way the British Medical Association or the Law Society provides voluntary self-regulation for their fields. In cases of infringement, fines, perhaps.

In the ensuing discussion many other issues were raised. Huppert mentioned "chilling" (Labour) government surveillance, and hoped that portions of the Digital Economy Act might be repealed. Huppert has also been asking Parliamentary Questions about the is-it-still-dead? Interception Modernization Programme; he is still checking on the careful language of the replies. (Asked about it this week, the Home Office told me they can't speculate in advance about details that will be provided "in due course"; that what is envisioned is a "program of work on our communications abilities"; that it will be communications service providers, probably as defined in RIPA Section 2(1), storing data, not a government database; and that the legislation to safeguard against misuse will probably, but not certainly, be a statutory instrument.)

David Davis (Con-Haltemprice and Howden) wasn't too happy even with the notion of decentralized data held by CSPs, saying these would become a "target for fraudsters, hackers and terrorists". Damian Hinds (Con-East Hampshire) dissected Google's business model (including £5.5 million of taxpayers' money the UK government spent on pay-per-click advertising in 2009).

Perhaps the most significant thing about this debate is the huge rise in the level of knowledge. Many took pains to say how much they value the Internet and love Google's services. This group know - and care - about the Internet because they use it, unlike 1995, when an MP was about as likely to read his own email as he was to shoot his own dog.

Not that I agreed with all of them. Don Foster (LibDem-Bath) and Mike Weatherley (Con-Hove) were exercised about illegal file-sharing (Foster and Huppert agreed to disagree about the DEA, and Damian Collins (Con-Folkestone and Hythe) complained that Google makes money from free access to unauthorized copies). Nadine Dorries (Con-Mid Bedfordshire) wanted regulation to protect young people against suicide sites.

But still. Until recently, Parliament's definition of privacy was celebrities' need for protection from intrusive journalists. This discussion of the privacy of individuals is an extraordinary change. Pressure groups like PI, Open Rights Group, and No2ID helped, but there's also a groundswell of constituents' complaints. Mark Lancaster (Con-Milton Keynes North) noted that a women's refuge at a secret location could not get Google to respond to its request for removal and that the town of Broughton formed a human chain to block the StreetView car. Even the attending opposition MP, Ian Lucas (Lab-Wrexham), favored the commission idea, though he still had hopes for self-regulation.

As for next steps, Ed Vaizey (Con-Wantage and Didcot), the Minister for Communication, Culture, and the Creative Industries, said he planned to convene a meeting with Google and other Internet companies. People should have a means of redress and somewhere to turn for mediation. For Halfon that's still not enough. People should have a choice in the first place.

To be continued...

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 23, 2010

An affair to remember

Politicians change; policies remain the same. Or if they don't, they return like the monsters in horror movies that end with the epigraph, "It's still out there..."

Cut to 1994, my first outing to the Computers, Freedom, and Privacy conference. I saw: passionate discussions about the right to strong cryptography. The counterargument from government and law enforcement and security service types was that yes, strong cryptography was a fine and excellent thing at protecting communications from prying eyes and for that very reason we needed key escrow to ensure that bad people couldn't say evil things to each other in perfect secrecy. The listing of organized crime, terrorists, drug dealers, and pedophiles as the reasons why it was vital to ensure access to cleartext became so routine that physicist Timothy May dubbed them "The Four Horsemen of the Infocalypse". Cypherpunks opposed restrictions on the use and distribution of strong crypto; government types wanted at the very least a requirement that copies of secret cryptographic keys be provided and held in escrow against the need to decrypt in case of an investigation. The US government went so far as to propose a technology of its own, complete with back door, called the Clipper chip.

Eventually, the Clipper chip was cracked by Matt Blaze, and the needs of electronic commerce won out over the paranoia of the military and restrictions on the use and export of strong crypto were removed.

Cut to 2000 and the run-up to the passage of the UK's Regulation of Investigatory Powers Act. Same Four Horsemen, same arguments. Eventually RIPA passed with the requirement that individuals disclose their cryptographic keys - but without key escrow. Note that it's just in the last couple of months that someone - a teenager - has gone to jail in the UK for the first time for refusing to disclose their key.

It is not just hype by security services seeking to evade government budget cuts to say that we now have organized cybercrime. Stuxnet rightly has scared a lot of people into recognizing the vulnerabilities of our infrastructure. And clearly we've had terrorist attacks. What we haven't had is a clear demonstration by law enforcement that encrypted communications have impeded the investigation.

A second and related strand of argument holds that communications data - that is, traffic data such as email headers and Web addresses - must be retained and stored for some lengthy period of time, again to assist law enforcement in case an investigation is needed. As the Foundation for Information Policy Research and Privacy International have consistently argued for more than ten years, such traffic data is extremely revealing. Yes, that's why law enforcement wants it; but it's also why the American Library Association has consistently opposed handing over library records. Traffic data doesn't just reveal who we talk to and care about; it also reveals what we think about. And because such information is of necessity stored without context, it can also be misleading. If you already think I'm a suspicious person, the fact that I've been reading proof-of-concept papers about future malware attacks sounds like I might be a danger to cybersociety. If you know I'm a journalist specializing in technology matters, that doesn't sound like so much of a threat.

And so to this week. The former head of the Department of Homeland Security, Michael Chertoff, at the RSA Security Conference compared today's threat of cyberattack to nuclear proliferation. The US's Secure Flight program is coming into effect, requiring airline passengers to provide personal data for the US to check 72 hours in advance (where possible). Both the US and UK security services are proposing the installation of deep packet inspection equipment at ISPs. And language in the UK government's Strategic Defence and Security Review (PDF) has led many to believe that what's planned is the revival of the we-thought-it-was-dead Interception Modernisation Programme.

Over at Light Blue Touchpaper, Ross Anderson links many of these trends and asks if we will see a resumption of the crypto wars of the mid-1990s. I hope not; I've listened to enough quivering passion over mathematics to last an Internet lifetime.

But as he says, it's hard to see one without the other. On the face of it, because the data "they" want to retain is traffic data and not content, encryption might seem irrelevant. But a number of trends are pushing people toward greater use of encryption. First and foremost is the risk of interception; many people prefer (rightly) to use secured https, SSH, or VPN connections when they're working over public wi-fi networks. Others secure their connections precisely to keep their ISP from being able to analyze their traffic. If data retention and deep packet inspection become commonplace, so will encrypted connections.

And at that point, as Anderson points out, the focus will return to long-defeated ideas like key escrow and restrictions on the use of encryption. The thought of such a revival is depressing; implementing any of them would be such a regressive step. If we're going to spend billions of pounds on the Internet infrastructure - in the UK, in the US, anywhere else - it should be spent on enhancing robustness, reliability, security, and speed, not building the technological infrastructure to enable secret, warrantless wiretapping.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 15, 2010

The elected dictatorship

I wish I had a nickel for every time I had the following conversation with some British interlocutor in the 1970s and 1980s:

BI: You should never have gotten rid of Nixon.

wg: He was a crook.

BI: They're all crooks. He was the best foreign policy president you ever had.

As if it were somehow touchingly naïve to expect that politicians should be held to standards of behaviour in office. (Look, I don't care if they have extramarital affairs; I care if they break the law.)

It is, however, arguable that the key element of my BIs' disapproval was that Americans had the poor judgment and bad taste to broadcast the Watergate hearings live on television. (Kids, this was 1972. There was no C-Span then.) If Watergate had happened in the UK, it's highly likely no one would ever have heard about it until 50 or however many years later the Public Records Office opened the archives.

Around the time I founded The Skeptic, I became aware of the significant cultural difference in how people behave in the UK versus the US when they are unhappy about something. Britons write to their MP. Americans...make trouble. They may write letters, but they are equally likely to found an organization and create a campaign. This do-it-yourself ethic is completely logical in a relatively young country where democracy is still taking shape.

Britain, as an older - let's be polite and call it mature - country, operates instead on a sort of "gentlemen's agreement" ethos (vestiges of which survive in the US Constitution, to be sure). You can get a surprising amount done - if you know the right people. That system works perfectly for the in-group, and so to effect change you either have to become one of them (which dissipates your original desire for change) or gate-crash the party. Sometimes, it takes an American...

This was Heather Brooke's introduction to English society. The daughter of British parents and the wife of a British citizen, burned out from years of investigative reporting on murders and other types of mayhem in the American South, she took up residence in Bethnal Green with her husband. And became bewildered when repeated complaints to the council and police about local crime produced no response. Stonewalled, she turned to writing her book Your Right to Know, which led her to make her first inquiries about viewing MPs' expenses. The rest is much-aired scandal.

In her latest book, The Silent State, Brooke examines the many ways that British institutions are structured to lock out the public. The most startling revelation: things are getting worse, particularly in the courts, where the newer buildings squeeze public and press into cramped, uncomfortable spaces in a way the older buildings never did. Certainly, the airport-style security that's now required for entry into Parliament buildings sends the message that the public are both unwelcome and not to be trusted (getting into Thursday's apComms meeting required standing outside in the chill and damp for 15 minutes while staff inspected and photographed one person at a time).

Brooke scrutinizes government, judiciary, police, and data-producing agencies such as the Ordnance Survey, and each time finds the same pattern: responsibility for actions cloaked by anonymity; limited access to information (either because the information isn't available or because it's too expensive to obtain); arrogant disregard for citizens' rights. And all aided by feel-good, ass-covering PR and the loss of independent local press to challenge it. In a democracy, she argues, it should be taken for granted that citizens have a right to get an answer when they ask how many violent attacks are taking place on their local streets, to take notes during court proceedings or Parliamentary sessions, and to access and use data whose collection they paid for. That many MPs seem to think of themselves as members of a private club rather than public servants was clearly shown by the five years of stonewalling Brooke negotiated in trying to get a look at their expenses.

In reading the book, I had a sudden sense of why electronic voting appeals to these people. It is yet another mechanism for turning what was an open system that anyone could view and audit - it doesn't take an advanced degree to be able to count pieces of paper - into one whose inner workings can effectively be kept secret. That its inner workings are also not understandable to MPs themselves is apparently a price they're willing to pay in return for removing much of the public's ability to challenge counts and demand answers. Secrecy is a habit of mind that spreads like fungus.

We talk a lot about rolling back newer initiatives like the many databases of Blair's and Brown's government, data retention, or the proliferation of CCTV cameras. But while we're trying to keep citizens from being run down by the surveillance state we should also be examining the way government organizes its operations and blocking the build-out of further secrecy. This is a harder and more subtle thing to do, but it could make the lives of the next generation of campaigners easier.

At least one thing has changed in the last 30 years, though: people's attitudes. In 2009, when the scandal over MPs' expenses broke, you didn't hear much about how other qualities meant we should forgive MPs. Britain wanted *blood*.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 24, 2010

Lost in a Haystack

In the late 1990s you could always tell when a newspaper had just gotten online because it would run a story about the Good Times virus.

Pause for historical detail: the Good Times virus (and its many variants) was an email hoax. An email message with the subject heading "Good Times" or, later, "Join the Crew", or "Penpal Greetings", warned recipients that opening email messages with that header would damage their computers or delete the contents of their hard drives. Some versions cited Microsoft, the FCC, or some other authority. The messages also advised recipients to forward the message to all their friends. The mass forwarding and subsequent complaints were the payload.

The point, in any case, is that the Good Times virus was the first example of mass social engineering that spread by exploiting not particularly clever psychology and a specific kind of technical ignorance. The newspaper staffers of the day were very much ordinary new users in this regard, and they would run the story thinking they were serving their readers. To their own embarrassment, of course. You'd usually see a retraction a week or two later.

Austin Heap, the progenitor of Haystack, software he claimed was devised to protect the online civil liberties of Iranian dissidents, seems more likely to have failed to understand what he was doing than to have been conducting an elaborate hoax. Either way, Haystack represents a significant leap upward in successfully taking mainstream, highly respected publications for a technical ride. Evgeny Morozov's detailed media critique underestimates the impact of the recession and staff cuts on an already endangered industry. We will likely see many more mess-equals-technology-plus-journalism stories because so few technology specialists remain in the post-recession mainstream media.

I first heard Danny O'Brien's doubts about Haystack in June, and his chief concern was simple and easily understood: no one was able to get a copy of the software to test it for flaws. For anyone who knows anything about cryptography or security, that ought to have been damning right out of the gate. The lack of such detail is why experienced technology journalists, including Bruce Schneier, generally avoided commenting on it. There is a simple principle at work here: the *only* reason to trust technology that claims to protect its users' privacy and/or security is that it has been thoroughly peer-reviewed - banged on relentlessly by the brightest and best and they have failed to find holes.

As a counter-example, let's take Phil Zimmermann's PGP, email encryption software that really has protected the lives and identities of far-flung dissidents. In 1991, when PGP first escaped onto the Net, interest in cryptography was still limited to a relatively small, though very passionate, group of people. The very first thing Zimmermann wrote in the documentation was this: why should you trust this product? Just in case readers didn't understand the importance of that question, Zimmermann elaborated, explaining how fiendishly difficult it is to write encryption software that can withstand prolonged and deliberate attacks. He was very careful not to claim that his software offered perfect security, saying only that he had chosen the best algorithms he could from the open literature. He also distributed the source code freely for review by all and sundry (who have to this day failed to find substantive weaknesses). He concludes: "Anyone who thinks they have devised an unbreakable encryption scheme either is an incredibly rare genius or is naive and inexperienced." Even the software's name played down its capabilities: Pretty Good Privacy.

When I wrote about PGP in 1993, PGP was already changing the world by up-ending international cryptography regulations, blocking mooted US legislation that would have banned the domestic use of strong cryptography, and defying patent claims. But no one, not even the most passionate cypherpunks, claimed the two-year-old software was the perfect, the only, or even the best answer to the problem of protecting privacy in the digital world. Instead, PGP was part of a wider argument taking shape in many countries over the risks and rewards of allowing civilians to have secure communications.

Now to the claims made for Haystack in its FAQ:

However, even if our methods were compromised, our users' communications would be secure. We use state-of-the-art elliptic curve cryptography to ensure that these communications cannot be read. This cryptography is strong enough that the NSA trusts it to secure top-secret data, and we consider our users' privacy to be just as important. Cryptographers refer to this property as perfect forward secrecy.

Without proper and open testing of the entire system - peer review - they could not possibly know this. The strongest cryptographic algorithm is only as good as its implementation. And even then, as Clive Robertson writes in Financial Cryptography, technology is unlikely to be a complete solution.

What a difference a sexy news hook makes. In 1993, the Clinton Administration's response to PGP was an FBI investigation that dogged Zimmermann for two years; in 2010, Hillary Clinton's State Department fast-tracked Haystack through the licensing requirements. Why such a happy embrace of Haystack rather than existing privacy technologies such as Freenet, Tor, or other anonymous remailers and proxies remains as a question for the reader.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 3, 2010

Beyond the zipline

When Aaron Sorkin (The West Wing, Sports Night) was signed to write the screenplay for a movie about Facebook, I think the general reaction was one of more or less bafflement. Sorkin has a great track record, sure, but how do you make a movie about a Web site, even if it's a social network? What are you going to show? People typing to each other?

Now that the movie is closer to coming out (October 1 in the US) and we're beginning to see sneak peek trailers, we can tell a lot more from the draft screenplay that's been floating around the Net. The copy I found is dated March 2009, and you can immediately tell it's the real thing: quality dialogue and construction, and the feel of real screenwriting expertise. Turns out, the way you write a screenplay about Facebook is to read the books, primarily the novelistic, not-so-admired Accidental Billionaires by Ben Mezrich, along with other published material, and look for the most dramatic bit of the story: the lawsuits eventually launched by the characters you're portraying. Through which, as a framing device, you can tell the story of the little social network that exploded. Or rather, Sorkin can. The script is a compelling read. (It's not actually clear to me that it can be improved by filming it.)

Judging from other commentaries, everyone seems to agree it's genuine, though there's no telling where in the production process that script was, how many later drafts there were, or how much it changed in filming and post-production. There's also no telling who leaked it or why: if it was intentional it was a brilliant marketing move, since you could hardly ask for more word-of-mouth buzz.

If anyone wanted to design a moral lesson for the guy who keeps saying privacy is dead, it might be this: having your deepest secrets turned out to portray you as a jerk who steals other people's ideas and codes them into the basis for a billion-dollar company, all because you want to stand out at Harvard and, most important, win the admiration of the girl who dumped you. Think the lonely pathos of the socially ostracized, often overlooked Jenny Humphrey in Gossip Girl crossed with the arrogant, obsessive intelligence of Sheldon Cooper in The Big Bang Theory. (Two characters I actually like, but they shouldn't breed.)

Neither the book nor the script is that: they're about as factual as 1978's The Buddy Holly Story or any other Hollywood biopic. Mezrich, who likes to write books about young guys who get rich fast (you can see why; he's gotten several bestsellers out of this approach), had no help from Facebook founder and CEO Mark Zuckerberg. What dialogue there is has been "re-created", and sources other than disaffected co-founder Eduardo Saverin are anonymous. Lacking sourcing (although of course the court testimony is public information), it's unclear how fictional the dramatization is. I'd have no problem with that if the characters weren't real people identified by their real names.

Places, too. Probably the real-life person/place/thing that comes off worst is Harvard, which in the book especially is practically a caricature of the way popular culture likes to depict it: filled with the rich, the dysfunctional, and the terminally arrogant who vie to join secretive, elite clubs that force them to take part in unsavoury hazing rituals. So much so that it was almost a surprise to read in Wikipedia that Mezrich actually went to Harvard.

Journalists and privacy advocates have written extensively about the consequences for today's teens of having their adolescent stupidities recorded permanently on Facebook or elsewhere, but Zuckerberg is already living with having his frat-boy early days of 2004 documented and endlessly repeated. Of course one way to avoid having stupid teenaged shenanigans reported is not to engage in them, but let's face it: how many of us don't have something in our pasts we'd just as soon keep out of the public eye? And if you're that rich that young, you have more opportunities than most people to be a jerk.

But if the only stories people can come up with about Zuckerberg date from before he turned 21, two thoughts occur. First, that Zuckerberg has as much right as anybody to grow up into a mature human being whose early bad judgement should be forgiven. To cite two examples: the tennis player Andre Agassi was an obnoxious little snert at 18 and a statesman of the game at 30; at 30 Bill Gates was criticized for not doing enough for charity but now at 54 is one of the world's most generous philanthropists. It is, therefore, somewhat hypocritical to demand that Zuckerberg protect today's teens from their own online idiocy while constantly republishing his follies.

Second, that outsized, hyperspeed business success might actually have forced him to grow up rather quickly. Let's face it, it's hard to make an interesting movie out of the hard work of coding and building a company.

And a third: by joining the 500 million and counting who are using Facebook we are collectively giving Zuckerberg enough money not to care either way.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

August 27, 2010

Trust the data, not the database

"We're advising people to opt out," said the GP, speaking of the Summary Care Records that are beginning to be uploaded to what is supposed to be eventually a nationwide database used by the NHS. Her reasoning goes this way. If you don't upload your data now you can always upload it later. If you do upload it now - or sit passively by while the National Health Service gets going on your particular area - and live to regret it you won't be able to get the data back out again.

You can find the form here, along with a veiled hint that you'll be missing out on something if you do opt out - like all those great offers of products and services companies always tell you you'll get if you sign up for their advertising. The Big Opt-Out Web site has other ideas.

The newish UK government's abrupt dismissal of the darling databases of last year has not dented the NHS's slightly confusing plans to put summary care records on a national system that will move control over patient data from your GP, who you probably trust to some degree, to...well, there's the big question.

In briefings for Parliamentarians conducted by the Open Rights Group in 2009, Emma Byrne, a researcher at University College London who has studied various aspects of healthcare technology policy, commented that the SCR was not designed with any particular use case in mind. Basic questions that an ordinary person asks before every technology purchase - who needs it? for what? under what circumstances? to solve what problem? - do not have clear answers.

"Any clinician understands the benefits of being able to search a database rather than piles of paper records, but we have to do it in the right way," Fleur Fisher, the former head of ethics, science, and information for the British Medical Association said at those same briefings. Columbia University researcher Steve Bellovin, among others, has been trying to figure out what that right way might look like.

As comforting as it sounds to say that the emergency care team looking after you will be able to look up your SCR and find out that, for example, you are allergic to penicillin and peanuts, in practice that's not how stuff happens - and isn't even how stuff *should* happen. Emergency care staff look at the patient. If you're in a coma, you want the staff to run the complete set of tests, not look you up in a database, see you're a diabetic, and assume it's a blood sugar problem. In an emergency, you want people to do what the data tells them, not what the database tells them.

Databases have errors, we know this. (Just last week, a database helpfully moved the town I live in from Surrey to Middlesex, for reasons best known to itself. To fix it, I must write them a letter and provide documentation.) Typing and cross-matching blood drawn by you from the patient in front of you is much more likely to have you transfusing the right type of blood into the right patient.

But if the SCR isn't likely to be so much used by the emergency staff we're all told would? might? find it helpful, it still opens up much broader possibilities of abuse. It's this part of the system that the GP above was complaining about: you cannot tell who will have access or under what circumstances.

GPs do, in a sense, have a horse in this race, in that if patient data moves out of their control they have lost an important element of their function as gatekeepers. But given everything we know about how and why large government IT projects fail, surely the best approach is small, local projects that can be scaled up once they're shown to be functional and valuable. And GPs are the people at the front lines who will be the first to feel the effects of a loss of patient trust.

A similar concern has kept me from joining a study whose goals I support, intended to determine if there is a link between mobile phone use and brain cancer. The study is conducted by an ultra-respectable London university; they got my name and address from my mobile network operator. But their letter notes that participation means giving them unlimited access to my medical records for the next 25 years. I'm 56, about the age of the earliest databases, and I don't know who I'll be in 25 years. Technology is changing faster than I am. What does this decision mean?

There's no telling. Had they said I was giving them permission for five years and then would be asked to renew, I'd feel differently about it. Similarly, I'd be more likely to agree had they said that under certain conditions (being diagnosed with cancer, dying, developing brain disease) my GP would seek permission to release my records to them. But I don't like writing people blank checks, especially with so many unknowns over such a long period of time. The SCR is a blank check.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series

August 20, 2010

Naming conventions

Eric Schmidt, the CEO of Google, is not a stupid person, although sometimes he plays one for media consumption. At least, that's how it seemed this week, when the Wall Street Journal reported that he had predicted, apparently in all seriousness, that the accumulation of data online may result in the general right for young people to change their names on reaching adulthood in order to escape the embarrassments of their earlier lives.

As Danah Boyd commented in response, it is to laugh.

For one thing, every trend in national and international law is going toward greater, permanent trackability. I know the UK is dumping the ID card and many US states are stalling on Real ID, but try opening a new bank account in the US or Europe, especially if you're a newly arrived foreigner. It's true that it's not so long ago - 20 years, perhaps - that people, especially in California, did change their names at the drop of an acid tablet. I'm fairly sure, for example, that the woman I once knew as Dancingtree Moonwater was not named that by her parents. But those days are gone with the anti-money laundering regulations, the anti-terrorist laws, and airport security.

For another, when is he imagining the adulthood moment to take place? When they're 17 and applying to college and need to cite their past records of good works, community involvement, and academic excellence? When they're 21 and graduating from college and applying for jobs and need to cite their past records of academic excellence, good works, and community involvement? I don't know about you, but I suspect that an admissions officer/prospective employer would be deeply suspicious of a kid coming of age today who had, apparently, no online history at all. Even if that child is a Mormon.

More fundamentally, changing your name doesn't change your identity (even if the change is because you got married). Investigators who track down people who've dropped out of their lives and fled to distant parts to start new ones often do so by, among other things, following their hobbies. You can leave your spouse, abandon your children, change jobs, and move to a distant location - but it isn't so easy to shake a passion for fly-fishing or 1957 Chevys. The right to reinvent yourself, as Action on Rights for Children's Terri Dowty pointed out during the campaign against the child-tracking database ContactPoint, is an important one. But that means letting minor infractions and youthful indiscretions fade into the mists of time, not to be pulled out and laughed at until, say, 30 years hence, rather than being recorded in a database that thinks it "knows" you.

I think Schmidt knows all this perfectly well. And I think if such an infrastructure - turn 16, create a new identity - were ever to be implemented the first and most significant beneficiary would be...Google. I would expect most people's search engine use to provide as individual a fingerprint as, well, fingerprints. (This is probably less true for journalists, who research something different every week and therefore display the database equivalent of multiple personality disorder.)

Clearly if the solution to young people posting silly stuff online where posterity can bite them on the ass is a change of name the only way to do it is to assign kids online-only personas at birth that can be retired when they reach an age of reason. But in such a scenario, some kids would wind up wanting to adopt their online personas as their real ones because their online reputation has become too important in their lives. In the knowledge economy, as plenty of others have pointed out, reputation is everything.

This is, of course, not a new problem. As usual. When, in 1995, DejaNews (bought by Google some years back to form the basis of the Google Groups archive) was created, it turned what had been ephemeral Usenet postings into a permanent archive. If you think people post stupid stuff on Facebook now, when they know their friends and families are watching, you should have seen the dumb stuff they posted on Usenet when they thought they were in the online equivalent of Benidorm, where no one knew them and there were no consequences. Many of those Usenet posters were students. But I also recall the newly appointed CEO of a public company who went around the WELL deleting all his old messages. Didn't mean there weren't copies...or memories.

There is a genuine issue here, though, and one that a very smart friend with a 12-year-old daughter worries about regularly: how do you, as a parent, guide your child safely through the complexities of the online world and ensure that your child has the best possible options for her future while still allowing her to function socially with her peers? Keeping her offline is not an answer. Neither are facile statements from self-interested CEOs who, insulated by great wealth and technological leadership, prefer to pretend to themselves that these issues have already been decided in their favor.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 23, 2010

Information Commissioner, where is thy sting?

Does anyone really know what their computers are doing? Lauren Weinstein asked recently in a different context.

I certainly don't. Mostly, I know what they're not doing, and then only when it inconveniences me. Don't most of us have an elaborate set of workarounds for things that are just broken enough not to work but not so broken that we have to fix them?

But companies - particularly companies who have made their fortunes by being clever with technology - are supposed to do better than that. And so we come to the outbreak of legal actions against Google for collecting wifi data - not only wireless network names (SSIDs) and information identifying individual computer devices (MAC addresses) while it was out photographing every house for StreetView, but also payload data. The company says this sniffing was accidental. Privacy International's Simon Davies says that no engineer he's spoken to buys this: either the company collected it deliberately or the company's internal management systems are completely broken.
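
For those wondering what, technically, separates mapping networks from intercepting them: wireless management frames broadcast network names and hardware addresses in the clear by design, while data frames on unencrypted networks carry users' actual traffic. A toy sketch with the Python scapy packet library shows how thin the line is - illustrative only, and the interface name here is hypothetical (it requires a card in monitor mode):

    from scapy.all import sniff, Dot11, Dot11Beacon, Dot11Elt, Raw

    def classify(pkt):
        if pkt.haslayer(Dot11Beacon):
            # Beacon frames announce the network name (SSID) and the
            # access point's MAC address in the clear, by design.
            ssid = pkt[Dot11Elt].info.decode(errors="replace")
            print("network:", ssid, "from", pkt[Dot11].addr2)
        elif pkt.haslayer(Dot11) and pkt.haslayer(Raw):
            # On an open network this is user payload - the part Google
            # says it collected by accident.
            print(len(pkt[Raw].load), "payload bytes from", pkt[Dot11].addr2)

    sniff(iface="wlan0mon", prn=classify, count=100)  # "wlan0mon": hypothetical

Collecting the first kind of frame is what a wardriving survey does; storing the second is interception, whatever the intent.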

This was the topic of Tuesday's Big Brother Watch event. We actually had a Googler, Sarah Hunter, head of UK public policy, on the premises taking notes (as far as I could discern she did not have a camera mounted on her head, which seems like a missed opportunity), but the court actions in progress against the company meant that she was under strict orders from legal not to say anything much.

You can't really blame her. The list of government authorities investigating Google over the wifi data now includes: 38 US states and the District of Columbia, led by Connecticut; Germany; France; and Australia. Britain? Not so much.

"I find it amazing that Google did it without permission and seemed to get away with it without anyone causing a fuss," said Rob Halfon MP, who took time between votes on Tuesday to deliver a call to action. "There has to be a limit to what these companies do," he said, calling Street View "a privatized version of Big Brother." Halfon has tabled an early day motion on surveillance and the Internet.

There are two separate issues here. The first is Street View itself, which many countries have been unhappy about.

I was sympathetic when Google first launched Street View in the US and ran into privacy issues. It was, I thought and think, an innocently geeky kind of mistake to make: a "Look! This is so COOL!" kind of moment. In the flush of excitement, I reasoned, it was probably easy to lose sight of the fact that people might object to having their living room windows peered into in a drive-by shoot and the resulting images posted online. Who would stop to ask the opinions of that object of typical geek contempt, the inept, confused user - "my mother"?

By the time Street View arrived in Europe, however, there was no excuse. That the product has sparked public anger with every launch in every country, along with other controversial actions (think Google Books), suggests that the company's standard MO is that of the teenager who deliberately avoids her parents' permission because she knows it will be denied.

It is, I think, reasonable to argue, as Google does, that the company is taking pictures of public areas, something that is not illegal in the US although it has various restrictions in other places. The keys, I think, are first of all the scale of the operation, and second the public display part of the equation, an element that is restricted in some European countries. As Halfon said, "Only big companies have the financial muscle to do this kind of mapping."

The second issue, the wifi data, is much more clear-cut. It seems unquestionable that accidental or not - and in fact we would not know the company had sniffed this data if it hadn't told us itself - laws have been broken in a number of countries. In the UK, it seems likely that the action was illegal under the Regulation of Investigatory Powers Act (2000), and that the Computer Misuse Act would also apply. Google's founders and CEO, Sergey Brin, Larry Page, and Eric Schmidt, seem to take the view that it's no harm, no foul.

But that's not the point, which is why Privacy International, having been told the Information Commissioner was not interested in investigating, went to the Metropolitan Police.

"There has to be a point where Google is brought to account because of its systemic failure," he said. "If all the criminal investigation does is to sensitise Google, then internally there may be some evolution."

The key, however, for the UK, is the unwillingness of the Information Commissioner to get involved. First, the ICO declined to restrict Street View. Then it refused to investigate the wifi issue and wanted the data destroyed, an action PI argued would mean destroying the evidence needed for a forensic investigation.

It was this failure that Davies and Alex Deane, director of Big Brother Watch, picked on.

"I find it peculiar that the British ICO was so reluctant to investigate Google when so many other ICOs were willing," Deane said. "The ICO was asleep on the job."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 25, 2010

New money

It seems that the Glastonbury Festival, which I tend to sniffily dismiss as a Woodstock wannabe, is to get rid of cash. I can understand their thinking: cash is expensive for the festival to transport, store, and guard and creates security problems for individual festival-goers, too. Mr Cashless himself, James Allan, will be pleased. Although, given his squirming reaction to being offered cash at a conference a few months ago, it's hard to believe he'd regard an outdoor festival as sufficiently hygienic to attend.

But here is the key bit:

As well as convenience and security issues, Barclaycard's Mr Mathieson said that information gathered from transactions could be valuable for future marketing. "For example if the system knows what time you went and bought a beer and at which bar, it can make a guess which band you were about to see," he said. "Then the organizers could send you information about upcoming tours. The opportunities are exciting."

Talk about creepy! Your £5 notes do not climb out of your wallet to chirp eagerly about what they'd like to be spent on.

One of the things we talked about in the history of cypherpunks session at CFP last week (the video recording is online) was whatever happened to digital cash, something often discussed in the early 1990s, when cryptography was the revolution. Proposed by David Chaum - who worked out the cryptography in the early 1980s and popularized it in an influential 1992 Scientific American article - it was meant to be genuinely the equivalent of anonymous cash.

Chaum's scheme was typically brilliant but, typically, faced a hard road to acceptance (he has since come up with a clever cryptographic scheme to secure electronic voting). Getting it widely deployed required two things: the cooperation of banks and the willingness of consumers to transfer what they see as "real money" into an unfamiliar currency with uncertain backing. Consumers have generally balked at this kind of thing; the early days of the Net saw a number of attempts at new forms of payment, and the only ones that have succeeded are those that, like Paypal, build on existing and familiar currencies and structures. You could argue that frequent flyer miles are a currency - and they are - but they generally come free with purchases; when people do buy them with what they perceive as "real" money it's to acquire a tangible near-term benefit such as a cheap ticket, elite status for their next flight, or a free upgrade.
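For the technically curious, the heart of Chaum's design is the blind signature, and the arithmetic fits in a few lines. Here is a minimal sketch in Python with toy numbers (a real system would use padded hashes and keys thousands of bits long; Python 3.8+ for the modular inverses): the bank signs a coin it never sees, and the unblinded signature still verifies.

    # Toy RSA blind signature, the trick at the heart of Chaumian cash.
    p, q, e = 61, 53, 17
    n = p * q                              # public modulus
    d = pow(e, -1, (p - 1) * (q - 1))      # bank's private signing key

    m = 1234                               # the coin's serial number
    r = 7                                  # customer's secret blinding factor

    blinded = (m * pow(r, e, n)) % n       # all the bank ever sees
    blind_sig = pow(blinded, d, n)         # bank signs, blindly
    sig = (blind_sig * pow(r, -1, n)) % n  # customer strips the blinding

    assert pow(sig, e, n) == m             # yet the signature verifies

The bank can honour the coin when it is eventually spent, but cannot link it back to the withdrawal - which is exactly the unlinkability that physical cash has and card payments don't.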

Chaum understood correctly, however, that the future would hold some form of digital cash, and the anonymous version he was proposing was a deliberately chosen alternative to the future he saw unfolding as computerized transactions took hold.

"If the trend toward identifier-based smart cards continues, personal privacy will be increasingly eroded," he wrote in 1992. And so it has proved: credit cards, debit cards, mobile phone and online payments are all designed to make every transaction traceable.

"The banking industry has a vested interest in not providing anonymous payment mechanisms," said Lance Cottrell at CFP, "because they really like to know as much information as they can about you." Combine that with money-laundering laws and increased government surveillance, and anonymous digital cash seems pretty well dead. The one US bank that tried offering DigiCash, the St Louis, Missouri-based Mark Twain bank, dropped the offering in September 1998 because of low take-up; shortly afterwards DigiCash went into liquidation.

Before heading out to CFP, my bedtime reading was Dave Birch's Digital Money Reader 2010, a compilation of all his digital money blog postings, with attached comments, from the past year. Birch is seriously at war with physical cash, which he seems to perceive as the equivalent of an unfair tax on people like him, who would rather do everything electronically. Because the costs of cash aren't visible to consumers at point of use, he argues, people are taught to think of it as free, where electronic transactions have clearly delineated costs. If people were charged the true cost of paying with cash, surely the percentage of cash payments - still around 80 percent in Europe - would begin to drop precipitously.

But it seems clear that the hidden cost of electronic payments as they are presently constituted is handing over tracking data. A truly anonymous Oyster card costs nothing extra in financial terms, but you pay with convenience: you must put down a £5 deposit for a prepaid card at a tube station, and you must always remember to top it up with notes at station machines. Similarly, you can have an anonymous Paypal account in the sense that you can receive funds via a throwaway email address and use them only to buy digital goods that do not require a delivery address. But after the first $500 or so you'll have to set up another account or provide Paypal with verifiable banking information. Because we have so far not come up with a good way to estimate the value of such personal data, we have no way to calculate the true cost of trackable electronic payments.

Still, it occurs to me writing this that if cash ever does die under the ministrations of Birch and his friends, the event will open up new possibilities for struggling post offices everywhere. Stamps, permanently redeemable for at least their face value, could become the new cash.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 18, 2010

Things I learned at this year's CFP

- There is a bill in front of Congress to outlaw the sale of anonymous prepaid SIMs. The goal seems to be some kind of fraud and crime prevention. But, as Ed Hasbrouck points out, the principal people who are likely to be affected are foreign tourists and the Web sites that sell prepaid SIMs to them.

- Robots are getting near enough in researchers' minds for them to be spending significant amounts of time considering the legal and ethical consequences in real life - not in Asimov's fictional world where you could program in three safety laws and your job was done. Ryan Calo points us at the work of Stanford student Victoria Groom on human-robot interaction. Her dissertation research, not yet on the site, found that humans allocate responsibility for success and failure proportionately according to how anthropomorphic the robot is.

- More than 24 percent of tweets - and rising sharply - are sent by automated accounts, according to Miranda Mowbray at HP labs. Her survey found all sorts of strange bots: things that constantly update the time, send stock quotes, tell jokes, the tea bot that retweets every mention of tea...

- Google's Kent Walker, the 1997 CFP chair, believes that censorship is as big a threat to democracy as terrorism, and says that open architectures and free expression are good for democracy - and coincidentally also good for Google's business.

- Microsoft's chief privacy strategist, Peter Cullen, says companies must lead in privacy to lead in cloud computing. Not coincidentally, others at the conference noted that US companies are losing business to Europeans in cloud computing because EU law prohibits the export of personal data to the US, where data protection is insufficient.

- It is in fact possible to provide wireless that works at a technical conference. And good food!

- The Facebook Effect is changing the attitude of other companies about user privacy. Lauren Gelman, who helps new companies with privacy issues, noted that because start-ups all see Facebook's success and want to be the next 400 million-user environment, there was a strong temptation to emulate Facebook's behavior. Now, with the angry cries mounting from consumers, she has to spend less effort convincing them of the pushback they will get if they change their policies and defy users' expectations. Even so, it's important to ensure that start-ups include privacy in their budgets so that it does not become an afterthought. In this respect, she makes me realize, privacy in 2010 is at the stage that usability was in the early 1990s.

- All new program launches come through the office of the director of Yahoo!'s business and human rights program, Ebele Okabi-Harris. "It's very easy for the press to focus on China and particular countries - for example, Australia last year, with national filtering," she said, "but for us as a company it's important to have a structure around this because it's not specific to any one region." It is, she added later, a "global problem".

- We should continue to be very worried about the database state because the ID cards repeal act continues the trend toward data sharing among government departments and agencies, according to Christina Zaba from No2ID.

- Information brokers and aggregators, operating behind the scenes, are amassing incredible amounts of detail about Americans, and it can require a great deal of work to remove one's information from these systems. The main customers of these systems are private investigators, debt collectors, media, law firms, and law enforcement. The Privacy Rights Clearinghouse sees many disturbing cases, as Beth Givens outlined, as does Pam Dixon's World Privacy Forum.

- I always knew - or thought I knew - that the word "robot" was not coined by Asimov but by Karel Capek for his play R.U.R. ("Rossum's Universal Robots"; coincidentally, playing a robot in it was Michael Caine's first acting job). But Twitterers tell me that this isn't quite right. The word is derived from the Czech word "robota", "compulsory work for a feudal landlord". And it was actually coined by Capek's older brother, Josef.

- There will be new privacy threats emerging from automated vehicles, other robots, and voicemail transcription services, sooner rather than later.

- Studying the inner workings of an organization like the International Civil Aviation Organization is truly difficult because the time scales - ten years to get from technical proposals to mandated standard, which is when the public becomes aware of them - are a profound mismatch for the attention span of the media and those who fund NGOs. Anyone who feels like funding an observer to represent civil society at ICAO should get in touch with Edward Hasbrouck.

- A lot of our cybersecurity problems could be solved by better technology.

- Lillie Coney has a great description of deceptive voting practices designed to disenfranchise the opposition: "It's game theory run amok!"

- We should not confuse insecure networks (as in vulnerable computers and flawed software) with unsecured networks (as in open wi-fi).

- Next year's conference chairs are EPIC's Lillie Coney and Jules Polonetsky. It will be in Washington, DC, probably the second or third week in June. Be there!

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 11, 2010

Bonfire of the last government's vanities

"We have no hesitation in making the national identity card scheme an unfortunate footnote in history. There it should remain - a reminder of a less happy time when the Government allowed hubris to trump civil liberties," the Home Secretary, Theresa May, told the House of Commons at the second reading of the Identity Documents Bill 2010, which will erase the 2006 act introducing ID cards and the National Identity Register. "This will not be a literal bonfire of the last Government's vanities, but it will none the less be deeply satisfying." Estimated saving: £86 million over the next four years.

But not so fast...

An "unfortunate footnote" sounds like the perfect scrapheap on which to drop the National Identity Register and its physical manifestation, ID cards, but if there's one thing we know about ID cards it's that, like the monster in horror movies, they're always "still out there".

In 2005, Lilian Edwards, then at the Centre for Research in Intellectual Property and Law at the University of Edinburgh, invited me to give a talk, Identifying Risks, on the history of ID cards, an idea inspired by a comment from Ross Anderson. The gist: after the wartime ID card was scrapped in 1952, attempts to bring it back were made, on average, about every two or three years. (Former cabinet minister Peter Lilley, speaking at Privacy International's 2002 conference, noted that every new IT minister put the same set of ID card proposals before the Cabinet.)

The most interesting thing about that history is that the justification for bringing in ID cards varied so much; typically, it drew on the latest horrifying public event. So, in 1974 it was the IRA bombings in Guildford and Birmingham. In 1988, football hooliganism and crime. In 1989, social security fraud. In 1993, illegal immigration, fraud, and terrorism.

Within the lifetime of just the 2006 card, the point varied. The stated goals began with blocking benefit fraud, then moved on to include preventing terrorism and serious crime, stopping illegal immigration, and complying with international standards that require biometric features in passports. It is this chameleon-like adaptation to the troubles of the day that makes ID cards so suspect as the solution to anything.

Immediately after the 9/11 attacks, Tony Blair rejected the idea of ID cards (which he had actively opposed in 1995, when John Major's government issued a green paper). But by mid-2002 a consultation paper had been published and by 2004 Blair was claiming that the civil liberties objections had vanished.

Once the 2006 ID card was introduced as a serious set of proposals in 2002, events unfolded much as Simon Davies predicted they would at that 2002 meeting. The government first clothed the ID card in user-friendly obfuscation: an entitlement card. The card's popularity in the polls, at first favourable (except, said David Blunkett, for a highly organised minority), slid inexorably as the gory details of its implementation and costs became public. Yet the (dear, departed) Labour government clung to the proposals despite admitting, from time to time, their utter irrelevance for preventing terrorism.

Part of the card's sliding popularity has been due to people's increased understanding of the costs and annoyance it would impose. Their apparent support for the card was for the goals of the card, not the card itself. Plus, since 2002 the climate has changed: the Iraq war is even less popular and even the 2005 "7/7" London attacks did not keep acceptance of the "we are at war" justification for increased surveillance from declining. And the economic climate since 2008 makes large expenditure on bureaucracy untenable.

Given the frequency with which the ID card has resurfaced in the past, it seems safe to say that the idea will reappear at some point, though likely not during this coalition government. The LibDems always opposed it; the Conservatives have been more inconsistent, but currently oppose large-scale public IT projects.

Depending how you look at it, ID cards either took 54 years to resurface (from their withdrawal in 1952 to the 2006 Identity Cards Act), or the much shorter time to the first proposals to reinstate them. Australia might be a better guide. In 1985, Bob Hawke made the "Australia card" a central plank of his government. He admitted defeat in 1987, after widespread opposition fueled by civil liberties groups. ID card proposals resurfaced in Australia in 2006, to be withdrawn again at the end of 2007. That's about 21 years - or a generation.

In 2010 Britain, it is just as important that much of the rest of the Labour government's IT edifice, such as the ContactPoint database, intended to track children throughout their school years, is also being scrapped. Left in place, it might have taught today's generation of children to perceive state tracking as normal. The other good news is that many of today's tireless campaigners against the 2006 ID card will continue to fight the encroachment of the database state. In 20 years - or sooner, if (God forbid) some catastrophe makes it politically acceptable - when or if an ID card comes back, they will still be young enough to fight it. And they will remember how.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series.

May 28, 2010

Privacy theater

On Wednesday, in response to widespread criticism and protest, Facebook finally changed its privacy settings to be genuinely more user-friendly - and for once, the settings actually are. It is now reasonably possible to tell at a glance which elements of the information you have on the system are visible and to what class of people. To be sure, the classes available - friends, friends of friends, and everyone - are still broad, but it is a definite improvement. It would be helpful if Facebook provided a button so you could see what your profile looks like to someone who is not on your friends list (although of course you can see this by logging out of Facebook and then searching for your profile). If you're curious just how much of your information is showing, you might want to try out Outbook.

Those changes, however, only tackle one element of a four-part problem.

1: User interface. Fine-grained controls are, as the company itself has said, difficult to present in a simple way. This is what the company changed this week and, as already noted, the new design is a big improvement. It can still be improved, and it's up to users and governments to keep pressure on the company to do so.

2: Business model. Underlying all of this, however, is the problem that Facebook still has to make money. To some extent this is our own fault: if we don't want to pay money to use the service - and it's pretty clear we don't - then it has to be paid for some other way. The only marketable asset Facebook has is its user data. Hence Andrew Brown's comment that users are Facebook's product; advertisers are its customers. As others have commented, traditional media companies also sell their audience to their advertisers; but there's a qualitative difference in that traditional media companies also create their own content, which gives them other revenue streams.

3: Changing the defaults. As this site's graphic representation makes clear, since 2005 the changes in Facebook's default privacy settings have all gone one way: towards greater openness. We know from decades of experience that defaults matter because so many computer users never change them. It's why Microsoft has had to defend itself against antitrust actions regarding bundling Internet Explorer and Windows Media Player into its operating system. On Facebook, users should have to make an explicit decision to make their information public - opt in, rather than opt out. That would also be more in line with the EU's Data Protection Directive.

4: Getting users to understand what they're disclosing. Back in the early 1990s, AT&T ran a series of TV ads in the US targeting a competitor's having asked its customers for the names of their friends and family for marketing purposes. "I don't want to give those out," the people in the ads said. Yet every day they freely disclose exactly that sort of information on Facebook. As director of the Foundation for Information Policy Research, Caspar Bowden argued persuasively that traffic analysis - seeing who is talking to whom and with what frequency - is far more revealing than the actual contents of messages.

What makes today's social networks different from other messaging systems (besides their scale) is that typically those - bulletin boards, conferencing systems, CompuServe, AOL, Usenet, today's Web message boards - were and are organized around topics of interest: libel law reform, tennis, whatever. Even blogs, whose earliest audiences are usually friends, become more broadly successful because of the topics they cover and the quality of that coverage. In the early days, that structure was due to the fact that most people online were strangers meeting for the first time. These days, it allows those with minority interests to find each other. But in social media the organizing principle is the social connections of individual people whose tenure on the service begins, by and large, by knowing each other. This vastly simplifies traffic analysis.
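A toy example shows how little an observer needs. This sketch (in Python, with made-up names and data) recovers the strongest ties in a group from nothing but sender/recipient pairs - no message bodies required:

    from collections import Counter

    # Metadata only: who messaged whom. Contents never enter into it.
    messages = [
        ("alice", "bob"), ("alice", "bob"), ("alice", "carol"),
        ("bob", "alice"), ("carol", "dave"), ("alice", "bob"),
    ]

    for (sender, recipient), count in Counter(messages).most_common():
        print(f"{sender} -> {recipient}: {count} messages")
    # alice -> bob tops the list: the closest tie falls out immediately,
    # without reading a single message.

Scale that up to a service where the edges are declared friendships rather than inferred ones, and the analyst's job is mostly done before it starts.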

A number of factors contributed to the success of Facebook. One was the privacy promises the company made (and has since revised). But another was certainly elements of dissatisfaction with the wider Net. I've heard Facebook described as an effort to reinvent the Net, and there's some truth to that in that it presents itself as a safer space. That image is why people feel comfortable posting pictures of their kids. But a key element in Facebook's success has, I think, also been the brokenness of email and, to a lesser degree, instant messaging. As these became overrun with spam, rather than grapple with spam and other unwanted junk or the uncertainty of knowing which friend was using which incompatible IM service, many people gravitated to social networks as a way of keeping their inboxes as personal space.

Facebook is undoubtedly telling the truth when it says that the privacy complaints have, so far, made little difference to the size and engagement of its user base. It's extreme to say that Facebook victimizes its users, but it is true that the expectations of its active core of long-term users have been progressively betrayed. Facebook's users have no transparency about or control over what data Facebook shares with its advertisers. Making that visible would go a long way toward restoring users' trust.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

May 14, 2010

Bait and switch

If there's one subject Facebook's PR people probably wish its founder and CEO, 26-year-old Mark Zuckerberg, had never discussed in public it's privacy, which he dismissed in January as no longer a social norm.

What made Zuckerberg's statement sound hypocritical - on top of arrogant, blinkered, self-interested, and callous - is the fact that he himself protects information he posts on Facebook. If he doesn't want his own family photographs searchable on Google, why does he assume that other people do?

What's equally revealing, though, is the comment he went on to make (quoted in that same piece) that he views it as really important "to keep a beginner's mind" in deciding what the company should do next. In other words, they ask themselves what decision they would make if they were starting Facebook now - and then they do that.

You can't get there from here.

Zuckerberg is almost certainly right that if he were setting up the company now he'd make everything public as a default setting - as Twitter, founded two years later, does. Of course he'd do things differently: he'd be operating post-Facebook. Most important, he'd be running a tiny company instead of a huge one. Size matters: you cannot make the same decisions you would as a start-up when you have 400 million users, are the Web's largest host of photographs, and the biggest publisher of display ads. Facebook is discovering what Microsoft and Google also have: it isn't easy being big.

Being wholly open would, I'm sure, be a simpler situation both legally and in terms of user expectations, and I imagine it would be easier to program and develop. The difficulty is that he isn't starting the company now, and just as the seventh year of a marriage isn't the same as the first, he can't behave as if he were. Because, as in a marriage, Facebook has made promises to its users throughout the last six years, and it cannot single-handedly rewrite the contract without betraying them.

On Sky TV last night, I called Facebook's attitude to privacy a case of classic bait-and-switch. While I have no way of knowing if that was Zuckerberg's conscious intention when he first created Facebook in his Harvard dorm room at 19, that is nonetheless an accurate description of the situation. Facebook users - and the further you go back in the company's history the more true this is - shared their information because the company promised them privacy. Had the network been open from the start, people would likely have made different choices. Both a group of US senators and the EU's Data Protection working party understand this perfectly. It would be a mistake for Facebook's management to dismiss these complaints as the outdated concerns of a bunch of guys who aren't down with the modern world.

Part of Facebook's difficulty with privacy issues is, I'm sure, the kind of interface design problem computer companies have struggled with for decades. In published comments, the company has referred to the conflict between granularity and simplicity: people want detailed choices but providing those makes the interface complex; simplifying the interface removes choice. I don't think this is an unsolvable problem, though it does require a new approach.

One thing I'd like Facebook to provide is a way of expiring data (which would solve a number of privacy issues) so that you could specify that anything posted on the site will be deleted after a certain amount of time has passed. Such a setup would also allow users to delete data posted before the beginning date of a new privacy regime. I'd also like to be able to export all my data in a format suitable for searching and archiving on my own system.
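Mechanically, expiring data is not a hard feature; the point is that nothing in the architecture requires posts to live forever. A minimal sketch of the idea in Python (field names hypothetical, purely to illustrate): each post carries an optional time-to-live, and anything past its expiry is purged.

    from datetime import datetime, timedelta

    posts = [
        {"text": "party photos", "posted": datetime(2010, 1, 2),
         "ttl": timedelta(days=90)},   # user asked for 90-day expiry
        {"text": "profile bio", "posted": datetime(2009, 6, 1),
         "ttl": None},                 # no expiry requested
    ]

    def purge_expired(posts, now):
        """Keep only posts with no expiry, or whose expiry is still ahead."""
        return [p for p in posts
                if p["ttl"] is None or p["posted"] + p["ttl"] > now]

    posts = purge_expired(posts, now=datetime(2010, 5, 14))
    # Only the bio survives; the 90-day photos are gone.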

Zuckerberg was a little bit right, in that people are disclosing information to anybody who's interested in a way they didn't - couldn't - before. That doesn't, however, mean they're not interested in privacy; it means many think they are in private, talking to their friends, without understanding who else may be watching. It was doubtless that sort of feeling that led Paul Chambers into trouble: a few days ago he was (in my opinion outrageously) fined £1,000 for sending a menacing message over a public telecommunications network.

I suppose Facebook can argue that the fact that 400 million people use its site means its approach can't be wholly unpopular. The number of people who have deleted their accounts since the latest opening-up announcements seems to be fairly small. But many more are there because they have to be: they have friends who won't communicate in any other way, or work commitments that require it. Facebook should remember that this situation came about because the company made promises about privacy. Reneging on those promises and thumbing your nose at people for being so stupid as to believe you invites a backlash.

Where Zuckerberg is wrong is to think that the errors people make in a new and unfamiliar medium, where the social norms and community standards are still being defined, mean there's been a profound change in the world's social values. If it looks like that to rich geeks in California, it may be time for them to get out of Dodge.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series.

April 16, 2010

Data-mining the data miners

The case of murdered Colombian student Anna Maria Chávez Niño, presented at this week's Privacy Open Space, encompasses both extremes of the privacy conundrum posed by a world in which 400 million people post intimate details about themselves and their friends onto a single, corporately owned platform. The gist: Chávez met her murderers on Facebook; her brother tracked them down, also on Facebook.

Speaking via video link to Cédric Laurant, a Brussels-based independent privacy consultant, Juan Camilo Chávez noted that his sister might well have made the same mistake - inviting dangerous strangers into her home - by other means. But without Facebook he might not have been able to identify the killers. Criminals, it turns out, are just as clueless about what they post online as anyone else. Armed with the CCTV images, Chávez trawled Facebook for similar photos. He found the murderers selling off his sister's jacket and guitar. As they say, busted.

This week's PrivacyOS was the fourth in a series of EU-sponsored conferences to collaborate on solutions to that persistent, growing, and increasingly complex problem: how to protect privacy in a digital world. This week's focused on the cloud.

"I don't agree that privacy is disappearing as a social value," said Ian Brown, one of the event's organizers, disputing Mark privacy-is-no-longer-a-social-norm Zuckerberg's claim. The world's social values don't disappear, he added, just because some California teenagers don't care about them.

Do we protect users through regulation? Require subject releases for YouTube or Qik? Require all browsers to ship with cookies turned off? As Lilian Edwards observed, the latter would simply make many users think the Internet is broken. My notion: require social networks to add a field to photo uploads requiring users to enter an expiration date after which it will be deleted.

But, "This is meant to be a free world," Humberto Morán, managing director of Friendly Technologies, protested. Free as in speech, free as in beer, or free as in the bargain we make with our data so we can use Facebook or Google? We have no control over those privacy policy contracts.

"Nothing is for free," observed NEC's Amardeo Sarma. "You pay for it, but you don't know how you pay for it." The key issue.

What frequent flyers know is that they can get free flights once in a while in return for their data. What even the brightest, most diligent, and most paranoid expert cannot tell them is what the consequences of that trade will be 20 years from now, though the Privacy Value Networks project is attempting to quantify this. It's hard: any photographer will tell you that a picture's value is usually highest when it's new, but sometimes suddenly skyrockets decades later when its subject shoots unexpectedly to prominence. Similarly, the value of data, said David Houghton, changes with time and context.

It would be more accurate to say that it is difficult for users to understand the trade-offs they're making, and that there are no incentives for government or commerce to make it easy. And, as the recent "You have 0 Friends" episode of South Park neatly captures, the choice for users is often not between being careful and being careless but between being a hermit and participating in modern life.

Better tools ought to be a partial solution. And yet: the market for privacy-enhancing technologies is littered with market failures. Even the W3C's own Platform for Privacy Preferences (P3P), for example, is not deployed in the current generation of browsers - and when it was provided in Internet Explorer users didn't take advantage of it. The projects outlined at PrivacyOS - PICOS and PrimeLife - are frustratingly slow to move from concept to prototype. The ideas seem right: providing a way to limit disclosures and authenticate identity to minimize data trails. But, Lilian Edwards asked: is partial consent or partial disclosure really possible? It's not clear that it is, partly because your friends are also now posting information about you. The idea of a decentralized social network, workshopped at one session, is interesting, but might be as likely to expand the problem as mitigate it.

And, as it has throughout the 25 years since the first online communities were founded, the problem keeps growing exponentially in size and complexity. The next frontier, said Thomas Roessler: the sensor Web that incorporates location data and input from all sorts of devices throughout our lives. What does it mean to design a privacy-friendly bathroom scale that tweets your current and goal weights? What happens when the data it sends gets mashed up with the site you use to monitor the calories you consume and burn and your online health account? Did you really understand when you gave your initial consent to the site what kind of data it would hold and what the secondary uses might be?

So privacy is hard: to define, to value, to implement. As Seda Gürses, studying how to incorporate privacy into social networks, said, privacy is a process, not an event. "You can't do x and say, Now I have protected privacy."


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. This blog eats non-spam comments for reasons surpassing understanding.

March 19, 2010

Digital exclusion: the bill

The workings of British politics are nearly as clear to foreigners as cricket; and, unlike in the US, there's no user manual. (Although we can recommend Anthony Trollope's Palliser novels and the TV series Yes, Minister as good sources of enlightenment on the subject.) But what it all boils down to in the case of the Digital Economy Bill is that the rights of an entire nation of Internet users are about to get squeezed between a rock and an election unless something dramatic happens.

The deal is this: the bill has completed all the stages in the House of Lords, and is awaiting its second reading in the House of Commons. Best guesses are that this will happen on or about March 29 or 30. Everyone expects the election to be called around April 8, at which point Parliament disbands and everyone goes home to spend three weeks intensively disrupting the lives of their constituency's voters when they're just sitting down to dinner. Just before Parliament dissolves there's a mad dash to wind up whatever unfinished business there is, universally known as the "wash-up". The Digital Economy Bill is one of those pieces of unfinished business. The fun part: anyone who's actually standing for election is of course in a hurry to get home and start canvassing. So the people actually in the chamber during the wash-up, while the front benches are hastily agreeing to pass stuff through on the nod, are likely to be retiring MPs and others who don't have urgent election business.

"What we need," I was told last night, "is a huge, angry crowd." The Open Rights Group is trying to organize exactly that for this Wednesday, March 24.

The bill would enshrine three strikes and disconnection into law. Since the Lords' involvement, it also provides for Web censorship. It arguably up-ends at least 15 years of government policy promoting the Internet as an engine of economic growth, all to benefit one single economic sector. How would the disconnected vote, pay taxes, or engage in community politics? What happened to digital inclusion? More haste, less sense.

Last night's occasion was the 20th anniversary of Privacy International (Twitter: @privacyint), where most people were polite to speakers David Blunkett and Nick Clegg. Blunkett, who was such a front-runner for a second Lifetime Menace Big Brother Award that PI renamed the award after him, was an awfully good sport when razzed; you could tell that having his personal life hauled through the tabloid press in some detail has changed many of his views about privacy. Though the conversion is not quite complete: he's willing to dump the ID card, but only because it makes so much more sense just to make passports mandatory for everyone over 16.

But Blunkett's nearly deranged passion for the ID card was at least his own. The Digital Economy Bill, on the other hand, seems to be the result of expert lobbying by the entertainment industry, most especially the British Phonographic Industry. There's a new bit of it out this week in the form of the Building a Digital Economy report, which threatens the loss of 250,000 jobs in the UK alone (1.2 million in the EU, enough to scare any politician right before an election). Techdirt has a nice debunking summary.

A perennial problem, of course, is that bills are notoriously difficult to read. Anyone who's tried knows that these days they're largely made up of amendments to previous legislation, and therefore cannot be read on their own; and while they could be marked up in hypertext for intelligent Internet perusal, this is not a service Parliament provides. You would almost think they don't really want us to read these things.

Speaking at the PI event, Clegg deplored the database state that has been built up over the last ten to 15 years, the resulting change in the relationship between citizen and state, and especially the fact that, as he put it, "No one ever asked people to vote on giant databases." Such a profound infrastructure change, he argued, should have been a matter for public debate and consideration - and wasn't. Even Blunkett, who attributed some of his change in views to his involvement in the movie Erasing David (opening on UK cinema screens April 29), while still mostly defending the DNA database, said that "We have to operate in a democratic framework and not believe we can do whatever we want."

And here we are again with the Digital Economy Bill. There is plenty of back and forth among industry representatives. ISPs estimate the cost of the DEB's Web censorship provisions at up to £500 million. The BPI disagrees. But where is the public discussion?

But the kind of thoughtful debate that's needed cannot take place in the present circumstances with everyone gunning their car engines hoping for a quick getaway. So if you think the DEB is just about Internet freedoms, think again; the way it's been handled is an abrogation of much older, much broader freedoms. Are you angry yet?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 5, 2010

The surveillance chronicles

There is a touching moment at the end of the new documentary Erasing David, which had an early screening last night for some privacy specialists. In it, Katie, the wife of the film's protagonist, filmmaker David Bond, muses on the contrast between the England she grew up in and the "ugly" one being built around her. Of course, many people become nostalgic for a kinder past when they reach a certain age, but Katie Bond is probably barely 30, and what she is talking about is the engorging Database State (PDF).

Anyone watching this week's House of Lords debate on the Digital Economy Bill probably knows how she feels. (The Open Rights Group has advice on appropriate responses.)

At the beginning, however, Katie's biggest concern is that her husband is proposing to "disappear" for a month leaving her alone with their toddler daughter and her late-stage pregnancy.

"You haven't asked," she points out firmly. "You're leaving me with all the child care." Plus, what if the baby comes? They agree in that case he'd better un-disappear pretty quickly.

And so David heads out on the road with a Blackberry, a rucksack, and an increasingly paranoid state of mind. Is he safe being video-recorded interviewing privacy advocates in Brussels? Did "they" plant a bug in his gear? Is someone about to pounce while he's sleeping under a desolate Welsh tree?

There are real trackers: Cerberus detectives Duncan Mee and Cameron Gowlett, who took up the challenge to find him given only his (rather common) name. They try an array of approaches, both high- and low-tech. Having found the Brussels video online, they head to St Pancras to check out arriving Eurostar trains. They set up a Web site to show where they think he is and send the URL to his Blackberry to see if they can trace him when he clicks on the link.

In the post-screening discussion, Mee added some new detail. When they found out, for example, that David was deleting his Facebook page (which he announced on the site and of which they'd already made a copy), they set up a dummy "secret replacement" and attempted to friend his entire list of friends. About a third of Bond's friends accepted the invitation. The detectives took up several party invitations thinking he might show.

"The Stasi would have had to have a roomful of informants," said Mee. Instead, Facebook let them penetrate Bond's social circle quickly on a tiny budget. Even so, and despite all that information out on the Internet, much of the detectives' work was far more social engineering than database manipulation, although there was plenty of that, too. David himself finds the material they compile frighteningly comprehensive.

In between pieces of the chase, the filmmakers include interviews with an impressive array of surveillance victims, politicians (David Blunkett, David Davis), and privacy advocates including No2ID's Phil Booth and Action on Rights for Children's Terri Dowty. (Surprisingly, no one from Privacy International, I gather because of scheduling issues.)

One section deals with the corruption of databases, the kind of thing that can make innocent people unemployable or, in the case of Operation Ore, destroy lives such as that of Simon Bunce. As Bunce explains in the movie, 98.2 percent of the Operation Ore credit card transactions were fraudulent.

Perhaps the most you-have-got-to-be-kidding moment is when former minister David Blunkett says that collecting all this information is "explosive" and that "Government needs to be much more careful" and not just assume that the public will assent. Where was all this people-must-agree stuff when he was relentlessly championing the ID card? Did he - my god! - learn something from having his private life exposed in the press?

As part of his preparations, Bond investigates: what exactly do all these organizations know about him? He sends out more than 80 subject access requests to government agencies, private companies, and so on. Amazon.com sends him a pile of paper the size of a phone book. Transport for London tells him that even though his car is exempt, his movements in and out of the charging zone are still recorded and kept. This is a very English moment: after bashing his head on his desk in frustration over the length of his wait on hold, when a woman eventually starts to say, "Sorry for keeping you..." he replies, "No problem".

Some of these companies know things about him he doesn't or has forgotten: the time he "seemed angry" on the phone to a customer service representative. "What was I angry about on November 21, 2006?" he wonders.

But probably the most interesting journey, after all, is Katie's. She starts with some exasperation: her husband won't sign this required form giving the very good nursery they've found the right to do anything it wants with their daughter's data. "She has no data," she pleads.

But she will have. And in the Britain she's growing up in, that could be dangerous. Because privacy isn't isolation and it isn't not being found. Privacy means being able to eat sand without fear.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


February 19, 2010

Death doth make hackers of us all

"I didn't like to ask him what his passwords were just as he was going in for surgery," said my abruptly widowed friend.

Now, of course, she wishes she had.

Death exposes one of the most significant mismatches between security experts' ideas of how things should be done and the reality for home users. Every piece of advice they give is exactly the opposite of what you'd tell someone trying to create a disaster recovery plan to cover themselves in the event of the death of the family computer expert, finance manager, and media archivist. If this were a business, we'd be talking about losing the CTO, CIO, CSO, and COO in the same plane crash.

Fortunately while he was alive, and unfortunately now, my friend was a systems programmer with many decades of expertise. He was acutely aware of the importance of good security. And so he gave his Windows desktop, financial files, and email software fine passwords. Too fine: the desktop one is completely resistant to educated guesses based on our detailed knowledge of his entire life and partial knowledge of some of his other PINs and passwords.

All is not locked away. We think we have the password to the financial files, so getting access to those is a mere matter of putting the hard drive in another machine, finding the files, copying them, installing the financial software on a different machine, and loading them up. But it would be nice to have direct as-him access to his archive of back (and new) email, the iTunes library he painstakingly built and digitized, his Web site accounts, and so on. Because he did so much himself, and because his illness was an 11-day chase to the finish, our knowledge of how he did things is incomplete. Everyone thought there was time.

With backups secured and the financial files copied, we set to the task of trying to gain desktop access.

Attempt 1: ophcrack. This is a fine piece of software that's easy to use as long as you don't look at any of the detail. Put it on a CD, boot from said CD, run it on automatic, and you're fine. The manual instructions I'm sure are fine, too, for anyone who has studied Windows SAM files.

Ophcrack took a happy 4 minutes and 39 seconds to disclose that the computer has three accounts: administrator, my friend's user account, and guest. Administrator and guest have empty passwords; my friend's is "not found". But that's OK, said the security expert I consulted, because you can log in as administrator using the empty password and reset the user account's password. Here is a helpful command. Sure. No problem.
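For the record, the helpful command is presumably Windows' own one-liner, run from an administrator command prompt (the account name here is a placeholder, not his real one):

    net user HisAccount NewPassword123

One line. What could go wrong?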

Except, of course, that this is Vista, and Vista hides the administrator account to make sure that no brainless idiot accidentally gets into it and runs around the system creating havoc and corrupting files. By "brainless idiot" I mean: the user-owner of the computer. Naturally, my friend had left it hidden.

In order to unhide the administrator account so you can run the commands to reset my friend's password, you have to run the command prompt in administrator mode. Which we can't do because, of course, there are only two administrator accounts: one is hidden, and the other is the one we want the password for. Next.
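(For completeness: the unhiding itself would have been another one-liner -

    net user administrator /active:yes

- run, of course, from exactly the elevated command prompt we couldn't get.)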

Attempt 2: Password Changer. Now, this is a really nifty thing: you download the software, use it to create a bootable CD, and boot the computer. Which would be fine, except that the computer doesn't like it because apparently command.com is missing...

We will draw a veil over the rest. But my point is that no one would advise a business to operate in this way - and now that computers are in (almost) every home, homes are businesses, too. No one likes to think they're going to die, still less without notice. But if you run your family on your computer you need a disaster recovery plan covering fire, flood, earthquake, theft, computer failure, stroke, and, yes, unexpected death:

- Have each family member write down their passwords. Privately, if you want, in sealed envelopes to be stored in a safe deposit box at the bank. Include: Windows desktop password, administrator password, automated bill-paying and financial record passwords, and the list of key Web sites you use and their passwords. Also the passwords you may have used to secure phone records and other accounts. Credit and debit card PINs. Etc.

- Document your directory structure so people know where the important data - family photos, financial records, Web accounts, email address books - is stored. Yes, they can figure it out, but you can make it a lot easier for them.

- Set up your printer so it works from other computers on the home network even if yours is turned off. (We can't print anything, either.)

- Provide an emergency access route. Unhide the administrator account.

- Consider your threat model.

Meanwhile, I think my friend knew all this. I think this is his way of taking revenge on me for never letting him touch *my* computer.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

January 1, 2010

Privacy victims

Frightened people often don't make very good decisions. If I were in charge of aviation security, I'd have been pretty freaked out by the Christmas Day crotch bomber - failure or no failure. Even so, like all of us Boxing Day quarterbacks, I'd like to believe I'd have had more sense than to demand that airline passengers stay seated and unmoving for an hour, laps empty.

But the locking-the-barn elements of the TSA's post-crotch rules are too significant to ignore: the hastily implemented rules were very specifically drafted to block exactly the attack that had just been attempted. Which, I suppose, makes sense if your threat model is a series of planned identical, coordinated attacks and copycats. But as a method of improving airport security it's so ineffective and irrelevant that even the normally rather staid Economist accused the TSA of going insane and Bruce Schneier called the new rules magical thinking.

Consider what actually happened on Christmas Day:

- Intelligence failed. Umar Farouk Abdulmutallab was on the watch list (though not, apparently, the no-fly list), and his own father had warned the US embassy.

- Airport screening failed. He got through with his chunk of explosive attached to his underpants and the stuff he needed to set it off. (As the flyer boards have noted, anyone flying this week should be damned grateful he didn't stuff it in a condom and stick it up his ass.)

- And yet, the plan failed. He did not blow up the plane; there were practically no injuries, and no fatalities.

That, of course, was because a heroic passenger was paying attention instead of snoozing and leaped over seats to block the attempt.

The logical response, therefore, ought to be to ask passengers to be vigilant and to encourage them to disrupt dangerous activities, not to make us sit like naughty schoolchildren being disciplined. We didn't do anything wrong. Why are we the ones who are being punished?

I have no doubt that being on the plane while the incident was taking place was terrifying. But the answer isn't to embark upon an arms race with the terrorists. Just as there are well-funded research labs churning out new computer viruses and probing new software for vulnerabilities, there are doubtless research facilities where terrorist organizations test what scanners can detect and in what quantity.

Matt Blaze has a nice analysis of why this approach won't work to deter terrorists: success (plane blown up) and failure (terrorist caught) are, he argues, equally good outcomes for the terrorist, whose goal is to sow terror and disruption. All unpredictable screening does is drive passengers nuts and, in some cases, put their health at risk. Passengers work to the rules. If there are no blankets, we wear warmer clothes; if there is no bathroom access, we drink less; if there is no in-flight entertainment, we rearrange the hours we sleep.

As Blaze says, what's needed is a correct understanding of the threat model - and as Schneier has often said, the most effective changes since 9/11 have been reinforcing the cockpit doors and the fact that passengers now know to resist hijackers.

Since the incident, much of the talk has been about whole-body scanners - "nudie scanners", Dutch privacy advocates have dubbed them - as if these will secure airplanes once and for all. I think if people believe that whole-body scanners are the answer, they have misunderstood the problem.

Or problems, because there is more than one. First: how can we make air travel secure from terrorists? Second: how can we make air travelers feel secure? Third: how can we accomplish those things while still allowing travelers to be comfortable, a specification which includes respecting their right to privacy and civil liberties? If your reaction to that last is to say that you don't care whose rights are violated because all that matters is perfect security, I'm going to guess that: 1) you fly very infrequently; 2) you would be happy to do so chained to your seat naked with a light coating of Saran wrap; and 3) your image of the people who are threats is almost completely unlike your own.

It is particularly infuriating to read that we are privacy victims: that the opposition of privacy advocates to invasive practices such as whole-body scanners is the reason this clown got as close as he did. Such comments are as wrong-headed as Jack Straw claiming after 9/11 that opponents of key escrow were naïve.

The most rational response, it seems to me, is for TSA and airlines alike to solicit volunteers among their most loyal and committed passengers. Elite flyers know the rhythms of flights; they know when something is amiss. Train us to help in emergencies and to spot and deter mishaps.

Because the thing we should have learned from this incident is that we are never going to have perfect security: terrorists are a moving target. We need fallbacks, for when our best efforts fail.

The more airport security becomes intrusive, annoying, and visibly stupid, the more motive passengers will have to find workarounds and the less respect they will have for these authorities. That process is already visible. Do you feel safer now?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series. Readers are welcome to post here, at net.wars home, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

December 19, 2009

Little black Facebook

Back in 2004, the Australian privacy advocate and consultant Roger Clarke warned about the growth of social networks. In his paper Very Black 'Little Black Books' he warned of the privacy implications inherent in posting large amounts of personal data to these sites. The primary service Clarke talks about in that paper is Plaxo, though he also mentions Google's then newly created Orkut, as well as Tribe.net, various dating sites, and, on the business side, LinkedIn.

The gist: posting all that personal data (especially in the case of Plaxo, to which users upload their entire address books) is a huge privacy risk because the business models for such sites are still unknown.

"The only logical business model is the value of consumers' data," he told me for a piece I wrote on social networks in 2004. "Networking is about viral marketing, and that's one of the applications of social networking. It's social networks in order to achieve economic networks."

In the same interview, Clarke predicted the future for such networks and their business models: "My expectation would be that if they were rational accumulators of data about individuals they wouldn't be caught out abusing until they had a very nice large collection of that data. It doesn't worry me if they haven't abused yet; they will abuse."

Cut to this week, when Facebook - which wouldn't even exist until two years after that interview - suddenly changed its privacy defaults to turn the service inside out. Gawker calls the change a great betrayal, and says, "The company has, in short, turned evil."

The change in a nutshell: Facebook changed the default settings on its privacy controls, so that information that was formerly hidden by default is now visible by default - and not just to people on Facebook but to the Internet at large. The first time I logged on after the change, I got a confusing screen asking me to choose among the privacy options for each of a number of different types of data - open, or "old settings". I stared at it: what were the old settings?

Less than a week after the changes were announced, ten privacy organizations, led by the Electronic Privacy Information Center and including the American Library Association, the Privacy Rights Now Coalition, and the Bill of Rights Foundation, filed a complaint with the Federal Trade Commission (PDF) asking the FTC to enjoin Facebook's "unfair and deceptive business practices" and compel the company to restore its earlier privacy settings and allow complete opt-out, as well as give users more effective control over their data.

The "walled garden" approach to the Net is typically loathed when it's applied to, say, general access to the Internet. But the situation is different when it's applied to personal information; Facebook's entire appeal to its users is based on the notion that it's a convenient way to share stuff with their friends that they don't want to open up to the entire Internet. If they didn't care, they'd put it all on blogs, or family Web sites.

"I like it," one friend told me not long ago, "because I can share pictures of my kids with my family and know no one else can see them."

My guess is that Facebook's owners have been confused by the success of Twitter. On Twitter, almost everything is public: what you post, who you follow, who follows you, and the replies you send to others' messages. All of that is easily searchable by Google, and Tweets show up with regularity in public search results.

But Twitter users know that everything is public, and (one hopes) moderate their behavior accordingly. Facebook users have populated the service with personal chatter and photos of each other at private moments precisely because they expected that material to remain private. (Although: Joseph Bonneau at the University of Cambridge noticed last May that even deleted photos didn't always remain private.) You can understand Facebook's being insecure about Twitter. Twitter is the fastest-growing social network and the one scooping all the media attention (because if ever there were a service designed for the butterfly mentality of journalists, this is it). The fact that Tweets are the same length as Facebook status updates may have led Facebook founding CEO Mark Zuckerberg et al to think that competing with Twitter means implementing the same features that make Twitter so appealing.

Of course, Facebook has done this in a typically Facebookish sort of way, in that the interface is clunky and unpleasant (the British journalist Andrew Brown once commented that the Facebook user interface could drive one to suicide). Hence the need for a guide to reprivatizing your account.

But adding mobile phone connections is one thing; upending users' expectations of your service is another. There is a name for selling a product based on one description and supplying something different and less desirable: bait and switch.

It is as Roger Clarke said five years ago: sooner or later, these companies have to make money. Social networks have only two real assets: their users' desire to keep using their service, and the mass of data users keep giving them. They're not charging users. What does that leave as a business strategy?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

December 4, 2009

Which lie did I tell?


"And what's your mother's maiden name?"

A lot of attention has been paid over the years to the quality of passwords: how many letters, whether there's a sufficient mix of numbers and "special characters", whether they're obviously and easily guessable by anyone who knows you (pet's name, spouse's name, birthday, etc.), whether you've reset them sufficiently recently. But, as someone noted this week on UKCrypto, hardly anyone pays attention to the quality of the answers to the "password hint" questions sites ask so they can identify you when you eventually forget your password. By analogy, it's as though we spent all our time beefing up the weight, impenetrability, and lock quality on our front doors while leaving the back of the house accessible via two or three poorly fitted screen doors.

On most sites it probably doesn't matter much. But the question came up after the BBC broadcast an interview with the journalist Angela Epstein, the loopily eager first registrant for the ID card, in which she apparently mentioned having been asked to provide the answers to five rather ordinary security questions "like what is your favorite food". Epstein's column gives more detail: "name of first pet, favourite song and best subject at school". Even Epstein calls this list "slightly bonkers". This, the UKCrypto poster asked, is going to protect us from terrorists?

Dave Birch had some logic to contribute: "Why are we spending billions on a biometric database and taking fingerprints if they're going to use the questions instead? It doesn't make any sense." It doesn't: she gave a photograph and two fingerprints.

But let's pretend it does. The UKCrypto discussion headed into technicalities: has anyone studied challenge questions?

It turns out someone has: Mike Just, described to me as "the world expert on challenge questions". Just, who's delivered two papers on the subject this year, at the Trust (PDF) and SOUPS (PDF) conferences, has studied both the usability and the security of challenge questions. There are problems from both sides.

First of all, people are more complicated and less standardized than those setting these questions seem to think. Some never had pets; some have never owned cars; some can't remember whether they wrote "NYC", "New York", "New York City", or "Manhattan". And people and their tastes change. This year's favorite food might be sushi; last year's, chocolate chip cookies. Are you sure you remember accurately what you answered? With all the right capitalization and everything? Government services are supposedly thinking long-term. You can always start another Amazon.com account; but ten years from now, when you've lost your ID card, will these answers still be valid?

This sort of thing is reminiscent of what biometrics expert James Wayman has often said about designing biometric systems to cope with the infinite variety of human life: "People never have what you expect them to have where you expect them to have it." (Note that Epstein nearly failed the ID card registration because of a burn on her finger.)

Plus, people forget - even stuff you'd think they'd remember, and even people who, like the students Just tested, are young.

From the security standpoint, there are even more concerns. Many details about even the most obscure person's life are now public knowledge. What if you went to the same school for 14 years? And what if that fact is thoroughly documented online because you joined its Facebook group?

A lot depends on your threat model: your parents, hackers with scripted dictionary attacks, friends and family, marketers, snooping government officials? Just accordingly came up with three types of security attacks on the answers to such questions: blind guess, focused guess, and observation guess. Apply these to the often-used "mother's maiden name": the surname might be two letters long; it is likely one of only about 150,000 unique surnames that appear more than 100 times in the US census; it may be eminently guessable by anyone who knows you - or about you. In the Facebook era, even without a Wikipedia entry or a history of Usenet postings, many people's personal details are scattered all over the online landscape. And, as Just also points out, the answers to challenge questions are themselves a source of new data for the questioning companies to mine.
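
To put rough numbers on the blind-guess attack, here is a minimal sketch in Python. The surname frequencies are illustrative approximations rather than actual census figures, and the function is my construction, not Just's:

    # Hypothetical surname frequencies (fractions of the population);
    # real figures would come from census surname tables.
    surname_freq = {
        "smith": 0.0088, "johnson": 0.0069, "williams": 0.0057,
        "brown": 0.0051, "jones": 0.0050, "garcia": 0.0047,
        "miller": 0.0042, "davis": 0.0040, "rodriguez": 0.0038,
        "martinez": 0.0037,
    }

    def blind_guess_success(freqs, attempts):
        """Probability that one of the `attempts` most common
        surnames is the right answer for a random account."""
        top = sorted(freqs.values(), reverse=True)[:attempts]
        return sum(top)

    # Ten tries cover roughly 5% of accounts; against a million
    # accounts, that is tens of thousands of broken answers.
    print(f"{blind_guess_success(surname_freq, 10):.1%}")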

My experience from The Skeptic suggests that over the long term trying to protect your personal details by not disclosing them isn't going to work very well. People do not remember what they tell psychics over the course of 15 minutes or an hour. They have even less idea what they've told their friends or, via the Internet, millions of strangers over a period of decades, or how their disparate nuggets of information might fit together. It requires effort to lie - even by omission - and still more effort to sustain a lie over time. It therefore seems to me simpler to construct a small number of lies for the few occasions when you really need the security, and to protect that small group of lies. The trouble then is documentation.

Even so, says Birch, "In any circumstance, those questions are not really security. You should probably be prosecuted for calling them 'security'."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

November 13, 2009

Cookie cutters

Sometimes laws sneak up on you while you're looking the other way. One of the best examples was the American Telecommunications Act of 1996: we were so busy obsessing about the speech-suppressing Communications Decency Act amendment that we failed to pay attention to the implications of the bill itself, which allowed the regional Baby Bells to enter the long distance market and changed a number of other rules regarding competition.

We now have a shiny, new example: we have spent so much time and so many electrons on the nasty three-strikes-and-you're-offline provisions that we, along with almost everyone else, utterly failed to notice that the package contains a cookie-killing provision last seen menacing online advertisers in 2001 (our very second net.wars).

The gist: Web sites cannot place cookies on users' computers unless said users have agreed to receive them, with an exception for cookies that are strictly necessary - as, for example, when you select something to buy and then head for the shopping cart to check out.
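
Reduced to logic, the rule amounts to a consent gate in front of every non-essential cookie. A minimal sketch in Python, with invented cookie names and no real framework behind it:

    # Strictly necessary cookies (the shopping cart, say) are exempt;
    # everything else is opt-in only.
    STRICTLY_NECESSARY = {"cart_session"}

    def may_set_cookie(name, user_has_consented):
        """Return True if the site may place this cookie."""
        if name in STRICTLY_NECESSARY:
            return True            # needed to deliver the service
        return user_has_consented  # all others require prior opt-in

    assert may_set_cookie("cart_session", False)      # always allowed
    assert not may_set_cookie("analytics_id", False)  # needs consent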

As the Out-Law blog points out, this proposal - now to become law unless the whole package is thrown out - is absurd. We said so in 2001 - and made the stupid assumption that because nothing more had been heard about it, the idea had been nixed by an outbreak of sanity at the EU level.

Apparently not. Apparently MEPs and others at EU level spend no more time on the Web than they did eight years ago. Apparently none of them have any idea what such a proposal would mean. Well, I've turned off cookies in my browser, and I know: without cookies, browsing the Web is as non-functional as a psychic being tested by James Randi.

But it's worse than that. Imagine browsing with every site asking you to opt in every - pop-up - time - pop-up - it - pop-up - wants - pop-up - to - pop-up - send - pop-up - you - a - cookie - pop-up. Now imagine the same thing, only you're blind and using the screen reader JAWS.

This soon-to-be-law is not just absurd, it's evil.

Here are some of the likely consequences.

As already noted, it will make Web use nearly impossible for the blind and visually impaired.

It will, because such is the human response to barriers, direct ever more traffic toward those sites - aggregators, ecommerce, Web bulletin boards, and social networks - that, like Facebook, can write a single privacy policy for the entire service, one that users consent to when they join (and again at scattered intervals when the policy changes) and that includes consent to accepting cookies.

According to Out-Law, the law will trap everyone who uses Google Analytics, visitor counters, and the like. I assume it will also kill AdSense at a stroke: how many small DIY Web site owners would have any idea how to implement an opt-in form? Both econsultancy.com and BigMouthMedia think affiliate networks generally will bear the brunt of this legislation. BigMouthMedia goes on to note a couple of efforts - HTTP ETags and Flash cookies - intended to give affiliate networks more reliable tracking, which may also fall afoul of the legislation. These, as those sources note, are difficult or impossible for users to delete.

It will presumably also disproportionately catch EU businesses compared to non-EU sites. Most users probably won't understand why particular sites are so annoying; they will simply shift to sites that aren't annoying. The net effect will be to divert Web browsing to sites outside the EU - surely the exact opposite of what MEPs would like to see happen.

And, I suppose, inevitably, someone will write plug-ins for the popular browsers that can be set to respond automatically to cookie opt-in requests and that include provisions for users to include or exclude specific sites. Whether that will offer sites a safe harbour remains to be seen.

The people it will hurt most, of course, are the sites - like newspapers and other publications - that depend on online advertising to stay afloat. It's hard to understand how the publishers missed it, but one presumes they, too, were distracted by the need to defend music and video from evil pirates.

The sad thing is that the goal behind this masterfully stupid piece of legislation is a reasonably noble one: to protect Internet users from monitoring and behavioural targeting to which they have not consented. But regulating cookies is precisely the wrong way to go about achieving this goal, not just because it disables Web browsing but because technology is continuing to evolve. The EU would do better to regulate by specifying allowable actions and consequences rather than specifying technologies. Cookies are not in and of themselves evil; what matters is how they're used.

Eight years ago, when the cookie proposals first surfaced, they, logically enough, formed part of a consumer privacy bill. That they're now part of the telecoms package suggests they've been banging around inside Parliament looking for something to attach themselves to ever since.

I probably exaggerate slightly, since Out-Law also notes that in fact the EU did pass a law regarding cookies that required sites to offer visitors a way to opt out. This law is little-known, largely ignored, and unenforced. At this point the Net's best hope looks to be that the new version is treated the same way.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

August 28, 2009

Develop in haste, lose the election at leisure

Well, this is a first: returning to last week's topic because events have already overtaken it.

Last week, the UK government was conducting a consultation on how to reduce illegal file-sharing by 70 percent within a year. We didn't exactly love the proposals, but we did at least respect the absence of what's known as "three strikes" - as in, your ISP gets three complaints about your file-sharing habit and kicks you offline. The government's oh-so-English euphemism for this is "technical measures". Activists opposed to "technical measures" often call them HADOPI, after the similar French law that was passed in May (and whose three strikes portions were struck down in June); HADOPI is the digital rights agency that law created.

This week, the government - or more precisely, the Department for Business, Innovation, and Skills - suddenly changed its collective mind and issued an addendum to the consultation (PDF) that - wha-hey! - brings back three strikes. Its thinking has "developed", BIS says. Is it so cynical to presume that what has "developed" in the last couple of months is pressure from rights holders? Three strikes is a policy the entertainment industry has been shopping around from country to country like an unwanted refugee. Get it passed in one place and use that country as a lever to make all the others harmonize.

What the UK government has done here is entirely inappropriate. At the behest of one business sector, much of it headquartered outside Britain, it has hijacked its own consultation halfway through. It has issued its new-old proposals a few days before the last holiday weekend of the summer. The only justification it's offered: that its "new ideas" (they aren't new; they were considered and rejected earlier this year, in the Digital Britain report (PDF)) couldn't be implemented fast enough to meet its target of reducing illicit file-sharing by 70 percent by 2012 if they aren't included in this consultation. There's plenty of protest about the proposals, but even more about the government's violating its own rules for fair consultations.

Why does time matter? No one believes that the Labour government will survive the next election, due by 2010. The entertainment industries don't want to have to start the dance all over again, fine: but why should the rest of us care?

As for "three strikes" itself, let's try some equivalents.

Someone is caught speeding three times while trying to get away from crimes they've committed - say, robberies. That person gets points on their license and, if they're going fast enough, might be prohibited from driving for a length of time. That system is administered by on-the-road police, but the punishment is determined by the courts. Separately, they are prosecuted for the robberies, and may serve jail time - again, with guilt and punishment determined by the courts.

Someone is caught three times using their home telephone to commit fraud. They would be prosecuted for the fraud, but they would not be banned from using the telephone. Again, the punishment would be determined by the courts after a prosecution requiring the police to produce corroborating evidence.

Someone is caught three times gaming their home electrical meter so that they are able to defraud the electrical company and get free electricity. (It's not so long since in parts of the UK you could achieve this fairly simply just by breaking into the electrical meter and stealing back the coins you fed it with. You would, of course, be caught at the next reading.) I'm not exactly sure what happens in these cases, but if Wikipedia is to be believed, when caught such a customer would be switched to a higher tariff.

It seems unlikely that any court would sentence such a fraudster to live without an electricity supply, especially if they shared their home, as most people do, with other family members. The same goes for the telephone example. And in the first case, such a person might be banned from driving - but not from riding in a car, even the getaway car, while someone else drove it, or from living in a house where a car was present.

Final analogy: millions of people smoke marijuana, which remains illegal. Marijuana has beneficial uses (relieving the nausea from chemotherapy, remediating glaucoma) as well as recreational ones. We prosecute the drug dealers, not the users.

So let's look again at these recycled-reused proposals. Kicking someone offline after three (or however many) complaints from rights holders:

1- Affects everyone in their household. Kids have to go to the library to do homework; spouses/partners can't work at home or socialize online. An entire household is dropped down the wrong side of the Digital Divide. As government functions such as filing taxes, providing information about public services, and accepting responses to consultations all move online, this household is now also effectively disenfranchised.

2- May in fact make both the alleged infringer and their spouse unemployable.

3- Puts this profound control over people's lives - private and public, personal and financial - into the hands of ISPs, rights holders, and Ofcom, with no information about how or whether the judicial process would be involved. Not that Britain's court system really has the capacity to try the 10 percent of the population that's estimated to engage in file-sharing. (Licit, illicit, who can tell?)

All of these effects are profoundly anti-democratic. Whose government is it, anyway?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

July 24, 2009

Security for the rest of us


Many governments, faced with the question of how to improve national security, would do the obvious thing: round up the usual suspects. These would be, of course, the experts - that is, the security services and law enforcement. This exercise would be a lot like asking the record companies and film studios to advise on how to improve copyright: what you'd get is more of the same.

This is why it was so interesting to discover that the US National Academies of Science was convening a workshop to consult on what research topics to consider funding, and began by appointing a committee that included privacy advocates and usability experts, folks like Microsoft researcher Butler Lampson, Susan Landau, co-author of books on privacy and wiretapping, and Donald Norman, author of the classic book The Design of Everyday Things. Choosing these people suggests that we might be approaching a watershed like that of the late 1990s, when the UK and the US governments were both forced to understand that encryption was not just for the military any more. The peace-time uses of cryptography to secure Internet transactions and protect mobile phone calls from casual eavesdropping are much broader than crypto's war-time use to secure military communications.

Similarly, security is now everyone's problem, both individually and collectively. The vulnerability of each individual computer is a negative network externality, as NYU economist Nicholas Economides pointed out. But, as many asked, how do you get people to understand remote risks? How do you make the case for added inconvenience? Each company we deal with makes the assumption that we can afford the time to "just click to unsubscribe" or remember one password, without really understanding the growing aggregate burden on us. Norman commented that door locks are a trade-off, too: we accept a little bit of inconvenience in return for improved security. But locks don't scale; they're acceptable as long as we only have to manage a small number of them.

In his 2006 book, Revolutionary Wealth, Alvin Toffler comments that most of us, without realizing it, have a hidden third, increasingly onerous job: "prosumer". Companies, he explained, are increasingly saving money by having us do their work for them. We retrieve and print out our own bills, burn our own CDs, provide unpaid technical support for ourselves and our families. One of Lorrie Cranor's students did the math to calculate the cost in lost time and opportunities if everyone in the US read, once a year, the privacy policy of each Web site they visit monthly. Most privacy policies require college-level reading skills; figure 244 hours per year per person, $3,544 each...$781 billion nationally. Weren't computers supposed to free us of that kind of drudgery? As everything moves online, aren't we looking at a full-time job just managing our personal security?
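
Out of curiosity, the quoted figures can be back-calculated. This sketch is my arithmetic, not the study's; the hourly value and population simply fall out of the numbers above:

    # Back-of-envelope reconstruction of the quoted figures.
    hours_per_person = 244               # hours/year reading policies
    cost_per_person = 3544               # dollars/year, as quoted
    hourly_value = cost_per_person / hours_per_person  # ~$14.50/hour
    population = 781e9 / cost_per_person               # ~220 million
    print(f"${hourly_value:.2f}/hour for {population/1e6:.0f}M people")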

That, in fact, is one characteristic that many implementations of security share with welfare offices - and that is becoming pervasive: an utter lack of respect for the least renewable resource, people's time. There's a simple reason for that: the users of most security systems are deemed to be the people who impose them, not the people - us - who have to run the gamut.

There might be a useful comparison to information overload, a topic we used to see a lot of about ten years back. When I wrote about that for ComputerActive in 1999, I discovered that everyone I knew had a particular strategy for coping with "technostress" (the editor's term). One dealt with it by never seeking out information and never phoning anyone. His sister refused to have an answering machine. One simply went to bed every day at 9pm to escape. Some refused to use mobile phones, others to have computers at home.

But back then, you could make that choice. How much longer will we be able to draw boundaries around ourselves by, for example, refusing to use online banking, file tax returns online, or participate in social networks? How much security will we be able to opt out of in future? How much do security issues add to technostress?

We've been wandering in this particular wilderness a long time. Angela Sasse, whose 1999 paper Users Are Not the Enemy talked about the problems with passwords at British Telecom, said frankly, "I'm very frustrated, because I feel nothing has changed. Users still feel security is just an obstacle there to annoy them."

In practice, the workshop was like the TV game show Jeopardy, in which the answers must be phrased as questions: the point was to generate research questions that will go into a report, which will be reviewed and redrafted before its eventual release. With luck it will eventually lead to a series of requests for proposals and some really good research. It is a glimmer of hope.

Unless, that is, the gloominess of the beginning presentations wins out. If you listened to Lampson, Cranor, and to Economides, you got the distinct impression that the best thing that could happen for security is that we rip out the Internet (built to be open, not secure), trash all the computers (all of whose operating systems were designed in the pre-Internet era), and start over from scratch. Or, like the old joke about the driver who's lost and asking for directions, "Well, I wouldn't start from here".

So, here's my question: how can we make security scale so that the burden stays manageable?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

July 17, 2009

Human factors

For the last several weeks I've been mulling over the phrase security fatigue. It started with a paper (PDF) co-authored by Angela Sasse, in which she examined the burden that complying with security policies imposes upon corporate employees. Her suggestion: that companies think in terms of a "compliance budget" that, like any other budget (money, space on a newspaper page), has to be managed and used carefully. And, she said, security burdens weigh differently on different people and at different times, and a compliance budget needs to comprehend that, too.
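
For illustration only, here is a toy model of the idea - my construction, not Sasse's, with arbitrary numbers - in which each security chore draws down a finite daily tolerance, and anything demanded after the budget is spent invites workarounds:

    # A day's compliance budget, in arbitrary units of tolerance.
    daily_budget = 10.0

    chores = [("morning logins", 2.0), ("post-lunch logins", 2.0),
              ("two-minute lockout re-logins", 9.0)]

    remaining = daily_budget
    for chore, cost in chores:
        if cost <= remaining:
            remaining -= cost
            print(f"{chore}: complied ({remaining:.1f} left)")
        else:
            print(f"{chore}: budget exhausted - expect workarounds")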

Some examples (mine, not hers). Logging onto six different machines with six different user IDs and passwords (each of which has to be changed once a month) is annoying but probably tolerable if you do it once every morning when you get to work and once in the afternoon when you get back from lunch. But if the machines all log you out every time you take your hands off the keyboard for two minutes, by the end of the day they will be lucky to survive your baseball bat. Similarly, while airport security is never fun, the burden of it is a lot less to a passenger traveling solo after a good night's sleep who reaches the checkpoints when they're empty than it is to the single parent with three bored and overtired kids under ten who arrives at the checkpoint after an overnight flight and has to wait in line for an hour. Context also matters: a couple of weeks ago I turned down a ticket to Court 1 at Wimbledon on men's semi-finals day because I couldn't face the effort it would take to comply with their security rules and screening. I grudgingly accept airport security as the trade-off for getting somewhere, but to go through the same thing for a supposedly fun day out?

It's relatively easy to see how the compliance budget concept could be worked out in practice in a controlled environment like a company. It's very difficult to see how it can be worked out for the public at large, not least because none of the many companies each of us deals with sees it as beneficial to cooperate with the others. You can't, for example, say to your online broker that you just can't cope with making another support phone call, can't they find some other way to unlock your account? Or tell Facebook that 61 privacy settings is too many because you're a member of six other social networks and Life is Too Short to spend a whole day configuring them all.

Bruce Schneier recently highlighted that last-referenced paper, from Joseph Bonneau and Soeren Preibusch at Cambridge's computer lab, alongside another by Leslie John, Alessandro Acquisti, and George Loewenstein from Carnegie-Mellon, to note a counterintuitive discovery: the more explicit you make privacy concerns, the less people will tell you. "Privacy salience" (as Schneier calls it) makes people more cautious.

In a way, this is a good thing and goes to show what privacy advocates have been saying all along: people do care about privacy if you give them the chance. But if you're the owner of Facebook, a frequent flyer program, or Google, it means that it is not in your business interest to spell out too clearly to users what they should be concerned about. All of these businesses rely on collecting more and more data about more and more people. Fortunately for them, as we know from research conducted by Lorrie Cranor (also at Carnegie-Mellon), people hate reading privacy policies. I don't think this is because people aren't interested in their privacy. I think this goes back to what Sasse was saying: it's security fatigue. For most people, security and privacy concerns are just barriers blocking the thing they came to do.

But choice is a good thing, right? Doesn't everyone want control? Not always. Go back a few years and you may remember some widely publicized research that pointed out that too many choices stall decision-making and make people feel...tired. A multiplicity of choices adds weight and complexity to the decision you're making: shouldn't you investigate all the choices, particularly if you're talking about which of 56 mutual funds to add to your 401(k)?

It seems obvious, therefore, that the more complex the privacy controls offered by social networks and other services the less likely people are to use them: too many choices, too little time, too much security fatigue. In minor cases in real life, we handle this by making a decision once and sticking to it as a kind of rule until we're forced to change: which brand of toothpaste, what time to leave for work, never buy any piece of clothing that doesn't have pockets. In areas where rules don't work, the best strategy is usually to constrain the choices until what you have left is a reasonable number to investigate and work with. Ecommerce sites notoriously get this backwards: they force you to explore group by group instead of allowing you to exclude choices you'll never use.

How do we implement security and privacy so that they're usable? This is one of the great unsolved, under-researched questions in security. I'm hoping to know more next week.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

July 10, 2009

The public interest

It's not new for journalists to behave badly. Go back to 1930s plays-turned-movies like The Front Page (1931) or Mr Smith Goes to Washington (1939), and you'll find behavior (thankfully, fictional) as bad as this week's Guardian story that the News of the World paid out £1 million to settle legal cases that would have revealed that its staff journalists were in the habit of hiring private investigators to hack into people's phone records and voice mailboxes.

The story's roots go back to 2006, when the paper's Royal editor, Clive Goodman, was jailed for illegally intercepting phone calls. The paper's then editor, Andy Coulson, resigned and the Press Complaints Commission concluded the paper's executives did not know what Goodman was doing. Five months later, Coulson became the chief of communications for the Tory party.

There are so many cultural failures here that you almost don't know where to start counting. The first and most obvious is the failure of a newsroom to obey the dictates of common sense, decency, and the law. That particular failure is the one garnering the most criticism, and yet it seems to me the least surprising, especially for one of Britain's most notorious tabloids. Journalists have competed for stories big enough to sell papers since the newspaper business was founded; the biggest rewards generally go to the ones who expose the stories their subjects least wanted exposed. It's pretty sad if any newspaper's journalists think the public interest argument is as strong for listening to Gwyneth Paltrow's voice mail as it was for exposing MPs' expenses, but that leads to the second failure: celebrity culture.

This one is more general: none of this would happen if people didn't flock to buy stories about intimate celebrity details. And newspapers are desperate for sales.

The third failure is specific to politicians: under the rubric of "giving people a second chance" Tory leader David Cameron continues to defend Coulson, who continues to claim he didn't know what was going on. Either Coulson did know, in which case he was condoning it, or he didn't, in which case he had only the shakiest grasp of his newsroom. The latter is the same kind of failure that at other papers and magazines has bred journalistic fraud: surely any editor now ought to be paying attention to sourcing. Either way, Coulson does not come off well and neither does Cameron. It would be more tolerable if Cameron would simply say outright that he doesn't care whether Coulson is honorable or not because he's effective at the job Cameron is paying him for.

The fourth failure is of course the police, the Press Complaints Commission, and the Information Commissioner, all of whom seem to have given up rather easily in 2007.

The final failure is also general: the problem that more and more intimate information about each of us is held in databases whose owners may have incentives (legal, regulatory, commercial) for keeping them secured but which are of necessity accessible by minions whose risks and rewards are different. The weakest link in security is always the human factor, and the problem of insiders who can be bribed or conned into giving up confidential information they shouldn't is as old as the hills, whether it's a telephone company employee, a hotel chambermaid, or a former Royal nanny. Seemingly we have learned little or nothing since Kevin Mitnick pioneered the term "social engineering" some 20 years ago or since Squidgygate, when various Royals' private phone conversations were published. At least some ire should be directed at the phone companies involved, whose staff apparently find it easy to refuse to help legitimate account holders by citing the Data Protection Act but difficult to resist illegitimate blandishments.

This problem is exacerbated by what University College London security researcher Angela Sasse calls "security fatigue". Gaining access to targets' voice mail was probably easier than you might think if you figure that many people never change the default PIN on their phones. Either your private investigator turned phone hacker tries the default PIN or, as Sophos senior fellow Graham Cluley suggests, convinces the phone company to reset the PIN to the default. Yes, it's stupid not to change the default password on your phone. But with so many passwords and PINs to manage and only so much tolerance for dealing with security, it's an easy oversight. Sasse's paper (PDF) fleshing out this idea proposes that companies should think in terms of a "compliance budget" for employees. But this will be difficult to apply to consumers, since no one company we interact with will know the size of the compliance burden each of us is carrying.

Get the Press Complaints Commission to do its job properly by all means. And stop defending the guy who was in charge of the newsroom while all this snooping was going on. Change a culture that thinks that "the public interest" somehow expands to include illegal snooping just because someone is famous.

But bear in mind that, as Privacy International has warned all along, this kind of thing is going to become endemic as Britain's surveillance state continues to develop. The more our personal information is concentrated into large targets guarded by low-paid staff, the more openings there will be for those trying to perpetrate identity fraud or blackmail, snoop on commercial competitors, sell stories about celebrities and politicians, and pry into the lives of political activists.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or email netwars@skeptic.demon.co.uk.

May 23, 2009

InPhormed consent

This week's announcement that the UK is to begin hooking up its network of CCTV cameras to automatic number plate recognition software is a perfect example of a lot of things. Function creep, which privacy advocates always talk about: CCTV was sold to the public on the basis that it would make local streets safer; ANPR was sold to the public on the basis that it would decrease London's traffic congestion. You can question either or both of those propositions, but nowhere in them was the suggestion that marrying the two technologies together would give the police a network enabling them to track people's movements around the country. In fact, as I understand it, there will probably be two such networks, one for police and the other for enabling road pricing.

It's also a perfect example of why with today's developing technology it's nearly impossible for people to give informed consent. Do I want to post personal photographs where only my friends and family can see them? Sure. Do I want those photos to persist online even after I think I've deleted them and be viewable by outsiders via content delivery networks and other caches? No, or not necessarily.

And it's a perfect example of why opt-in is an important principle. Will I trade access to slightly better treatment and the occasional free ticket for my travel data (in the form of frequent flyer programs)? Apparently so. Does that mean that every casual flyer should perforce be signed up with a frequent flyer number and told to opt out if they don't want their data sold for marketing purposes? Obviously not.

Developing technologies are an area where even experts have trouble predicting the outcome. Most people will not or cannot find the time to try to understand the implications, even if that information were available. How is anyone supposed to give intelligent and informed consent? Making a system opt-in means that the trade-offs are made only by those who have taken at least some trouble to understand them. With CCTV and ANPR, most of us have little choice: we may vote for or against politicians based on their policies, but we don't have a fine-grained way of voting for this policy and against that one.

Even if we did, however, we'd still have the problem that technology is developing faster than anyone can say "small-scale pilot". This is why it's difficult for anyone to give intelligent and informed consent when a new idea like Phorm comes along to argue that their service is so wonderful and compelling that everyone should be automatically joined to it and those few who are too short-sighted to see the benefits should opt out.

When Phorm first came along and everyone got very hysterical very fast, I took a more cautious, hang-on-let's-see-what-this-is-about view that was criticized by some expert friends and called "a breath of sanity" by one of the Phorm folks I met. Richard Clayton did a careful technical analysis (PDF). Then it emerged that BT had been conducting trials of Phorm's packet inspection technology without getting the consent of its customers. (What do we pay for, eh?) This was clearly arrogant and wrong, a stand with which the EU concurs in the form of a lawsuit, despite the Home Office's expressed belief last year that Phorm operates within UK law.

For a lot of us, if we don't quite understand the technology and can't guess its implications, we play the man instead of the ball. Who are the people who want us to use this stuff? And do they behave honourably? The BT trial is a clear "no" answer to the last. As for the former, that's where the Stop Phoul Play Web site is so helpful in characterizing its opponents as privacy pirates. I am not listed, but I note that many of those who are serve with me on the Open Rights Group advisory council and/or on that of the Foundation for Information Policy Research, an organization whose aims I also support. But the whole Stop Phoul Play Web site is written in precisely the tone of the fake news pieces that appear in C. S. Lewis's novel That Hideous Strength, deliberately written as outright lies and propaganda by a weak character under the influence of the novel's forces of evil.

If Phorm had sat down to calculate carefully what its best strategy would be for alienating as many people as possible, it would have created exactly this Web site. I might disagree with but respect an organization that set out its claims and reasoning for public debate. An organization that claims it's being smeared while smearing its opponents (calling The Register a "media mouthpiece" is particularly hilarious) is either stupid or dishonest, and in neither case can we trust its claims about what its technology does and does not do.

Though we can wonder: did the Home Office support Phorm's proposals because they thought that having a third party build a deep packet inspection system might be something they could use later at low cost? I'm not normally paranoid, but...


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at the other blog, follow on Twitter, or send email to netwars@skeptic.demon.co.uk (but please turn off HTML).

April 11, 2009

Statebook of the art

The bad thing about the Open Rights Group's new site, Statebook, is that it looks so perfectly simple to use that the government may decide it's actually a good idea to implement something very like it. And, unfortunately, that same simplicity may also create the illusion in the minds of the untutored who still populate the ranks of civil servants and politicians that the technology works and is perfectly accurate.

For those who shun social networks and all who sail in her: Statebook's interface is an almost identical copy of that of Facebook. True, on Facebook the applications you click on to add are much more clearly pointless wastes of time, like making lists of movies you've liked to share with your friends or playing Lexulous (the reinvention of the game formerly known as Scrabulous until Hasbro got all huffy and had it shut down).

Politicians need to resist the temptation to believe it's as easy as it looks. The interfaces of both the fictional Statebook and the real Facebook look deceptively simple. In fact, although friends tell me how much they like the convenience of being able to share photos with their friends in a single location, and others tell me how much they prefer Facebook's private messaging to email, Facebook is unwieldy and clunky to use, requiring a lot of wait time for pages to load even over a fast broadband connection. Even if it weren't, though, one of the difficulties with systems attempting to put EZ-2-ewes front ends on large and complicated databases is that they deceive users into thinking the underlying tasks are also simple.

A good example would be airline reservations systems. The fact is that underneath the simple searching offered by Expedia or Travelocity lies some extremely complex software; it prices every itinerary rather precisely depending on a host of variables. These include not just the obvious things like the class of cabin, but also the time of day, the day of the week, the time of year, the category of flyer, the routing, how far in advance the ticket is being purchased, and the number of available seats left. Only some of this is made explicit; frequent flyers trying to maximize their miles per dollar despair while trying to dig out arcane details like the class of fare.

In his 1988 book The Design of Everyday Things, Donald Norman wrote about the need to avoid confusing the simplicity or complexity of an interface with the characteristics of the underlying tasks. He also writes about the mental models people create as they attempt to understand the controls that operate a given device. His example is a refrigerator with two compartments and two thermostatic controls. An uninformed user naturally assumes each thermostat controls one compartment, but in his example, one control sets the thermostat and the other directs the proportion of cold air that's sent to each compartment. The user's mental model is wrong and, as a consequence, attempts that user makes to set the temperature will also, most likely, be wrong.

In focusing on the increasing quantity and breadth of data the government is collecting on all of us, we've neglected to think about how this data will be presented to its eventual users. We have warned about the errors that build up in very large databases that are compiled from multiple sources. We have expressed concern about surveillance and about its chilling impact on spontaneous behaviour. And we have pointed out that data is not knowledge; it is very easy to take even accurate data and build a completely false picture of a person's life. Perhaps instead we should be focusing on ensuring that the software used to query these giant databases-in-progress teaches users not to expect too much.

As an everyday example of what I mean, take the automatic line-calling system used in tennis since 2005, Hawkeye. Hawkeye is not perfectly accurate. Its judgements are based on reconstructions that put together the video images and timing data from four or more high-speed video cameras. The system uses the data to calculate the three-dimensional flight of the ball; it incorporates its knowledge of the laws of physics, its model of the tennis court, and its database of the rules of the game in order to judge whether the ball is in or out. Its official margin for error is 3.6mm.

A study by two researchers at Cardiff University disputed that number. But more relevant here, they pointed out that the animated graphics used to show the reconstructed flight of the ball and the circle indicating where it landed on the court surface are misleading because they look to viewers as though they are authoritative. The two researchers, Harry Collins and Robert Evans, proposed that in the interests of public education the graphic should be redesigned to display the margin for error and the level of confidence.

This would be a good approach for database matches, too, especially since the number of false matches and errors will grow with the size of the databases. A real-life Statebook that doesn't reflect the uncertainty factor of each search, each match, and each interpretation next to every hit would indeed be truly dangerous.
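
What might that look like in practice? Here is a minimal sketch in Python of a query front end that shows the uncertainty next to every hit; the records, scores, and error rates are invented for illustration:

    # Show each hit with its confidence and expected false-match
    # rate instead of presenting it as an authoritative fact.
    def describe_match(record, confidence, false_match_rate):
        return (f"{record}: confidence {confidence:.0%}, "
                f"~{false_match_rate:.0%} chance of a false match")

    for hit in [("J. Smith, b. 1962", 0.92, 0.03),
                ("J. Smyth, b. 1963", 0.61, 0.22)]:
        print(describe_match(*hit))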

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 27, 2009

The view

"Am I in it?"

That seems to be the first question people ask about Street View. Most people I know actually want to see themselves caught unawares; the ones who weren't captured are actively disappointed, while the ones who were are excited.

At least as many - mostly people I don't know - are angry and unhappy and feel their privacy has been invaded just by having the cars drive down their street taking photographs. Hundreds have complained and had pictures taken down. The Register called the cars Orwellian spycars and snoopmobiles, and charted their inexorable progress across the UK on a mash-up.

I can, I think, understand the emotions on both sides. Most of the take-down requests are understandable. Of course, there are some that seem ridiculous. Number 10 Downing Street? The Blairs' house? Will they claim copyright in their homes and sue, like Barbra Streisand in 2003?

What I can't understand is the relative size of the fuss over Street View compared to the pervasive general apathy about CCTV. Street View is one collection of images that will gradually age. CCTV is always with us.

Privacy International, which, to be fair, has persistently and publicly criticized CCTV, has filed a formal complaint with the Information Commissioner and asked the ICO to order the service offline while it investigates.

Google, of course, has absolutely no excuse this time. When, two years ago, Street View originally launched in the US, it seemed as though Google had (yet again) failed at privacy - but that it had failed in a very geeky way. You could easily imagine the engineers at Google who started up Street View going, "This is so *cool*! You can see into people's windows!" You can also see them never thinking of applying to each local council for permission and having to wait for a public inquiry and local vote because that would take too long, and we have this idea today!

Google should have learned from the outcry that followed the launch that many people do not react casually to discovering that their images have been captured and put online. The town of North Oaks, Minnesota kicked them out entirely. Two years and scores of complaints weren't enough to teach the company to proceed with a little more humility and caution? Is it so difficult to imagine, when you assign people to drive around the streets taking pictures, that they might capture the strange and the embarrassing?

This isn't like Flickr, where users post millions of images of which the company has no prior knowledge and no control and where there is no organized way to search through them. The Google employees who drive the Street View cars and operate the cameras could, oh, I don't know, actually look at their surroundings while they're doing it. Of course there are plenty of things that look innocent but aren't - the person walking into the newsagent's who's supposed to be at work at a wholly different location, say, or the couple making out on the park bench who are married but to other people. But how hard is it to stop and think that maybe the guy urinating in public - or vomiting, or falling off a bicycle - might prefer not to have that moment immortalized on the Web? This is especially true because the Googlers themselves objected to being photographed.

It's also true that simply blurring car license plates and people's faces isn't enough to erase all chance that they'll be identified. If you wear a lime green coat, own the only 23-year-old Nissan Prairie in London, or routinely play tennis wearing a James Randi Educational Foundation hat you're going to be easily identifiable. (Though it's arguable that if you do those things you probably don't object to standing out from the crowd.)

For all those reasons, Privacy International is right to throw the book at the company (which came bottom of the heap in PI's report on the privacy practices of major Web companies).

And yet. Google's Street View is one very large set of images captured once, and there are all sorts of valid uses for it. You can get a look at the route you're going to navigate through so you don't get lost. You can look at the neighborhoods surrounding the prospective homes you're looking at in the property listings. And there will doubtless be dozens or hundreds of other genuinely useful things you can do with it once we've had time to think. The privacy debate over it, therefore, has similar characteristics to the debate over file-sharing: it, too, is a dual-use technology.

CCTV is not. It has been sold to the public as a crime-prevention technology, and perhaps it seems private because we only see the images when a crime has been committed. CCTV cameras do not - as far as we know - provide anything like the quality or resolution of the Street View photographs. Yet. What Street View really exposes is not the personal moments causing all the fuss but the power we are giving the state by allowing CCTV to spread everywhere.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

February 14, 2009

The Gattaca in Gossip Girl

Spotted: net.wars obsessing over Gossip Girl instead of diligently reading up on the state of the data retention directive's UK implementation.

It's the cell phones. The central conceit of the show and the books that inspired it is this: an unseen, single-person Greek chorus of unknown identity (voiced by Kristen Bell in a sort of cross between her character on Veronica Mars and Christina Ricci's cynical, manipulative trouble-maker in The Opposite of Sex) publishes - to the Web and by blast to subscribers' cell phones - tips and rumors about "the scandalous lives of Manhattan's elite".

The Upper East Siders she (?) reports on are, of course, the private high school teens whose centrally planned destiny is to inherit their parents' wealth, power, social circles, and Ivy League educations. These are teens under acute pressure to perform as expected, and in between obsessing about whether they can get into Yale (played on-screen by Columbia), they blow off steam by throwing insanely expensive parties, drinking, sexing, and scheming. All, of course, in expensive designer clothes and bearing the most character- and product-placement-driven selection of phones ever seen on screen.

Most of the plots are, of course, nonsense. The New Yorker more or less hated it on sight. That was my first reaction, too: I went, not to the school the books' author, Cecily von Ziegesar, did, but to one in the same class 25 years earlier, and then to an Ivy League school. One of my closest high school friends grew up in - and his parents still live at - the building inhabited in the series by teen queen Blair Waldorf. So I can assess the show's unreality firsthand. So can lots of other New Yorkers who are equally obsessed with the show: New York Magazine runs a hysterically funny reality index recap of each episode of "the Greatest Show of Our Time", followed by a recap of the many comments.

But we never had the phones! Pink and flip, slider and black, Blackberries, red, gold, and silver phones! Behind the trashy drama portraying the ultra rich as self-important, stressed-out, miserable, self-absorbed, and mean is a fictional exploration of what life is like under constant surveillance by your peers.

Over the year and a half of the show's run - SPOILER ALERT - all sorts of private secrets have been outed on Gossip Girl via importunate camera phone and text message. Serena is spotted buying a pregnancy test (causing panic in at least two households); four characters are revealed at a party full of agog subscribers to be linked by a half-sibling they didn't know they had until the blast went out; and of course everyone is photographed kissing (or worse) the wrong person at some point. Exposure via Gossip Girl is also handy for blackmail (Blair), pre-emption (Chuck), lovesick yearning (Dan), and outing his sister's gay boyfriend (Dan).

"If you're sending tips to Gossip Girl, you're in the game with the rest of us," Jenny tells Dan, who had assumed his own moral superiority.

A lot of privacy advocates express concern that today's "digital natives" don't care about privacy, or at least don't understand the potential consequences to their future job and education prospects of the decisions they make when they post the intimate details of their lives online. In fact, when this generation grows up they'll all be in the same boat, exposure-wise. Both in reality and in this fiction, the case is as it's usually been: teens don't fear each other; they collude as allies to exclude their parents. That trope, too, is perfectly played on the show when Blair (again!) gets rid of a sociopathic interloper by going over the garden wall and calling her parents. This is not the world of David Brin's The Transparent Society, after all; the teens surveil each other but catch adults only by accident, though they take full advantage when they do.

"Gossip Girl...is how we communicate," Blair says, trying to make one of her many vendettas seem normal.

Privacy advocates also often stress that surveillance chills spontaneous behavior. Not here, or at least not yet. Instead, the characters manipulate and expose, then anguish when it happens to them. A few become inured.

Says Serena, trying to comfort Rachel Carr, the first teacher to be so exposed: "I've been on Gossip Girl plenty of times and for the worst things...eventually everyone forgets. The best thing to do with these things is nothing at all."

Phones and Gossip Girl are not the only mechanisms by which the show's characters spy on and out each other. They use all the more traditional media, too - in-person interaction, mistaken identity (a masked ball!), rifling through each other's belongings, stolen phones, eavesdropping, accident, and, of course, the gossip pages of the New York press.

"It's anonymous, so no one really knows," Serena says, when asked who is behind the site. But she and all the others do know: the tips come from each other and from the nameless other students they ignore in the background. Gossip Girl merely forwards them, with commentary in her own style:

You know you love me.

XOXO,
Net.wars

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

January 16, 2009

Health watch

We'll have to wait some months to find out what Steve Jobs' health situation really is, just as investors will have to wait to find out how well Apple is prepared to handle his absence. But that doesn't stop rampant speculation about both things, or discussion about whether Jobs owes it to the public to disclose his health problems.

As an individual, of course not. We write - probably too often for some people's tastes - about privacy with respect to health matters. But Jobs isn't just a private individual, and he isn't an average CEO. Like Warren Buffett, whose company's share price declined noticeably some years back during a scare over his health, Jobs accounts for a noticeable percentage of Apple's share price simply by being CEO. That means that shareholders - and therefore by extension the Securities and Exchange Commission - have some legitimate public interest in his state of health.

That doesn't mean that all the speculation going on is a good thing. If Jobs is smart, he doesn't read news stories about himself; in normal times no one needs their sense of self-importance inflated that much, and in a health crisis the last thing you need is to read dozens of people speculating that you're on the way out. The pruriently curious may like to know that there is some speculation that the weight loss is the result of the Whipple procedure Jobs reportedly had in 2004 to treat his islet cell neuroendocrine tumor (a less aggressive type of pancreatic cancer); or that it's a thyroid disorder. No one wants to just write a post that says simply, "I don't know."

It would not matter if Jobs and Apple did not so conspicuously embrace the cult of personality. The downside of having a celebrity CEO is that when that CEO is put out of action the company struggles to keep its market credibility. The more the CEO takes credit - and Jobs is indelibly associated with each of Apple's current products - the less confidence people have that the company can run without him.

To a large extent, it's absurd. No one - not even Jobs - can run a tech company the size of Apple by himself. Jobs may insist on signing off on every design detail, but let's face it, he's not the one working evenings and weekends to write the software code and run bug testing and run a final polishing cloth over the shinies before they hit the stores. Apple definitely lost its way during the period he wasn't at the helm - that much is history. But Jobs helped recruit John Sculley, the CEO who ran Apple during those lost years. And Jobs's next company, NeXT, was a glossy, well-designed, technically sophisticated market failure whose biggest success came when Apple bought it (and Jobs) and incorporated some of the company's technology into its products. Jobs had far more success with Pixar, now part of Disney; but accounts of the company's early history suggest it was the company's founders who did the heavy lifting.

Unfortunately, if you're a public company you don't get to create public confidence by pointing out the obvious: that even with Jobs out of action there's a lot of company left for the managers he picked to run in the directions he's chosen. Apple, whose relations with the press seem to be a dictionary definition of "arrogant", has apparently never cared to create a public image for itself that suggests it's a strong company with or without Jobs.

Compare and contrast to Buffett, who has been a rock star CEO for far longer than Jobs has. Buffett is 78, and Berkshire Hathaway's success is universally associated almost solely with him; yet every year he reminds shareholders that he has three or four candidates to succeed him who are chosen and primed and known to his board of directors. His annual shareholder letters, too, are filled with praise for the managers and directors of the many subsidiaries Berkshire owns. Based on all that, it is clear that Buffett has an eye to ensuring that his company will retain its value and culture with or without him. That so many Berkshire Hathaway millionaires are his personal friends and neighbors, who staked money in the company decades ago at some personal risk, may have something to do with it.

Apple has not done anything like the same, which may have something to do with the personality of its CEO. Jobs's health troubles of 2004 should have been a wakeup call; if Buffett can understand that his age is a concern for shareholders, why can't Jobs understand that his health is, too? If he doesn't want people prying into his medical condition, that's understandable. But then the answer is to loosen his public identification with the company. As long as the perception is that Jobs is Apple and Apple is Jobs, the company's fortunes and share price will be inextricably linked to the fragility of his aging human body. Show that the company has a plan for succession, give its managers and product developers public credit, and identify others with its most visible products, and Jobs can go back to having some semblance of a private medical record.


January 2, 2009

No rest for 2009

It's been a quiet week, as you'd expect. But 2009 is likely to be a big year in terms of digital rights.

Both the US and the UK are looking to track non-citizens more closely. The UK has begun issuing foreigners with biometric ID cards. The US, which began collecting fingerprints from visiting tourists two years ago, says it wants to do the same with green card holders. In other words, you can live in the US for decades, you can pay taxes, you can contribute to the US economy - but you're still not really one of us when you come home.

The ACLU's Barry Steinhardt has pointed out, however, that the original US-VISIT system actually isn't finished: there's supposed to be an exit portion that has yet to be built. The biometric system is therefore like a Roach Motel: people check in but they never leave.

That segues perfectly into the expansion of No2ID's "database state". The UK is proceeding with its plan for a giant shed to store all UK telecommunications traffic data. Building the data shed is a lot like saying we're having trouble finding a few needles in a bunch of haystacks, so the answer is to build a lot bigger haystack.

Children in the UK can also look forward to ContactPoint (budget £22.4 million) going live at the end of January, only the first of several such systems. The Conservatives have apparently pledged to scrap ContactPoint in favor of a less expensive system that would track only children deemed to be at risk. If the Conservatives don't get their chance to scrap it - probably even if they do - the current generation may be the last that doesn't grow up taking for granted that their every move is being tracked. Get 'em young, as the Catholic church used to say, and they're yours for life.

The other half of that is, of course, the National Identity Register. Little has been heard of the ID card in recent months, although the Home Office says 1,000 people have actually requested one. Since the cards have begun rolling out to foreigners, it's probably best to keep an eye on them.

On January 19, look for the EU to vote on copyright term extension in sound recordings. They have now: 50 years. They want: 95 years. The problem: all the independent reviewers agree it's a bad idea economically. Why does this proposal keep dogging us? Especially given that even the UK government accepts that recording contracts mean that little of the royalties will go to the musicians the law is supposedly trying to help, why is the European Parliament even considering it? Write your MEP. Meanwhile, the economic downturn reaches Cliff Richard; his earliest recordings began entering the public domain...oh, look - yesterday, January 1, 2009.

Those interested in defending file-sharing technology, the public domain, or any other public interest in intellectual property will find themselves on the receiving end of a pack of new laws and initiatives out to get them.

The RIAA recently announced it would cease suing its customers in the US. It plans to "work with ISPs". Anyone who's been around the UK and France in recent months should smell the three-strikes policy that the Open Rights Group has been fighting against. ORG's going to find it a tougher battle now that the government is considering a stick-and-carrot approach: make ISPs liable for their users' copyright infringement, but give them a slice of the action for legal downloads. One has to hope that even the most cash-strapped ISPs have more sense.

Last year's scare over the US's bald statement that customs authorities have the right to search and impound computers and other electronic equipment carried by travellers across the national borders will probably be followed up with lengthy protest over new rules known as the Anti-Counterfeiting Trade Agreement, being negotiated by the US, EU, Japan, and other countries. We don't know as much as we'd like about what the proposals actually are, though some information escaped last June. Negotiations are expected to continue in 2009.

The EU has said that it has no plans to search individual travellers, which is a relief; in fact, in most cases it would be impossible for a border guard to tell whether files on a computer were copyright violations. Nonetheless, it seems likely that this and other laws will make criminals of most of us; almost everyone who owns an MP3 player has music on it that technically infringes the copyright laws (particularly in the UK, where there is as yet no exemption for personal copying).

Meanwhile, Australia's new $44 million "great firewall" is going ahead despite known flaws in the technology. Nearer home, British Culture Secretary Andy Burnham would like to rate the Web, lest it frighten the children.

It's going to be a long year. But on the bright side, if you want to make some suggestions for the incoming Obama administration, head over to Change.org and add your voice to those assembling under "technology policy".

Happy new year!

December 5, 2008

Saving seeds

The 17 judges of the European Court of Human Rights ruled unanimously yesterday that the UK's DNA database, which contains more than 3 million DNA samples, violates Article 8 of the European Convention on Human Rights. The key factor: retaining, indefinitely, the DNA samples of people who have committed no crime.

It's not a complete win for objectors to the database, since the ruling doesn't say the database shouldn't exist, merely that DNA samples should be removed once their owners have been acquitted in court or the charges have been dropped. England, the court said, should copy Scotland, which operates such a policy.

The UK comes in for particular censure, in the form of the note that "any State claiming a pioneer role in the development of new technologies bears special responsibility for striking the right balance..." In other words, before you decide to be the first on your block to use a new technology and show the rest of the world how it's done, you should think about the consequences.

Because it's true: this is the kind of technology that makes surveillance and control-happy governments the envy of other governments. For example: lacking clues to lead them to a serial killer, the Los Angeles Police Department wants to copy Britain and use California's DNA database to search for genetic profiles similar enough to belong to a close relative. The French DNA database, FNAEG, was proposed in 1996, created in 1998 for sex offenders, implemented in 2001, and broadened to other criminal offenses after 9/11 and again in 2003: a perfect example of function creep. But the French DNA database is a fiftieth the size of the UK's, and Austria's, the next on the list, is even smaller.

There are some wonderful statistics about the UK database. DNA samples from more than 4 million people are included on it. Probably 850,000 of them are innocent of any crime. Some 40,000 are children between the ages of 10 and 17. The government (according to the Telegraph) has spent £182 million on it between April 1995 and March 2004. And there have been suggestions that it's too small. When privacy and human rights campaigners pointed out that people of color are disproportionately represented in the database, one of England's most experienced appeals court judges, Lord Justice Sedley, argued that every UK resident and visitor should be included on it. Yes, that's definitely the way to bring the tourists in: demand a DNA sample. Just look how they're flocking to the US to give fingerprints, and how many more flooded in when they upped the number to ten earlier this year. (And how little we're getting for it: in the first two years of the program, fingerprinting 44 million visitors netted 1,000 people with criminal or immigration violations.)

At last week's A Fine Balance conference on privacy-enhancing technologies, there was a lot of discussion of the key technique of data minimization. That is the principle that you should not collect or share more data than is actually needed to do the job. Someone checking whether you have the right to drive, for example, doesn't need to know who you are or where you live; someone checking you have the right to borrow books from the local library needs to know where you live and who you are but not your age or your health records; someone checking you're the right age to enter a bar doesn't need to care if your driver's license has expired.
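
To make the principle concrete, here's a minimal sketch in Python (the record and its values are invented; real privacy-enhancing systems replace this trusted lookup with cryptographic credentials) of the difference between handing over a whole record and answering only the question being asked:

```python
from datetime import date

# A full identity record - what a naive checker demands wholesale.
# All values here are hypothetical.
record = {
    "name": "A. Sample",
    "address": "1 High Street",
    "date_of_birth": date(1985, 6, 1),
    "licence_expiry": date(2012, 3, 31),
}

def over_18(record, today=None):
    """Answer only the question the bar needs: is this person 18 or over?

    The verifier gets back a bare yes/no - never the name, address,
    or date of birth itself.
    """
    today = today or date.today()
    dob = record["date_of_birth"]
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= 18

print(over_18(record))  # True - one bit disclosed, nothing else
```

In a deployed system the yes/no would come with a cryptographic proof, so the verifier need not trust whoever holds the record; the data-minimization principle is the same.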

This is an idea that's been around a long time - I think I heard my first presentation on it in about 1994 - but whose progress towards a usable product has been agonizingly slow. IBM's PRIME project, which Jan Camenisch presented, and Microsoft's purchase of Credentica (which wasn't shown at the conference) suggest that the mainstream technology products may finally be getting there. If only we can convince politicians that these principles are a necessary adjunct to storing all the data they're collecting.

What makes the DNA database more than just a high-tech fingerprint database is that over time the DNA stored in it will become increasingly revealing of intimate secrets. As Ray Kurzweil kept saying at the Singularity Summit, Moore's Law is hitting DNA sequencing right now; the cost is accordingly plummeting by factors of ten. When the database was set up, it was fair to characterize DNA as a high-tech version of fingerprints or iris scans. Five - or 15, or 25, we can't be sure - years from now, we will have learned far more about interpreting genetic sequences. The coded, unreadable messages we're storing now will be cleartext one day, and anyone allowed to consult the database will be privy to far more intimate information about our bodies, ourselves than we think we're giving them now.

Unfortunately, the people in charge of these things typically think it's not going to affect them. If the "little people" have no privacy, well, so what? It's only when the powers they've granted are turned on them that they begin to get it. If a conservative is a liberal who's been mugged, and a liberal is a conservative whose daughter has needed an abortion, and a civil liberties advocate is a politician who's been arrested...maybe we need to arrest more of them.

October 24, 2008

Living by numbers

"I call it tracking," said a young woman. She had healthy classic-length hair, a startling sheaf of varyingly painful medical problems, and an eager, frequent smile. She spends some minutes every day noting down as many as 40 different bits of information about herself: temperature, hormone levels, moods, the state of the various medical problems, the foods she eats, the amount and quality of sleep she gets. Every so often, she studies the data looking for unsuspected patterns that might help her defeat a problem. By this means, she says she's greatly reduced the frequency of two of them and was working on a third. Her doctors aren't terribly interested, but the data helps her decide which of their recommendations are worth following.

And she runs little experiments on herself. Change a bunch of variables, track for a month, review the results. If something's changed, go back and look at each variable individually to find the one that's making the difference. And so on.

Of course, everyone with the kind of medical problem that medicine can't really solve - diabetes, infertility, allergies, cramps, migraines, fatigue - has done something like this for generations. Diabetics in particular have long had to track and control their blood sugar levels. What's different is the intensity - and the computers. She currently tracks everything in an Excel spreadsheet, but what she's longing for is good tools to help her with data analysis.
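
As a hint of the kind of tooling she's longing for, here's a minimal sketch in Python using the pandas library (every variable and value here is invented) of the simplest possible pattern-hunt over a tracking log:

```python
import pandas as pd

# A week of hypothetical tracking data - the kind of log that usually
# lives in a spreadsheet: one row per day, one column per variable.
log = pd.DataFrame({
    "sleep_hours":   [7.5, 5.0, 8.0, 6.0, 7.0, 4.5, 8.5],
    "caffeine_cups": [1, 3, 0, 2, 1, 4, 0],
    "migraine":      [0, 1, 0, 1, 0, 1, 0],  # 1 = migraine that day
})

# The crudest pattern-hunt: which variables move with the symptom?
print(log.corr()["migraine"].sort_values())
```

Even this toy version points at the suspect variables; the hard part is tooling that makes doing it routine, and on far messier data than this.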

From what Gary Wolf, the organizer of this group, Quantified Self, says - about 30 people are here for its second meeting, after hours at Palo Alto's Institute for the Future, to swap notes and techniques on personal tracking - getting out of the Excel spreadsheet is a key stage in every tracker's life. Each stage of improvement thereafter gets much harder.

Is this a trend? Co-founder Kevin Kelly thinks so, and so does the Washington Post, which covered this group's first meeting. You may not think you will ever reach the stage of obsession that would lead you to go to a meeting about it, but in fact, if the interviews I did with new-style health companies in the past year are any guide, we're going to be seeing a lot of this in the health side of things. Home blood pressure monitors, glucose tests, cholesterol tests, hormone tests - these days you can buy these things in Wal-Mart.

The key question is clearly going to be: who owns your health data? Most of the medical devices in development assume that your doctor or medical supplier will be the one doing the monitoring; the dozens of Web sites highlighted in that Washington Post article hope there's a business in helping people self-track everything from menstrual cycles to time management. But the group in Palo Alto is more interested in self-help: in finding and creating tools everyone can use, and in interoperability. One meeting member shows off a set of consumer-oriented prototypes - bathroom scale, pedometer, blood pressure monitor - that send their data to software on your computer to display and, prospectively, to a subscription Web site. But if you're going to look at those things together - charting the impact of how much you walk on your weight and blood pressure - wouldn't you also want to be able to put in the foods you eat? There could hardly be an area where open data formats will be more important.

All of that makes sense. I was less clear on the usefulness of an idea another meeting member has - he's doing a start-up to create it - a tiny, lightweight recording camera that can clip to the outside of a pocket. Of course, this kind of thing already has a grand old man in the form of Steve Mann, who has been recording his life with an increasingly small sheaf of devices for a couple of decades now. He was tired, this guy said, of cameras that are too difficult to use and too big and heavy; they get left at home and rarely used. The camera they're working on will have a wide-angle lens ("I don't know why no one's done this") and take two to five pictures a second. "That would be so great," breathes the guy sitting next to me.

Instantly, I flash on the memory of Steve Mann dogging me with flash photography at Computers, Freedom, and Privacy 2005. What happens when the police subpoena your camera? How long before insurance companies and marketing companies offer discounts as inducements to people to wear cameras and send them the footage unedited so they can study behavior they currently can't reach?

And then he said, "The 10,000 greatest minutes of your life that your grandchildren have to see," and all you can think is, those poor kids.

There is a certain inevitable logic to all this. If retailers, manufacturers, marketers, governments, and security services are all convinced they can learn from data mining us, why shouldn't we be able to gain insights by doing it ourselves?

At the moment, this all seems to be for personal use. But consider the benefits of merging it with Web 2.0 and social networks. At last you'll be able to answer the age-old question: why do we have sex less often than the Joneses?


October 10, 2008

Data mining snake oil

The basic complaints we've been making for years about law enforcement's and government's desire to collect masses of data have primarily focused on the obvious set of civil liberties issues: the chilling effect of surveillance, the right of individuals to private lives, the risk of abuse of power by those in charge of all that data. On top of that we've worried about the security risks inherent in creating such large targets from which data will, inevitably, leak sometimes.

This week, along came the National Research Council to point out a new problem with dataveillance: it doesn't actually work to prevent terrorism. Even if it did work, the tradeoff of the loss of personal liberties against the security allegedly offered by policies that involve tracking everything everyone does from cradle to grave would be hard to justify. But if it doesn't work - if all surveillance all the time won't make us actually safer - then the discussion really ought to be over.

The NAS report, Protecting Individual Privacy in the Struggle Against Terrorists: A Framework for Assessment, makes its conclusions clear: "Modern data collection and analysis techniques have had remarkable success in solving information-related problems in the commercial sector... But such highly automated tools and techniques cannot be easily applied to the much more difficult problem of detecting and preempting a terrorist attack, and success in doing so may not be possible at all."

Actually, the many of us who have had our cards stopped for no better reason than that the issuing bank didn't like the color of the Web site we were buying from might question how successful these tools have been in the commercial sector. At the very least, it has become obvious to everyone how much trouble is being caused by false positives. If a similar approach is taken to all parts of everyone's lives instead of just their financial transactions, think how much more difficult it's going to be to get through life without being arrested several times a year.

The report again: "Even in well-managed programs such tools are likely to return significant rates of false positives, especially if the tools are highly automated." Given the masses of data we're talking about - the UK wants to store all of the nation's communications data for years in a giant shed, and a similar effort in the US would have to be many times as big - the tools will have to be highly automated. And - the report yet again - the difficulty of detecting terrorist activity "through their communications, transactions, and behaviors is hugely complicated by the ubiquity and enormity of electronic databases maintained by both government agencies and private-sector corporations." The bigger the haystack, the harder it is to find the needle.

In a recent interview, David Porter, CEO of Detica, who has spent his entire career thinking about fraud prevention, said much the same thing. Porter's proposed solution - the basis of the systems Detica sells - is to vastly shrink the amount of data to be analyzed by throwing out everything we know is not fraud (or, as his colleague, Tom Black, said at the Homeland and Border Security conference in July, terrorist activity). To catch your hare, first shrink your haystack.
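
As a toy illustration of that shrink-first approach (my own sketch in Python, not Detica's actual system; the payees and threshold are invented), a cheap, high-confidence filter runs before any expensive analysis does:

```python
transactions = [
    {"id": 1, "payee": "ACME Utilities", "amount": 60.0},
    {"id": 2, "payee": "Unknown Ltd", "amount": 9500.0},
    {"id": 3, "payee": "Local Grocer", "amount": 42.5},
]

# Patterns we already know are not fraud: familiar payees, small sums.
KNOWN_PAYEES = {"ACME Utilities", "Local Grocer"}

def obviously_legitimate(tx):
    """Cheap test for the bulk of traffic that can safely be discarded."""
    return tx["payee"] in KNOWN_PAYEES and tx["amount"] < 1000

# Shrink the haystack first; run the costly analysis only on the residue.
suspicious = [tx for tx in transactions if not obviously_legitimate(tx)]
print([tx["id"] for tx in suspicious])  # [2]
```

The point is the ordering: whatever sophisticated analysis follows works on a haystack a fraction of the original size.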

This report, as the title suggests, focuses particularly on balancing personal privacy against the needs of anti-terrorist efforts. (Although any terrorist watching the financial markets the last couple of weeks would be justified in feeling his life's work had been wasted, since we can do all the damage that's needed without his help.) The threat from terrorists is real, the authors say - but so is the threat to privacy. Personal information in databases cannot be fully anonymized; the loss of privacy is real damage; and data varies substantially in quality. "Data derived by linking high-quality data with data of lesser quality will tend to be low-quality data." If you throw a load of silly string into your haystack, you wind up with a big mess that's pretty much useless to everyone and will be a pain in the neck to clean up.

As a result, the report recommends requiring systematic and periodic evaluation of every information-based government program against core values and proposes a framework for carrying that out. There should be "robust, independent oversight". Research and development of such programs should be carried out with synthetic data, not real data "anonymized"; real data should only be used once a program meets the proposed criteria for deployment and even then only phased in at a small number of sites and tested thoroughly. Congress should review privacy laws and consider how best to protect privacy in the context of such programs.

These things seem so obvious; but to get to this point it's taken three years of rigorous documentation and study by a 21-person committee of unimpeachable senior scientists and review by members of a host of top universities, telephone companies, and technology companies. We have to think the report's sponsors, who include the National Science Foundation and the Department of Homeland Security, will take the results seriously. Writing for Cnet, Declan McCullagh notes that the similar 1996 NRC CRISIS report on encryption was followed by decontrol of the export and use of strong cryptography two years later. We can but hope.


July 25, 2008

Who?

A certain amount of government and practical policy is being made these days based on the idea that you can take large amounts of data and anonymize it so researchers and others can analyze it without invading anyone's privacy. Of particular sensitivity is the idea of giving medical researchers access to such anonymized data in the interests of helping along the search for cures and better treatments. It's hard to argue with that as a goal - just like it's hard to argue with the goal of controlling an epidemic - but both those public health interests collide with the principle of medical confidentiality.

The work of Latanya Sweeney was, I think, the first hint that anonymizing data might not be so straightforward; I've written before about her work. This week, at the Privacy Enhancing Technologies Symposium in Leuven, Belgium (which I regrettably missed), researchers Arvind Narayanan and Vitaly Shmatikov from the University of Texas at Austin won an award sponsored by Microsoft for taking the reidentification of supposedly anonymized data a step further.

The pair took a database released by the online DVD rental company Netflix last year as part of the $1 million Netflix Prize, a project to improve upon the accuracy of the system's predictions. You know the kind of thing, since it's built into everything from Amazon to TiVos - you give the system an idea of your likes and dislikes by rating the movies you've rented, and the system makes recommendations for movies you'll like based on those expressed preferences. To enable researchers to work on the problem of improving these recommendations, Netflix released a dataset containing more than 100 million movie ratings contributed by nearly 500,000 subscribers between December 1999 and December 2005 with, as the service stated in its FAQ, all customer identifying information removed.

Maybe in a world where researchers only had one source of information that would be a valid claim. But just as Sweeney showed in 1997 that it takes very little in the way of public records to re-identify a load of medical data supplied to researchers in the state of Massachusetts, Narayanan and Shmatikov's work reminds us that we don't live in a world like that. For one thing, people tend disproportionately to rate their unusual, quirky favorites. Rating movies takes time; why spend it on giving The Lord of the Rings another bump when what people really need is to know about the wonders of King of Hearts, All That Jazz, and The Tall Blond Man with One Black Shoe? The consequence is that the Netflix dataset is what they call "sparse" - that is, few subscribers have very similar records.

So: how much does someone need to know about you to identify a particular user from the database? It turns out, not much. One obvious auxiliary source is the public ratings at the Internet Movie Database, which come with dates and real names. Narayanan and Shmatikov concluded that 99 percent of records could be uniquely identified from only eight matching ratings (of which two could be wrong); for 68 percent of the records you only need two (and reidentifying the rest becomes easier). And of course, if you know a little bit about the particular person whose record you want to identify, things get a lot easier - the three movies I've just listed would probably identify me and a few of my friends.
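
To see roughly how that matching works, here is a deliberately simplified sketch in Python (my own illustration with invented users and ratings; the paper's actual scoring function also weights rare movies more heavily and tolerates near-miss dates and ratings):

```python
# Each subscriber record maps movie title -> (rating, date).
dataset = {
    "user_17": {"King of Hearts": (5, "2004-03-02"),
                "All That Jazz": (4, "2004-05-11"),
                "Titanic": (3, "2000-01-09")},
    "user_42": {"Titanic": (4, "2001-07-23"),
                "Shrek": (5, "2002-02-14")},
}

# Auxiliary information gleaned from, say, public IMDb reviews.
aux = {"King of Hearts": 5, "All That Jazz": 4}

def score(record, aux):
    """Count exact rating matches between a record and the auxiliary data.

    The real algorithm is more forgiving - near-matches count, and
    obscure titles count for more - which is why a handful of quirky
    ratings can single someone out of a sparse dataset.
    """
    return sum(1 for movie, rating in aux.items()
               if movie in record and record[movie][0] == rating)

best_match = max(dataset, key=lambda user: score(dataset[user], aux))
print(best_match)  # user_17 - the quirky favorites give them away
```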

Even if you don't care if your tastes in movies are private - and both US law and the American Library Association's take on library loan records would protect you more than you yourself would - there are a couple of notable things here. First of all, the compromise last week whereby Google agreed to hand Viacom anonymized data on YouTube users isn't as good a deal for users as they might think. A really dedicated searcher might well think it worth the effort to come up with a way to re-identify the data - and so far rightsholders have shown themselves to be very dedicated indeed.

Second of all, the Thomas-Walport review on data-sharing actually recommends requiring NHS patients to agree to sharing data with medical researchers. There is a blithe assumption running through all the government policies in this area that data can be anonymized, and that as long as they say our privacy is protected it will be. It's a perfect example of what someone this week called "policy-based evidence-making".

Third of all, most such policy in this area assumes it's the past that matters. What may be of greater significance, as Narayanan and Shmatikov point out, is the future: forward privacy. Once a virtual identity has been linked to a real-world identity, that linkage is permanent. Yes, you can create a new virtual identity, but any slip that links it to either your previous virtual or your real-world identity blows your cover.

The point is not that we should all rush to hide our movie ratings. The point is that we make optimistic assumptions every day that the information we post and create has little value and won't come back to bite us on the ass. We do not know what connections will be possible in the future.

July 11, 2008

Voters for sale

It must be hard to be the Direct Marketing Association. All individuals in the DMA must know that they themselves hate getting marketing calls during dinner, weeding the real post out from the junk mail, and constantly having to unsubscribe from email lists that they're only on because they had the misfortune to buy something from the sender. Collectively, though, the DMA remains firmly convinced that people really do want advertising; it just has to be targeted right (at which point people no longer call it advertising). It must be very hard for everyone involved to maintain this level of cognitive dissonance.

And it leads them to do things as an organization that probably each individual would oppose if they were working for someone else. Today the DMA is opposing the withdrawal of the edited electoral register, a recommendation appearing in the Data-Sharing Review, published by the Ministry of Justice and written by Information Commissioner Richard Thomas and Dr Mark Walport. There's a lot of interesting stuff to digest; the electoral register issue is one of the simpler bits.

To recap: historically the UK, like the US, treated the electoral rolls as public information. In the UK every household gets sent a canvassing form once a year that comes with a stern warning that you are legally required to register.

Since the 1830s, the British electoral rolls have been available for public inspection and sale; what a godsend for direct marketers as their industry grew up. As of 2001, under Regulation 48 of the Representation of the People (England and Wales) Regulations, electoral registration officers were required to sell a copy of the register at a specified price to anyone who wanted it. Almost immediately there were objections on privacy grounds, most notably a complaint by Pontefract-based Brian Robertson, a retired accountant, against Wakefield City Council because there was no provision for him to prevent the sale of his information for commercial use. He refused to register, took them to court - and won.

The regulations were promptly amended to require councils to maintain two registers: the full public register and an edited version that could be sold to commercial organizations and others and to which voters would be added automatically - but with the right to opt out. The first edited registers appeared in 2002.

And there was a lot of confusion. The canvassing forms that first year didn't make it very clear what the edited register was, and it was easy to make the mistake of thinking that if you opted out you would not be able to vote. Subsequent years saw amended forms that made it clearer just what you were opting out of. And the results really shouldn't surprise anyone: in the latest rolls 40 percent of voters opted out, double the percentage in the first years. Given that, it's not entirely clear why the government needs to withdraw the register. If they just wait a few more years, everyone of any value to marketers will have opted out, and the edited rolls will become useful again as a list of all the people who aren't worth marketing to. Anyone left presumably either didn't understand the form, is so lonely they enjoy the attention, or is so mentally afflicted that someone else filled out the form for them.

The full register is available - at least in theory - only to a select group of people and organizations: political parties for electoral purposes, credit reference agencies to check names and addresses when people apply for credit, and law enforcement. The main purchasers of the edited register, the Thomas-Walport report notes, are direct marketing companies and companies compiling directories.

Thomas and Walport disapprove of its existence on these grounds: "It sends a particularly poor message to the public that personal information collected for something as vital as participation in the democratic process can be sold to 'anyone for any purpose'."

A key data protection principle is that a change of use of personal information requires the consent of the individual. If ever there were a more significant change of use than selling information collected to enable people to vote to third-party companies for general marketing purposes, I don't know what it would be.

The DMA's objection to its withdrawal is that its members won't be able to clean their lists and keep them accurate and up-to-date. And it happily sees the direct mail envelope as more than half full: "Some householders have opted out, but around 60 percent have chosen to remain on the edited register." They don't believe the forms are at all confusing. And the DMA plays the environmental card: targeting reduces the amount of waste paper the industry produces.

One issue neither group tackles is whether the register represents a significant source of income for councils. How much are we willing to pay for privacy? This warrants more research; a quick glance turns up figures from Bath and North East Somerset Council. In 2005-2006, the council netted £1,553 and £380.50 for the sales of the full and edited registers respectively; in 2006-2007 those figures were £1,558.50 and £681. If that's indicative of national trends, we can afford it, especially given the savings on administering the opt-out process.

"The edited register does serve a purpose," the DMA concludes, "and so should not be abolished." A purpose, yes. Just not our purpose.

June 27, 2008

Mistakes were made

This week, with the release of the Poynter Review (PDF), we got the details of what went wrong at Her Majesty's Revenue and Customs to cause the loss last year of those two CDs full of the personal details of 25 million British households. We also got a hint of how and whether the future might be different with the publication yesterday of Data Handling: Procedures in Government (PDF), written by Sir Gus O'Donnell and commissioned by the Prime Minister after the HMRC loss. The most obvious message of both reports: government needs to secure data better.

The nicest thing the Poynter review said was that HMRC has already made changes in response to its criticisms. Otherwise, it was pretty much a surgical demonstration of "institutional deficiencies".

The chief points:


- Security was not HMRC's top priority.

- HMRC in fact had the technical ability to send only the selection of data that NAO actually needed, but the staff involved didn't know it.

- There was no designated single point of contact between HMRC and NAO.

- HMRC used insecure methods for data storage and transfer.

- The decision to send the CDs to the NAO was taken by junior staff without consulting senior managers - which under HMRC's own rules they should have done.

- The reason HMRC's junior staff did not consult managers was that they believed (wrongly) that NAO had absolute authority to access any and all information HMRC had.

- The HMRC staffer who dispatched the discs incorrectly believed the TNT Post service was secure and traceable, as required by HMRC policy. A different TNT service that met those requirements was in fact available.

- HMRC policies regarding information security and the release of data were not communicated sufficiently through the organization and were not sufficiently detailed.

- HMRC failed on accountability, governance, information security...you name it.

The real problem, though, isn't any single one of these things. If junior staff had consulted senior staff, it might not have mattered that they didn't know what the policies were. If HMRC had used proper information security and secure methods for data storage (that is, encryption rather than simple password protection), the junior staff wouldn't have been able to send out readable discs. If they'd understood TNT's services correctly, the discs wouldn't have gotten lost - or at least would have been traceable if they had.

The real problem was the interlocking effect of all these factors. That, as Nassim Nicholas Taleb might say, was the black swan.

For those who haven't read Taleb's The Black Swan: The Impact of the Highly Improbable, the black swan stands for the event that is completely unpredictable - because, like black swans until one was spotted in Australia, no such thing has ever been seen - until it happens. Of course, data loss is pretty much a white swan; we've seen lots of data breaches. The black swan, really, is the perfectly secure system that is still sufficiently open for the people who need to use it.

That challenge is what O'Donnell's report on data handling is about and, as he notes, it's going to get harder rather than easier. He recommends a complete rearrangement of how departments manage information as well as improving the systems within individual departments. He also recommends greater openness about how the government secures data.

"No organisation can guarantee it will never lose data," he writes, "and the Government is no exception." O'Donnell goes on to consider how data should be protected and managed, not whether it should be collected or shared in the first place. That job is being left for yet another report in progress, due soon.

It's heartening to read that some good is coming out of the HMRC data loss: all departments are, according to the O'Donnell report, reviewing their data practices and beginning the process of cultural change. That can only help.

But the underlying problem is outside the scope of these reports, and it's this government's fondness for creating giant databases: the National Identity Register, ContactPoint, the DNA database, and so on. If the government really accepted the principle that it is impossible to guarantee complete data security, what would they do? Logically, they ought to start by cancelling the data behemoths on the understanding that it's a bad idea to base public policy on the idea that you can will a black swan into existence.

It would make more sense to create a design for government use of data that assumes there will be data breaches and attempts to limit the adverse consequences for the individuals whose data is lost. If my privacy is compromised alongside 50 million other people's and I am the victim of identity theft does it help me that the government department that lost the data knows which staff member to blame?

As Agatha Christie said long ago in one of her 80-plus books, "I know to err is human, but human error is nothing compared to what a computer can do if it tries." The man-machine combination is even worse. We should stop trying to breed black swans and instead devise systems that don't create so many white ones.

May 30, 2008

Ten

It's easy to found an organization; it's hard to keep one alive even for as long as ten years. This week, the Foundation for Information Policy Research celebrated its tenth birthday. Ten years is a long time in Internet terms, and even longer when you're trying to get government to pay attention to expertise in a subject as difficult as technology policy.

My notes from the launch contain this quote from FIPR's first director, Caspar Bowden, which shows you just how difficult FIPR's role was going to be: "An educational charity has a responsibility to speak the truth, whether it's pleasant or unpleasant." FIPR was intended to avoid the narrow product focus of corporate laboratory research and retain the traditional freedoms of an academic lab.

My notes also show the following list of topics FIPR intended to research: the regulation of electronic commerce; consumer protection; data protection and privacy; copyright; law enforcement; evidence and archiving; electronic interaction between government, businesses, and individuals; the risks of computer and communications systems; and the extent to which information technologies discriminate against the less advantaged in society. Its first concern was intended to be researching the underpinnings of electronic commerce, including the then recent directive launched for public consultation by the European Commission.

In fact, the biggest issue of FIPR's early years was the crypto wars leading up to and culminating in the passage of the Regulation of Investigatory Powers Act (2000). It's safe to say that RIPA would have been a lot worse without the time and energy Bowden spent listening to Parliamentary debates, decoding consultation papers, and explaining what it all meant to journalists, politicians, civil servants, and anyone else who would listen.

Not that RIPA is a fountain of democratic behavior even as things are. In the last couple of weeks we've seen the perfect example of the kind of creeping functionalism that FIPR and Privacy International warned about at the time: the Poole council using the access rules in RIPA to spy on families to determine whether or not they really lived in the right catchment area for the schools their children attend.

That use of the RIPA rules, Bowden said at FIPR's half-day anniversary conference last Wednesday, sets a precedent for accessing traffic data for much lower-level purposes than the government originally claimed it was collecting the data for. He went on to call the recent suggestion that the government may be considering a giant database, updated in real time, of the nation's communications data "a truly Orwellian nightmare of data mining, all in one place."

Ross Anderson, FIPR's founding and current chair and a well-known security engineer at Cambridge, noted that the same risks apply to the NHS database. A clinic that owns its own data will tell police asking for the names of all its patients under 16 to go away. "If," said Anderson, "it had all been in the NHS database and they'd gone in to see the manager of BT, would he have been told to go and jump in the river? The mistake engineers make too much is to think only technology matters."

That point was part of a larger one that Anderson made: that hopes that the giant databases under construction will collapse under their own weight are forlorn. Think of developing Hulk-Hogan databases and the algorithms for mining them as an arms race, just like spam and anti-spam. The same principle that holds that today's cryptography, no matter how strong, will eventually be routinely crackable means that today's overload of data will eventually, long after we can remember anything we actually said or did ourselves, be manageable.

The most interesting question is: what of the next ten years? Nigel Hickson, now with the Department of Business, Enterprise, and Regulatory Reform, gave some hints. On the European and international agenda, he listed the returning dominance of the large telephone companies on the excuse that they need to invest in fiber. We will be hearing about quality of service and network neutrality. Watch Brussels on spectrum rights. Watch for large debates on the liability of ISPs. Digital signatures, another battle of the late 1990s, are also back on the agenda, with draft EU proposals to mandate them for the public sector and other services. RFID, the "Internet for things" and the ubiquitous Internet will spark a new round of privacy arguments.

Most fundamentally, said Anderson, we need to think about what it means to live in a world that is ever more connected through evolving socio-technological systems. Government can help when markets fail, though governments themselves seem to fail most notoriously with large projects.

FIPR started by getting engineers, later engineers and economists, to talk through problems. "The next growth point may be engineers and psychologists," he said. "We have to progressively involve more and more people from more and more backgrounds and discussions."

Probably few people feel that their single vote in any given election really makes a difference. Groups like FIPR, PI, No2ID, and ARCH remind us that even a small number of people can have a significant effect. Happy birthday.


May 23, 2008

The haystack conundrum

Early this week the news broke that the Home Office wants to create a giant database in which will be stored details of all communications sent in Britain. In other words, instead of data retention, in which ISPs, telephone companies, and other service providers would hang onto communications data for a year or seven in case the Home Office wanted it, everything would stream to a Home Office data center in real time. We'll call it data swallowing.

Those with long memories - who seem few and far between in the national media covering this sort of subject - will remember that in about 1999 or 2000 there was a similar rumor. In the resulting outraged media coverage it was more or less thoroughly denied and nothing had been heard of it since, though privacy advocates continued to suspect that somewhere in the back of a drawer the scheme lurked, dormant, like one of those just-add-water Martians you find in the old Bugs Bunny cartoons. And now here it is again in another leak that the suspicious veteran watcher of Yes, Minister might think was an attempt to test public opinion. The fact that it's been mooted before makes it seem so much more likely that they're actually serious.

This proposal is not only expensive, complicated, slow, and controversial/courageous (Yes, Minister's Fab Four deterrents), but risk-laden, badly conceived, disproportionate, and foolish. Such a database will not catch terrorists, because given the volume of data involved trying to use it to spot any one would-be evil-doer will be the rough equivalent of searching for an iron filing in a haystack the size of a planet. It will, however, make it possible for anyone trawling the database to make any given individual's life thoroughly miserable. That's so disproportionate it's a divide-by-zero error.

The risks ought to be obvious: this is a government that can't keep track of the personal details of 25 million households, which fit on a couple of CDs. Devise all the rules and processes you want, the bigger the database the harder it will be to secure. Besides personal information, the giant communications database would include businesses' communication information, much of it likely to be commercially sensitive. It's pretty good going to come up with a proposal that equally offends civil liberties activists and businesses.

In a short summary of the proposed legislation, we find this justification: "Unless the legislation is updated to reflect these changes, the ability of public authorities to carry out their crime prevention and public safety duties and to counter these threats will be undermined."

Sound familiar? It should. It's the exact same justification we heard in the late 1990s for requiring key escrow as part of the nascent Regulation of Investigatory Powers Act. The idea there was that if the use of strong cryptography to protect communications became widespread law enforcement and security services would be unable to read the content of the messages and phone calls they intercepted. This argument was fiercely rejected at the time, and key escrow was eventually dropped in favor of requiring the subjects of investigation to hand over their keys under specified circumstances.

There is much, much less logic to claiming that police can't do their jobs without real-time copies of all communications. Here we have a real analogy: postal mail, which has been with us since 1660. Do we require copies of all letters that pass through the post office to be deposited with the security services? Do we require the Royal Mail's automated sorting equipment to log all address data?

Sanity has never intervened in this government's plans to create more and more tools for surveillance. Take CCTV. Recent studies show that despite the millions of pounds spent on deploying thousands of cameras all over the UK, they don't cut crime, and, more important, the images help solve crime in only 3 percent of cases. But you know the response to this news will not be to remove the cameras or stop adding to their number. No, the thinking will be like the scheme I once heard for selling harmless but ineffective alternative medical treatments, in which the answer to all outcomes is more treatment. (Patient gets better - treatment did it. Patient stays the same - treatment has halted the downward course of the disease. Patient gets worse - treatment came too late.)

This week at Computers, Freedom, and Privacy, I heard about the Electronic Privacy Information Center's work on fusion centers, relatively new US government efforts to mine many commercial and public sources of data. EPIC is trying to establish the role of federal agencies in funding and controlling these centers, but it's hard going.

What do these governments imagine they're going to be able to do with all this data? Is the fantasy that agents will be able to sit in a control room somewhere and survey it all on some kind of giant map on which criminals will pop up in red, ready to be caught? They had data before 9/11 and failed to collate and interpret it.

Iron filing; haystack; lack of a really good magnet.

May 9, 2008

Swings and roundabouts

There was a wonderful cartoon that cycled frequently around computer science departments in the pre-Internet 1970s - I still have my paper copy - that graphically illustrated the process by which IT systems get specified, designed, and built, and showed precisely why and how far they failed the user's inner image of what it was going to be. There is a scan here. The senior analyst wanted to make sure no one could possibly get hurt; the sponsor wanted a pretty design; the programmers, confused by contradictory input, wrote something that didn't work; and the installation was hideously broken.

Translate this into the UK's national ID card. Consumers, Sir James Crosby wrote in March (PDF), want identity assurance. That is, they - or rather, we - want to know that we're dealing with our real bank rather than a fraud. We want to know that the thief rooting through our garbage can't use any details he finds on discarded utility bills to impersonate us, change our address with our bank, clean out our accounts, and take out 23 new credit cards in our name before embarking on a wild spending spree leaving us to foot the bill. And we want to know that if all that ghastliness happens to us we will have an accessible and manageable way to fix it.

We want to swing lazily on the old tire and enjoy the view.

We are the users with the seemingly simple but in reality unobtainable fantasy.

The government, however - the project sponsor - wants the three-tiered design that barely works because of all the additional elements in the design but looks incredibly impressive. ("Be the envy of other major governments," I feel sure the project brochure says.) In the government's view, they are the users and we are the database objects.

Crosby nails this gap when he draws the distinction between ID assurance and ID management:

The expression 'ID management' suggests data sharing and database consolidation, concepts which principally serve the interests of the owner of the database, for example, the Government or the banks. Whereas we think of "ID assurance" as a consumer-led concept, a process that meets an important consumer need without necessarily providing any spin-off benefits to the owner of any database.

This distinction is fundamental. An ID system built primarily to deliver high levels of assurance for consumers and to command their trust has little in common with one inspired mainly by the ambitions of its owner. In the case of the former, consumers will extend use both across the population and in terms of applications such as travel and banking. While almost inevitably the opposite is true for systems principally designed to save costs and to transfer or share data.

As writer and software engineer Ellen Ullman wrote in her book Close to the Machine, databases infect their owners, who may start with good intentions but are ineluctably drawn to surveillance.

So far, the government pushing the ID card seems to believe that it can impose anything it likes and if it means the tree collapses with the user on the swing, well, that's something that can be ironed out later. Crosby, however, points out that for the scheme to achieve any of the government's national security goals it must get mass take-up. "Thus," he writes, "even the achievement of security objectives relies on consumers' active participation."

This week, a similarly damning assessment of the scheme was released by the Independent Scheme Assurance Panel (PDF) (you may find it easier to read this clean translation - scroll down to policywatcher's May 8 posting). The gist: the government is completely incompetent at handling data, and creating massive databases will, as a result, destroy public trust in it and all its systems.

Of course, the government is in a position to compel registration, as it's begun doing with groups who can't argue back, like foreigners, and proposes doing for employees in "sensitive roles or locations, such as airports". But one of the key indicators of how little its scheme has to do with the actual needs and desires of the public is the list of questions it's asking in the current consultation on ID cards, which focus almost entirely on how to get people to love, or at least apply for, the card. To be sure, the consultation document pays lip service to accepting comments on any ID card-related topic, but the consultation is specifically about the "delivery scheme".

This is the kind of consultation where we're really damned if we do and damned if we don't. Submit comments on, for example, how best to "encourage" young people to sign up ("Views are invited particularly from young people on the best way of rolling out identity cards to them") without saying how little you like the government asking how best to market its unloved policy to vulnerable groups, and when the responses are eventually released the government can say there are now no objectors to the scheme. Submit comments to the effect that the whole National Identity scheme is poorly conceived and inappropriate, and anything else you say is likely to be ignored on the grounds that they've heard all that and it's irrelevant to the present consultation. Comments are due by June 30.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

May 2, 2008

Bet and sue

Most net.wars are not new. Today's debates about free speech and censorship, copyright and control, nationality and disappearing borders were all presaged by the same discussions in the 1980s even as the Internet protocols were being invented. The rare exception: online gambling. Certainly, there were debates about whether states should regulate gambling, but a quick Usenet search does not seem to throw up any discussions about the impact the Internet was going to have on this particular pastime. Just sex, drugs, and rock 'n' roll.

The story started in March, when the French Tennis Federation (FFT - Fédération Française de Tennis) filed suit in Belgium against Betfair, Bwin, and Ladbrokes to prevent them from accepting bets on matches played at the upcoming French Open tennis championships, which start on May 25. The FFT's arguments are rather peculiar: that online betting stains the French Open's reputation; that only the FFT has the right to exploit the French Open; that the online betting companies are parasites using the French Open to make money; and that online betting corrupts the sport. Bwin countersued for slander.

On Tuesday of this week, the Liège court ruled comprehensively against the FFT and awarded the betting companies costs.

The FFT will still, of course, control the things it can: fans will be banned from using laptops and mobile phones in the stands. The convergence of wireless telephony, smart phones, and online sites means that in the second or two between the end of a point and the electronic scoreboard updating, there's a tiny window in which people could bet on a sure thing. Why this slightly improbable scenario concerns the FFT isn't clear; that's a problem for the betting companies. What should concern the FFT is ensuring a lack of corruption within the sport. That means the players and their entourages.

The latter issue has been a touchy subject in the tennis world ever since last August, when Russian player Nikolay Davydenko, currently fourth in the world rankings, retired in the third and final set of a match in Poland against 87th ranked Marin Vassallo Arguello, citing a foot injury. Davydenko was accused of match-fixing; the investigation still drags on. In the resulting publicity, several other players admitted being approached to fix matches. As part of subsequent rule-tightening by the Association of Tennis Professionals, the governing body of men's professional tennis, three Italian players were suspended briefly late last year for betting on other players' matches.

Probably the most surprising thing is that tennis, along with soccer and horse racing, is actually among the most popular sports for betting. A minority sport like tennis? Yet according to USA Today, the 2007 Paris Masters event saw $750 million to $1.5 billion in bets. I can only assume that the inverted pyramid of matches every week involving individual players fits well with what bettors like to do.

Fixing matches seems even more unlikely. The best payouts come from correctly picking upsets, the bigger the better. But top players are highly unlikely to throw matches to order. Most of them play a relatively modest number of events (Davydenko is admittedly the exception) and need all the match wins and points from those events to sustain their rankings. Plus, they're just too damn rich.

In 2007, Roger Federer, the ultra-dominant number one player since the end of 2003, earned upwards of $10 million in prize money alone; Davydenko picked up over $2 million (and has already won another $1 million in 2008). All of the top 12 earned over $1 million. Add in endorsements, and even after you subtract agents' fees, tax, and travel costs for self and entourage, you're still looking at wealthy guys. They might tank matches at events where they're being paid appearance fees (which are legal on the men's tour at all but the top 14 events), but proving they've done so is exceptionally difficult. Fixing matches, which could cost them in lost endorsements on top of the tour's own sanctions, surely can't be worth it.

There are several ironies about the FFT's action. First of all (something most of the journalists covering this story don't mention, probably because they don't spend a lot of time watching tennis on TV), Bwin has been an important advertiser sponsoring tennis on Eurosport. It's absolutely typical of the counter-productive and intricately incestuous politics that characterize the tennis world that one part of the sport would sue someone who pays money into another part of the sport.

Second of all, as Betfair and Bwin pointed out, all three of these companies are highly regulated European licensed operations. Ruling them out of action would simply shift online betting to less well regulated offshore companies. They also pointed out the absurdity of the parasites claim: how could they accept bets on an event without using its name? Betfair in particular documented its careful agreements with tennis's many governing bodies.

Third of all, the only reason match-fixing is an issue in the tennis world right now is that Betfair spotted some unusual betting patterns during that Polish Davydenko match, cancelled all the bets, and went public with the news. Without that, Davydenko would have avoided the fight over his family's phone records. Come to think of it, making the issue public probably explains the FFT's behavior: it's revenge.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

April 11, 2008

My IP address, my self

Some years back when I was writing about the data protection directive, Simon Davies, director of Privacy International, predicted a trade war between the US and Europe over privacy laws. It didn't happen, or at least it hasn't happened yet.

The key element to this prediction was the rule in the EU's data protection laws that prohibited sending data on for processing to countries whose legal regimes aren't as protective as those of the EU. Of course, since then we've seen the EU sell out on supplying airline passenger data to the US. Even so, this week the Article 29 Data Protection Working Party made recommendations about how search engines save and process personal data that could drive another wedge between the US and Europe.

The Article 29 group is one of those arcane EU phenomena that you probably don't know much about unless you're a privacy advocate or paid to find out. The short version: it's a sort of think tank of data protection commissioners from all over Europe. The UK's Information Commissioner, Richard Thomas, is a member, as are his equivalents in countries from France to Lithuania.

The Working Party (as it calls itself) advises and recommends policies based on the data protection principles enshrined in the EU Data Protection Directive. It cannot make law, but both its advice to the European Commission and the Commission's action (or lack thereof) are publicly reported. It's arguable that in a country like the UK, where the Information Commissioner operates with few legal teeth to bite with, the existence of such a group may help strengthen the Commissioner's hand.

(Few legal teeth, at least in respect of government activities: the Information Commissioner has issued an opinion about Phorm indicating that the service must be opt-in only. As Phorm and the ISPs involved are private companies, if they persisted with a service that contravened data protection law, the Information Commissioner could issue legal sanctions. But while the Information Commissioner can, for example, rule that for an ISP to retain users' traffic data for seven years is disproportionate, if the government passes a law saying the ISP must do so then within the UK's legal system the Information Commissioner can do nothing about it. Similarly, the Information Commissioner can say, as he has, that he is "concerned" about the extent of the information the government proposes to collect and keep on every British resident, but he can't actually stop the system from being built.)

The group's key recommendation: search engines should not keep personally identifiable search histories for longer than six months, and it specifically includes search engines whose headquarters are based outside the EU. The group does not say which search engines it studied, but it was reported to be studying Google as long ago as last May. The report doesn't look at requirements to keep traffic data under the Data Retention Directive, as it does not apply to search engines.

Google's shortening the life of its cookies and anonymizing its search history logs after 18 months turns out to have a significance I didn't appreciate when, at the time, I dismissed it as insultingly trivial (which it was): it showed the Article 29 working group that the company doesn't really need to keep all that data for so long.

One of the key items the Article 29 group had to decide in writing its report on data protection issues related to search engines (PDF) is this: are IP addresses personal information? It sounds like one of those bits of medieval sophistry, like asking how many angels can dance on the head of a pin. In the dial-up days, it might not have mattered, at least in Britain, where local phone charges forced limited usage, so users were assigned a different IP address every time they logged in. But in the world of broadband, even the supposedly dynamic IP addresses issued by cable suppliers may remain with a single subscriber for years on end. Being able to track your IP address's activities is increasingly like being able to track your library card, your credit card, and your mobile phone all at the same time. Fortunately, the average ISP doesn't have the time to be that interested in most of its users.
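It's worth being concrete about what "anonymizing" a log even means, since the question of whether an IP address is personal data turns on it. Here is a minimal sketch of octet-truncation scrubbing - my own illustration in Python, not a description of Google's actual method - which also shows why privacy advocates tend to find such gestures thin:

    import csv
    import io

    def truncate_ipv4(addr: str) -> str:
        """Zero the last octet: the address now names a block of up to 256
        subscribers rather than one household."""
        octets = addr.split(".")
        octets[-1] = "0"
        return ".".join(octets)

    def scrub_log(raw: str) -> str:
        """Rewrite a simple 'ip,query' log with truncated addresses."""
        out = io.StringIO()
        writer = csv.writer(out)
        for ip, query in csv.reader(io.StringIO(raw)):
            writer.writerow([truncate_ipv4(ip), query])
        return out.getvalue()

    raw_log = "203.0.113.77,flu symptoms\n203.0.113.77,divorce lawyer\n"
    print(scrub_log(raw_log))
    # Both entries still share the prefix 203.0.113.0, so the queries can
    # still be linked to each other - truncation alone is weak anonymization.

Note that the scrubbed entries remain linkable to one another; whether that still counts as "personal" is exactly the kind of question the working party has to answer.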

The fact is that any single piece of information that identifies your activities over a long period and can be mapped to your real-life identity has to be considered personal information or the data protection laws make no sense. The libertarian view, of course, would be that there are other search engines. You do not actually have to use Google, Gmail, or even YouTube. But if all search engines adopted Google's habits the choice would be more apparent than real. Time was when the US was the world's policeman. With respect to data, it seems that the EU has taken on this role. It will be interesting to see whether this decision has any impact on Google's business model and practices. If it does, that trade war could finally be upon us. If not, then Google was building up a vast data store just because it could.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 28, 2008

Leaving Las Vegas

Las Vegas shouldn't exist. Who drops a sprawling display of electric lights with huge fountains and luxury hotels into the best desert scenery on the planet during an energy crisis? Indoors, it's Britain in mid-winter; outdoors you're standing in a giant exhaust fan. The out-of-proportion scale means that everything is four times as far away as you think, including the jackpot you're not going to win at one of its casinos. It's a great place to visit if you enjoy wallowing in self-righteous disapproval.

This all makes it the stuff of song, story, and legend and explains why Jeff Jonas's presentation at etech was packed.

The way Jonas tells it in his blog and at his presentation, he got into the gaming industry by driving through Las Vegas in 1989 idly wondering what was going on behind the scenes at the casinos. A year later he got the tiny beginnings of an answer when he picked up a used couch he'd found in the newspaper classified ads (boy, that dates it, doesn't it?) and found that its former owner played blackjack "for a living". Jonas began consulting to the gaming industry in 1991, helping to open Treasure Island, Bellagio, and Wynn.

"Possibly half the casinos in the world use technology we created," he said at etech.

Gaming revenues are now less than half of total revenues, he said, and despite the apparent financial win they might represent, problem gamblers are in fact bad for business. The goal is for people to have fun. And because of that, he said, a place like the Bellagio is "optimized for consumer experience over interference. They don't want to spend money on surveillance."

Jonas began with a slide listing some common ideas about how Las Vegas works, culled from movies like Ocean's 11 and the TV show Las Vegas. Does the Bellagio have a vault? (No.) Do casinos perform background checks on guests based on public records? (No.) Is there a gaming industry watch list you can put yourself on but not take yourself off? (Yes, for people who know they have a gambling addiction.) Do casinos deliberately hire ex-felons? (Yes, to rehabilitate them.) Do they really send private jets for high rollers? (Cue story.)

There was, he said, a casino high roller who had won some $18 million. A win like that is going to show up in a casino's quarterly earnings. So, yes, they sent a private jet to his town and parked a limo in front of his house for the weekend. If you've got the bug, we're here for you, that kind of thing. He took the bait, and lost $22 million.

Do they help you create cover stories? (Yes.) "What happens in Vegas stays in Vegas" is an important part of ensuring that people can have fun that does not come back to bite them when they go home. The casinos' problem is with identity, not disguises, because they are required by anti-money laundering rules to report it any time someone crosses the $10,000 threshold for cash transactions. So if you play at several different tables, then go upstairs and change disguises, and come back and play some more, they have to be able to track you through all that. ID, therefore, is extremely important. Disguises are welcome; fake ID is not.
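For the technically minded, the arithmetic behind that tracking requirement is easy to sketch. Something like the following toy aggregation - patron IDs and field names invented by me, not drawn from any casino's actual systems - is all the logic requires:

    from collections import defaultdict

    CTR_THRESHOLD = 10_000  # the US anti-money-laundering cash reporting line

    def flag_reportable(transactions):
        """Aggregate cash-in per (patron, gaming day) and flag crossings.

        transactions: iterable of (patron_id, gaming_day, amount) tuples.
        Returns the (patron_id, gaming_day) pairs that must be reported.
        """
        totals = defaultdict(int)
        reportable = set()
        for patron_id, gaming_day, amount in transactions:
            key = (patron_id, gaming_day)
            totals[key] += amount
            if totals[key] > CTR_THRESHOLD:
                reportable.add(key)
        return reportable

    # Three buy-ins at three different tables, same verified patron, same day:
    txns = [("patron-42", "2008-03-28", 4_000),
            ("patron-42", "2008-03-28", 3_500),
            ("patron-42", "2008-03-28", 3_000)]
    print(flag_reportable(txns))  # {('patron-42', '2008-03-28')}

The aggregation only works if patron-42 stays patron-42 through every costume change - hence the obsession with ID rather than disguise.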

Do they use facial recognition to monitor the doors to spot cheaters on arrival? (Well...)

Of course technology-that-is-indistinguishable-from-magic-because-it-actually-is-magic appears on every crime-solving TV show these days. You know, the stuff where Our Heroes start with a fuzzy CCTV image and they punch in on a tiny piece of it and blow it up. And then someone says, "Can you enhance that?" and someone else says, "Oh, yes, we have new software," and a second later a line goes down the picture filling in detail. And a second after that you can read the brand on the face of a wrist watch (Numb3rs) or the manufacturer's coding on a couple of pills (Las Vegas). Or they have a perfect matching system that can take a partial fingerprint lifted off a strand of hair or something and bang! the database can find not only the person's identity but their current home address and phone number (Bones). And who can ever forget the first episode of 24, when Jack Bauer, alarmed at the disappearance of his daughter, tosses his phone number to an underling and barks, "Find me all the Internet passwords associated with this phone number."

And yet...a surprising number of what ought to be the technically best-educated audience on the planet thought facial recognition was in operation to catch cheaters. Folks, it doesn't work in airports, either.

Which brings us to the most interesting thing Jonas said: he now works for IBM (which bought his company) on privacy and civil liberties issues, including work on software to help the US government spot terrorists without invading privacy. It's an interesting concept, partly because security at airports and other locations is now so invasive. But also because if Las Vegas can find a way to deploy surveillance such that only the egregious problems are caught and everyone else just has a good time...why can't governments?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 14, 2008

Uninformed consent

Apparently the US Congress is now being scripted by Jon Stewart of the Daily Show. In a twist of perfect irony, the House of Representatives has decided to hold its first closed session in 25 years to debate - surveillance.

But it's obvious why they want closed doors: they want to talk about the AT&T case. To recap: AT&T is being sued for its complicity in the Bush administration's warrantless surveillance of US citizens after its technician Mark Klein blew the whistle by taking documents to the Electronic Frontier Foundation (which a couple of weeks ago gave him a Pioneer Award for his trouble).

Bush has, of course, resisted any effort to peer into the innards of his surveillance program by claiming it's all a state secret, and that's part of the point of this Congressional move: the Democrats have fielded a bill that would give the whole program some more oversight and, significantly, reject the idea of giving telecommunications companies - that is, AT&T - immunity from prosecution for breaking the law by participating in warrantless wiretapping. 'Snot fair that they should deprive us of the fun of watching the horse-trading. It can't, surely, be that they think we'll be upset by watching them slag each other off. In an election year?

But it's been a week for irony, as Wikipedia founder Jimmy Wales has had his sex life exposed when he dumped his girlfriend and been accused of - let's call it sloppiness - in his expense accounts. Worse, he stands accused of trading favorable page edits for cash. There's always been a strong element of Schadenpedia around, but the edit-for-cash thing really goes to the heart of what Wikipedia is supposed to be about.

I suspect that nonetheless Wikipedia will survive it: if the foundation has the sense it seems to have, it will display zero tolerance. But the incident has raised valid questions about how Wikipedia can possibly sustain itself financially going forward. The site is big and has enviable masses of traffic; but it sells no advertising, choosing instead to live on hand-outs and the work of volunteers. The idea, I suppose, is that accepting advertising might taint the site's neutral viewpoint, but donations can do the same thing if they're not properly walled off: just ask the US Congress. It seems to me that an automated advertising system they did not control would be, if anything, safer. And then maybe they could pay some of those volunteers, even though it would be a pity to lose some of the site's best entertainment.

With respect to advertising, it's worth noting that Phorm is under increasing pressure. Earlier this week, we had an opportunity to talk to Kent Ertegrul, CEO of Phorm, who continues to maintain that Phorm's system, because it does not store data, is more protective of privacy than today's cookie-driven Web. This may in fact be true.

Less certain is Ertegrul's belief that the system does not contravene the Regulation of Investigatory Powers Act, which lays down rules about interception. Ertegrul has some support from an informal letter from the Home Office whose reasoning seems to be that if users have consented and have been told how they can opt out, it's legal. Well, we'll see; there's a lot of debate going on about this claim and it will be interesting to hear the Information Commissioner's view. If the Home Office's interpretation is correct, it could open a lot of scope for abusive behavior that could be imposed upon users simply by adding it to the terms of service to which they theoretically consent when they sign up, and a UK equivalent of AT&T wanting to assist the government with wholesale warrantless wiretapping would have only to add it to the terms of service.

The real problem is that no one really knows how Phorm's system works. Phorm doesn't retain your IP address, but the ad servers surely have to know it when they're sending you ads. If you opt out but can still opt back in (as Ertegrul said you can), doesn't that mean you still have a cookie on your system and that your data is still passed to Phorm's system, which discards it instead of sending you ads? If that's the case, doesn't that mean you cannot opt out of having your data shared? If that isn't how it works, then how does it work? I thought I understood it after talking to Ertegrul, I really did - and then someone asked me to explain how Phorm's cookie's usefulness persisted between sessions, and I wasn't sure any more. I think the Open Rights Group is right: Phorm should publish details of how its system works for experts to scrutinize. Until Phorm does that, the misinformation Ertegrul is so upset about will continue. (More disclosure: I am on ORG's Advisory Council.)
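Since Phorm won't publish its design, any sketch is guesswork; but here, in deliberately hypothetical Python (every name invented, none of it Phorm's actual architecture), is why the location of the opt-out check matters so much:

    def profiler(url: str) -> None:
        """Stand-in for the behavioral-profiling step."""
        print("profiled:", url)

    def intercept_cookie_optout(request: dict) -> None:
        """Opt-out as a cookie: every request still enters the ad system,
        which merely discards data for opted-out users."""
        if request.get("optout_cookie"):
            return  # the data arrived here before being dropped
        profiler(request["url"])

    def intercept_network_optout(request: dict, opted_out: set) -> None:
        """Opt-out at the ISP switch: traffic from opted-out subscribers
        is never mirrored to the ad system at all."""
        if request["subscriber"] in opted_out:
            return  # nothing copied; the profiler never sees the URL
        profiler(request["url"])

    req = {"subscriber": "line-1234", "url": "http://example.com/",
           "optout_cookie": True}
    intercept_cookie_optout(req)                  # silent, but inspected first
    intercept_network_optout(req, {"line-1234"})  # silent, never inspected

If the real system resembles the first function, then opting out does not stop your data being shared; it only stops the ads. That is the question Phorm could settle by publishing.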

But maybe the Home Office is on to something. Bush could solve his whole problem by getting everyone to give consent to being surveilled at the moment they take US citizenship. Surely a newborn baby's footprint is sufficient agreement?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

February 22, 2008

Strikeout

There is a certain kind of mentality that is actually proud of not understanding computers, as if there were something honorable about saying grandly, "Oh, I leave all that to my children."

Outside of computing, only television gets so many people boasting of their ignorance. Do we boast how few books we read? Do we trumpet our ignorance of other practical skills, like balancing a cheque book, cooking, or choosing wine? When someone suggests we get dressed in the morning do we say proudly, "I don't know how"?

There is so much insanity coming out of the British government on the Internet/computing front at the moment that the only possible conclusion is that the government is made up entirely of people who are engaged in a sort of reverse pissing contest with each other: I can compute less than you can, and see? here's a really dumb proposal to prove it.

How else can we explain yesterday's news that the government is determined to proceed with Contactpoint even though the report it commissioned and paid for from Deloitte warns that the risk of storing the personal details of every British child under 16 can only be managed, not eliminated? Lately, it seems that there's news of a major data breach every week. But the present government is like a batch of 20-year-olds who think that mortality can't happen to them.

Or today's news that the Department of Culture, Media, and Sport has launched its proposals for "Creative Britain", and among them is a very clear diktat to ISPs: deal with file-sharing voluntarily or we'll make you do it. By April 2009. This bit of extortion nestles in the middle of a bunch of other stuff about educating schoolchildren about the value of intellectual property. Dare we say: if there were one thing you could possibly do to ensure that kids sneer at IP, it would be to teach them about it in school.

The proposals are vague in the extreme about what kind of regulation the DCMS would accept as sufficient. Despite the leaks of last week, culture secretary Andy Burnham has told the Financial Times that the "three strikes" idea was never in the paper. As outlined by Open Rights Group executive director Becky Hogge in New Statesman, "three strikes" would mean that all Internet users would be tracked by IP address and warned by letter if they are caught uploading copyrighted content. After three letters, they would be disconnected. As Hogge says (disclosure: I am on the ORG advisory board), the punishment will fall equally on innocent bystanders who happen to share the same house. Worse, it turns ISPs into a squad of private police for a historically rapacious industry.
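Reduced to code, the proposal amounts to a counter keyed on IP address - which is precisely Hogge's point, since the counter punishes the subscriber's line, not the infringer. A toy sketch of the bookkeeping (my invention, not any published specification):

    from collections import Counter

    class ThreeStrikes:
        """Toy model of the proposed regime: a warning letter for each
        allegation against an IP address, disconnection after the third."""

        def __init__(self, limit: int = 3):
            self.limit = limit
            self.strikes = Counter()
            self.disconnected = set()

        def report_infringement(self, ip: str) -> str:
            if ip in self.disconnected:
                return "already disconnected"
            self.strikes[ip] += 1
            if self.strikes[ip] >= self.limit:
                self.disconnected.add(ip)
                return "disconnect subscriber"
            return "send warning letter #%d" % self.strikes[ip]

    isp = ThreeStrikes()
    for _ in range(3):
        print(isp.report_infringement("198.51.100.7"))
    # letter #1, letter #2, then disconnection - of everyone sharing the line.

Nothing in the data structure knows, or can know, which member of the household did the uploading.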

Charles Arthur, writing in yesterday's Guardian, presented the British Phonographic Industry's case about why the three strikes idea isn't necessarily completely awful: it's better than being sued. (These are our choices?) ISPs, of course, hate the idea: this is an industry with nanoscale margins. Who bears the liability if someone is disconnected and starts to complain? What if they sue?

We'll say it again: if the entertainment industries really want to stop file-sharing, they need to negotiate changed business models and create a legitimate market. Many people would be willing to pay a reasonable price to download TV shows and music if they could get in return reliable, fast, advertising-free, DRM-free downloads at or soon after the time of the initial release. The longer the present situation continues the more entrenched the habit of unauthorized file-sharing will become and the harder it will be to divert people to the legitimate market that eventually must be established.

But the key damning bit in Arthur's article (disclosure: he is my editor at the paper) is the BPI's admission that they cannot actually say that ending file-sharing would make sales grow. The best the BPI spokesman could come up with is, "It would send out the message that copyright is to be respected, that creative industries are to be respected and paid for."

Actually, what would really do that is a more balanced copyright law. Right now, the law is so far from what most people expect it to be - or rationally think it should be - that it is breeding contempt for itself. And it is about to get worse: term extension is back on the agenda. The 2006 Gowers Review recommended against it, but on February 14, Irish EU Commissioner Charlie McCreevy (previously: champion of software patents) announced his intention to propose extending performers' copyright in sound recordings from the current 50-year term to 95 years. The plan seems to go something like this: whisk it past the Commission in the next two months. Then the French presidency starts and whee! new law! The UK can then say its hands are tied.

That change makes no difference to British ISPs, however, who are now under the gun to come up with some scheme to keep the government from clomping all over them. Or to the kids who are going to be tracked from cradle to alcopop by unique identity number. Maybe the first target of the government computing literacy programs should be...the government.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

December 7, 2007

Data hogs

If a data point falls in the forest and there's no database to pick it up, is it still private?

There is a general view that people do not care about privacy, particularly younger people. They blog the names of all their favorite bands and best friends, post their drunken photographs on Facebook, and tell all of MySpace who they slept with last night. No one, the argument goes – actually 22 percent – reads the privacy policies Web sites pay their lawyers to draw up so unreadably.

And yet the perception is wrong. People do, clearly, care about privacy – when the issues are made visible to them. Unfortunately, the privacy-invasiveness of a service, policy, or Web site usually only becomes visible after the horse has escaped and is comfortably grazing in the field of three-leaf clover.

A lot of this is, as Charles Arthur blogged recently in relation to the loss of the HMRC discs holding the Child Benefit database, an education issue: if we taught kids important principles of computer science, like security, privacy, and the value of data, instead of boring things like how to format an Excel spreadsheet, some of the most casual data violations wouldn't happen.

A lot of recent privacy failures seem to have happened in just this same unconscious way. Google's various privacy invasions, for example, seem to be a peculiarly geeky failure to connect with the general public's view of things. You can just imagine the techies at the Googleplex saying, "Oh, cool! Look, you can see right into the windows of those houses!" and utterly failing at simple empathy.

The continuing Facebook privacy meltdown seems to include the worst aspects of both the HMRC incident and Google's blind spot. If you haven't been following it, the story in brief is that Facebook created a new advertising program it calls Beacon, which collects tracking data from a variety of partner sites such as Blockbuster.com. Beacon then uses the data to display your latest purchases so your friends can see them.

The blind spot is, of course, the utter surprise with which the company greeted the discovery that people have all sorts of reasons why they don't want their purchase history displayed to their friends. They might be gifts for said friends. The friends, as so often on Facebook and the other social networks, may not be really friends but acquaintances chosen to make you look well-connected, or relatives you assiduously avoid in real life. And even your closest real friends may prefer not to know too much about the porn DVDs you rent. American librarians are militant about protecting the reading lists of library patrons; but Facebook would gleefully expose the books you buy. Are you kidding me? Facebook CEO Mark Zuckerberg can apologize all he wants, but his apparent surprise at the size of the fuss suggests that he's as inexperienced at shopping as those women in front of you in the grocery checkout who seem not to know they'll need to pay until after everything's been bagged up.

What Facebook shares with HMRC, though, is the underlying principle that it's cheaper to send the full set of data and let the recipients delete what they don't want than to be selective. And so, as the story has developed, it turns out that all sorts of data is being sent to Facebook, some of it even relating to non-users. They just delete what they don't want, so they say.

Facebook was briefly defensive, then allowed users to opt out, and then finally allowed users to delete the thing entirely. But the whole thing highlights one of the very real problems with social network sites that net.wars first wrote about in connection with (the now more responsibly designed) Plaxo: they grow by getting people to invade their own and their friends' privacy. The Australian computer scientist and privacy advocate Roger Clarke, whose paper Very Black "Little Black Books" is the seminal work in this area, predicted in 2003 that the social networks' business models would force them to become extremely invasive. And so it has proved.

How do we make privacy a choice? We know people care about privacy when they can see its loss: the reactions to the Facebook and HMRC incidents have made this plain. Recent research by Lorrie Cranor at Carnegie-Mellon (PDF) suggests, for example, that people's purchasing habits will change if you give them an easy-to-understand graphical representation of how well an ecommerce site's practices match their privacy preferences.
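The mechanism is easy to sketch: compare a site's machine-readable policy against the user's stated preferences and boil the result down to a traffic-light rating. The policy fields below are simplified stand-ins of my own devising, not the actual P3P vocabulary Cranor's tools use:

    USER_PREFS = {
        "shares_with_third_parties": False,
        "uses_data_for_marketing": False,
        "retains_data_indefinitely": False,
    }

    def match_rating(site_policy: dict) -> str:
        """Traffic light: green = full match with the user's preferences,
        amber = partial, red = none. Unstated practices count against."""
        mismatches = [field for field, wanted in USER_PREFS.items()
                      if site_policy.get(field, True) != wanted]
        if not mismatches:
            return "green"
        return "red" if len(mismatches) == len(USER_PREFS) else "amber"

    careful_shop = {"shares_with_third_parties": False,
                    "uses_data_for_marketing": False,
                    "retains_data_indefinitely": False}
    data_hog = {"shares_with_third_parties": True,
                "uses_data_for_marketing": True,
                "retains_data_indefinitely": True}
    print(match_rating(careful_shop), match_rating(data_hog))  # green red

A shopper who sees green and red side by side can act on the difference without reading a word of legalese; that visibility is the whole point.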

But visibility to users, helpful though it would be, is not the root of the problem. What privacy advocates need going forward is a way to persuade companies and governments to make privacy choices easy and visible when their mindset is to collect and keep all data, all the time. These organisations do not perceive giving users control over their privacy as being in their own best interests. Maybe plummeting stock prices and forced resignations, however brief, will get through to them. But to keep their attention focused on building better systems that put the user in control, we need to make the consequences of getting it wrong constantly visible and easily interpretable to the data hogs themselves.


Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 23, 2007

Road block

There are many ways for a computer system to fail. This week's disclosure that Her Majesty's Revenue and Customs has played lost-in-the-post with two CDs holding the nation's Child Benefit data is one of the stranger ones. The Child Benefit database includes names, addresses, identifying numbers, and often bank details, on all the UK's 25 million families with a child under 16. The National Audit Office requested a subset for its routine audit; the HMRC sent the entire database off by TNT post.

There are so many things wrong with this picture that it would take a village of late-night talk show hosts to make fun of them all. But the bottom line is this: when the system was developed no one included privacy or security in the specification or thought about the fundamental change in the nature of information when paper-based records are transmogrified into electronic data. The access limitations inherent in physical storage media must be painstakingly recreated in computer systems or they do not exist. The problem with security is it tends to be inconvenient.

With paper records, the more data you provide the more expensive and time-consuming it is. With computer records, the more data you provide the cheaper and quicker it is. The NAO's file of email relating to the incident (PDF) makes this clear. What the NAO wanted (so it could check that the right people got the right benefit payments): national insurance numbers, names, and benefit numbers. What it got: everything. If the discs hadn't gotten lost, we would never have known.
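The fix the incident demands is almost embarrassingly small: extract the three fields the auditors asked for and send only those. A minimal sketch in Python, assuming a CSV export with hypothetical column names standing in for the real schema:

    import csv

    # What the NAO actually asked for, per the email file:
    REQUESTED_FIELDS = ["ni_number", "name", "benefit_number"]

    def extract_audit_subset(source_path: str, dest_path: str) -> None:
        """Copy only the requested columns; addresses and bank details,
        which the auditors never wanted, stay behind."""
        with open(source_path, newline="") as src, \
             open(dest_path, "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=REQUESTED_FIELDS)
            writer.writeheader()
            for row in reader:
                writer.writerow({f: row[f] for f in REQUESTED_FIELDS})

    # extract_audit_subset("child_benefit_full.csv", "nao_subset.csv")

A dozen lines of filtering was apparently judged too expensive; posting the entire nation's bank details was not.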

Ironically enough, this week in London also saw at least three conferences on various aspects of managing digital identity: Digital Identity Forum, A Fine Balance, and Identity Matters. All these events featured the kinds of experts the UK government has been ignoring in its mad rush to create and collect more and more data. The workshop on road pricing and transport systems at the second of them, however, was particularly instructive. Led by science advisor Brian Collins, the workshop was most notable for the fact that its 15 or 20 participants couldn't agree on a single aspect of such a system.

Would it run on GPS or GSM/GPRS? Who or what is charged, the car or the driver? Do all roads cost the same or do we use differential pricing to push traffic onto less crowded routes? Most important, is the goal to raise revenue, reduce congestion, protect the environment, or rebalance the cost of motoring so the people who drive the most pay the most? The more purposes the system is intended to serve, the more complicated and expensive it will become, and the less likely it is to answer any of those goals successfully. This point has of course also been made about the National ID card by the same sort of people who have warned about the security issues inherent in large databases such as the Child Benefit database. But it's clearer when you start talking about something as limited as road charging.

For example: if you want to tag the car you would probably choose a dashboard-top box that uses GPS data to track the car's location. It will have to store and communicate location data to some kind of central server, which will use it to create a bill. The data will have to be stored for at least a few billing cycles in case of disputes. Security services and insurers alike would love to have copies. On the other hand, if you want to tag the driver it might be simpler just to tie the whole thing to a mobile phone. The phone networks are already set up to do hand-off between nodes, and tracking the driver might also let you charge passengers, or might let you give full cars a discount.
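Where the charge is computed determines how much tracking the system does. Here is a toy comparison - zone names and prices invented - of the two architectures, which produce identical bills and wildly different databases:

    # Per-mile prices by (invented) zone.
    ZONE_PRICE = {"motorway": 0.03, "a_road": 0.08, "town_centre": 0.15}

    trip_log = [("motorway", 40.0), ("a_road", 12.0), ("town_centre", 3.0)]
    server_archive = []  # what accumulates under the centralized design

    def bill_centrally(log):
        """Design 1: the box uploads the raw trace; the server computes
        the bill and, inevitably, accumulates a movement history."""
        server_archive.extend(log)  # the privacy risk lives here
        return round(sum(ZONE_PRICE[z] * miles for z, miles in log), 2)

    def bill_on_device(log):
        """Design 2: the box prices the trip itself and uploads one number;
        dispute evidence stays in the car, not in a data centre."""
        return round(sum(ZONE_PRICE[z] * miles for z, miles in log), 2)

    print(bill_centrally(trip_log), bill_on_device(trip_log))  # 2.61 2.61
    print(len(server_archive))  # 3 location records now held centrally

Same revenue either way; the only difference is who ends up holding the trace.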

The problem is that the discussion is coming from the wrong angle. We should not be saying, "Here is a clever technological idea. Oh, look, it makes data! What shall we do with it?" We should be defining the problem and considering alternative solutions. The people who drive most already pay most via the fuel pump. If we want people to drive less, maybe we should improve public transport instead. If we're trying to reduce congestion, getting employers to be more flexible about working hours and telecommuting would be cheaper, provide greater returns, and, crucially for this discussion, not create a large database system that can be used to track the population's movements.

(Besides, said one of the workshop's participants: "We live with the congestion and are hugely productive. So why tamper with it?")

It is characteristic of our age that the favored solution is the one that creates the most data and the biggest privacy risk. No one in the cluster of organisations opposing the ID card - No2ID, Privacy International, Foundation for Information Policy Research, or Open Rights Group - wanted an incident like this week's to happen. But it is exactly what they have been warning about: large data stores carry large risks that are poorly understood, and it is not enough for politicians to wave their hands and say we can trust them. Information may want to be free, but data want to leak.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 9, 2007

Watching you watching me

A few months ago, a neighbour phoned me and asked if I'd be willing to position a camera on my windowsill. I live at the end of a small dead-end street (or cul-de-sac), that ends in a wall about shoulder height. The railway runs along the far side of the wall, and parallel to it and further away is a long street with a row of houses facing the railway. The owners of those houses get upset because graffiti keeps appearing alongside the railway where they can see it and covers flat surfaces such as the side wall of my house. The theory is that kids jump over the wall at the end of my street, just below my office window, either to access the railway and spray paint or to escape after having done so. Therefore, the camera: point it at the wall and watch to see what happens.

The often-quoted number of times the average Londoner is caught on camera per day is scary: 200. (And that was a few years ago; it's probably gone up.) My street is actually one of those few that doesn't have cameras on it. I don't really care about the graffiti; I do, however, prefer to be on good terms with neighbours, even if they're all the way across the tracks. I also do see that it makes sense at least to try to establish whether the wall downstairs is being used as a hurdle in the getaway process. What is the right, privacy-conscious response to make?

I was reminded of this a few days ago when I was handed a copy of Privacy in Camera Networks: A Technical Perspective, a paper published at the end of July. (We at net.wars are nothing if not up-to-date.)

Given the amount of money being spent on CCTV systems, it's absurd how little research there is covering their efficacy, their social impact, or the privacy issues they raise. In this paper, the quartet of authors – Marci Lenore Meingast (UC Berkeley), Sameer Pai (Cornell), Stephen Wicker (Cornell), and Shankar Sastry (UC Berkeley) – are primarily concerned with privacy. They ask a question every democratic government deploying these things should have asked in the first place: how can the camera networks be designed to preserve privacy? For the purposes of preventing crime or terrorism, you don't need to know the identity of the person in the picture. All you want to know is whether that person is pulling out a gun or planting a bomb. For solving crimes after the fact, of course, you want to be able to identify people – but most people would vastly prefer that crimes were prevented, not solved.

The paper cites model legislation (PDF) drawn up by the Constitution Project. Reading it is depressing: so many of the principles in it are such logical, even obvious, derivatives of the principles that democratic governments are supposed to espouse. And yet I can't remember any public discussion of the idea that, for example, all CCTV systems should be accompanied by identification of and contact information for the owner. "These premises are protected by CCTV" signs are everywhere; but they are all anonymous.

Even more depressing is the suggestion that the proposals for all public video surveillance systems should specify what legitimate law enforcement purpose they are intended to achieve and provide a privacy impact assessment. I can't ever remember seeing any of those either. In my own local area, installing CCTV is something politicians boast about when they're seeking (re)election. Look! More cameras! The assumption is that more cameras equals more safety, but evidence to support this presumption is never provided and no one, neither opposing politicians nor local journalists, ever mounts a challenge. I guess we're supposed to think that they care about us because they're spending the money.

The main intention of Meingast, Pai, et al, however, is to look at the technical ways such networks can be built to preserve privacy. They suggest, for example, collecting public input via the Internet (using codes to identify the respondents on whom the cameras will have the greatest impact). They propose an auditing system whereby these systems and their usage are reviewed. As the video streams become digital, they suggest using layers of abstraction of the resulting data to limit what can be identified in a given image. "Information not pertinent to the task in hand," they write hopefully, "can be abstracted out leaving only the necessary information in the image." They go on into more detail about this, along with a lengthy discussion of facial recognition.
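Their layering idea maps naturally onto tiered access: the same captured event yields different products depending on what the task legitimately needs. A toy rendering (the levels paraphrase the paper's suggestion; the data structures are mine):

    # One hypothetical event extracted from a camera frame.
    FRAME_EVENT = {
        "timestamp": "2007-11-09T14:02:31",
        "location": "platform 2",
        "activity": "person crossing barrier",  # behavior, not identity
        "face_image": "<pixels>",               # identifying layer
        "identity_guess": "J. Doe",             # most invasive layer
    }

    # Which fields each purpose justifies seeing.
    ABSTRACTION_LEVELS = {
        "crime_prevention": ["timestamp", "location", "activity"],
        "investigation":    ["timestamp", "location", "activity", "face_image"],
        "identification":   list(FRAME_EVENT),  # everything; warrant territory
    }

    def view(event: dict, purpose: str) -> dict:
        """Strip the event down to the fields the stated purpose justifies."""
        return {field: event[field] for field in ABSTRACTION_LEVELS[purpose]}

    print(view(FRAME_EVENT, "crime_prevention"))
    # Enough to see the gun being pulled; nothing to say who is pulling it.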

The most depressing thing of all: none of this will ever happen, and for two reasons. First, no government seems to have the slightest qualm of conscience about installing surveillance systems. Second, the mass populace don't seem to care enough to demand these sorts of protections. If these protections are to be put in place at all, it must be done by technologists. They must design these systems so that it's easier to use them in privacy-protecting ways than to use them in privacy-invasive ways. What are the odds?

As for the camera on my windowsill, I told my neighbour after some thought that they could have it there for a maximum of a couple of weeks to establish whether the end of my street was actually being used as an escape route. She said something about getting back to me when something or other happened. Never heard any more about it. As far as I am aware, my street is still unsurveilled.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 12, 2007

The permission-based society

It was Edward Hasbrouck who drew my attention to a bit of rulemaking being proposed by the Transportation Security Agency. Under current rules, if you want to travel on a plane out of, around, into, or over the US you buy a ticket and show up at the airport, where the airline compares your name and other corroborative details to the no-fly list the TSA maintains. Assuming you're allowed onto the flight, unbeknownst to you, all this information has to be sent to the TSA within 15 minutes of takeoff (before, if it's a US flight, after if it's an international flight heading for the US).

Under the new rules, the information will have to arrive at the TSA 72 hours before the flight takes off – after all, most people have finalised their travel plans by that time, and only 7 to 10 percent of itineraries change after that – and the TSA has to send back an OK to the airline before you can be issued a boarding pass.

There's a whole lot more detail in the Notice of Proposed Rulemaking, but that's the gist. (They'll be accepting comments until October 22, if you would like to say anything about these proposals before they're finalised.)

There are lots of negative things to say about these proposals – the logistical difficulties for the travel industry, the inadequacy of the mathematical model behind this (which at the public hearing the ACLU's Barry Steinhardt compared to trying to find a needle in a haystack by pouring more hay on the stack), and the privacy invasiveness inherent in having the airlines collect the many pieces of data the government wants and, not unnaturally, retaining copies while forwarding it on to the TSA. But let's concentrate on one: the profound alteration such a scheme will make to American society at large. The default answer to the question of whether you had the right to travel anywhere, certainly within the confines of the US, has always been "Yes". These rules will change it to "No".
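The structural change is easy to state in code: under the current rules the airline boards you unless you match the list; under the proposed rules nothing happens until an affirmative OK comes back. A schematic sketch, with invented names that are obviously not the TSA's systems:

    from typing import Optional

    NO_FLY_LIST = {"some wanted name"}

    def may_board_current(passenger: str) -> bool:
        """Old default: yes, unless the airline finds you on the list."""
        return passenger not in NO_FLY_LIST

    def may_board_proposed(passenger: str, tsa_response: Optional[str]) -> bool:
        """Proposed default: no, until the TSA's per-passenger OK arrives
        (data submitted 72 hours ahead; no answer means no boarding pass)."""
        return tsa_response == "OK"

    print(may_board_current("ordinary traveler"))         # True
    print(may_board_proposed("ordinary traveler", None))  # False: silence = no

In the first function the government must act to stop you; in the second, its inaction stops you. That is the whole of the shift from "Yes" to "No".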

(The right to travel overseas has, at times, been more fraught. The folk scene, for example, can cite several examples of musicians who were denied passports by the US State Department in the 1950s and early 1960s because of their left-wing political beliefs. It's not really clear to me why the US wanted to keep people whose views it disapproved of within its borders but some rather hasty marriages took place in order to solve some of these immigration problems, though everyone's friends again now and it's fresh passports all round.)

Hasbrouck, Steinhardt, and EFF founder John Gilmore, who sued the government over the right to travel anonymously within the US, have all argued that the key issue here is the right to assemble guaranteed in the First Amendment. If you can't travel, you can't assemble. And if you have to ask permission to travel, your right of assembly is subject to disruption at any time. The secrecy with which the TSA surrounds its decision-making doesn't help.

Nor does the amount of personal data the TSA is collecting from airline passenger name records. The Identity Project's recent report on the subject highlights that these records may include considerable detail: what books the passenger is carrying, what answer you give when asked where you've been or are going, names and phone numbers given as emergency contacts, and so on. Despite the data protection laws, it isn't always easy to find out what information is being stored; when I made such a request of US Airways last year, the company refused to show me my PNR from a recent flight and gave as the reason: "Security." Civilisation as we know it is at risk if I find out what they think they know about me? We really are in trouble.

In Britain, the chief objections to the ID card and, more important, the underlying database, have of course been legion, but they have generally focused on the logistical problems of implementing it (huge cost, complex IT project, bound to fail) and its general privacy-invasiveness. But another thing the ID card – especially the high-tech, biometric, all-singing, all-dancing kind – will do is create a framework that could support a permission-based society in which the ID card's interaction with systems is what determines what you're allowed to do, where you're allowed to go, and what purchases you're allowed to make. There was a novel that depicted a society like this: Ira Levin's This Perfect Day, in which these functions were all controlled by scanner bracelets and scanners everywhere that lit up green to allow or red to deny permission. The inhabitants of that society were kept drugged, so they wouldn't protest the ubiquitous controls. We seem to be accepting the beginnings of this kind of life stone-cold sober.

American children play a schoolyard game called "Mother, May I?" It's one of those games suitable for any number of kids, and it involves a ritual of asking permission before executing a command. It's a fine game, but surely it isn't how we want to live.


Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

September 14, 2007

Nothing to hide, no one to trust

The actor David Hyde Pierce is widely reported to have once responded to a TV interviewer inquiring as to whether he was gay, "My life is an open book, but that doesn't mean I'm going to read it to you." (Or something very like that.)

This seems to me a both witty and intelligent response to the seemingly ever-present mantra these days, "If you have nothing to hide, you have nothing to fear," invoked every time someone wants to institute some new, egregious privacy-invasive surveillance practice. And there are a lot of these.

Last week, a British judge came up with a brilliant scheme for eliminating the racial bias of the 3 million-entry DNA database: collect samples from everyone, even visitors. I may have nothing to fear from this – after all, DNA testing has, in the US, been used to exonerate the innocent on Death Row - but it invokes in me what British politicos sometimes call the "yuck factor". Normally, this is reserved for such science-related ethical dilemmas as human cloning, but for me at least it applies much more strongly here. I loved the movie Gattaca, but I don't want to live there.

In fact, there are considerable risks in DNA-printing the entire population (aside from killing tourism). For one thing, we do not know how we will be able to use or interpret DNA in the future as sequencing plummets in price (as it's expected to do). Say the UK had considered compiling a nationwide fingerprints database back in 1970 (there would have been riots, but leaving that aside). No one would have foreseen then the widespread availability of cheap fingerprint scanners and online communications that could turn that database into a central authority.

We can surmise that the DNA database will contain sufficient information to allow anyone who can gain access to it to impersonate anyone at any time. Conversely, as we get better and better at understanding what individual genes mean and sequencing drops precipitously in price, the DNA database will grant those who have access to it unprecedented amounts of information about each person's biological or medical prospects and those of their immediate relatives. While there are many diseases that do not have markers in our genes, there are plenty more that do. Does anyone really want the government to be the first to know that they carry the gene for cystic fibrosis or breast cancer?

I don't believe for a second that it was a serious proposal. This is the kind of thing someone says and then everyone holds their breath to gauge the reaction. Has the country been softened up enough to accept such a thing yet? But the fact that someone could say it at all shows how far we have moved away from the presumption of innocence on which both the UK and the US governments were founded.

Witty answers on talk shows aren't, however, quite enough to make a case to a government that what it wants to do is a bad idea. Daniel Solove, author of The Digital Person and a law professor at George Washington University, makes that case in his essay "'I've Got Nothing to Hide' and Other Misunderstandings of Privacy".

In it, he compares privacy damage to environmental damage: not the single horror story implied by "nothing to hide, nothing to fear", but the result of the accumulating damage caused by a series of "small acts by different actors". The broader structural damage that happens in breaches of confidentiality (such as companies violating their own privacy policies by selling data to third parties) is a loss of trust.

I am not a supporter of open gun ownership, but the US Second Amendment has some merit in principle: the basic idea is to balance the power of the individual against the State. The EU's data protection laws do – or would, if the EU didn't ignore them as it has in the case of passenger data – a reasonable job of balancing the power of the individual against commercial companies. But the data protection laws can be upended, it seems, whenever a national government wants to do so. All it has to do is pass a law making it legal or mandatory to supply the data it wants to collect, transfer, share, or sell. But the fact that such policies are possible doesn't make them a good idea, even with the best intentions of improving security or personal safety.

The San Francisco computer security expert Russell Brand once asked me, in the casual way he poses philosophical questions, "If you knew they would never be used against you, would you still have secrets?" After some thought, I came up with this: yes. Because one of the ways you show someone that they are important to you and you trust them is that you disclose to them things you don't normally tell other people. It is, in fact, one of the ways we show we love them. We don't tell governments secrets because we want to be intimate with them; we do it because we're required to do so by law. The more one-sided these laws make the balance of power, the less we're going to trust our government. Is that really what they want?



September 7, 2007

Was that private?

Time and again the Net has proved that anything large corporations can do to us, we can do to ourselves much more effectively and willingly. Churn, for example, used to be something people fired stockbrokers for; it is the practice of buying and selling stocks much more frequently than is rational, in order to profit from the brokerage fees. But during the dot-com day-trading boom, as I remember brokers saying at the time, people churned their own accounts far more than any broker would ever have dared to do (and, one assumes, lost themselves far more money).

The same is happening now with privacy. If, say, Tesco posted the real names of its club card members on a Web site complete with little jokes about the foods they buy and the times of day they like deliveries, there would be considerable outrage. (And rightly so.) When a government allows the kinds of leaks detailed here on the Blindside site (to which I contribute) we are significantly unimpressed. On the other hand, over and over on the Net we see people invading their own privacy to a degree that probably no corporation or government would dare. We post about our friends, our habits, our flight times, the shinies we just bought, the books and newspapers we read, the TV programs we watch (and, in some cases, download), our religious feelings or lack thereof, and the organisations we join and promote. We do this for the same reason people feel safer driving in their own cars than they do flying on someone else's airplane: we think we're in control.

Which all leads to a 2003 European court decision that was noted this week by Oxford Internet Institute law professor Jonathan Zittrain (who had it from Karen McCullagh) about Mrs Bodil Lindqvist, a Swedish church catechist who apparently chatted rather liberally about some of her fellow church committee members on a Web site they didn't know she'd created. Among other information, she included names, the fact that a colleague was on partial medical leave because her foot was injured, phone numbers, and so on.

Your attitude about this sort of thing depends partly on your personality, your culture, and your own online habits. So many of my friends blog what seems to be their whole lives in copious detail that it never occurred to me I was doing anything privacy-invasive when I visited a foreign country and blogged the name of the friend of a friend I met there and an account of a day we spent together. The blogee, however, was unhappy and asked me to remove the name and other identifying details. I was surprised, but complied. Referring to someone's injured foot seems kind of harmless, too. On the other hand, I would never put someone's telephone number online without their consent, even though my own phone number is on my Web site. Yet that prejudice seems irrational if you instead call the information about the injured foot "personal medical data" and see the phone number as something anyone could find in the phone book.

Fortunately, I don't live in Sweden. Mrs Lindqvist also took down her pages when she found out the people she had mentioned were upset. But nonetheless she was prosecuted for several violations of the data protection laws – processing personal data without giving prior written notice to the data protection authorities, processing sensitive personal data (the medical information about the foot) without consent, and transferring personal data to a third country. The European Court of Justice, where the case eventually ended up, concluded that the third of these did not apply – simply posting something on a Web page that can be read by someone in another country isn't enough for that. But the ECJ agreed she was guilty of the first two offenses – as are, by now, probably hundreds of thousands, if not millions, of other Europeans.

This judgement was rendered (and ignored) before Facebook and MySpace became phenomena, and before blogging became quite such a widespread pastime. It acknowledges the competing claims of laws guaranteeing freedom of expression, but still comes down against Mrs Lindqvist – who, frankly, seems like pretty small beer compared to this week's announcement that Facebook is to open its member listings to Google's search engine.

The problem with the Facebook decision is that one of the things that really does govern the Net while the law is still making up its mind is community standards. Based on Facebook's assurances that the system was closed to members only, people posted material about themselves and their friends that they thought would stay private. The decision to open the service makes them more like celebrities at Addictions Anonymous meetings. They are about to discover this in the same way that a generation of Usenet posters did when the archives of what they thought were ephemera were assembled and opened to the public by Deja News (now Google Groups). What they will also discover is that although they can delete their own accounts or mark them private, like Mrs Lindqvist's church colleagues, they have no control over what others have said about them in public when they weren't looking.


August 24, 2007

Game gods

Virtual worlds have been with us for a long time. Depending on who you listen to, they began in 1979, or 1982, or maybe with the shadows on the walls of Plato's cave. We'll go with the University of Essex MUD, on the grounds that its co-writer Richard Bartle can trace its direct influence on today's worlds.

At State of Play this week, it was clear that just as the issues surrounding the Internet in general have changed very little since about 1988, neither have the issues surrounding virtual worlds.

True, the stakes are higher now and, as Professor Yee Fen Lim noted, when real money starts to be involved people become protective.

Level 70 warrior accounts on World of Warcraft go for as little as $10 (though your level number cannot disguise your complete newbieness), but the unique magic sword you won in a quest may go for much more. The best-known pending case is Bragg versus Linden Labs, over virtual property that Second Life's owners confiscated when they realized that Bragg was taking advantage of a loophole in their system to buy "land" at exceptionally cheap prices. Lim had an interesting take on the Bragg case: as a legal concept, she argued, property is a right of control, even though Linden Labs itself defines its virtual property as the rental of a processor. As computer science that's fine, but it's not law. Otherwise, she said, "Property is mere illusion."

Ultimately, the issues all come down to this: who owns the user experience? In subscription gaming worlds, the owners tend to keep very tight control of everything – they claim ownership in all intellectual property in the world, limit users' ability to create their own content, and block the sale of cheats as much as possible. In a free-form world like Second Life which may host games but is itself a platform rather than a game, users are much freer to do what they want but the EULAs or Terms of Service may be just as unfair.

In the end, no matter what the agreement says, today's privately owned virtual worlds all function under the same reality: the game gods can pull the plug at any time. They own and control the servers. Possession is nine-tenths of the law, and all that. Until someone implements open-source world software on a P2P platform, this will always be the way. Linden Labs says, for what it's worth, that its long-term intention is to open-source its platform so that anyone may set up a world. This, too, has been done before, with The Palace.

One consequence of this is that there is no such thing as virtual privacy, a topic that everyone is aware of but no one's talking about. The piecemeal nature of the Net means that your friend's IRC channel doesn't know anything about your Web use, and Amazon.com doesn't track what you do on eBay. But virtual worlds log everything. If you buy a new shirt at a shop and then fly to a distant island to have sex with it, all that is logged. (Just try to ensure the shirt doesn't look like a child's shirt and you don't get into litigation over who owns the island…)

There are, as scholars say, legitimate reasons. Logging everything that happens is important in helping game developers pinpoint the source of crashes and eliminate bugs. Logs help settle disputes over who did what to whose magic sword. And in a court case, they may be important evidence (although how you can ensure that the logs haven't been adjusted to suit the virtual world provider, who is usually one of the parties to the litigation, I don't know).

As long as you think of virtual worlds as games, maybe this isn't that big a problem. After all, no one is forced to spend half their waking hours killing enough monsters in World of Warcraft to join a guild for a six-hour quest.

But something like Second Life aspires to be a lot more than that. The world is adding voice communication, which will be interesting: if you have to use your real voice, the relative anonymity conferred by the synthetic world is gone. Quite apart from the bandwidth demands (lag is the bane of every SLer's existence), exploring what virtual life is like in the opposite gender isn't going to work. They're going to need voice synthesizers.

Much of the law in this area is coming out of Asia, where massively multi-player online games took off so early and with such ferocity that, according to Judge Unggi Yoon, a member of a losing team in one such game recently ran to the café where the winning team was playing and physically battered one of its members. Yoon, who explained some of the new laws, is himself an experienced online gamer, all the way back to playing Ultima Online in middle school. In his country, South Korea, a law has recently come into force taxing virtual world transactions (it works like a VAT threshold – under $100 a month you don't owe anything). For Westerners, who are used to the idea that we make laws and export them rather than the other way around, this is quite a reality shift.


July 27, 2007

There ain't no such thing as a free Benidorm

This has been the week for reminders that the border between real life and cyberspace is a permeable blood-brain barrier.

On Wednesday, Linden Labs announced that it was banning gambling in Second Life. The resentment expressed by some SL residents is understandable but naive. We're not at the beginning of the online world any more; Second Life is going through the same reformation to take account of national laws as Usenet and the Web did before it.

Also this week, MySpace deleted the profiles of 29,000 American users identified as sex offenders. That sounds like a lot, but it's a tiny percentage of MySpace's 180 million profiles. None of them, be it noted, are Canadian.

There's no question that gambling in Second Life spills over into the real world. Linden dollars, the currency used in-world, have active exchange rates, like any other currency, currently running about L$270 to the US dollar. (When I was writing about a virtual technology show, one of my interviewees was horrified that my avatar didn't have any distinctive clothing; she was and is dressed in the free outfit you are issued when you join. He insisted on giving me L$1,000 to take her shopping. I solemnly reported the incident to my commissioning editor, who felt this wasn't sufficiently corrupt to worry about: US$3.75! In-world, however, that could buy her several cars.) Therefore: the fact that the wagering takes place online in a simulated casino with pretty animated decorations changes nothing. There is no meaningful difference between craps on an island in Second Life and poker on an official Web-based betting site. If both sites offer betting on real-life sporting events, there's even less difference.

But the Web site will, these days, have gone through considerable time and expense to set up its business. Gaming, even outside the US, is quite difficult to get into: licenses are hard to get, and without one banks won't touch you. Compared to that, the $3,800 and 12 to 14 hours a day Brighton's Anthony Smith told Information Week he'd invested in building his SL Casino World is risibly small. You have to conclude that there are only two possibilities. Either Smith knew nothing about the gaming business – had he known it, he'd have known that the US has repeatedly cracked down on online gambling over the last ten years, that ultimately US companies would be forced to decide to live within US law, and how hard and how expensive it is to set up an online gambling operation even in Europe. Or, he did know all those things and thought he'd found a loophole he could exploit to avoid all the red tape and regulation and build a gaming business on the cheap.

I have no personal interest in gaming; risking real money on the chance draw of a card or throw of dice seems to me a ridiculous waste of the time it took to earn it. But any time you have a service that involves real money – whether it sells an experience (gaming), a service, or a retail product – governments are going to be interested once the money you handle reaches a certain amount. Not only that, but people want them involved; people want protection from rip-off artists.

The MySpace decision, however, is completely different. Child abuse is, rightly, illegal everywhere. Child pornography is, more controversially, illegal just about everywhere. But I am not aware of any laws that ban sex offenders from using Web sites, even if those Web sites are social networks. Of course, in the moral panic following the MySpace announcement, someone is proposing such a law. The MySpace announcement sounds more like corporate fear (the site is now owned by News Corporation) than rational response. There is a legitimate subject for public and legislative debate here: how much do we want to cut convicted sex offenders out of normal social interaction? And a question for scientists: will greater isolation and alienation be effective strategies to keep them from reoffending? And, I suppose, a question for database experts: how likely is it that those 29,000 profiles all belonged to correctly identified, previously convicted sex offenders? But those questions have not been discussed. Still, this problem, at least as regards MySpace, may solve itself: if parents become better able to track their kids' MySpace activities, all but the youngest kids will surely abandon it in favour of sites that afford them greater latitude and privacy.

A dozen years ago, John Perry Barlow (in)famously argued that national governments had no place in cyberspace. It was the most hyperbolic demonstration of what I call the "Benidorm syndrome": every summer thousands of holidaymakers descend on Benidorm, in Spain, and behave in outrageous and sometimes lawless ways that they would never dare indulge in at home in the belief that since they are far away from their normal lives there are no consequences. (Rinse and repeat for many other tourist locations worldwide, I'm sure.) It seems to me only logical that existing laws apply to behaviour in cyberspace. What we have to guard against is deforming cyberspace to conform to laws that don't exist.



July 20, 2007

The cookie monster

Google announced this week that in order to improve user privacy it would cut the length of time its cookies stay on our systems to two years. The clock will start ticking again every time you use Google or a site using a Google application. The company also plans to anonymize its user logs after 18 months.
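
For the curious, the mechanics are simple. Below is a minimal sketch (my illustration, not Google's actual code) of this kind of "sliding" expiration: the server reissues the cookie with a fresh expiry date on every request, so the two-year clock only ever runs out for people who stop visiting. The cookie name "PREF" and the exact header format are assumptions for the example.

```python
from datetime import datetime, timedelta, timezone
from http import cookies

TWO_YEARS = timedelta(days=2 * 365)

def refreshed_cookie_header(user_id: str) -> str:
    """Build a Set-Cookie value whose two-year clock restarts now."""
    jar = cookies.SimpleCookie()
    jar["PREF"] = user_id  # hypothetical preferences cookie
    expiry = datetime.now(timezone.utc) + TWO_YEARS
    jar["PREF"]["expires"] = expiry.strftime("%a, %d %b %Y %H:%M:%S GMT")
    return jar["PREF"].OutputString()

# Sent with every response, the cookie now expires two years after the
# user's *last* visit, not the first; only lapsed users ever lose it.
print("Set-Cookie: " + refreshed_cookie_header("abc123"))
```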

The company has been under attack lately for its privacy policies. A few weeks ago, Privacy International published its report on the privacy practices of key Internet companies, A Race to the Bottom, and Google came last. Not, you understand, that many companies did all that much better.

Privacy International didn't think any company was "privacy-friendly and privacy-enhancing", but it classed several as "generally privacy-aware but in need of improvement". These were: eBay, LiveJournal, the BBC, Wikipedia, and Last.fm. The report excluded travel sites and financial services, on the grounds that these are subject to regulations beyond their control.

You may notice that these sites aren't exactly comparable with Google, to say nothing of other companies included in the survey, such as AOL, Friendster, Microsoft, and Skype. This seems to me a real problem. The BBC, which does not rely on advertising or commercial sales, needs no registration system, and outside its ecommerce sales it has no reason to track anybody beyond generating usage statistics to show the patterns of how content is accessed on its site. Skype, on the other hand, can hardly offer a service without retaining user information, call records, and financial data. Practices that Skype must engage in to operate would be shockingly privacy-invasive if adopted by the BBC. More difficult to assess is user privacy on the social networking sites: on a site like Friendster or LiveJournal, users may, by publishing the details of their lives and thoughts, invade their own privacy far more comprehensively than the site itself can.

The sites are also not comparable in terms of how necessary they are. Hardly anyone really needs AOL. Keeping a blog on LiveJournal is optional; my life proceeds quite happily without Friendster or Facebook. But it's almost impossible these days to look anything up without at least considering looking on Wikipedia, and while there are many VoIP services, peer pressure makes a lot of people sign up for Skype. Therefore, while it's reasonable to compare the companies' corporate behavior, the impact of that behavior is not comparable, nor is the amount of effort and money that respecting privacy costs each company. It's a lot harder for Google to respect privacy and maintain its revenue stream than it is for the BBC.

It's also ironic that eBay should have scored so well. Police forces all over Britain agree that online auction fraud is one of the biggest sources of complaints they have. Google's ability to track everyone's search history, reading habits, and general interests may be, long-term, the worse privacy invasion. But to most people it's worse to be ripped off, and while eBay says it takes fraud seriously, the site is still awash in counterfeit DVDs, and does nothing to warn people with transactions in progress when a user's account is suspended for fraud. Which is worse? Being marketed at and tracked or being ripped off? Given that so many people are happy to hand over their privacy in return for some money off groceries (loyalty cards) or a truly modest amount of better treatment from their airline, I'd guess most people would think the latter.

But even given all that, Google's announcement this week is so trivial that it's insulting. For one thing, as Google Watch points out, Google assigns your computer a unique ID that persists through rain, snow, IP address change, and cookie rewrite. For another, you have no idea when you click on a URL whether the Web site you're about to visit uses a Google service. The point of privacy practices is to give users control; this does anything but that. Why not instead widen the user-configurable preferences to include whether or not to accept cookies, and for how long? How hard can that be for all those Google geniuses?

The Article 29 working group also pointed out that the bigger problem is Google's storing of search histories and IP addresses. As Privacy International noted, most companies regard individual IP addresses as essentially anonymous, impersonal data – absurd in this time of broadband, when people keep the same addresses for years on end. My IP address identifies my computer system more tightly than a library card.

To a large extent, Privacy International blames advertising. As long as content and services are going to be paid for by advertising, sites must track user statistics and supply the data that keeps advertisers happy. There's some justice to that.

But the real problem is the users: who is going to stop using Google because of its privacy policies? You might decide to avoid Gmail, or to delete patiently, one by one, the Usenet postings you crazily typed one night while drunk in 1982, but if you want search, or advertising on your own site… Google is successful as a business because it's made itself indispensable.


April 20, 2007

Green spies

Some months back I blogged a breakdown of the various fees that are added on to each airline ticket and tagged it "What we're paying." A commenter took issue: society at large, he wrote, was paying a good deal more than that for my evil flying habits, and I shouldn't be going to Miami anyway. He had a point. What's offending one niece by missing her wedding? I have more.

The intemperateness of the conversation is the kind of thing smokers used to get from those who've already quit.

Just how acrimonious the whole thing is getting was brought home to me this week when Ian Angell, professor of information systems at the London School of Economics, surfaced to claim that it is not really possible to be a privacy advocate and an environmentalist at the same time. Of course, Angell was in part just trying to make trouble and get people arguing. But he says he has a serious point.

"The green issues are providing a moral justification for the invasion of privacy," he says, "and the green lobby must take it on board as part of what they're doing. And the fact that they're not taking it on board makes them guilty."

I wouldn't go that far – I do not think you can blame people for unintended consequences. But there are a number of proposals floating around in the UK that could provide yet more infrastructure for endemic surveillance, even if the intention at the moment is to protect the environment.

For example: the idea of the personal carbon allowance, first mooted in 2005 with the notion that it could be linked to the ID card. Last July, the environment minister, David Miliband, proposed issuing swipe cards to all consumers, which you'd have to produce whenever you bought anything like petrol or heating – or plane tickets. That at least would give me ammunition against my blog commenter, because other than flying my carbon footprint is modest. In fact, we could have whole forums of moral superiors boasting about how few carbon points they used, just as we now have people who boast about how early they get up in the morning. And we could have billboards naming and shaming those who – oh, the horror – had to buy extra carbon points, as they do for TV license delinquents.

Or take the latest idea in waste management, the spy bin fitted with a microchip sensor that communicates with the garbage truck to tell your local council how much you've contributed to the landfill. Given the apparent eagerness of manufacturers to enhance their packaging with RFID chips, this could get really interesting over time.

This is also a country where the congestion charge – a scheme intended to reduce the amount of traffic in central London – is enforced by cameras that record the license plate of every vehicle crossing the charging zone's boundary. Other countries have had road tolls for decades, but London's mayor, formerly known as "Red Ken" Livingstone because of his extreme left-wing leanings, chose the most privacy-invasive way to do it. Proposals for nationwide road charging follow the same pattern, although the claim is that there will be safeguards against using the installed satellite tracking boxes to actually track motorists. Why on earth is this huge infrastructure remotely necessary? We already have per-mile road-use charging. It's called buying fuel.

Privacy International's executive director, Simon Davies, points out that none of these proposals – nor those to expand the use of CCTV (talking cameras!) – are supported by research to show how the environment will benefit.

Of course, if there's one rule about environmentalism it is, as Angell says, "The best tax is the tax the other guy pays." Personally, I'd ban air-conditioning; it doesn't get that hot in the UK anyway, and a load of ceiling fans and exhaust fans would take care of all but the most extreme cases of medical need. It certainly does seem ironic that just at the moment when everyone's getting exercised about saving energy and global warming, they're all putting in air-conditioning so cold that you have to carry a sweater with you if you go anywhere in the "summer".

So, similarly, when Angell says there are "straightforward, immediate answers" he's perfectly right. The problem is they'll all enrage some large group of businesses. "You could reduce garbage by 80 percent by banning packaging in shops. We are squabbling about tiny little changes when quite substantial changes are just not on the cards."

And then, he adds, "They jump on airline travel because you can bump up the taxes and it's morally justified."

I am convinced, however, that it's possible to be a privacy advocate and an environmentalist simultaneously. This is a type of issue that has come up before, most notably in connection with epidemiology. If you make AIDS a notifiable disease you make it easier to track the patterns of infection and alert those most at risk; but doing so invades patient privacy. But in the end, although Angell's primary goal was to stir up trouble, he's right to say that environmentalists need to ensure that their well-meaning desire to save the planet is not hijacked. Or, he says, "they will be blamed for the taxation and the intrusion."


January 19, 2007

Spineless

A friend went to India recently and got sick. Unsurprising, you might think, except that the reason she got sick was that her doctor in Pennsylvania didn't know that the anti-malaria medication he prescribed would interact badly with the acid-reflux drug he had also prescribed. The Indian doctor (who, ironically, had trained in Pennsylvania) knew all about it. Geography.

Preventing this kind of situation, at least on a national level, is part of the theory behind the NHS data spine; for Americans, it's a giant database onto which patient information from all parts of Britain's National Health Service is going to be put. It's also presumably part of the reason that pharmacists think they should be allowed to edit and add to patient records; American pharmacies have for years marketed the notion that if you fill all your prescriptions at the same place they'll be able to tell you if you're prescribed something stupid.

Personally, I can see a lot of merit in this idea. Also in the idea that medical personnel would have access to my records, so that my allergies would be known to all and sundry (it would be a help, in a medical crisis, if someone didn't try to revive me by feeding me peanut butter).

The problem is that this is a fantasy. It seems appealing to me, I suppose, only because a) I have hardly any medical records – all my doctors either died, refused to give me my records, or destroyed them after they hadn't heard from me for too long – and b) I can't imagine anything bad happening to me if any of my medical history were disclosed. This is not true for most people, and it ignores the most important thing we know about all databases: they contain errors. This is how the campaign to opt out of the database was born: its organiser, Helen Wilkinson, discovered that her medical records had erroneously labeled her an alcoholic. (Not that there's anything wrong with that.) Getting that corrected took years and questions in Parliament.

The problem with all these systems is that they seek to replace knowledge with information. Your GP may know you; the database merely holds information about you and can make no intelligent judgments about what's relevant to a particular situation or distinguish true from false.

Last year, the World Privacy Forum released a report on medical identity theft. Identity fraud, they concluded, happens at all levels in the US medical system. Medical personnel or clinics seeking to pad their income may add treatments they have never delivered to patient records and present them to insurers for payment. Thieves may use doctors' information to forge prescriptions. Patients without insurance or who do not want particular types of treatment to appear on their own records may steal another's identity. In the US, where most treatment is funded by private medical insurance, the consequences can be far-reaching for the victims of such fraud: their credit ratings, employment prospects, and ability to get medical insurance can all be hit hard. A lot of the complaints, therefore, about the Health Insurance Portability and Accountability Act are that it opens medical records to far too many people and, like the NHS Data Spine, does not provide a way for individuals to correct their own records.

In the UK, things are a bit different. Here, GPs are gatekeepers to all care. Nonetheless, according to Fleur Fisher, a consultant on ethics and health care practice, and probably the leading expert on medical privacy in the UK, there have been serious frauds in UK dentistry, where dentists may claim for big treatments they haven't actually performed.

The problem in the UK, she says, "is not that people will assume your medical identity so they can get treatment. It's much more that it will open people's health records."

A key part of the NHS plan seems to be to provide data to researchers to help determine public policy. Again, in a lot of ways this makes sense; but there is an old and recurring conflict between the desire for privacy of patients with, say, AIDS, and the legitimate interest of society at large in halting the disease's spread. One thing you should not rely on is that the data will be unidentifiable, even if the NHS confirms that it will be "anonymized". Years ago, Latanya Sweeney showed just how unreliable this is by matching supposedly anonymized data from the Massachusetts state health system against publicly available voter registration rolls. With only a few database fields she was able to identify almost all of the individuals in the medical data.
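
For those who want the mechanics: re-identification of this kind is essentially a database join. The sketch below (mine, with invented field names and data; not Sweeney's actual code) links "anonymized" medical records to a public roll on the few quasi-identifiers the two share.

```python
from collections import defaultdict

# "Anonymized" medical records: names removed, quasi-identifiers kept.
medical = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "M", "diagnosis": "..."},
]

# A public roll: names plus the same quasi-identifiers.
public_roll = [
    {"name": "J. Doe", "zip": "02138", "dob": "1945-07-31", "sex": "M"},
]

def reidentify(medical_records, roll):
    """Yield (name, diagnosis) wherever the quasi-identifiers match uniquely."""
    index = defaultdict(list)
    for person in roll:
        index[(person["zip"], person["dob"], person["sex"])].append(person["name"])
    for record in medical_records:
        matches = index[(record["zip"], record["dob"], record["sex"])]
        if len(matches) == 1:  # a unique match pins the record to one person
            yield matches[0], record["diagnosis"]

print(list(reidentify(medical, public_roll)))  # [('J. Doe', '...')]
```

Sweeney's point was that ZIP code, birth date, and sex alone are enough to single out most of the population, which is why merely stripping names is no guarantee of anonymity.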

Opting out has turned out not to be so simple, despite the fact that, according to Ross Anderson, who has been working on medical privacy for well over a decade, most GPs are unhappy about the forced uploading of patient data to a centralised database. That being the case, as Phil Booth notes in the No2ID forum on the topic, if you want to opt out, treat your GP as your ally unless he proves otherwise.

Meantime, if you really want emergency personnel to know the important stuff about you, wear an alert bracelet or some other identifier.


December 29, 2006

Resolutions for 2007

A person can dream, right?

- Scrap the UK ID card. Last week's near-buried Strategic Action Plan for the National Identity Scheme (PDF) included two big surprises. First, that the idea of a new, clean, all-in-one National Identity Register is being scrapped in favor of using systems already in use in government departments; second, that foreign residents in the UK will be tapped for their biometrics as early as 2008. The other thing that's new: the bald, uncompromising statement that it is government policy to make the cards compulsory.

No2ID has pointed out the problems with the proposal to repurpose existing systems, chiefly that they were not built to provide the security the legislation promised. The notion is still that everyone will be re-enrolled with a clean, new database record (at one of 69 offices around the country), but we still have no details of what information will be required from each person or how the background checks will be carried out. And yet this is really the key to the whole plan: the project to conduct background checks on all 60 million people in the UK and record the results. I still prefer my idea from 2005: have the ID card if you want, but lose the database.

The Strategic Action Plan includes the list of purposes of the card; we're told it will prevent illegal immigration and identity fraud, become a key "defence against crime and terrorism", "enhance checks as part of safeguarding the vulnerable", and "improve customer service".

Recall that none of these things was the stated purpose of bringing in an identity card when all this started, back in 2002. Back then, first it was to combat terrorism; then it was an "entitlement card" and the claim was that it would cut benefit fraud. I know only a tiny mind criticizes when plans are adapted to changing circumstances, but don't you usually expect the purpose of the plans to be at least somewhat consistent? (Though this changing intent is characteristic of the history of ID card proposals going back to the World Wars. People in government want identity cards, and try to sell them with the hot-button issue of the day, whatever it is.)

As far as customer service goes, William Heath has published some wonderful notes on the problem of trust in egovernment that are pertinent here. In brief: trust is in people, not databases, and users trust only systems they help create. But when did we become customers of government, anyway? Customers have a choice of supplier; we do not.

- Get some real usability into computing. In the last two days, I've had distressed communications from several people whose computers are, despite their reasonable and best efforts, virus-infected or simply non-functional. My favourite recent story, though, was the US Airways telesales guy who claimed that it was impossible to email me a ticket confirmation because according to the information in front of him it had already been sent automatically and bounced back, and they didn't keep a copy. I have to assume their software comes with a sign that says, "Do not press this button again."

Jakob Nielsen published a fun piece this week, a list of the top ten movie usability bloopers. Throughout movies, computers only crash when they're supposed to, there is no spam, on-screen messages are always easily readable by the camera, and time travellers have no trouble puzzling out long-dead computer systems. But of course the real reason computers are usable in movies isn't some marketing plot by the computer industry; it's the same reason William Goldman gave for the weird phenomenon that movie characters can always find parking spaces in front of their destination: it moves the plot along. Though if you want to see the ultimate in hilarious consumer struggles with technology, go back to the 1948 version of Unfaithfully Yours (out on DVD!), starring Rex Harrison as a conductor convinced his wife is having an affair. In one of the funniest scenes in cinema, ever, he tries to follow printed user instructions to record a message on an early gramophone.

- Lose the DRM. As Charlie Demerjian writes, the high-def wars are over: piracy wins. The more hostile the entertainment industries make their products to ordinary use, the greater the motivation to crack the protective locks and mass-distribute the results. It's been reasonably argued that Prohibition in the US paved the way for organized crime to take root because people saw bootleggers as performing a useful public service. Is that the future anyone wants for the Internet?

Losing the DRM might also help with the second item on this list, usability. If Peter Gutmann is to be believed, Vista will take a nosedive in that department because of embedded copy-protection requirements.

- Converge my phones. Please. Preferably so people all use just the one phone number, but all routing is least-cost to both them and me.

- One battery format to rule them all. Wouldn't life be so much easier if there were just one battery size and specification, and to make a bigger battery you'd just snap a bunch of them together?

Happy New Year!


December 1, 2006

A SWIFT kick

One of the clear trends of the last five years has been increasing international surveillance, especially by or on behalf of the US. Foreign visitors to the US now are welcomed with demands for fingerprints and other biometrics; airlines flying to the US are required to hand over passenger data even before the plane pushes back; and, behind the scenes, the cooperative that handles interbank transfers within Europe has been sending the US Treasury department banking records that the average European citizen almost certainly assumes are confidential.

This week, the Article 29 Working Party – the panel of the EU member states' data protection commissioners – ruled that the interbank money transfer service SWIFT (Society for Worldwide Interbank Financial Telecommunication) had failed to respect the provisions of the EU Data Protection Directive by transferring personal financial data to the US in a manner the press release describes as "hidden, systematic, massive, and long-term."

It doesn't sound like much when you say that a few people brought a complaint about an obscure organization to an equally obscure branch of the EU government and won. It sounds like a lot more when you say that a few people brought a complaint that, upheld, means the European financial world will have to change its behavior.

The transfers are part of anti-terrorist programs put in place after the September 11, 2001, attacks to allow American intelligence agency analysts to spot funds being sent to finance terrorists. The problem is that the EU's Data Protection Directive forbids the transfer of personal data to countries that do not have the same level of protection in place; the US is most certainly in that category. Simon Davies, executive director of Privacy International, says the goal in making the complaint that led to the Article 29 group's decision was not to stop all data transfers. "The data should be transferred when there's some level of evidence," he says. What PI objected to was the lack of oversight from anyone outside the cooperative, which is owned by its many member institutions – banks, brokers, investment managers, and corporations.

"Now that we know SWIFT was acting illegally," says Davies, "the aim is to bring SWIFT and the banks to account, first by establishing a meaningful oversight mechanicsm, and second by bringing some transparency to the whole arrangement." Part of Privacy International's involvement was, together with the American Civil Liberties Union, to prepare a report on the involvement of consulting firm Booz Allen Hamilton, which is SWIFT's supposedly independent auditor but which, according to the report, has been deeply involved with American surveillance programs for the last ten years. Booz Allen told the New York Times that it rejected PI's charges.

PI's next step, Davies says, will be to contact the banks to ask what they intend to do or have done to comply with the decision. Under the law, they have 30 days to reply. "At the end of the 30 days, unless they provide evidence that they have complied, we then follow up with a second round of complaints to all commissioners worldwide." The US, of course, has no data protection commissioner – and even if it did, the transfers are legal there – so the list Davies is talking about is all the EU countries, Canada, Hong Kong, Australia, New Zealand, and a smattering of others.

"What they do depends on their powers in each country," says Davies, noting that "the UK is particularly weak." Unlimited fines can be imposed, should the commissioners so choose. "If SWIFT doesn't make an adult decision to deal with the situation, then it's up to member banks to use their voting rights within SWIFT to force change."

Meanwhile, he says, "SWIFT is also stuck. They have to comply with subpoenas issued by US authorities." Otherwise, SWIFT would be incurring criminal liability.

Davies' belief is that what's needed is a truly independent oversight body, or perhaps a former judge, to review proposed data transfers and ensure they comply with the law.

That, of course, is not what the US wants; Jane Horvath, chief privacy and civil liberties officer for the US Department of Justice, told the recent international conference of data protection commissioners that the US does, too, have privacy laws, and that everyone should get together and agree on some kind of global data law. Under EU law, however, the US would have to raise its privacy protections to EU standards before sending data there would be legal.

This seems unlikely, but you never know. A couple of years ago, when the EU had the choice of honoring data protection law or sending the US government all the airline passenger data it wanted, the EU caved and sent the passenger data. Still, in this era when people seem willing to justify almost any amount of privacy invasion with the words "anti-terrorism", it was heartening to read the Working Party's final comment on the whole thing:

"The Working Party recalls that any measures taken in the fight against crime and terrorism should not and must not reduce standards of protection of fundamental rights which characterise democratic societies."

It's up to us to make them stick to that.


November 3, 2006

Where can we go?

One of the things about being an expatriate is this: whenever there’s a problem you always think that changing countries might be the solution. So one of the games we like to play is where-can-we-go. Global warming means the Gulfstream will stop or change direction and Britain will freeze? Where can we go? Britain is awash in CCTV cameras and wants to bring in a national identity card. Where can we go?

My usual answer is New Zealand, based on no knowledge whatsoever: it just seems so far from here that anything that’s a problem here surely can’t be one there, too. And I know so little about the country that it’s easy to fantasize being left alone to roam the hills among the sheep. Yes, I know: it can’t really be like that, and I’d hate it if it were.

But apparently when it comes to privacy the answers are Germany or Canada. In a pinch, Belgium, Austria, Greece, Hungary, or Argentina. At least, that’s the situation according to Privacy International’s new human rights survey. New Zealand’s score is a good bit lower, although you could improve it by avoiding employment; workplace monitoring was the one area in which it scored really badly. Of course, the raw numbers never really tell you the quality of life in those countries; and most people don’t want to play where-can-we-go. Most people, being sensible, want the place where they do live, and where they have their cultural and social ties, to be better.

The problem with talking about privacy is that it's so abstract. It’s arguable, for example, that the biggest worry in many people’s lives in the US is not privacy but how to pay for health care. A friend, for example, recently had occasion to go to the emergency room for a few tests; she estimated the bill at $2,000 and her share of it at $500.

In that sense, the Information Commissioner’s new survey, A Surveillance Society, launched alongside the annual data protection conference, is more alarming, in part because it investigates the consequences of constant surveillance and its impact on the realities of daily life. What can be done, and is being done, is to create a class system of great rigidity: surveillance, the survey says, brings social sorting, defining target markets and risky populations. The airline that knows how much you travel decides accordingly how to treat you; the health service decides how to treat you based on its assessment of your worthiness for treatment. Welfare becomes an exercise in deterring fraud rather than assuring safety: risk management, the survey says, rather than the original promise of universal health care. That it’s not a conspiracy, as the survey repeats several times, makes it almost more alarming: there is no one specific enemy to fight.

This should not be surprising to anyone who read, some years back, the software engineer Ellen Ullman’s wonderful essay collection, Close to the Machine. Everything the IC is talking about was laid out there in detail, including the exact process by which it happens. Her story concerned a database created to help ensure that people with AIDS got all the help that was available to them. Slowly, the system morphed; in her words, it “infected” its users. The fuzzy, human logic by which one person might get an extra blanket was replaced by inexorable computer rules. Then it became hostile, trying to ensure that no one got more than they were entitled to – the precise stage Britain is at right now with respect to welfare.

In another case, the fact of a system’s existence led a boss to wonder whether he could monitor his sole employee to find out what she did all day. The employee had worked for him for decades and had picked up his children from school. This is, I suppose, where we are with National Identity cards. We *can* find out what everyone does all day, so why shouldn’t we?

Of course, Britain is famous for its class system and its anti-democratic nature. But if there was one thing you could say for the old ways, it was that the class differences were clearly visible on the outside. Accent and habits of speech, as George Bernard Shaw observed nearly a century ago in Pygmalion, determined how you were treated, and you knew what to expect. Social sorting via surveillance is more democratic, in the sense that you don’t have to have a title or the right accent to be a big spender the airlines will treat like gold dust. But the rules are hidden and insidious, rather than open and well understood.

So: where can we go (PDF)? A lot of people like the sound of Ireland – they speak English, it’s close, and it’s pretty. But it ranks only a tier above the UK and will always be under pressure to adopt British standards because of the common travel area. Some people like Sweden, for its longstanding commitment to social welfare. It, too, ranks low on the privacy scale. It will have to be Canada. But it’s cold, I hear you cry. Nah. Global warming. Those igloos will be melting any day now. Off to Winnipeg (DOC)!


October 20, 2006

Spam, spam, spam, and spam

Illinois is a fine state. It is the Land of Lincoln. It is home to such well-known Americans as Oprah Winfrey, Roger Ebert, and Ronald Reagan. It has a baseball team so famous that even I know it's called the Chicago Cubs. The philosopher John Dewey, the famous pro-evolution lawyer Clarence Darrow, Mormon church founder Joseph Smith, the nuclear physicist Enrico Fermi, semiconductor inventor William Shockley, and Frank Lloyd Wright have all, at one time or another, called Illinois home.

I say all this because I don't want anyone to think I don't like or respect Illinois or the intelligence and honor of its judges, including Charles Kocoras, who awarded $11.7 million in damages to e360Insight, a company branded a spammer by the Spamhaus Project.

The story has been percolating for a while now, but is reasonably simple. e360Insight says it's not a bad spammer guy but a good opt-in marketing guy; Spamhaus first said the Illinois court didn't have jurisdiction over a British company with no offices, staff, or operations in the US, then decided to appeal against the court's $11.7 million judgement. e360Insight filed a motion asking the court to have ICANN and/or Spamhaus's domain registrar, the Canadian company Tucows, remove Spamhaus's domain from the Net. The judge refused to grant this request, partly because doing so would cut off Spamhaus's lawful activities, not just those in contravention of the order he issued against Spamhaus. And a good time is being had by all the lawyers.

The case raises so many problems you almost don't know where to start. For one thing, there's the arms race that is spam and anti-spam. This lawsuit escalates it, in that if you can't get rid of an anti-spammer through DDoS attacks, well, hey, bankrupt them through lawsuits.

Spam, as we know, is a terrible, intractable problem that has broken email, and is trying to break blogs, instant messaging, online chat, and, soon, VoIP. (The net.wars blog, this week, has had hundreds of spam comments, all appearing to come from various Gmail addresses, all landing in my inbox, breaking both blogs and email in one easy, low-cost plan.) The breakage takes two forms. One is the spam itself – up to 90 percent of all email. The second is the steps people take to stop it. No one can use email with any certainty now.

Some have argued that real-time blacklists are censorship. I don't think it's fair to invoke the specter of Joseph McCarthy. For one thing, using these blacklists is voluntary. No one is forced to subscribe, not even free Webmail users. That single fact ought to be the biggest protection against abuse. For another thing, spam email in the volumes it's now going out is effectively censorship in itself: it fills email boxes, often obscuring and sometimes blocking entirely wanted email. The fact that most of it either is a scam or advertises something illegal is irrelevant; what defines spam, I have long argued, is the behavior that produces it. I have also argued that the most effective way to put spammers out of business is to lean on the credit card companies to pull their authorisations.
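
Mechanically, a real-time blacklist is just a DNS zone that a receiving mail server can choose to consult. Here is a minimal sketch of the standard DNSBL lookup convention (the zone name is Spamhaus's public one, but treat the details as illustrative; any given list's own documentation governs):

```python
import socket

def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    """Check an IPv4 address against a DNS-based blacklist (DNSBL)."""
    # Reverse the octets and append the list's zone: 1.2.3.4 becomes
    # 4.3.2.1.zen.spamhaus.org. The name resolves only if the IP is listed.
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)
        return True
    except socket.gaierror:  # NXDOMAIN: not on the list
        return False

# 127.0.0.2 is the conventional "always listed" test address. Each mail
# server decides for itself whether to subscribe and what to do with a
# hit: reject, tag, or quarantine.
print(is_listed("127.0.0.2"))
```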

Mail servers are private property; no one has the automatic right to expect mine to receive unwanted email just as I am not obliged to speak to a telemarketer who phones during dinner.

That does not mean all spambusters are perfect. Spamhaus provides a valuable public service. But not all anti-spammers are sane; in 2004 journalist Brian McWilliams made a reasonable case in his book Spam Kings that some anti-spammers can be as obsessive as the spammers they chase.

The question that's dominated a lot of the Spamhaus coverage is whether an Illinois court has jurisdiction over a UK-based company with no offices or staff in the US. In the increasingly connected world we live in, there are going to be a lot of these jurisdictional questions. The first one I remember – the 1996 case United States vs. Thomas – came down in favor of the notion that Tennessee could impose its community decency standards on a bulletin board system in California. It may be regrettable – but consumers are eager enough for their courts to have jurisdiction in case of fraud. Spamhaus is arguably as much in business in the US as any foreign organisation whose products are bought or used in the US. Ultimately, "Come here and say that" just isn't much of a legal case.

The really tricky and disturbing question is: how should blacklists operate in future? Publicly listing the spammers whose mail is being blocked is an important – even vital – way of keeping blacklists honest. If you know what's being blocked and can take steps to correct it, it's not censorship. But publishing those lists makes legal action against spam blockers of all types – blacklists, filtering software, you name it – easier.

Spammers themselves, however, should not rejoice if Spamhaus goes down. Spam has broken email, that's not news. But if Spamhaus goes and we actually receive all the spam it's been weeding out for us – the flood will be so great that spam will finally break spam itself.


October 6, 2006

A different kind of poll tax

Elections have always had two parts: the election itself, and the dickering beforehand (and occasionally afterwards) over who gets to vote. The latest move in that direction: at the end of September the House of Representatives passed the Federal Election Integrity Act of 2006 (H.R. 4844), which from 2010 will prohibit election officials from giving a ballot to anyone who can't present a government-issued photo ID whose issuing requirements included proof of US citizenship. (This rules out driver's licenses, which everyone has, though I guess it would allow passports, which relatively few have.) These days, there is a third element: specifying the technology that will tabulate the votes. Democracy depends on voters being able to believe that what determines an election is the voters' choices, rather than either the dickering over who may vote or the machinery that counts the ballots.

The last of these has been written about a great deal in technology circles over the last decade. Few security experts are satisfied with the idea that we should trust computers to do "black box voting" where they count up and just let us know the results. Even fewer security experts are happy with the idea that so many politicians around the world want to embrace: Internet (and mobile phone) voting.

The run-up to this year's mid-term US elections has seen many reports of glitches. My favorite recent report comes from a test in Maryland, where it turned out that the machines under test did not communicate with each other properly when the touch screens were in use. If they don't communicate correctly, voters might be able to vote more than once. Attaching mice to the machines solves the problem – but the incident is exactly the kind of wacky glitch that's familiar from everyday computing life and that can take absurd amounts of time to resolve. Why does anyone think that this is a sensible way to vote? (Internet voting has all the same risks of machine glitches, and then a whole lot more.)

The 2000 US Presidential election is more famous for hanging chad than for the removal of a few hundred thousand voters from Florida's electoral rolls – but the purge is the story worth reading up on. Of course, wrangling over who gets to vote didn't start then. Gerrymandering districts, fighting over extending the right to vote to women, slaves, felons, expatriates…

The latest twist in this fine, old activity is the push in the US towards requiring Voter ID. Besides the federal bill mentioned above, a couple of dozen states have passed ID requirements since 2000, though state courts in Missouri, Kentucky, Arizona, and California are already striking them down. The target here seems to be that bogeyman of modern American life, illegal immigrants.

Voter ID isn't as obvious a barrier as a poll tax. After all, this is just about authenticating voters, right? Every voter a legal voter. But although these bills generally include a requirement to supply a voter ID free of charge to people too poor to pay for one, the supporting documentation isn't free: try getting a free copy of your birth certificate, for example. The combination of those costs and the effort involved in getting the ID is a burden that falls disproportionately on the usual already disadvantaged groups (the same ones stopped from voting in the past by road blocks, insufficient provision of voting machines in some precincts, and indiscriminate cleaning of the electoral rolls). Effectively, voter ID creates an additional barrier between the voter and the act of voting. It may not be the letter of a poll tax, but it is the spirit of one.

This is, in fact, exactly the point opponents are making.

There are plenty of other logistical problems, of course, such as: what about absentee voters? I registered in Ithaca, New York, in 1972. A few months before the federal primaries, the Board of Elections there mails me a registration form; returning it gets me absentee ballots for the Democratic primaries and the elections themselves. I've never known whether my vote is truly anonymous, nor whether it's actually counted. I take those things on trust, just as, I suppose, the Board of Elections trusts that the person sending back these papers is not some stray British person who does my signature really well. To insert voter ID into that process would presumably require turning expatriate voters over to, say, the US embassies, which are familiar with authentication and checking identity documents.

Given that most countries have few such outposts, the barriers to absentee voting would be substantially raised for many expatriates. Granted, we're a small portion of the problem. But there's a direct clash between the trend toward embracing remote voting – the entire state of Oregon votes by mail – and the desire to authenticate everyone.

We can fix most of the voting technology problems by requiring voter-verifiable, auditable paper trails, as Rebecca Mercuri began pushing for all those years ago (and as most computer scientists now agree), and there seem to be substantial moves in that direction as states test the electronic equipment and scientists find more and more serious potential problems. Twenty-seven states now have laws requiring paper trails. But how we control who votes is the much more difficult and less talked-about frontier.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

August 4, 2006

Hard times at the identity corral

If there is one thing we always said about the ID card, it's that it was going to be tough to implement. About ten days ago, the Sunday Times revealed how tough: manufacturers are oddly uneager to bid to make something that a) the Great British Public is likely to hate, and b) they're not sure they can manufacture anyway. That suggests (even more strongly than before) that in planning the ID card the government operated like an American company filing a dodgy patent: if we specify it, they will come.

I sympathize with IBM and the other companies, I really do. Anyone else remember 1996, when nearly all the early stories coming out of the Atlanta Olympics prominently blamed IBM for every logistical snafu? Some really weren't IBM's fault (such as the traffic jams). Given the many failures of UK government IT systems, being associated with the most public, widespread, visible system of all could be real stock market poison.

But there's a secondary aspect to the ID card that I, at least, never considered before. It's akin to the effect often seen in the US when an amendment to the Constitution is proposed. Even if it doesn't get ratified in enough states – as, for example, the Equal Rights Amendment did not – the process of considering it often inspires a wave of related legislation. The fact that ID cards, biometric identifiers, and databases are being planned and thought about at such a high level seems to be giving everyone the idea that identity is the hammer for every nail.

Take, for example, the announcement a couple of days ago of NetIDme, a virtual ID card intended to help kids identify each other online and protect them from the pedophiles our society apparently now believes are lurking behind every electron.

There are a lot of problems with this idea, worthy though the intentions behind it undoubtedly are. For one thing, placing all your trust in an ID scheme like this is a risk in itself. To get one of these IDs, you fill out a form online and then a second one that's sent to your home address and must be counter-signed by a professional person (how like a British passport) and by a parent if you're under 18. It sounds to me as though this system would be relatively easy to spoof, even if you assume that no professional person could possibly be a bad actor (no one has, after all, ever fraudulently signed a passport). For another, no matter how valid the ID is when it's issued, in the end it's a computer file protected by a password; it is not physically tied to the holder in any way, any more than your Hotmail ID and password are. For a third thing, "the card removes anonymity," the father who designed the card, Alex Hewitt, told The Times. But anonymity can protect children as well as crooks. And you'd only have to infiltrate the system once to note down a long list of targets for later use.

But the real kicker is in NetIDme's privacy policy, in which the fledgling company makes it absolutely explicit that the database of information it will collect to issue IDs is an asset of a business: it may sell the database, the database will be "one of the transferred assets" if the company itself is sold, and you explicitly consent to the transfer of your data "outside of your country" to wherever NetIDme or its affiliates "maintain facilities". Does this sound like child safety to you?

But NetIDme and other such systems – fingerprinting kids for school libraries, iris-scanning them for school cafeterias – have the advantage that they can charge for their authentication services. Customers (individuals, schools) have at least some idea of what they're paying for. This is not true of the UK's ID card, whose costs and benefits are still unclear, even after years of dickering over the legislation. A couple of weeks ago, it became known that as of October 5 British passports will cost £66, a 57 percent increase that NO2ID attributes in part to the costs of infrastructure needed for ID cards but not for passports. But if you believe the LSE's estimates, we're not done yet. The most recent government estimates are that an ID card/passport will cost £93, up from £85 at the time of the LSE report. So, a little quick math: the LSE report also guessed that entry onto the national register would cost £35 to £40, with a small additional charge for a card; scaling that up by the same proportion gives a current estimate of £38.15 to £43.60 for registration alone. If no one can be found to make the cards but the government tries to forge ahead with the database anyway, it will be an awfully hard sell. "Pay us £40 to give us your data, which we will keep without any very clear idea of what we're going to do with it, and in return maybe someday we'll sell you a biometric card whose benefits we don't know yet." If they can sell that, they may have a future in Alaska selling ice boxes to Eskimos.
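To make the scaling explicit, here is the back-of-the-envelope arithmetic as a quick sketch (my own illustration, not the LSE's; it assumes the rounded 9 percent rise, since £85 to £93 is strictly about 9.4 percent):

```python
# Scaling the LSE registration estimate by the same (rounded) rise
# as the government's combined ID card/passport figure.
lse_low, lse_high = 35.0, 40.0    # LSE's registration estimate, GBP
old_cost, new_cost = 85.0, 93.0   # combined estimate then and now, GBP

rise = round((new_cost - old_cost) / old_cost, 2)  # 8/85 = 9.4%, rounds to 0.09
print(round(lse_low * (1 + rise), 2),              # 38.15
      round(lse_high * (1 + rise), 2))             # 43.6
```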

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

June 16, 2006

Security vs security, part II

It's funny. Half the time we hear that the security of the nation depends on the security of its networks. The other half of the time we're being told by governments that if the networks are too secure, the security of the nation is at risk.

This schizophrenia was on display this week in a decision by the US Court of Appeals for the District of Columbia Circuit, which ruled in favor of the Federal Communications Commission: yes, the FCC can extend the Communications Assistance for Law Enforcement Act (CALEA) to VoIP providers. Oh, yeah, and to other people providing broadband Internet access, like universities.

Simultaneously, a clutch of experts – to wit, Steve Bellovin (Columbia University), Matt Blaze (University of Pennsylvania), Ernest Brickell (Intel), Clinton Brooks (NSA, retired), Vinton Cerf (Google), Whitfield Diffie (Sun), Susan Landau (Sun), Jon Peterson (NeuStar), and John Treichler (Applied Signal Technology) – released Security Implications of Applying the Communications Assistance to Law Enforcement Act to Voice over IP (PDF), a paper carefully documenting why requiring voice over IP to accommodate wiretapping is dangerous. Not all of these folks are familiar to me, but the ones who are could hardly be more distinguished, and it seems to me that when experts on security, VoIP, Internet protocols, and cryptography all get together to tell you there's a problem, you (as in the FCC) should listen.

First of all – and they of course aren't the only ones to have noticed this – the Internet is not your father's PSTN. On the public switched telephone network, you have fixed endpoints, you have centralized control, and you have a single, continuously open circuit. The whole point of VoIP is that you take advantage of packet switching to turn voice calls into streams of data that are more or less indistinguishable from all the other streams of data whose packets are flying alongside. Yes, many VoIP services give you phone numbers that sound the same as geographically fixed numbers – but the point is that neither caller nor receiver needs to wait by the phone. The phone is where your laptop is. Or, possibly, where your secretary's laptop is. Or you're using Skype instead of Vonage because your contact also uses Skype.

Nonetheless, as the report notes, the apparent simplicity of VoIP, its design that makes it look as though it functions the same as old-style telephones, means that people wrongly conclude that anything you can do on the PSTN you should be able to do just as easily with VoIP.

But the real problems lie in security. There's no getting round the fact that when you make a hole in something, stuff leaks out through it. Where in the PSTN world you had just a few huge service providers and a single wire you could follow along, placing your wiretap wherever was most secure, in the VoIP world you have dozens of small providers and an unpredictable selection of switching and routing equipment. You can't be sure any wiretap you insert will be physically controlled by the VoIP provider. Your targets can create new identities at no cost faster than you can say "pre-pay mobile phone". You can't be sure the signals you intercept can be securely transported to Wiretap Central. The smart terminals we use have a better chance of detecting the wiretap – which is both good and bad, in terms of civil liberties. Under US law, you're supposed to tap only the communications covered by the court authorization, which is difficult for all of the foregoing reasons. And the tap itself is a hole that, as the IETF observed in 2000, could be exploited by someone else. Whom do you fear more gaining access to your communications: the government, a crook, a hacker, a credit reporting agency, your boss, your child, your parent, or your spouse? Fun, isn't it?

And then there's the money. American ISPs can look forward to the costs of CALEA with all the enthusiasm that European ISPs had for data retention. In this case, the government helpfully provided its own data: one VoIP provider paid $100,000 to a contractor to develop its CALEA solution, plus a monthly fee of $14,000 to $15,000 and, on top of that, $2,000 for each intercept.
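For a sense of what those numbers add up to, here's a minimal sketch (my own, hypothetical: it assumes the quoted fees stay flat and takes the midpoint of the monthly range):

```python
# Rough first-year CALEA bill for a small VoIP provider, using the
# per-item figures quoted above. All assumptions are illustrative.
setup = 100_000        # one-off contractor development fee, $
monthly = 14_500       # midpoint of the $14,000-15,000 monthly fee
per_intercept = 2_000  # charge per individual intercept

def first_year_cost(intercepts: int) -> int:
    """Setup plus twelve months of fees plus per-intercept charges."""
    return setup + 12 * monthly + intercepts * per_intercept

print(first_year_cost(0))   # 274000 -- before a single wiretap is placed
print(first_year_cost(10))  # 294000
```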

Two obvious consequences. First: VoIP service will increasingly be sold into the US by overseas companies, because in general the first reason people buy VoIP is that it's cheap. Second: real-time communications will migrate to things that look a lot less like phone calls. The report mentions massively multiplayer online role-playing games and instant messaging. Why shouldn't criminals adopt pink princess avatars and kill a few dragons while they plot?

It seems clear that none of this is any way to run a wiretap program, though even the report (two of whose authors, Landau and Diffie, have written a history of wiretapping) allows that governments have a legitimate need to wiretap, within limits. But that last paragraph sounds like a pretty good premise for a science fiction novel – in fact, something like the opening scenes of Vernor Vinge's new Rainbows End.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

May 5, 2006

Computers, Freedom, and Privacy XVI

“Everyone seems depressed,” someone said a half-day into this year’s Computers, Freedom, and Privacy conference.

It’s true. Databases are everywhere this year. FEMA databases made of records from physicians, pharmacists, insurers. The databases we used to call the electoral rolls. ChoicePoint. The national health infrastructure they want to build. Real ID. RFID tracks and trails – coming soon to a database near you. And so on.

The only really positive moment is when Senator Leahy (Democrat – VT) bounds in to deliver a keynote, saying that the society we are creating is “a different US society than the one we know”. He ringingly denounces the claim that voting for the resolution to pursue Al-Qaida included voting for warrantless wiretapping, and everyone applauds.

Wait. CFP is being bucked up by a senator? Ten years ago, this conference thought it could code its way out of anything. PGP and Internet architecture could beat any lawmaker.

Now, someone says, “The governments are moving in in a big way” and there is a sense that the only hope lies in policy-making and persuasion. Privacy advocates are brainstorming legislative proposals. The Electronic Frontier Foundation is opening an office in DC.

Even the government types are depressed. Stewart Baker, who in 1994 baited this group by claiming that opposition to key escrow was coming only from those who couldn’t go to Woodstock because they had to finish their math homework, is now at the Department of Homeland Security, and tells us that in an emergency we should save ourselves.

Actually, that was one of the few moments of levity. What he did was ask how many libertarians there were in the room who believed that a government governs best that governs least. About ten percent of the crowd raised their hands (another major change from ten years ago, when at least half would have done so). How many actually had provisions of food and water for 72 hours? Most hands dropped. “Who,” Baker asked, “are you expecting to rescue you?” Gotcha.

The science fiction writer Vernor Vinge, who’s been wandering the conference to sample the zeitgeist preparatory to delivering his wrap-up late Friday, summed it up in an advance sample.

“The angle that’s somewhat discouraging,” he says, “is the sense I have in many of these issues that they do reflect an almost implacable advance and on many different fronts on the part of the government in support of the fundamental government idea that total information awareness (no trademark) is absolutely essential to the national interest. I see that as clearly and explicitly recognized by the government as essential to national security, so that in the long run opposing it is at best a matter of slowing its advance down and at worst giving it the appearance of slowing its advance down.”

Vinge was last at CFP in 1996, when he, Bruce Sterling, and Pat Cadigan all participated in a panel called “We Know Where You Will Live”. I remember it as one of the best CFP panels, ever. It was, to be sure, somewhat gloomy. I remember, for example, predictions that a supermarket might know from your purchase records what foods you had been eating and, in cahoots with your medical insurer, order you off the potato chips and onto the celery sticks. But being able to imagine this dysfunctional future gave us the sense that we would be able to avert it. And without, as one prominent CFPer has done since last year, moving to Canada.

This year, Katherine Albrecht, the leading campaigner against RFID tags and their prospective use to tag and track goods and people, presented her latest findings. Some of her scenarios are far-fetched enough to be truly lame (for example, the idea that someone could sit next to you in a plane and scan your bag so they could steal exactly what they wanted while you were in the lavatory) and others are too clearly chosen to try to manipulate emotional hot buttons (such as the idea that someone passing on the street could point a cellphone reader at a woman and be able to tell what model and color bra she was wearing; I mean, so what?). But the tracking, storing, and eventually sharing of data are all logical consequences of the infrastructure her research shows they are building. I’m not convinced we will go there. But the possibility is no longer outlandish enough for me to feel empowered by considering it any more.

“We haven’t,” a privacy activist now in the corporate sector said to me over dinner, “had one single success. It’s just a long list of failures.”

It is the health care situation that is particularly depressing. The UK has its flaws, but one benefit of nationalized health care is a real reduction in the number of people and organizations who are intensely interested in your medical records. In the US, it seems as though everyone is lining up hoping to get a glimpse of what might be wrong with you.

In one panel we learn that medical identity theft is one of the biggest and fastest growing problems. Now, I wouldn’t mind that so much if they’d take my ailments, too. Such as this growing sensation of being surrounded, spied upon, watched by cameras…

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 31, 2006

Protect people, not data

I spent some of this week talking to parents, for a Guardian piece, about the phenomenon of fingerprinting kids in schools. (Surely fingerpainting was more fun.) One of the real frustrations among the people I spoke to was the lack of (helpful) response from the Information Commissioner's Office.

The systems that are being deployed in many school libraries in the UK (with doubtless other countries to follow if they succeed here) are made by Micro Librarian Systems. The fingerprinting side of it is really an add-on; without fingerprint readers, the kids use barcodes. One of the system's selling points seems to be that it doesn't need adult supervision, unlike library cards.

You can see why fingerprints sound appealing as a way to unlock the system: quick, easy, efficient, nothing to lose and/or replace. Slight problem, maybe, that kids' fingers are often dirty, sticky, or damaged, but at least they can't lose them.

What took me aback a bit was discovering that MLS has on its Web site – and quotes to schools – letters from the Information Commissioner's Office and from the Department of Education saying they saw nothing wrong with the system.

It's not my purpose here to rehash whether fingerprinting kids to let them take out library books is appropriate; the parents who were against it had plenty to say in my Guardian piece. But the whole incident has made me think about the role of the Information Commissioner's Office. Whenever I've spoken to anyone there, it's seemed clear that the ICO's job (PDF) is to explain the law and ensure that organizations obey it. They don't go around looking for things to investigate; they respond to complaints from the public. In this case, they say, they haven't had many. They do advise, as the letter MLS received says, that schools consult parents before instituting fingerprinting "as it may be a sensitive issue".

Indeed, it might. It's a measure of how much both technology and the willingness to be monitored are infiltrating everyone's consciousness that there has been so little public outcry over this. The manufacturers are, of course, very reassuring: the system doesn't store whole fingerprints but an encrypted, very large number mathematically derived from the scanned finger. The image cannot be reconstructed from the number.

But that doesn't actually help, because if what unlocks the system is the number, the number is actually easier to forge than a fingerprint image would be. Now, no one's suggesting that some crook is going to break into a school and steal the computer that holds the kids' fingerprints just so he can take out all the library books. But if you were told that your government records were protected by a very large number in encrypted form, would you feel reassured that it wasn't an image? I'm not sure you should, because, first of all, even encryption that can't be cracked today probably can be tomorrow. Second, there are plenty of people out there with good reasons to try to deconstruct how these systems work. And third, a number is, well… sometimes it's just a number. In the case of MLS, it's a number generated by a system created by Digital Persona, which supplies enterprise biometric solutions to all sorts of other clients. How many of them will accept the same numbers because they use the same algorithms?
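To see why a stored number can be weaker than the finger it stands for, consider this toy sketch (entirely my own illustration; real products use proprietary fuzzy matching rather than an exact hash, and the names here are invented):

```python
import hashlib

def template_from_scan(scan: bytes) -> str:
    # Toy stand-in for minutiae extraction: derive a large number
    # from the scan. Real biometric systems use fuzzy matching that
    # tolerates noisy rescans; a hash is only a simplification.
    return hashlib.sha256(scan).hexdigest()

enrolled = template_from_scan(b"alice's enrollment scan")

def unlock(presented: str) -> bool:
    # The system compares numbers, not fingers. Anyone who copies the
    # stored number can replay it without ever touching a scanner --
    # and unlike a password, Alice can't change her fingerprint.
    return presented == enrolled

print(unlock(enrolled))  # True, with no finger in sight
```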

In this case, it seems to me that it's not sufficient to ask whether the precautions taken to protect the data are adequate. The real question is whether the proposed system is proportionate to the problem it's being installed to solve. Is the desire to provide a quick and easy method for kids to check out their own library books sufficient justification for fingerprinting children? This is a question the ICO is not in business to answer.

It's always seemed to me that no amount of data protection really solves anything: data always seems to go where it's not supposed to, whether that's because someone leaves a laptop in a cab or a CD in the back of an airplane seat, or because the database's owner has become infected with what writer Ellen Ullman has called "the fever of the system". The MLS literature states that fingerprints are removed from the system when the child leaves the school. But how would a parent check this? And how often do people really throw out data they hold? You never know; you might need it someday.

It seems to be an inalienable truth of human nature: if you have two databases, you want to link them together; if you have one database, you want to keep adding to it and using it for more and more stuff. In the end it's like Benjamin Franklin's old adage that "three can keep a secret if two of them are dead." Databases are not designed to keep secrets; they are designed to help people find things out. If you really want to protect privacy, the only certain option is not to create the database.

In the meantime, it would be nice if the ICO's job description went further, to ask: "Is this an appropriate use of technology? Is it possible there will be consequences down the line that make this a bad idea?" Now that the government has gotten its way on building the national identity register, these are questions we should all be asking.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. She has an intermittent blog. Readers are welcome to post there, at the official net.wars blog, or to send email, but please turn off HTML.

March 24, 2006

The IDs of March

Ping-pong is so much fun. If you haven't been following the ins and outs of the latest incarnation of the Identity Cards Bill, it's on about its fifth bat between the House of Commons and the House of Lords, with doubtless more to come. In the fourth bat, the Lords had proposed an amendment making the system opt-in until 2012, which would have meant that when you applied for your new passport (or other designated document, such as a residence permit or driver's license) you would have the option of also being added to the National Identity Register and issued an ID card. That, of course, is not what the government wants.

Despite all the lip service paid in the election manifesto to the scheme's being voluntary, the government wants registration to be compulsorily tied to the issuance of documents that most people want to have. In the government's scheme, the ID card will be "voluntary" in the sense that having a passport is "voluntary". Don't want an ID card? Fine. Don't travel, and if you drive, don't lose your old license, change address, or turn 70. (For Americans: an old-style British license is a fancily printed piece of folded paper valid until you're 70; these are gradually being replaced by new-style plastic photo licenses valid for ten years, but it's a very long process. Few people in Britain carry licenses as daily identification, and the only time you need to produce one is within ten days of being stopped for a traffic violation, so there's little incentive to update them.)

The Commons rejected that (PDF). What's supposed to happen next: the Commons will reject the Lords' amendments again, and the Lords will amend the bill again.

We even know something about how the Lords will try to amend it: Lord Armstrong of Ilminster has already tabled the amendment to be proposed. The new amendment will offer people the chance to opt out of registering in the national identity database and acquiring an ID card alongside a passport application. It's an interesting idea for a compromise. Most people will not bother to opt out. The ones who do will be the ones who otherwise might decide to forgo travelling in order to avoid registering as long as possible.

If that amendment were to succeed – and it seems likely to garner more support than the last round – there is one significant class of people we know would not opt out of getting ID cards: criminals. Just as you'd probably opt to wear a business suit and tie and get your hair cut respectably short if you were a dope dealer traveling internationally, if you are up to any nefarious schemes you will want the credibility having an ID card will lend you.

Eventually, the most likely outcome is that the Commons will win. There are several reasons for that, none of them really to do with ID cards. The most important is the balance of power between the Commons and the Lords: everyone agrees that the Commons, as the elected body, has supremacy. But the whole mess could easily drag on long enough to block the bill until everyone goes on their summer vacation. If the Commons has to invoke the Parliament Act to override the Lords' dissent, we could be into November before the government can even get going on it. The ID card is already behind the government's original schedule. But, hey, what's the hurry?

The ID card is becoming secondary in these debates to the question of how much say the Lords should have and how much the government is railroading its proposals through. The ID card is increasingly controversial; the most recent YouGov/Daily Telegraph survey (PDF) showed only 45 percent in favor of the thing, and support decreases when the estimated cost rises from £6 billion (government) to £18 billion (LSE's upper figure). Are the Lords in fact more representative than the elected Commons?

It's interesting to consider this question in the light of the Power Inquiry, which has been examining the question of voter alienation. Two of its conclusions: that there should be a "rebalancing of power away from the executive and unaccountable bodies towards Parliament and local government", and that electoral systems should be more responsive, "allowing citizens a much more direct and focused say over political decisions and policies". Exactly the opposite is happening with respect to the ID Cards Bill, which, given its cost and the size of the shift it would cause in British national life, arguably should be the subject of a referendum.

In this context it's also worth reading testimony given recently to the US Congress by Stephen T. Kent, author of two books on ID card systems, about the difficulties of doing what the UK government is proposing (and what the US government has in mind with Real ID). Kent reminds us that what we are proposing to build is not an ID card but an ID system, and asks the same questions that even technology vendors have been asking the British government all along: What is the system for? What problem are you trying to solve? Without a clear set of goals, how can the technology fail to fail? In Britain's case, though, getting the ID card bill passed seems to be the problem the ID card is trying to solve.

You can, of course, still promise to refuse.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of links to all the earlier columns in this series. Readers are welcome to send email (but please turn off HTML), or to post comments at the net.wars blog.