Main

December 14, 2012

Defending Facebook

The talks at the monthly Defcon London are often too abstruse for the "geek adjacent". Not so this week, when Chris Palow, Facebook's engineering manager, site integrity London, outlined the site's efforts to defend itself against attackers.

This is no small thing: the law of truly large numbers means that a tiny percentage of a billion users is still a lot of abusers. And Palow has had to scale up very quickly: when he joined five years ago, the company had 30 million users. Today, that's just a little more than a third of the site's *fake* accounts, based on the 83 million the company claimed in its last quarterly SEC filing.

As became rapidly apparent, there are fakes and there are fakes. Most of those 83 million are relatively benign: accounts for people's dogs, public/private variants, duplicate accounts created when a password is lost, and so on. The rest, about 1.5 percent - which is still 14 million - are the troublemakers, spreading spam and malicious links such as the Koobface worm. Eliminating these is important; there is little more damaging to a social network than rampant malware that leverages the social graph to put users in danger in a space they use because they believe it is safe.

This is not an entirely new problem, but none of the prior solutions are really available to Facebook. Prehistoric commercial social environments like CompuServe and AOL, because people paid to use them, could check credit cards. (Yes, the irony was that in the window between sign-up and credit card verification lay a golden opportunity for abusers to harass the rest of the Net from throwaway email accounts.) Usenet and other free services were defenseless against malicious posters, and despite volunteer community efforts most of the audience fled as a result. As a free service whose business model requires scale, Facebook can't require a credit card or heavyweight authentication, and its ad-supported business model means it can't afford to lose any of its audience, so it's damned in all directions. It's also safe to say that the online criminal underground is hugely more developed and expert now.

Fake accounts are the entry points for all sorts of attacks; besides the usual issues of phishing attacks and botnet recruitment, the more fun exploit is using those links to vacuum up people's passwords in order to exploit them on all the other sites across the Web where those same people have used those same passwords.

So a lot of Palow's efforts are directed at making sure those accounts don't get opened in the first place. Detection is a key element; among other techniques is a lightweight captcha-style request to identify a picture.

"It's still easy for one user to have three or four accounts," he said, "but we can catch anyone registering 1 million fakes. Most attacks need scale."

For the small-scale 16-year-old in the bedroom, he joked that the most effective remedy is found in the site's social graph: their moms are on Facebook. In a more complicated case from the Philippines, where cheap human labor was used to open 500 accounts a day in order to spam links selling counterfeit athletic shoes, the miscreants talked about their efforts *on* Facebook.

Another key is preventing, or finding and fixing, bugs in the code that runs the site. Among the strategies Palow listed for this, which included general improvements to coding practice such as better testing, regular reviews, and static and dynamic analysis, is befriending the community of people who find and report bugs.

Once accounts have been created, spotting the spammers involves looking for patterns that sound very much like the ones that characterize Usenet spam: are the same URLs being posted across a range of accounts, do those accounts show other signs of malware infection, are they posted excessively on a single channel, and so on.
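A rough sense of what such pattern-matching involves can be given in a few lines of code. The sketch below is hypothetical - mine, not Facebook's - and simply flags any URL pushed by an unusually large number of distinct accounts:

    from collections import defaultdict

    def suspicious_urls(posts, threshold=50):
        # posts: iterable of (account_id, url) pairs.
        # Returns URLs posted by more than `threshold` distinct accounts.
        accounts_per_url = defaultdict(set)
        for account_id, url in posts:
            accounts_per_url[url].add(account_id)
        return {url for url, accounts in accounts_per_url.items()
                if len(accounts) > threshold}

    # Hypothetical example: the same spam link pushed from several accounts.
    sample = [("a1", "http://spam.example/shoes"),
              ("a2", "http://spam.example/shoes"),
              ("a3", "http://spam.example/shoes"),
              ("a4", "http://example.org/news")]
    print(suspicious_urls(sample, threshold=2))   # {'http://spam.example/shoes'}

The real systems obviously weigh many more signals - account age, infection indicators, posting rates - but the shape of the problem is the same: aggregate, count, and flag the outliers.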

Other more complex historical attacks include the Tunisian government's effort to steal passwords. Palow also didn't have much nice to say about ad-replacement schemes such as the now-defunct Phorm.

The current hot issue is what Palow calls "toolbars" and I would call browser extensions. Many of these perform valuable functions from the user's point of view, but the price, which most users don't see until it's too late, is that they operate across all open windows, from your insecure reading of the tennis headlines to your banking session. This particular issue is beginning to be locked down by browser vendors, who are implementing content security policies, essentially the equivalent of the Android and iOS curated app stores. As this work is progressing at different rates, in some cases Facebook can leverage the browsers' varying blocking patterns to identify malware.
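For the curious, a content security policy is, at bottom, just a header the site sends to the browser. The sketch below is a minimal illustration with placeholder hostnames, not Facebook's actual policy; it tells a compliant browser that scripts may only come from the page's own origin and one named static-content server, which lets it refuse script injected from anywhere else:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Placeholder policy: scripts only from the site itself and one named CDN;
    # plugins (object/embed) disallowed entirely.
    CSP = "default-src 'self'; script-src 'self' https://static.example.com; object-src 'none'"

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Security-Policy", CSP)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<html><body>CSP demo page</body></html>")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), Handler).serve_forever()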

More complex responses involve partnerships with software and anti-virus vendors. There will be more of this: the latest trend is stealing tokens on Facebook (such as the iPhone Facebook app's token) to enable spamming off-site.

A fellow audience member commented that sometimes it's more effective long-term to let the miscreants ride for a month while you formulate a really heavy response and then drop the anvil. Perhaps: but this is the law of truly large numbers again. When you have a billion users the problem is that during that month a really shocking number of people can be damaged. Palow's life, therefore, is likely to continue to be patch, patch, patch.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series.


October 26, 2012

Lie to me

I thought her head was going to explode.

The discussion that kicked off this week's Parliament and Internet conference revolved around cybersecurity and trust online, harmlessly at first. Then Helen Goodman (Labour - Bishop Auckland), the shadow minister for Culture, Media, and Sport, raised a question: what was Nominet doing to get rid of anonymity online? Simon McCalla, Nominet's CTO, had some answers: primarily, they're constantly trying to improve the accuracy and reliability of the Whois database, but it's only a very small criminal element that engages in false domain name registration. Like that.

A few minutes later, Andy Smith, PSTSA Security Manager, Cabinet Office, in answer to a question about why the government was joining the Open Identity Exchange (as part of the Identity Assurance Programme) advised those assembled to protect themselves online by lying. Don't give your real name, date of birth, and other information that can be used to perpetrate identity theft.

Like I say, bang! Goodman was horrified. I was sitting near enough to feel the splat.

It's the way of now that the comment was immediately tweeted, picked up by the BBC reporter in the room, published as a story, retweeted, Slashdotted, tweeted some more, and finally boomeranged back to be recontextualized from the podium. Given a reporter with a cellphone and multiple daily newspaper editions, George Osborne's contretemps in first class would still have reached the public eye the same day 15 years ago. This bit of flashback couldn't have happened even five years ago.

For the record, I think it's clear that Smith gave good security advice, and that the headline - the greater source of concern - ought to be that Goodman, an MP apparently frequently contacted by constituents complaining about anonymous cyberbullying, doesn't quite grasp that this is a nuanced issue with multiple trade-offs. (Or, possibly, how often the cyberbully is actually someone you know.) Dates of birth, mother's maiden names, the names of first pets...these are all things that real-life friends and old schoolmates may well know, and lying about the answers is a perfectly sensible precaution given that there is often no choice about giving the real answers for more sensitive purposes, like interacting with government, medical, and financial services. It is not illegal to fake or refuse to disclose these things, and while Facebook has a real names policy it's enforced with so little rigor that it has a roster of fake accounts the size of Egypt.

Although: the Earl of Erroll might be a bit busy today changing the fake birth date - April 1, 1900 - he cheerfully told us and Radio 4 he uses throughout; one can only hope that he doesn't use his real mother's maiden name, since that, as Tom Scott pointed out later, is in Erroll's Wikipedia entry. Since my real birth date is also in *my* Wikipedia entry and who knows what I've said where, I routinely give false answers to standardized security questions. What's the alternative? Giving potentially thousands of people the answers that will unlock your bank account? On social networking sites it's not enough for you to be taciturn; your birth date may be easily outed by well-meaning friends writing on your wall. None of this is - or should be - illegal.

It turns out that it's still pretty difficult to explain to some people how the Internet works. Nominet can work as hard as it likes on verifying its own Whois database, but it is powerless over the many UK citizens and businesses that choose to register under .com, .net, and other gTLDs and country codes. Making a law to enjoin British residents and companies from registering domains outside of .uk...well, how on earth would you enforce that? And then there's the whole problem of trying to check, say, registrations in Chinese characters. Computers can't read Chinese? Well, no, not really, no matter what Google Translate might lead you to believe.

Anonymity on the Net has been under fire for a long, long time. Twenty years ago, the main source of complaints was AOL, whose million-CD marketing program made it easy for anyone to get a throwaway email address for 24 hours or so until the system locked you out for providing an invalid credit card number. Then came Hotmail, and you didn't even need that. Then, as now, there are good and bad reasons for being anonymous. For every nasty troll who uses the cloak to hide there are many whistleblowers and people in private pain who need its protection.

Smith's advice only sounds outrageous if, like Goodman, you think there's a valid comparison between Nominet's registration activity and the function of the Driver and Vehicle Licensing Agency (and if you think the domain name system is the answer to ensuring a traceable online identity). And therein lies the theme of the day: the 200-odd Parliamentarians, consultants, analysts, government, and company representatives assembled repeatedly wanted incompatible things in conflicting ways. The morning speakers wanted better security, stronger online identities, and the resources to fight cybercrime; the afternoon folks were all into education and getting kids to hack and explore so they learn to build things and understand things and maybe have jobs someday, to their own benefit and that of the rest of the country. Paul Bernal has a good summary.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


October 12, 2012

My identity, my self

Last week, the media were full of the story that the UK government was going to start accepting Facebook logons for authentication. This week, in several presentations at the RSA Conference, representatives of the Government Digital Service begged to differ: the list of companies that have applied to become identity providers (IDPs) will be published at the end of this month and until then they are not confirming the presence or absence of any particular company. According to several of the spokesfolks manning the stall and giving presentations, the press just assumed that when they saw social media companies among the categories of organization that might potentially want to offer identity authentication, that meant Facebook. We won't know for another few weeks who has actually applied.

So I can mercifully skip the rant that hooking a Facebook account to the authentication system you use for government services is a horrible idea in both directions. What they're actually saying is, what if you could choose among identification services offered by the Post Office, your bank, your mobile network operator (especially for the younger generation), your ISP, and personal data store services like Mydex or small, local businesses whose owners are known to you personally? All of these sounded possible based on this week's presentations.

The key, of course, is what standards the government chooses to create for IDPs and which organizations decide they can meet those criteria and offer a service. Those are the details the devil is in: during the 1990s battles about deploying strong cryptography, the government wanted copies of everyone's cryptography keys to be held in escrow by a Trusted Third Party. At the time, the frontrunners were banks: the government certainly trusted those, and imagined that we did, too. The strength of the disquiet over that proposal took them by surprise. Then came 2008. Those discussions are still relevant, however; someone with a long memory raised the specter of Part I of the Electronic Communications Act 2000, modified in 2005, as relevant here.

It was this historical memory that made some of us so dubious in 2010, when the US came out with proposals rather similar to the UK's present ones, the National Strategy for Trusted Identities in Cyberspace (NSTIC). Ross Anderson saw it as a sort of horror-movie sequel. On Wednesday, however, Jeremy Grant, the senior executive advisor for identity management at the US National Institute for Standards and Technology (NIST), the agency charged with overseeing the development of NSTIC, sounded a lot more reassuring.

Between then and now came both US and UK attempts to establish some form of national ID card. In the US, "Real ID" focused on the state authorities that issue driver's licenses. In the UK, it was the national ID card and accompanying database. In both countries the proposals got howled down. In the UK especially, the combination of an escalating budget, a poor record with large government IT projects, a change of government, and a desperate need to save money killed it in 2010.

Hence the new approach in both countries. From what the GDS representatives - David Rennie (head of proposition at the Cabinet Office), Steven Dunn (lead architect of the Identity Assurance Programme; Twitter: @cuica), Mike Pegman (security architect at the Department of Welfare and Pensions, expected to be the first user service; Twitter: @mikepegman), and others manning the GDS stall - said, the plan is much more like the structure that privacy advocates and cryptographers have been pushing for 20 years: systems that give users choice about who they trust to authenticate them for a given role and that share no more data than necessary. The notion that this might actually happen is shocking - but welcome.

None of which means we shouldn't be asking questions. We need to understand clearly the various envisioned levels of authentication. In practice, will those asking for identity assurance ask for the minimum they need or always go for the maximum they could get? For example, a bar only needs relatively low-level assurance that you are old enough to drink; but will bars prefer to ask for full identification? What will be the costs; who pays them and under what circumstances?

Especially, we need to know the detail of the standards organizations must meet to be accepted as IDPs - in particular, what kinds of organization they exclude. The GDS as presently constituted - composed, as William Heath commented last year, of all the smart, digitally experienced people you *would* hire to reinvent government services for the digital world if you had the choice - seems to have its heart in the right place. Their proposals as outlined - conforming, as Pegman explained happily, to Kim Cameron's seven laws of identity - pay considerable homage to the idea that no one party should have all the details of any given transaction. But the surveillance-happy type of government that legislates for data retention and CCDP might also at some point think, hey, shouldn't we be requiring IDPs to retain all data (requests for authentication, and so on) so we can inspect it should we deem it necessary? We certainly want to be very careful not to build a system that could support such intimate secret surveillance - the fundamental objection all along to key escrow.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series.


September 28, 2012

Don't take ballots from smiling strangers

Friends, I thought it was spam, and when I explain I think you'll see why.

Some background. Overseas Americans typically vote in the district of their last US residence. In my case, that's a county in the fine state of New York, which for much of my adult life, like clockwork, has sent me paper ballots by postal mail. Since overseas residents do not live in any state, however, they are eligible to vote only in federal elections (US Congress, US Senate, and President). I have voted in every election I have ever been eligible for back to 1972.

So last weekend three emails arrived, all beginning, "Dear voter".

The first one, from nysupport@secureballotusa.com, subject line "Electronic Ballot Access for Military/Overseas Voters":

An electronic ballot has been made available to you for the GE 11/6/12 (Federal) by your local County Board of Elections. Please access www.secureballotusa.com/NY to download your ballot.

Due to recent upgrades, all voters will need to go through the "First Time Access" process on the site in order to gain access to the electronic ballot delivery system.

The second, from "NYS Board of Elections", move@elections-ny.gov, subject "Your Ballot is Now Available":

An electronic ballot has been made available to you for the November 6, 2012 General Election. Please access https://www.secureballotusa.com/NY to download your ballot.

Due to recent upgrades, all voters will need to go through the "First Time Access" process on the site in order to gain access to the electronic ballot delivery system.

If you have any questions or experience any problems, please email NYsupport@secureballotusa.com or visit the NYS Board of Elections' website at http://www.elections.ny.gov for additional information.

The third, from nysupport@secureballot.com, subject, "Ballot Available Notification":

An electronic ballot has been made available to you for the GE 11/6/12 (Federal) by your local County Board of Elections. Please access www.secureballotusa.com/diaspora_ny-1.5/NY_login.action to download your ballot.

Due to recent upgrades, all voters will need to go through the "First Time Access" process on the site in order to gain access to the electronic ballot delivery system.

In all my years as a voter, I've never had anything to do with the NY Board of Elections. I had not received any notification from the county board of elections telling me to expect an email, confirming the source, or giving the Web site address I would eventually use. But the county board of elections Web site had no information indicating they were providing electronic ballots for overseas voters. So I ask you: what would you think?

What I thought was that the most likely possibilities were both evil. One was that it was just ordinary, garden-variety spam intended to facilitate a more than usually complete phishing job. That possibility made me very reluctant to check out the URL in the message, even by typing it in. The security expert Rebecca Mercuri, whose PhD dissertation in 2000 was the first to really study the technical difficulties of electronic voting, was more intrepid. She examined the secureballotusa.com site and noted errors, such as the request for the registrant's Alabama driver's license number on this supposedly New York state registration page. Plus, the amount of information requested for verification is unnerving; I don't know these people, even though secureballotusa.com checks out as belonging to the Spanish company Scytl, which provides election software to a variety of places, including New York state.

The second possibility was that these messages were the latest outbreak of longstanding deceptive election practices which include disseminating misinformation with the goal of disenfranchising particular groups of voters. All I know about this comes from a panel organized by EPIC's Lillie Coney at the 2008 Computers, Freedom, and Privacy conference. And it's nasty stuff: leaflets, phone calls, mailings, saying stuff like Republicans vote on Tuesday (the real US election day), Democrats on Wednesday. Or that you can't vote if you've ever been found guilty of anything. Or if you voted in an earlier election this year. Or the polling location has changed. Or you'll be deported if you try to vote and you're an illegal immigrant. Typically, these efforts have been targeted at minorities and the poor. But the panel fully expected them to move increasingly online and to target a wider variety of groups, particularly through spam email. So that was my second thought. Is this it? Someone wants me not to vote?

This election year, of course, the efforts to disenfranchise groups of voters are far more sophisticated. Why send out leaflets when you can push for voter identification laws on the basis that voter fraud is a serious problem? This issue is being discussed at length by the New York Times, the Atlantic, and elsewhere. Deceptive email seems amateurish by comparison.

I packed up the first two emails and forwarded them to an email address at my county's board of elections from which I had previously received a mailing. On Monday, there came a prompt response. No, the messages are genuine. Some time that I don't remember I ticked a box saying "OR email", and hence I was being notified that an electronic ballot was available. I wrote back, horrified: paper ballot, by postal mail, please. And get a security expert to review how you've done this. Because seriously: the whole setup is just dangerously wrong. Voting security matters. Think like a bank.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series.


August 10, 2012

Wiped out

There are so many awful things in the story of what happened this week to technology journalist Matt Honan that it's hard to know where to start. The fundamental part - that through not particularly clever social engineering an outsider was able in about 20 minutes to take over and delete his Google account, take over and defame his Twitter account, and then wipe all the data on his iPhone, iPad, and MacBook - would make a fine nightmare, or maybe a movie with some of the surrealistic quality of Martin Scorsese's After Hours (1985). And all, as Honan eventually learned, because the hacker fancied an outing with his three-character Twitter ID, a threat so unexpected there's no way you'd make it your model.

Honan's first problem was the thing Suw Charman-Anderson put her finger on for an Infosecurity Magazine piece I did earlier this year: gaining access to a single email address to which every other part of your digital life - ecommerce accounts, financial accounts, social media accounts, password resets all over the Web - is locked puts you in for "a world of hurt". If you only have one email account you use for everything, given access to it, an attacker can simply request password resets all over the place - and then he has access to your accounts and you don't. There are separate problems around the fact that the information required for resets is both the kind of stuff people disclose without thinking on social networks and commonly reused. None of this requires a fancy technology fix, just smarter, broader thinking.

There are simple solutions to the email problem: don't use one email account for everything and, in the case of Gmail, use two-factor authentication. If you don't operate your own server (and maybe even if you do) it may be too complicated to create a separate address for every site you use, but it's easy enough to have a public address you use for correspondence, a private one you use for most of your site accounts, and then maybe a separate, even less well-known one for a few selected sites that you want to protect as much as you can.

Honan's second problem, however, is not so simple to fix unless an incident like this commands the attention of the companies concerned: the interaction of two companies' security practices that on their own probably seemed quite reasonable. The hacker needed just two small bits of information: Honan's address (sourced from the Whois record for his Internet domain name), and the last four digits of a credit card number. The hack to get the latter involved adding a credit card to Honan's Amazon.com account over the phone and then using that card number, in a second phone call, to add a new email address to the account. Finally, you do a password reset to the new email address, access the account, and find the last four digits of the cards on file - which Apple then accepted, along with the billing address, as sufficient evidence of identity to issue a temporary password into Honan's iCloud account.

This is where your eyes widen. Who knew Amazon or Apple did any of those things over the phone? I can see the point of being able to add an email address; what if you're permanently locked out of the old one? But I can't see why adding a credit card was ever useful; it's not as if Amazon did telephone ordering. And really, the two successive calls should have raised a flag.

The worst part is that even if you did know you'd likely have no way to require any additional security to block off that route to impersonators; telephone, cable, and financial companies have been securing telephone accounts with passwords for years, but ecommerce sites have not thought of themselves as possible vectors for hacks into other services. Since the news broke, both Amazon and Apple have blocked off this phone access. But given the extraordinary number of sites we all depend on, the takeaway from this incident is that we ultimately have no clue how well any of them protect us against impersonation. How many other sites can be gamed in this way?

Ultimately, the most important thing, as Jack Schofield writes in his Guardian advice column, is not to rely on one service for everything. Honan's devastation was as complete as it was because all his devices were synched through iCloud and could be remotely wiped. Yet this is the service model that Apple has and that Microsoft and Google are driving towards. The cloud is seductive in its promises: your data is always available, on all your devices, anywhere in the world. And it's managed by professionals, who will do all the stuff you never get around to, like make backups.

But that's the point: as Honan discovered to his cost, the cloud is not a backup. If all your devices are hooked to it, it is your primary data pool, and, as Apple co-founder Steve Wozniak pointed out this week, it is out of your control. Keep your own backups, kids. Develop multiple personalities. Be careful out there.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


July 14, 2012

The ninth circle of HOPE

Why do technologies fail? And what do we mean by failure?

These questions arise in the first couple of hours of HOPE 9, this year's edition of the hacker conference run biennially by 2600, the hacker quarterly.

Technology failure has a particular meaning in the UK, where large government projects have traditionally wasted large amounts of public money and time. Many failures are more subtle. To take a very simple example: this morning, the elevators failed. It was not a design flaw or loss of functionality: the technology worked perfectly as intended. It was not a usability flaw: what could be simpler than pushing a button? It was not even an accessibility or availability flaw: there were plenty of elevators. What it was, in fact, was a social - or perhaps a contextual - flaw. This group of people who break down complex systems to their finest components to understand them and make them jump through hoops simply failed to notice or read the sign that gave the hours of operation even though it was written in big letters and placed at eye level, just above the call button. This was, after all, well-understood technology that needed no study. And so they stood around in groups, waiting until someone came, pointed out the sign, and chased them away. RTFM, indeed.

But this is what humans do: we make assumptions based on our existing knowledge. To the person with a hammer, everything looks like a nail. To the person with a cup and nowhere to put it, the unfamiliar CD drive looks like a cup holder. To the kids discovering the Hole in the Wall project, a 2000 experiment with installing a connected computer in an Indian slum, the familiar wait-and-wait-some-more hourglass was a drum. Though that last is only a failure if you think it's important that the kids know it's an hourglass; they understood perfectly well the thing that mattered, which is that it was a sign the thing in the wall was doing something and they had to wait.

We also pursue our own interests, sometimes at the expense of what actually matters in a situation. Far Kron, speaking on the last four years of community fabrication, noted that the Global Village Construction project, which is intended to include a full set of the machines necessary to build a civilization, includes nothing to aid more mundane things like fetching fresh water and washing clothes, which are overall a bigger drain on human time. I am tempted to suggest that perhaps the project needs to recruit some more women (who around the world tend to do most of the water fetching and clothes washing), but it may simply be that small, daily chores are things you worry about after you have your village. (Though this is the inverse of how human settlements have historically worked.)

A more intriguing example, cited by Chris Anderson, a former organizer with New York's IndyMedia, in the early panel on Technology to Change Society that inspired this piece, is Twitter. How is one of the most important social networks and messaging platforms in the world a failure?

"If you define success in technical terms you might only *be* successful in technical terms," he said. Twitter, he explained grew out of a number of prior open-source projects the founders were working. "Indymedia saw technology as being in service to goals, but lacks the social goals those projects started with."

Gus Andrews, producer of The Media Show, a YouTube series on digital media literacy, focused on the hidden assumptions creators make. Some of those behind One Laptop Per Child, for example, believed that open source software was vital to the project: being able to fix the software, they assumed, would be a crucial benefit for the recipients.

In 2000, Lawrence Lessig argued that "code is law", and that technological design controls how it can be used. Andrews took a different view: "To believe that things are ineluctably coded into technology is to deny free will." Pointing at Everett Rogers' 1995 book, The Diffusion of Innovations, she said, "There are things we know about how technology enacts social change and one of the things we know is that it's not the technology."

Not the technology? You might think that if anyone were going to be technology obsessed it would be the folks at a hacker conference. And certainly the public areas are filled with people fidgeting with radio frequencies, teaching others to solder, and showing off their latest 3D printers and their creations (this year's vogue: printing in brightly colored Lego plastic). But the roots of the hacker movement in general and of 2600 in particular are as much social and educational as they are technological.

Eric Corley, who has styled himself "Emmanuel Goldstein", edits the magazine, and does a weekly radio show for WBAI-FM in New York. At a London hacker conference in 1995, he summed up this ethos for me (and The Independent) by talking about hacking as a form of consumer advocacy. His ideas about keeping the Internet open and free, and about ferreting out information corporations would rather keep hidden, were niche - and to many people scary - then, but mainstream now.

HOPE continues through Sunday.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 15, 2012

A license to print money

"It's only a draft," Julian Huppert, the Liberal Democrat MP for Cambridge, said repeatedly yesterday. He was talking about the Draft Communications Data Bill (PDF), which was published on Wednesday. Yesterday, in a room in a Parliamentary turret, Hupper convened a meeting to discuss the draft; in attendance were a variety of Parliamentarians plus experts from civil society groups such as Privacy International, the Open Rights Group, Liberty, and Big Brother Watch. Do we want to be a nation of suspects?

The Home Office characterizes the provisions in the draft bill as vital powers to help catch criminals, save lives, and protect children. Everyone else - the Guardian, ZDNet UK, and dozens more - is calling them the "Snooper's charter".

Huppert's point is important. Like the Defamation Bill before it, publishing a draft means there will be a select committee with 12 members, discussion, comments, evidence taken, a report (by November 30, 2012), and then a rewritten bill. This draft will not be voted on in Parliament. We don't have to convince 650 MPs that the bill is wrong; it's a lot easier to talk to 12 people. This bill, as is, would never pass either House in any case, he suggested.

This is the optimistic view. The cynic might suggest that since it's been clear for something like ten years that the British security services (or perhaps their civil servants) have a recurring wet dream in which their mountain of data is the envy of other governments, they're just trying to see what they can get away with. The comprehensive provisions in the first draft set the bar, softening us up to give away far more than we would have in future versions. Psychologists call this anchoring, and while probably few outside the security services would regard the wholesale surveillance and monitoring of innocent people as normal, the crucial bit is where you set the initial bar for comparison for future drafts of the legislation. However invasive the next proposals are, it will be easy for us to lose the bearings we came in with and feel that we've successfully beaten back at least some of the intrusiveness.

But Huppert is keeping his eye on the ball: maybe we can not only get the worst stuff out of this bill but make things actually better than they are now; it will amend RIPA. The Independent argues that private companies hold much more data on us overall, but that article misses the point that this bill intends to grant the government access to all of it, at any time, without notice.

The big disappointment in all this, as William Heath said yesterday, is that it marks a return to the old, bad, government IT ways of the past. We were just getting away from giant, failed public IT projects like the late unlamented NHS National Programme for IT and the even more unlamented ID card towards agile, cheap public projects run by smart guys who know what they're doing. And now we're going to spend £1.8 billion of public money over ten years (draft bill, p92) building something no one much wants and that probably won't work? The draft bill claims - on what authority is unclear - that the expenditure will bring in £5 to £6 billion in revenues. From what? Are they planning to sell the data?

Or are they imagining the economic growth implied by the activity that will be necessary to build, install, maintain, and update the black boxes that will be needed by every ISP in order to comply with the law? The security consultant Alec Muffett has laid out the parameters for this SpookBox 5000: certified, tested, tamperproof, made by, say, three trusted British companies. Hundreds of them, legally required, with ongoing maintenance contracts. "A license to print money," he calls them. Nice work if you can get it, of course.

So we're talking - again - about spending huge sums of government money on a project that only a handful of people want and whose objectives could be better achieved by less intrusive means. Give police better training in computer forensics, for example, so they can retrieve the evidence they need from the devices they find when executing a search warrant.

Ultimately, the real enemy is the lack of detail in the draft bill. Using the excuse that the communications environment is changing rapidly and continuously, the notes argue that flexibility is absolutely necessary for Clause 1, the one that grants the government all the actual surveillance power, and so it's been drafted to include pretty much everything, like those contracts that claim copyright in perpetuity in all forms of media that exist now or may hereinafter be invented throughout the universe. This is dangerous because in recent years the use of statutory instruments to bypass Parliamentary debate has skyrocketed. No. Make the defenders of this bill prove every contention; make them show the evidence that makes every extra bit of intrusion necessary.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


June 8, 2012

Insecure at any speed

"I have always depended on the kindness of strangers," Blanche says graciously to the doctor hauling her off to the nuthouse at the end of Tennessee Williams' play A Streetcar Named Desire. And while she's quite, quite mad in her genteel Old Southern delusional way she is still nailing her present and future situation, which is that she's going to be living in a place where the only people who care about her are being paid to do so (and given her personality, that may not be enough).

Of course it's obvious to anyone who's lying in a hospital bed connected to a heart monitor that they are at the mercy of the competence of the indigenous personnel. But every discussion of computer passwords tends to go as though the problem is us. Humans choose bad passwords: short, guessable, obvious, crackable. Or we use the same ones everywhere, or we keep cycling the same two or three when we're told to change them frequently. We are the weakest link.

And then you read this week's stories that major sites for whom our trust is of business-critical importance - LinkedIn, eHarmony, and Last.fm - have been storing these passwords in such a way that they were vulnerable to not only hacking attacks but also decoding once they had been copied. My (now old) password, I see by typing it into LeakedIn for checking, was leaked but not cracked (or not until I typed it in, who knows?).

This is not new stuff. Salting passwords before storing them - the practice of adding random characters to make the passwords much harder to crack - has been with us for more than 30 years. If every site does these things a little differently, the differences help mitigate the risk we users bring upon ourselves by using the same passwords all over the place. It boggles the mind that these companies could be so stupid as to ignore what has been best practice for a very long time.
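For anyone who hasn't seen it done, here is a minimal sketch of salted password storage using only the Python standard library. It is an illustration of the principle, not a recommendation of a particular scheme; production sites should use a deliberately slow algorithm (bcrypt, scrypt, or, as here, PBKDF2 with a high iteration count). The point is that a fresh random salt per user means identical passwords produce different stored hashes, so precomputed cracking tables are useless:

    import hashlib, hmac, os

    def hash_password(password):
        salt = os.urandom(16)                    # fresh random salt per user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest                      # store both with the account record

    def verify_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return hmac.compare_digest(candidate, digest)   # constant-time comparison

    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))   # True
    print(verify_password("password", salt, stored))                       # False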

The leak of these passwords is probably not immediately critical. For one thing, although millions of passwords leaked out, they weren't attached to user names. As long as the sites limit the number of times you can guess your password before they start asking you more questions or lock you out, the odds that someone can match one of those 6.5 million passwords to your particular account are...well, they're not 6.5 million to one if you've used a password like "password" or "123456", but they're small. Although: better than your chances of winning the top lottery prize.

Longer term may be the bigger issue. As Ars Technica notes, the decoded passwords from these leaks and their cryptographically hashed forms will get added to the rainbow tables used in cracking these things. That will shrink the space of good, hard-to-crack passwords.

Most of the solutions to "the password problem" aim to fix the user in one way or another. Our memories have limits - so things like Password Safe will remember them for us. Or those impossible strings of letters and numbers are turned into a visual pattern by something like GridSure, which folded a couple of years ago but whose software and patents have been picked up by CryptoCard.

An interesting approach I came across late last year is sCrib, a USB stick that you plug into your computer and that generates a batch of complex passwords it will type in for you. You can pincode-protect the device and it can also generate one-time passwords and plug into a keyboard to protect against keyloggers. All very nice and a good idea except that the device itself is so *complicated* to use: four tiny buttons storing 12 possible passwords it generates for you.

There's also the small point that Web sites often set rules such that any effort to standardize on some pattern of tough password is thwarted. I've had sites reject passwords for being too long, or for including a space or a "special character". (Seriously? What's so special about a hyphen?) Human factors simply escape the people who set these policies, as XKCD long ago pointed out.
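The arithmetic behind the XKCD point is simple enough to show; the numbers below (a 72-symbol character set, a 2048-word list) are illustrative assumptions, not any site's actual policy. Entropy is just the base-2 log of the number of equally likely possibilities, and a few random common words hold their own against a shorter jumble of mixed characters that nobody can remember:

    import math

    mixed_chars = 8 * math.log2(72)      # 8 truly random characters from a 72-symbol set: ~49 bits
    four_words  = 4 * math.log2(2048)    # 4 random words from a 2048-word list: 44 bits
    print(f"8 random mixed characters: {mixed_chars:.1f} bits")
    print(f"4 random common words:     {four_words:.1f} bits")

And that comparison flatters the mixed-character password, since the ones people actually choose under such rules are nowhere near random.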

But the key issue is that we have no way of making an informed choice when we sign up for anything. We have simply no idea what precautions a site like Facebook or Gmail takes to protect the passwords that guard our personal data - and if we called to ask we'd run into someone in a call center whose job very likely was to get us to go away. That's the price, you might say, of a free service.

In every other aspect of our lives, we handle this sort of thing by having third-party auditors who certify quality and/or safety. Doctors have to pass licensing exams and answer to medical associations. Electricians have their work inspected to ensure it's up to code. Sites don't want to have to explain their security practices to every Sheldon and Leonard? Fine. But shouldn't they have to show *someone* that they're doing the right things?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


June 1, 2012

The pet rock manifesto

I understand why government doesn't listen to security experts on topics where their advice conflicts with the policies it likes. For example: the Communications Capabilities Development Programme, where experts like Susan Landau, Bruce Schneier, and Ross Anderson have all argued persuasively that a hole is a hole and creating a vulnerability to enable law enforcement surveillance is creating a vulnerability that can be exploited by...well, anyone who can come up with a way to use it.

All of that is of a piece with recent UK and US governments' approach to scientific advice in general, as laid out in The Geek Manifesto, the distillation of Mark Henderson's years of frustration serving as science correspondent at The Times (he's now head of communications for the Wellcome Trust). Policy-based evidence instead of evidence-based policy, science cherry-picked to support whatever case a minister has decided to make, the role of well-financed industry lobbyists - it's all there in that book, along with case studies of the consequences.

What I don't understand is why government rejects experts' advice when there's no loss of face involved, and where the only effect on policy would be to make it better, more relevant, and more accurately targeted at the problem it's trying to solve. Especially *this* government, which in other areas has come such a long way.

Yet this is my impression from Wednesday's Westminster eForum on the UK's Cybersecurity strategy (PDF). Much was said - for example, by James Quinault, the director of the Office of Cybersecurity and Information Assurance - about information and intelligence sharing and about working collaboratively to mitigate the undeniably large cybersecurity threat (even if it's not quite as large as BAE Systems Detica's seemingly-pulled-out-of-the-air £27 billion would suggest; Detica's technical director, Henry Harrison, didn't exactly defend that number, but said no one's come up with a better estimate for the £17 billion that report attributed to cyberespionage).

It was John Colley, the managing director EMEA for (ISC)2, who said it: in a meeting he attended late last year with, among others, the MP James Brokenshire, Minister for Crime and Security at the Home Office, shortly before the publication of the UK's four-year cybersecurity strategy (PDF), he asked who the document's formulators had talked to among practitioners, "the professionals involved at the coal face". The answer: well, none. GCHQ wrote a lot of it (no surprise, given the frequent, admittedly valid, references to its expertise and capabilities), and some of the major vendors were consulted. But the actual coal face guys? No influence. "It's worrying and distressing," Colley concluded.

Well, it is. As was Quinault's response when I caught him to ask whether he saw any conflict between the government's policies on CCDP and surveillance back doors built into communications equipment versus the government's goal of making Britain "one of the most secure places in the world to do business". That response was, more or less precisely: No.

I'm not saying the objectives are bad; but besides the issues raised when the document was published, others were highlighted Wednesday. Colley, for example, noted that for information sharing to work it needs two characteristics: it has to go both ways, and it has to take place inside a network of trust; GCHQ doesn't usually share much. In addition, it's more effective, according to both Colley and Stephen Wolthusen, a reader in mathematics at Royal Holloway's Information Security Group, to share successes rather than problems - which means that you need to be able to phone the person who's had your problem to get details. And really, still so much is down to human factors and very basic things, like changing the default passwords on Internet-facing devices. This is the stuff the coalface guys see every day.

Recently, I interviewed nearly a dozen experts of varying backgrounds about the future of infosecurity; the piece is due to run in Infosecurity Magazine sometime around now. What seemed clear from that exercise is that in the long run we would all be a lot more secure a lot more cheaply if we planned ahead based on what we have learned over the past 50 years. For example: before rolling out wireless smart meters all over the UK, don't implement remote disconnection. Don't link to the Internet legacy systems such as SCADA that were never designed with remote access in mind and whose security until now has depended on securing physical access. Don't plant medical devices in people's chests without studying the security risks. Stop, in other words, making the same mistakes over and over again.

The big, upcoming issue, Steve Bellovin writes in Privacy and Cybersecurity: the Next 100 Years (PDF), a multi-expert document drafted for the IEEE, is burgeoning complexity. Soon, we will be surrounded by sensors, self-driving cars, and the 2012 version of pet rocks. Bellovin's summation, "In 20 years, *everything* will be connected...The security implications of this are frightening." And, "There are two predictions we can be quite certain about: there will still be dishonest people, and our software will still have some bugs." Sounds like a place to start, to me.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


March 30, 2012

The ghost of cash

"It's not enough to speak well of digital money," Geronimo Emili said on Wednesday. "You must also speak negatively of cash." Emili has a pretty legitimate gripe. In his home country, Italy, 30 percent of the economy is black and the gap between the amount of tax the government collects and the amount it's actually owed is €180 billion. Ouch.

This sets off a bit of inverted nationalist competition between him and the Greek lawyer Maria Giannakaki, there to explain a draft Greek law mandating direct payment of VAT from merchants' tills to eliminate fraud: which country is worse? Emili is sure it's Italy.

"We invented banks," he said. "But we love cash." Italy's cash habit costs the country €10 billion a year - and 40 percent of Europe's bank robberies.

This exchange took place at this year's Digital Money Forum, an annual event that pulls together people interested in everything from the latest mobile technology to the history of Anglo-Saxon coinage. Their common interest: what makes money work? If you, like most of this group, want to see physical cash eliminated, this is the key question.

Why Anglo-Saxon coinage? Rory Naismith explains that the 8th century began the shift from valuing coins merely for their metal content to assigning them a premium for their official status. It was the beginning of the abstraction of money: coins, paper, the elimination of the gold standard, numbers in cyberspace. Now, people like Emili and this event's convenor, David Birch, argue it's time to accept money's fully abstract nature and admit the truth: it's a collective hallucination, a "promise of a promise".

These are not just the ravings of hungry technology vendors: Birch, Emili, and others argue that the costs of cash fall disproportionately on the world's poor, and that cash is the key vector for crime and tax evasion. Our impressions of the costs are distorted because the costs of electronic payments, credit cards, and mobile wallets are transparent, while cash is free at the point of use.

When I say to Birch that eliminating cash also means eliminating the ability to transact anonymously, he says, "That's a different conversation." But it isn't, if eliminating crime and tax evasion are your drivers. Over the event's two days, only Bitcoin offers anonymity, but it's doomed to its niche market, for whatever reason. (I think it's too complicated; Dutch financial historian Simon Lelieveldt says it will fail because it has no central bank.)

I pause to be annoyed by the claim that cash is filthy and spreads disease. This is Microsoft-level FUD, and not worthy of smart people claiming to want to benefit the poor and eliminate crime. In fact, I got riled enough to offer to lick any currency (or coins; I'm not proud) presented. I performed as promised on a fiver and a Danish note. And you know, they *kept* that money?

In 1680, says Birch, "Pre-industrial money was failing to serve an industrial revolution." Now, he is convinced, "We are in the early part of the post-industrial revolution, and we're shoehorning industrial money in to fit it. It can't last." This is pretty much what John Perry Barlow said about copyright in 1993, and he was certainly right.

But is Birch right? What kind of medium is cash? Is it a medium of exchange, like newspapers, trading stored value instead of information, or is it a format, like video tape? If it's the former, why shouldn't cash survive, even if only as a niche market? Media rarely die altogether - but formats come and go with such speed that even the more extreme predictions at this event - such as that of Sandra Alzetta, who said that her company expects half its transactions to be mobile by 2020 - seem quite modest. Her company is Visa International, by the way.

I'd say cash is a medium of exchange, and today's coins and notes are its format. Past formats have included shells, feathers, gold coins, and goats; what about a format for tomorrow that is printed or minted on demand, at ATMs? I ask the owner of the grocery shop around the corner if his life would be better if cash were eliminated, and he shrugs no. "I'd still have to go out and get the stuff."

What's needed are low-cost alternatives that fit local cultural contexts. Lydia Howland, whose organization IDEO works to create human-centered solutions to poverty, finds the same needs in parts of Britain that exist in countries like Kenya, where M-Pesa is succeeding in bringing access to banking and remote payments to people who have never had access to financial services before.

"Poor people are concerned about privacy," she said on Wednesday. "But they have so much anonymity in their lives that they pay a premium for every financial service." Also, because they do so much offline, there is little understanding of how they work or live. "We need to create a society where a much bigger base has a voice."

During a break, I try to sketch the characteristics of a perfect payment mechanism: convenient; transparent to the user; universally accepted; universally accessible and usable; resistant to tracking, theft, counterfeiting, and malware; and hard to steal on a large scale. We aren't there yet.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

January 6, 2012

Only the paranoid

Yesterday's news that the Ramnit worm has harvested the login credentials of 45,000 British and French Facebook users seems to me a watershed moment for Facebook. If I were an investor, I'd wish I had already cashed out. Indications are, however, that founding CEO Mark Zuckerberg is in it for the long haul, in which case he's going to have to find a solution to a particularly intractable problem: how to protect a very large mass of users from identity fraud when his entire business is based on getting them to disclose as much information about themselves as possible.

I have long complained about Facebook's repeatedly changing privacy controls. This week, while working on a piece on identity fraud for Infosecurity, I've concluded that the fundamental problem with Facebook's privacy controls is not that they're complicated, confusing, and time-consuming to configure. The problem with Facebook's privacy controls is that they exist.

In May 2010, Zuckerberg enraged a lot of people, including me, by opining that privacy is no longer a social norm. As Judith Rauhofer has observed, the world's social norms don't change just because some rich geeks in California say so. But the 800 million people on Facebook would arguably be much safer if the service didn't promise privacy - like Twitter. Because then people wouldn't post all those intimate details about themselves: their kids' pictures, their drunken sex exploits, their incitements to protest, their porn star names, their birth dates... Or if they did, they'd know they were public.

Facebook's core privacy problem is a new twist on the problem Microsoft has: legacy users. Apple was willing to make earlier generations of its software non-functional in the shift to OS X. Microsoft's attention to supporting legacy users allows me to continue to run, on Windows 7, software that was last updated in 1997. Similarly, Facebook is trying to accommodate a wide variety of privacy expectations, from those of people who joined back when membership was limited to a few relatively constrained categories to those of people joining today, when the system is open to all.

Facebook can't reinvent itself wholesale: it is wholly and completely wrong to betray users who post information about themselves into what they are told is a semi-private space by making that space irredeemably public. The storm every time Facebook makes a privacy-related change makes that clear. What the company has done exceptionally well is to foster the illusion of a private space despite the fact that, as the Australian privacy advocate Roger Clarke observed in 2003, collecting and abusing user data is social networks' only business model.

Ramnit takes this game to a whole new level. Malware these days isn't aimed at doing cute, little things like making hard drive failure noises or sending all the letters on your screen tumbling into a heap at the bottom. No, it's aimed at draining your bank account and hijacking your identity for other types of financial exploitation.

To do this, it needs to find a way inside the circle of trust. On a computer network, that means looking for an unpatched hole in software to leverage. On the individual level, it means the malware equivalent of viral marketing: get one innocent bystander to mistakenly tell all their friends. We've watched this type of attack move through a string of vectors as people shift their activity to escape spam: from email to instant messaging to, now, social networks. The bigger Facebook gets, the bigger a target it becomes. The more information people post on Facebook - and the more their friends and friends of friends friend promiscuously - the greater the risk to each individual.

The whole situation is exacerbated by endemic poor security practices. Asking people to provide the same few bits of information for back-up questions in case they need a password reset. Imposing password rules that practically guarantee people will use and reuse the same few choices on all their sites. Putting all the eggs in services that are free at the point of use and that you pay for, when something goes wrong, in unobtainable customer service (not to mention behavioral targeting and marketing). If everything is locked to one email account on a server you do not control, if your security questions could be answered by a quick glance at your Facebook Timeline and a Google search, if you bank online and use the same passwords throughout...you have a catastrophe in waiting.

I realize not everyone can run their own mail server. But you can use multiple, distinct email addresses and passwords, you can create unique answers on the reset forms, and you can limit your exposure by presuming that everything you post *is* public, whether the service admits it or not. Your goal should be to ensure that when - it's no longer safe to say "if" - some part of your online life is hacked, the damage can be contained to that one, hopefully small, piece. Relying on the privacy consciousness of friends means you can't eliminate the risk, but you can limit the consequences.
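For the technically curious, the reset-answer trick is easy to mechanize: the answer a site stores only has to be something you can reproduce later, not a true fact about you, so a random string kept in a local note works fine. A rough sketch in Python - the file name and site names are invented, and a proper password manager does this better:

    import json
    import secrets

    VAULT = "reset_answers.json"   # illustrative local store; a real password manager is better

    def add_site(vault_path, site):
        """Create a fresh random password and 'security question' answer for one site."""
        try:
            with open(vault_path) as f:
                vault = json.load(f)
        except FileNotFoundError:
            vault = {}
        vault[site] = {
            "password": secrets.token_urlsafe(20),      # unique per site, never reused
            "reset_answer": secrets.token_urlsafe(16),  # a random string serves as "mother's maiden name"
        }
        with open(vault_path, "w") as f:
            json.dump(vault, f, indent=2)
        return vault[site]

    if __name__ == "__main__":
        print(add_site(VAULT, "example-webmail"))
        print(add_site(VAULT, "example-bank"))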

Facebook is facing an entirely different risk: that people, alarmed at the thought of being mugged, will flee elsewhere. It's happened before.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 25, 2011

Paul Revere's printing press

There is nothing more frustrating than watching smart, experienced people reinvent known principles. Yesterday's Westminster Forum on cybersecurity was one such occasion. I don't blame them, or not exactly: it's just maddening that we have made so little progress, while the threats keep escalating. And it is from gatherings like this one that government policy is made.

Rephrasing Bill Clinton's campaign slogan, "It's the people, stupid," said Philip Virgo, chairman of the security panel of the IT Livery Company, to kick off the day, a sentiment echoed repeatedly by nearly every other speaker. Yes, it's the people - who trust when they shouldn't, who attach personal devices to corporate networks, who disclose passwords when they shouldn't, who are targeted by today's Facebook-friending social engineers. So how many people experts were on the program? None. Psychologists? No. Nor any usability experts or people whose jobs revolve around communication, either. (Or women, but I'm prepared to regard that as a separate issue.)

Smart, experienced guys, sure, who did a great job of outlining problems and a few possible solutions. Somewhere toward the end of the proceedings, someone allowed in passing that yes, it's not a good idea to require people to use passwords that are too complex to remember easily. This is the state of their art? It's 12 years since Angela Sasse and Anne Adams covered this territory in Users Are Not the Enemy. Sasse has gone on to help found the field of security economics, which seeks to quantify the cost of poorly designed security - not just in data breaches and DoS attacks but in the lost productivity of frustrated, overburdened users. Sasse argues that the problem isn't so much the people as user-hostile systems and technology.

"As user-friendly as a cornered rat," Virgo says he wrote of security software back in 1983. Anyone who's looked at configuring a firewall lately knows things haven't changed that much. In a world of increasingly mass-market software and devices, security software has remained resolutely elitist: confusing error messages, difficult configuration, obscure technology. How many users know what to do when their browser says a Web site certificate is invalid? Or how to answer anti-virus software that asks whether you want to authorise HIPS/RegMod-007?

"The current approach is not working," said William Beer, director of information security and cybersecurity for PriceWaterhouseCoopers. "There is too much focus on technology, and not enough focus from business and government leaders." How about academics and consumers, too?

There is no doubt, though, that the threats are escalating. Twenty years ago, the biggest worry was that a teenaged kid would write a virus that spread fast and furious in the hope of getting on the evening news. Today, an organized criminal underground uses personal information to target a small group of users inside RSA, leveraging that into a threat to major systems worldwide. (Trend Micro CTO Andy Dancer said the attack began in the real world with a single user befriended at their church. I can't find verification, however.)

The big issue, said Martin Smith, CEO of The Security Company, is that "There's no money in getting the culture right." What's to sell if there's no technical fix? Like when your plane is held to ransom by the pilot, or when all it takes to publish 250,000 US diplomatic cables is one alienated, low-ranked person with a DVD burner and a picture of Lady Gaga? There's a parallel here to pharmaceuticals: one reason we have few weapons to combat rampaging drug resistance is that for decades developing new antibiotics was not seen as a profitable path.

Granted, you don't, as Dancer said afterwards, want to frame security as an issue of "fixing the people" (but we already know better than that). Nor is it fair to ban company employees from social media lest some attacker pick it up and use it to create a false sense of trust. Banning the latest new medium, said former GCHQ head John Bassett, is just the instinctive reaction in a disturbance; in 1775 Boston the "problem" was Paul Revere's printing press stirring up trouble.

Nor do I, personally, want to live in a trust-free world. I'm happy to assume the server next to me is compromised, but "Trust no one" is a lousy way to live.

Since perfect security is not possible, Dancer advised, organizations should plan for the worst. Good advice. When did I first hear it? Twenty years ago and most months since, by Peter Neumann in his RISKS Forum. It is depressing and frustrating that we are still having this conversation as if it were new - and that we will have it all over again over the next decade as smart meters roll out to 26 million British households by 2020, opening up the electrical grid to attacks that are already being predicted and studied.

Neumann - and Dancer - are right. There is no perfect security because it's in no one's interest to create it. Plan for the worst.

As Gene Spafford put it in 1989: "The only truly secure system is one that is powered off, cast in a block of concrete, and sealed in a lead-lined room protected by armed guards - and even then I have my doubts."

For everything else, there's a stolen Mastercard.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 4, 2011

The identity layer

This week, the UK government announced a scheme - Midata - under which consumers will be able to reclaim their personal information. The same day, the Centre for the Study of Financial Innovation assembled a group of experts to ask what the business model for online identification should be. And: whatever that model is, what the government's role should be. (For background, here's the previous such discussion.)

My eventual thought was that the government's role should be to set standards; it might or might not also be an identity services provider. The government's inclination now is to push this job to the private sector. That leaves the question of how to serve those who are not commercially interesting; at the CSFI meeting the Post Office seemed the obvious contender for both pragmatic and historical reasons.

As Mike Bracken writes in the Government Digital Service blog posting linked above, the notion of private identity providers is not new. But what he seems to assume is that what's needed is federated identity - that is, in Wikipedia's definition, a means for linking a person's electronic identity and attributes across multiple distinct systems. What I had in mind is a system in which one may have many limited identities that are sufficiently interoperable that you can choose which to use at the point of entry to a given system. We already have something like this on many blogs, where commenters may be offered a choice of logging in via Google, OpenID, or simply posting a name and URL.

The government gateway circa 2000 offered a choice: getting an identity certificate required payment of £50 to, if I remember correctly, Experian or Equifax, or other companies whose interest in preserving personal privacy is hard to credit. The CSFI meeting also mentioned tScheme - an industry consortium to provide trust services. Outside of relatively small niches it's made little impact. Similarly, fifteen years ago, the government intended, as part of implementing key escrow for strong cryptography, to create a network of trusted third parties that it would license and, by implication, control. The intention was that the TTPs should be folks that everyone trusts - like banks. Hilarious, we said *then*. Moving on.

In between then and now, the government also mooted a completely centralized identity scheme - that is, the late, unlamented ID card. Meanwhile, we've seen the growth of a set of competing American/global businesses who all would like to be *the* consumer identity gateway and who managed to steal first-mover advantage from existing financial institutions. Facebook, Google, and PayPal are the three most obvious. Microsoft had hopes, perhaps too early, when in 1999 it created Passport (now Windows Live ID). More recently, it was the home for Kim Cameron's efforts to reshape online identity via the company's now-cancelled CardSpace, and Brendon Lynch's adoption of U-Prove, based on Stefan Brands' technology. U-Prove is now being piloted in various EU-wide projects. There are probably lots of other organizations that would like to get in on such a scheme, if only because of the data and linkages a federated system would grant them. Credit card companies, for example. Some combination of mobile phone manufacturers, mobile network operators, and telcos. Various medical outfits, perhaps.

An identity layer that gives fair and reasonable access to a variety of players who jointly provide competition and consumer choice seems like a reasonable goal. But it's not clear that this is what either the UK's distastefully spelled "Midata" or the US's NSTIC (which attracted similar concerns when first announced) has in mind. What "federated identity" sounds like is the convenience of "single sign-on", which is great if you're working in a company and need to use dozens of legacy systems. When you're talking about identity verification for every type of transaction you do in your entire life, however, a single gateway is a single point of failure and, as Stephan Engberg, founder of the Danish company Priway, has often said, a single point of control. It's the Facebook cross-all-the-streams approach, embedded everywhere. Engberg points to a discussion paper inspired by two workshops he facilitated for the Danish National IT and Telecom Agency (NITA) in late 2010 that covers many of these issues.

Engberg, who describes himself as a "purist" when it comes to individual sovereignty, says the only valid privacy-protecting approach is to ensure that each time you go online on each device you start a new session that is completely isolated from all previous sessions and then have the choice of sharing whatever information you want in the transaction at hand. The EU's LinkSmart project, which Engberg was part of, created middleware to do precisely that. As sensors and RFID chips spread along with IPv6, which can give each of them its own IP address, linkages across all parts of our lives will become easier and easier, he argues.

We've seen often enough that people will choose convenience over complexity. What we don't know is what kind of technology will emerge to help us in this case. The devil, as so often, will be in the details.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 28, 2011

Crypto: the revenge

I recently had occasion to try out Gnu Privacy Guard, the Free Software Foundation's version of PGP, Phil Zimmermann's legendary Pretty Good Privacy software. It was the first time I'd encrypted an email message since about 1995, and I was both pleasantly surprised and dismayed.

First, the good. Public key cryptography is now implemented exactly the way it should have been all along: once you've installed it and generated a keypair, encrypting a message is ticking a box or picking a menu item inside your email software. Even key management is handled by a comprehensible, well-designed graphical interface. Several generations of hard work have created this and also ensured that the various versions of PGP, OpenPGP, and GPG are interoperable, so you don't have to worry about who's using what. Installation was straightforward and the documentation is good.
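For those who prefer scripts to menus, the same operation is available from the command line. A minimal sketch that shells out to an installed GnuPG to encrypt a file, ASCII-armored, to a recipient whose public key is already in the local keyring (file names and address are illustrative):

    import subprocess

    def gpg_encrypt(path, recipient, out_path):
        """Encrypt `path` to `recipient`, writing ASCII-armored output to `out_path`."""
        subprocess.run(
            ["gpg", "--batch", "--yes",    # non-interactive; overwrite existing output
             "--recipient", recipient,
             "--armor",                    # ASCII output suitable for pasting into mail
             "--output", out_path,
             "--encrypt", path],
            check=True,
        )

    gpg_encrypt("message.txt", "alice@example.com", "message.txt.asc")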

Now, the bad. That's where the usability stops. There are so many details you can get wrong to mess the whole thing up that if this stuff were a form of contraception desperate parents would be giving babies away on street corners.

Item: the subject line doesn't get encrypted. There is nothing you can do about this except put a lot of thought into devising a subject line that will compel people to read the message but that simultaneously does not reveal anything of value to anyone monitoring your email. That's a neat trick.

Item: watch out for attachments, which are easily accidentally sent in the clear; you need to encrypt them separately before bundling them into the message.

Item: while there is a nifty GPG plug-in for Thunderbird - Enigmail - Outlook, being commercial software, is less easily supported. GPG's GpgOL module works only with 2003 (SP2 and above) and 2007, and not on 64-bit Windows. The problem is that it's hard enough to get people to change *one* habit, let alone several.

Item: lacking appropriate browser plug-ins, you also have to tell them to stop using Webmail if the service they're used to won't support IMAP or POP3, because they won't be able to send encrypted mail or read what others send them over the Web.

Let's say you're running a field station in a hostile area. You can likely get users to persevere despite these points by telling them that this is their work system, for use in the field. Most people will put up with some inconvenience if they're being paid to do so and/or it's temporary and/or you scare them sufficiently. But that strategy violates one of the basic principles of crypto-culture, which is that everyone should be encrypting everything so that sensitive traffic doesn't stand out. They are of course completely right, just as they were in 1993, when the big political battles over crypto were being fought.

Item: when you connect to a public keyserver to check or download someone's key, that connection is in the clear, so anyone surveilling you can see who you intend to communicate with.

Item: you're still at risk with regard to traffic data. This is what RIPA and data retention are all about. What's more significant? Being able to read a message that says, "Can you buy milk?" or the information that the sender and receiver of that message correspond 20 times a day? Traffic data reveals the pattern of personal relationships; that's why law enforcement agencies want it. PGP/GPG won't hide that for you; instead, you'll need to set up a proxy or use Tor to mix up your traffic and also protect your Web browsing, instant messaging, and other online activities. As Tor's own people admit, it slows performance, although they're working on it (PDF).

All this says we're still a long way from a system that the mass market will use. And that's a damn shame, because we genuinely need secure communications. Like a lot of people in the mid-1990s, I'd have thought that by now encrypted communications would be the norm. And yet not only is SSL, which protects personal details in transit to ecommerce and financial services sites, the only really mass-market use, but it's in trouble. Partly, this is because of the technical issues raised in the linked article - too many certification authorities, too many points of failure - but it's also partly because hardly anyone understands how to check that a certificate is valid or knows what to do when warnings pop up that it's expired or issued for a different name. The underlying problem is that many of the people who like crypto see it as both a cool technology and a cause. For most of us, it's just more fussy software. The big advance since the mid 1990s is that at least now the *developers* will use it.
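For the curious, the check the browser warnings are trying to summarize can be reproduced in a few lines. A sketch using Python's standard ssl module, with an invented hostname: it raises an error if the certificate chain doesn't verify against the system's trusted CAs, and otherwise prints who issued the certificate and when it expires:

    import socket
    import ssl

    def inspect_cert(host, port=443):
        ctx = ssl.create_default_context()   # verifies the chain and the hostname
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        print("issued to: ", dict(item[0] for item in cert["subject"]))
        print("issued by: ", dict(item[0] for item in cert["issuer"]))
        print("valid from:", cert["notBefore"])
        print("valid to:  ", cert["notAfter"])

    inspect_cert("www.example.com")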

Maybe mobile phones will be the thing that makes crypto work the way it should. See, for example, Dave Birch's current thinking on the future of identity. We've been arguing about how to build an identity infrastructure for 20 years now. Crypto is clearly the mechanism. But we still haven't solved the how.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 21, 2011

Printers on fire

It used to be that if you thought things were spying on you, you were mentally disturbed. But you're not paranoid if they're really out to get you, and new research at Columbia University, with funding from DARPA's Crash program, exposes how vulnerable today's devices are. Routers, printers, scanners - anything with an embedded system and an IP address.

Usually what's dangerous is monoculture: Windows is a huge target. So, argue Columbia computer science professor Sal Stolfo and PhD student Ang Cui, device manufacturers rely on security by diversity: every device has its own specific firmware. Cui estimates, for example, that there are 300,000 different firmware images for Cisco routers, varying by feature set, model, operating system version, hardware, and so on. Sure, an attacker could target one of these - but what's the payback? Especially compared to that nice, juicy Windows server over there?

"In every LAN there are enormous numbers of embedded systems in every machine that can be penetrated for various purposes," says Cui.

The payback is access to that nice, juicy server and, indeed, the whole network. Few update - or even check - firmware. So once inside, an attacker can lurk unnoticed until the device is replaced.

Cui started by asking: "Are embedded systems difficult to hack? Or are they just not low-hanging fruit?" There isn't, notes Stolfo, an industry providing protection for routers, printers, the smart electrical meters rolling out across the UK, or the control interfaces that manage conference rooms.

If there is, after seeing their demonstrations, I want it.

Their work is two-pronged: first demonstrate the need, then propose a solution.

Cui began by developing a rootkit for Cisco routers. Despite the diversity of firmware and each image's memory layout, routers are a monoculture in that they all perform the same functions. Cui used this insight to find the invariant elements and fingerprint them, making them identifiable in the memory space. From that, he can determine which image is in place and deduce its layout.

"It takes a millisecond."

Once in, Cui sets up a control channel over ping packets (ICMP) to load microcode, reroute traffic, and modify the router's behaviour. "And there's no host-based defense, so you can't tell it's been compromised." The amount of data sent over the control channel is too small to notice - perhaps a packet per second.

"You can stay stealthy if you want to."

You could even kill the router entirely by modifying the EEPROM on the motherboard. How much fun to be the army or a major ISP and physically connect to 10,000 dead routers to restore their firmware from backup?

They presented this at WOOT (Quicktime), and then felt they needed something more dramatic: printers.

"We turned off the motor and turned up the fuser to maximum." Result: browned paper and...smoke.

How? By embedding a firmware update in an apparently innocuous print job. This approach is familiar: embedding programs where they're not expected is a vector for viruses in Word and PDFs.

"We can actually modify the firmware of the printer as part of a legitimate document. It renders correctly, and at the end of the job there's a firmware update." It hasn't been done before now, Cui thinks, because there isn't a direct financial pay-off and it requires reverse-engineering proprietary firmware. But think of the possibilities.

"In a super-secure environment where there's a firewall and no access - the government, Wall Street - you could send a resume to print out." There's no password. The injected firmware connects to a listening outbound IP address, which responds by asking for the printer's IP address to punch a hole inside the firewall.

"Everyone always whitelists printers," Cui says - so the attacker can access any computer. From there, monitor the network, watch traffic, check for regular expressions like names, bank account numbers, and social security numbers, sending them back out as part of ping messages.

"The purpose is not to compromise the printer but to gain a foothold in the network, and it can stay for years - and then go after PCs and servers behind the firewall." Or propagate the first printer worm.

Stolfo and Cui call their answer a "symbiote", after biological symbiosis, in which two organisms attach to each other for mutual benefit.

The goal is code that works on an arbitrarily chosen executable about which you have very little knowledge. Emulating a biological symbiote, which finds places to attach to the host and extract resources, Cui's symbiote first calculates a secure checksum across all the static regions of the code, then finds random places where its code can be injected.

"We choose a large number of these interception points - and each time we choose different ones, so it's not vulnerable to a signature attack and it's very diverse." At each device access, the symbiote steals a little bit of the CPU cycle (like an RFID chip being read) and automatically verifies the checksum.

"We're not exploiting a vulnerability in the code," says Cui, "but a logical fallacy in the way a printer works." Adds Stolfo, "Every application inherently has malware. You just have to know how to use it."

Never mind all that. I'm still back at that printer smoking. I'll give up my bank account number and SSN if you just won't burn my house down.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


September 30, 2011

Trust exercise

When do we need our identity to be authenticated? Who should provide the service? Whom do we trust? And, to make it sustainable, what is the business model?

These questions have been debated ever since the early 1990s, when the Internet and the technology needed to enable the widespread use of strong cryptography arrived more or less simultaneously. Answering them is a genuinely hard problem (or it wouldn't be taking so long).

A key principle that emerged from the crypto-dominated discussions of the mid-1990s is that authentication mechanisms should be role-based and limited by "need to know"; information would be selectively unlocked and in the user's control. The policeman stopping my car at night needs to check my blood alcohol level and the validity of my driver's license, car registration, and insurance - but does not need to know where I live unless I'm in violation of one of those rules. Cryptography, properly deployed, can be used to protect my information, authenticate the policeman, and then authenticate the violation result that unlocks more data.
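None of the real cryptography fits in a few lines, but the shape of "need to know" disclosure can be sketched: if the issuer vouches for each attribute separately, the holder can reveal that the license is valid without revealing the home address. This toy uses a shared HMAC key and invented field names - a real scheme would use digital signatures or anonymous credentials:

    import hashlib
    import hmac

    ISSUER_KEY = b"issuing-authority-demo-key"   # toy shared secret; real schemes use signatures

    def issue(attributes):
        """Produce one tag per attribute so attributes can be disclosed independently."""
        return {
            name: hmac.new(ISSUER_KEY, f"{name}={value}".encode(), hashlib.sha256).hexdigest()
            for name, value in attributes.items()
        }

    def disclose(attributes, tags, requested):
        """The holder reveals only the requested attributes, with their tags."""
        return {name: (attributes[name], tags[name]) for name in requested}

    def verify(disclosed):
        return all(
            hmac.compare_digest(
                tag,
                hmac.new(ISSUER_KEY, f"{name}={value}".encode(), hashlib.sha256).hexdigest(),
            )
            for name, (value, tag) in disclosed.items()
        )

    credential = {"license_valid": "yes", "insurance_valid": "yes", "home_address": "not disclosed"}
    tags = issue(credential)
    shown = disclose(credential, tags, ["license_valid", "insurance_valid"])   # address stays hidden
    print(verify(shown))   # True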

Today's stored-value cards - London's Oyster travel card, or Starbucks' payment/wifi cards - when used anonymously do capture some of what the crypto folks had in mind. But the crypto folks also imagined that anonymous digital cash or identification systems could be supported by selling standalone products people installed. This turned out to be wholly wrong: many tried, all failed. Which leads to today, where banks, telcos, and technology companies are all trying to figure out who can win the pool by becoming the gatekeeper - our proxy. We want convenience, security, and privacy, probably in that order; they want security and market acceptance, also probably in that order.

The assumption is we'll need that proxy because large institutions - banks, governments, companies - are still hung up on identity. So although the question should be whom do we - consumers and citizens - trust, the question that ultimately matters is whom do *they* trust? We know they don't trust *us*. So will it be mobile phones, those handy devices in everyone's pockets that are online all the time? Banks? Technology companies? Google has launched Google Wallet, and Facebook has grand aspirations for its single sign-on.

This was exactly the question Barclaycard's Tom Gregory asked at this week's Centre for the Study of Financial Innovation round-table discussion (PDF). It was, of course, a trick, but he got the answer he wanted: out of banks, technology companies, and mobile network operators, most people picked banks. Immediate flashback.

The government representatives who attended Privacy International's 1997 Scrambling for Safety meeting assumed that people trusted banks and that therefore they should be the Trusted Third Parties providing key escrow. Brilliant! It was instantly clear that the people who attended those meetings didn't trust their banks as much as all that.

One key issue is that, as Simon Deane-Johns writes in his blog posting about the same event, "identity" is not a single, static thing; it is dynamic and shifts constantly as we add to the collection of behaviors and data representing it.

As long as we equate "identity" with "a person's name" we're in the same kind of trouble the travel security agencies are when they try to predict who will become a terrorist on a particular flight. Like the browser fingerprint, we are more uniquely identifiable by the collection of our behaviors than we are by our names, as detectives who search for missing persons know. The target changes his name, his jobs, his home, and his wife - but if his obsession is chasing after trout he's still got a fishing license. Even if a link between a Starbucks card and its holder's real-world name is never formed, the more data the card's use enters into the system, the more clearly recognizable as an individual its holder becomes. The exact tag really doesn't matter in terms of understanding his established identity.

What I like about Deane-Johns' idea -

"the solution has to involve the capability to generate a unique and momentary proof of identity by reference to a broad array of data generated by our own activity, on the fly, which is then useless and can be safely discarded" -

is two things. First, it has potential as a way to make impersonation and identity fraud much harder. Second is that implicit in it is the possibility of two-way authentication, something we've clearly needed for years. Every large organization still behaves as though its identity is beyond question whereas we - consumers, citizens, employees - need to be thoroughly checked. Any identity infrastructure that is going to be robust in the future must be built on the understanding that with today's technology anyone and anything can be impersonated.
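Mechanically, a "momentary proof" could look something like the following sketch, under obvious simplifying assumptions (a pre-shared key, invented activity fields): the proof binds a bundle of recent activity data to a nonce and a timestamp, is checked once, and is worthless to anyone who replays it later:

    import hashlib
    import hmac
    import json
    import secrets
    import time

    def momentary_proof(shared_key: bytes, activity: dict) -> dict:
        payload = {
            "activity": activity,
            "nonce": secrets.token_hex(16),    # never reused
            "timestamp": int(time.time()),     # lets the verifier reject stale proofs
        }
        blob = json.dumps(payload, sort_keys=True).encode()
        tag = hmac.new(shared_key, blob, hashlib.sha256).hexdigest()
        return {"payload": payload, "proof": tag}

    def check(shared_key: bytes, message: dict, max_age: int = 60) -> bool:
        blob = json.dumps(message["payload"], sort_keys=True).encode()
        expected = hmac.new(shared_key, blob, hashlib.sha256).hexdigest()
        fresh = time.time() - message["payload"]["timestamp"] < max_age
        return fresh and hmac.compare_digest(expected, message["proof"])

    key = secrets.token_bytes(32)
    msg = momentary_proof(key, {"last_login_city": "London", "payments_this_week": 3})
    print(check(key, msg))   # True once; a stale or altered proof fails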

As an aside, it was remarkable how many people at this week's meeting were more concerned about having their Gmail accounts hacked than their bank accounts. My reasoning is that the stakes are higher: I'd rather lose my email reputation than my house. Their reasoning is that the banking industry is more responsive to customer problems than technology companies. That truly represents a shift from 1997, when technology companies were smaller and more responsive.

More to come on these discussions...


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 16, 2011

The world at ten

Like Meetup.org, net.wars-the-column is to some extent a child of 9/11 (the column was preceded by the book, four years of near-weekly news analysis pieces for the Daily Telegraph, and a sequel book, From Anarchy to Power: the Net Comes of Age). On November 2, 2011 the column will be ten years old, its creation sparked by a burst of frustrated anger when then foreign minister Jack Straw wagged a post-9/11 finger at those who had opposed his plans to restrict the use of strong encryption and implement key escrow in the mid 1990s when he was at the Home Office and blamed us.

Ten years on, we can revisit his claim. We now know, for example, that when Osama bin Laden wanted to hide, he didn't use cryptography to cloak his whereabouts. Instead, the reason his safe house stood out from those around it was that it was a technological black spot: "no phones, no broadband." In other words, bin Laden feared the power of technology as much as Straw and his cohorts: both feared it would empower their enemies. That paranoia was justified - but backfired spectacularly.

In our own case, it's clear that "the terrorists" have scored a substantial amount of victory. We - the US, the UK, Europe - would have had some kind of recession anyway, given the rapacious and unregulated behavior of banks and brokers leading up to 2008 - but we would have been much better placed to cope with it if we - the US - hadn't been simultaneously throwing $1.29 trillion at invading Iraq and Afghanistan. If you include medical and disability care for current and future veterans, according to the Eisenhower Research Project at Brown University that number rises to as much as $4 trillion.

But more than that, as Ryan Singel writes US-specifically at Wired, the West has built up a gigantic and expensive inward-turned surveillance infrastructure that is unlikely to be dismantled when or if the threat it was built to control goes away. In the last ten years, countless hundreds of millions of dollars and countless millions of hours of lost productivity have been spent on airport security when, as Bruce Schneier frequently writes, the only two changes that have made a significant difference to air travel safety have been reinforcing the cockpit doors and teaching passengers to fight back. The Department of Homeland Security's budget for its 2011 financial year is $56.3 billion (PDF) - which includes $214.7 million for airport scanners and another $218.9 million for people to staff them (so much for automation).

The UK in particular has spent much of the last ten years building the database state, creating dozens of large databases aimed at tracking various portions of society through various parts of their lives. Some of this has been dismantled by the coalition, but not all. The most visible part of the ID card is gone - but the key element was always the database of the nation's residents, and as data-sharing between government departments becomes ever easier, the equivalent may be built in practice rather than by explicit plan. In every Western country CCTV cameras are proliferating, as are surveillance-by-design policies such as data retention, built-in wiretapping, and widespread filtering. Every time a new system is built - the London congestion charge, for example, or the mooted smart road pricing systems - there are choices that would allow privacy to be built in. And so far, each time, those choices have not been taken.

But if the policies aimed at ourselves are misguided, as net.wars has frequently argued, the same is true of the policies we have directed at others. As part of the British Science Festival, Paul Rogers, a researcher with the Oxford Research Group, presented A War Gone Badly Wrong - The War on Terror Ten Years On, looking back at the aftermath of the attacks rather than the attacks themselves; the Brown research shows that in the various post-9/11 military actions 80 people have died for every 9/11 victim. Like millions of others who were ignored, the Oxford Research Group opposed the war at the time.

"The whole approach was a mistake." he told the press last Friday, arguing that the US should instead have called it an act of international criminality and sworn to work with everyone to bring the criminals to justice. "The US would have had worldwide support for that kind of action that it did not have for Afghanistan - or, especially, Iraq." He added, "If they had treated al-Qaeda as a common, bitter, vicious criminal movement, not a brave, religious movement worthy of fighting, that degrades it."

What he hopes his research will lead to now is "a really serious understanding of what went wrong, and the risks of early recourse to military responses." And, he added, "sustainable security" that focuses on conflict prevention. "Why it's important to look at the experience of the war on terror is to discern and learn those lessons."

They say that a conservative is a liberal who's been mugged. By analogy, it seems that a surveillance state is a democracy that's been attacked.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 2, 2011

White rabbits

I feel like I am watching magicians holding black top hats. They do...you're not sure what...to a mess of hexadecimal output on the projection screen so comprehensible words appear...and people laugh. And then some command line screens flash in and out before your eyes and something absurd and out-of-place appears, like the Windows calculator, and everyone applauds. I am at 44con, a less-crazed London offshoot of the Defcon-style mix of security and hacking. Although, this being Britain, they're pushing the sponsored beer.

In this way we move through exploits: iOS, Windows Phone 7, and SAP, whose protocols are pulled apart by Sensepost's Ian de Villiers. And after that Trusteer Rapport, which seems to be favored by banks and other financial services, and disliked by everyone else. All these talks leave a slightly bruised feeling - not so much that you'd do better to eschew all electronics and move to a hut on a deserted beach without a phone, as that even if you did you'd still be vulnerable to other people's decisions. While exploring the inner workings of USB flash drives (PDF), for example, Phil Polstra noted in passing that the Windows Registry logs every single time you insert one. I knew my computer tracked me, but I didn't quite realize the full extent.

The bit of magic that most clearly makes this point is Maltego. This demonstration displays neither hexadecimal code nor the Windows calculator, but rolls everything privacy advocates have warned about for years into one juicy tool that all the journalists present immediately start begging for. (This is not a phone hacking joke; this stuff could save acres of investigative time.) It's a form of search that turns a person or event into a colorful display of whirling dots (hits) that resolve into clusters. Its keeper, Roelof Temmingh, uses a mix of domain names, IP addresses, and geolocation to discover the Web sites White House users like to visit and tweets from the NSA parking lot. Version 4 - the first version of the software dates to 2007 - moves into real-time data mining.
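A very small illustration of the kind of pivoting such tools automate: starting from a domain name, collect the IP addresses it resolves to and any reverse-DNS names, each of which becomes a new node to expand from. Maltego's transforms go far further - whois, social media, geolocation - and the domain here is just an example:

    import socket

    def expand(domain):
        """One pivot step: domain -> addresses -> reverse-DNS names."""
        node = {"domain": domain, "addresses": [], "reverse_names": []}
        try:
            infos = socket.getaddrinfo(domain, None)
        except socket.gaierror:
            return node
        node["addresses"] = sorted({info[4][0] for info in infos})
        for address in node["addresses"]:
            try:
                name, _, _ = socket.gethostbyaddr(address)
                node["reverse_names"].append(name)
            except OSError:
                pass
        return node

    print(expand("example.com"))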

Later, I ask a lawyer with a full, licensed copy to show me an ego search. We lack the time to finish, but our slower pace and diminished slickness make it plain that this software takes time and study to learn to drive. This is partly comforting: it means that the only people who can use it to do the full spy caper are professionals, rather than amateurs. Of course, those are the people who will also have - or be able to command - access to private databases that are closed to the rest of us, such as the utility companies' electronic customer records, which, when plugged in, can link cyberworld and real-world identities. "A one-click stalking machine," Temmingh calls it.

As if your mobile phone - camera, microphone, geolocation, email, and Web browsing history - weren't enough. One attendee tells me seriously that he would indeed go to jail for two years rather than give up his phone's password, even if compelled under the Regulation of Investigatory Powers Act. Even if your parents are sick and need you to take care of them? I ask. He seems to feel I'm unfairly moving the bar.

Earlier the current mantra that every Web site should offer secure HTTP came under fire. IOActive's Vincent Berg showed off how to figure out which grid tile of Google Maps and which Wikipedia pages someone has been looking at despite the connection's being carried over SSL. The basis of this is our old friend traffic analysis. It's not a great investigative tool because, as Berg himself points out, there would be many false positives, but side-channel leaks in Web pages are still a coming challenge (PDF). SSL has its well-documented problems, but "At some point the industry will get it right." We can but hope.
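The underlying trick is easy to sketch: SSL hides a page's content but not, closely enough, its size, so an observer who has measured the candidate pages in advance can match an observed transfer against that table. A toy version with invented numbers, which also shows where Berg's false positives come from:

    CANDIDATE_SIZES = {                 # bytes on the wire, measured in advance by the observer
        "wiki/Cryptography": 48_213,
        "wiki/Tor_(network)": 61_870,
        "wiki/Chess": 55_402,
    }

    def best_guess(observed_size, tolerance=1_500):
        """Return candidate pages whose size is within the tolerance window, closest first."""
        matches = [
            (abs(size - observed_size), page)
            for page, size in CANDIDATE_SIZES.items()
            if abs(size - observed_size) <= tolerance
        ]
        return [page for _, page in sorted(matches)]

    print(best_guess(61_240))   # -> ['wiki/Tor_(network)']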

It was left to Alex Conran, whose TV program The Real Hustle starts its tenth season on BBC Three on Monday, to wind things up by reminding us that the most enduring hacks are the human ones. Conran says that after perpetrating more than 500 scams on an unsuspecting public (and debunking them afterwards), he has concluded that just as Western music relies on endless permutations of the same seven notes, scams rely on variations on the same five elements. They will sound familiar to anyone who's read The Skeptic over the last 24 years.

The five: misdirection, social compliance, the love of a special deal, time pressure, social proof (or reinforcement). "Con men are the hackers of human nature," Conran said, but noted that part of the point of his show is that if you educate people about the risks they will take the necessary steps to protect themselves. And then dispensed this piece of advice: if you want to control the world, buy a hi-vis jacket. They're cheap, and when you're wearing one, apparently anyone you meet will do anything you tell them without question. No magic necessary.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

August 5, 2011

Cheaters in paradise

It seems that humans in general are not particularly good at analyzing incentives. How else can you explain the number of decisions we make with adverse, unintended consequences? Three examples.

One: this week US newspapers - such as the LA Times, the New York Times, and Education Week - report that myriad states have discovered a high number of erasures on standardized tests or suspiciously sudden improvement in test scores. (At one Pennsylvania school, for example, eighth graders' reading proficiency jumped from 28.9 percent to 63.8 percent between 2008 and 2009.)

The culprits: teachers and principals. When tests determined only the future of the students taking them, the only cheaters were students. Now that tests determine school rankings and therefore the economic future of teachers, principals, and schools, many more people are motivated to ensure that students score highly.

Don't imagine the kids don't grasp this. In 2002, when I wrote about plagiarism for the Independent, all the kids I interviewed noted that despite their teachers' warnings of dire consequences schools would not punish plagiarists and risk hurting their rankings in the league tables.

A kid in an American school this week might legitimately ask why he should be punished for cheating or plagiarism when his teachers are doing the same thing on a much grander scale for greater and far more immediate profit. A similar situation applies to our second example, this week's decision by the International Tennis Federation to suspend 31-year-old player Robert Kendrick for 12 months after he tested positive for the banned stimulant methylhexaneamine.

At his age, a 12-month ban is an end-of-career notice. Everyone grants that he did not intend to cheat and that the amount of the drug was not performance-enhancing. Like a lot of people who travel through many time zones on the way to work, he took a jetlag pill whose ingredients he believed to be innocuous. He admits he screwed up; he and his lawyers have simply asked for a fairer sentence. Fairer because in January 2010, when fellow player Wayne Odesnik was caught by Australian Customs with eight vials of human growth hormone, he was suspended for two years - double the sentence for far more than double the offense. And Odesnik didn't even stay out that long; his sentence was commuted to time served after seven months.

At the time, the ITF said that he had bought his way out of purgatory by cooperating with its anti-doping program, presumably under the rule that allows such a reversal when the player has turned informant. No follow-up has disclosed who Odesnik might have implicated, and although it's possible that it all forms part of a lengthy, ongoing investigation, the fact remains: his offense was a lot worse than Kendrick's but has cost him a lot less.

It says a lot that the other players are scathing about Odesnik, sympathetic to Kendrick. This is a watershed moment, where the athletes are openly querying the system's fairness despite any suspicions that might be raised by their doing so.

The anti-doping system as it is presently constructed has never made sense to me: it is invasive, unwieldy, and a poor fit for some sports (like tennis, where players are constantly on the move). The lesson sent by these morality plays is: don't get caught. And there is enough money in professional sports to ensure that there are many actors invested in ensuring exactly that: coaches, agents, managers, corporate sponsors, and the tours themselves. Of course testing and punishing athletes is going to fail to contain the threat.

Kamakshi Tandon's ideas on this are very close to mine: do traditional policing. Instead of relying on test samples, which can be mishandled, misread, or unreliable, use other types of evidence when they're available. Why, for example, did the anti-doping authorities refuse Martina Hingis's request to do a hair strand test when a urine sample tested positive for cocaine at Wimbledon in 2007? Why are the A and B samples tested at the same lab instead of different labs? (What lab wants to say it misread the first sample?) My personal guess is that it's because the anti-doping authorities believe that anyone playing professional sports is probably guilty anyway, so why bother assembling the quality of evidence that would be required for a court case? That might even be true - but in that case anti-doping efforts to date have been a total failure.

Our third example: last week's decision by Fox to allow only verified paying cable customers to watch TV shows on Hulu in the first week after their initial broadcast. (Yet more evidence that Murdoch does not get the Internet.) We are in the 12th year of the wars on file-sharing, and still rights holders make decisions like this that increase the incentives to use unauthorized sources.

In the long scheme of things, as Becky Hogge used to say while she was the executive director of the Open Rights Group, the result of poorly considered incentives that make bad law is that they teach people not to respect the law. That will have many worse consequences down the line.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series

July 29, 2011

Name check

How do you clean a database? The traditional way - which I still experience from time to time from journalist directories - is that some poor schnook sits in an office and calls everyone on the list, checking each detail. It's an immensely tedious job, I'm sure, but it's a living.

The new, much cheaper method is to motivate the people in the database to do it themselves. A government can pass a law and pay benefits. Amazon expects the desire to receive the goods people have paid for to be sufficient. For a social network it's a little harder, yet Facebook has managed to get 750 million users to upload varying amounts of information. Google hopes people will do the same with Google+.

The emotional connections people make on social networks obscure their basic nature as databases. When you think of them in that light, and you remember that Google's chief source of income is advertising, suddenly Google's culturally dysfunctional decision to require real names on Google+ makes some sense. For an advertising company, a fuller, cleaner database is more valuable and functional. Google's engineers most likely do not think in terms of improving the company's ability to serve tightly targeted ads - but I'd bet the company's accountants and strategists do. The justification - that online anonymity fosters bad behavior - is likely a relatively minor consideration.

Yet it's the one getting the attention, despite the fact that many people seem confused about the difference between pseudonymity, anonymity, and throwaway identity. In the reputation-based economy the Net thrives on, this difference matters.

The best-known form of pseudonymity is the stage name, essentially a form of branding for actors, musicians, writers, and artists, who may have any of a number of motives for keeping their professional lives separate from their personal lives: privacy for themselves, their work mates, or their families, or greater marketability. More subtly, if you have a part-time artistic career and a full-time day job you may not want the two to mix: will people take you seriously as an academic psychologist if they know you're also a folksinger? All of those reasons for choosing a pseudonym apply on the Net, where everything is a somewhat public performance. Given the harassment some female bloggers report, is it any wonder they might feel safer using a pseudonym?

The important characteristic of pseudonyms, which they share with "real names", is persistence. When you first encounter someone like GrrlScientist, you have no idea whether to trust her knowledge and expertise. But after more than ten years of blogging, that name is a known quantity. As GrrlScientist writes about Google's shutting down her account, it is her "real-enough" name by any reasonable standard. What's missing is the link to a portion of her identity - the name on her tax return, or the one her mother calls her. So what?

Anonymity has long been contentious on the Net; the EU has often considered whether and how to ban it. At the moment, the driving justification seems to be accountability, in the hope that we can stop people from behaving like malicious morons, the phenomenon I like to call the Benidorm syndrome.

There is no question that people write horrible things in blog and news site comments pages, conduct flame wars, and engage in cyber bullying and harassment. But that behaviour is not limited to venues where they communicate solely with strangers; every mailing list, even among workmates, has flame wars. Studies have shown that the cyber versions of bullying and harassment, like their offline counterparts, are most often perpetrated by people you know.

The more important downside of anonymity is that it enables people to hide, not their identity but their interests. Behind the shield, a company can trash its competitors and those whose work has been criticized can make their defense look more robust by pretending to be disinterested third parties.

Against that is the upside. Anonymity protects whistleblowers acting in the public interest, and protesters defying an authoritarian regime.

We have little data to balance these competing interests. One bit we do have comes from an experiment with anonymity conducted years ago on the WELL, which otherwise has insisted on verifying every subscriber throughout its history. The lesson they learned, its conferencing manager, Gail Williams, told me once, was that many people wanted anonymity for themselves - but opposed it for others. I suspect this principle has very wide applicability, and it's why the US might, say, oppose anonymity for Bradley Manning but welcome it for Egyptian protesters.

Google is already modifying the terms of what is after all still a trial service. But the underlying concern will not go away. Google has long had a way to link Gmail addresses to behavioral data collected from those using its search engine, docs, and other services. It has always had some ability to perform traffic analysis on Gmail users' communications; now it can see explicit links between those pools of data and, increasingly, tie them to offline identities. This is potentially far more powerful than anything Facebook can currently offer. And unlike government databases, it's nice and clean, and cheap to maintain.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 22, 2011

Face to face

When, six weeks or so back, Facebook implemented facial recognition without asking anyone much in advance, Tim O'Reilly expressed the opinion that it is impossible to turn back the clock and pretend that facial recognition doesn't exist or can be stopped. We need, he said, to stop trying to control the existence of these technologies and instead concentrate on controlling the uses to which collected data might be put.

Unless we're prepared to ban face recognition technology outright, having it available in consumer-facing services is a good way to get society to face up to the way we live now. Then the real work begins, to ask what new social norms we need to establish for the world as it is, rather than as it used to be.

This reminds me of the argument that we should be teaching creationism in schools in order to teach kids critical thinking: it's not the only, or even best, way to achieve the object. If the goal is public debate about technology and privacy, Facebook isn't a good choice to conduct it.

The problem with facial recognition, unlike a lot of other technologies, is that it's retroactive, like a compromised private cryptography key. Once the key is known you haven't just unlocked the few messages you're interested in but everything ever encrypted with that key. Suddenly deployed accurate facial recognition means the passers-by in holiday photographs, CCTV images, and old TV footage of demonstrations are all much more easily matched to today's tagged, identified social media sources. It's a step change, and it's happening very quickly after a long period of doesn't-work-as-hyped. So what was a low-to-moderate privacy risk five years ago is suddenly much higher risk - and one that can't be withdrawn with any confidence by deleting your account.

There's a second analogy here between what's happening with personal data and what's happening to small businesses with respect to hacking and financial crime. "That's where the money is," the bank robber Willie Sutton explained when asked why he robbed banks. But banks are well defended by large security departments. Much simpler to target weaker links, the small businesses whose money is actually being stolen. These folks do not have security departments and have not yet assimilated Benjamin Woolley's 1990s observation that cyberspace is where your money is. The democratization of financial crime has a more direct personal impact because the targets are closer to home: municipalities, local shops, churches, all more geared to protecting cash registers and collection plates than to securing computers, routers, and point-of-sale systems.

The analogy to personal data is that until relatively recently most discussions of privacy invasion similarly focused on celebrities. Today, most people can be studied as easily as famous, well-documented people if something happens to make them interesting: the democratization of celebrity. And there are real consequences. Canada, for example, is doing much more digging at the border, banning entry based on long-ago misdemeanors. We can warn today's teens that raiding a nearby school may someday limit their freedom to travel; but today's 40-somethings can't make an informed choice retroactively.

Changing this would require the US to decide at a national level to delete such data; we would have to trust them to do it; and other nations would have to agree to do the same. But the motivation is not there. Judith Rauhofer, at the online behavioral advertising workshop she organised a couple of weeks ago, addressed exactly this point when she noted that increasingly the mantra of governments bent on surveillance is, "This data exists. It would be silly not to use it."

The corollary, and the reason O'Reilly is not entirely wrong, is that governments will also say, "This *technology* exists. It would be silly not to use it." We can ban social networks from deploying new technologies, but we will still be stuck with it when it comes to governments and law enforcement. In this, government and business interests align perfectly.

So what, then? Do we stop posting anything online on the basis of the old spy motto "Never volunteer information", thereby ending our social participation? Do we ban the technology (which does nothing to stop the collection of the data)? Do we ban collecting the data (which does nothing to stop the technology)? Do we ban both and hope that all the actors are honest brokers rather than shifty folks trading our data behind our backs? What happens if thieves figure out how to use online photographs to break into systems protected by facial recognition?

One common suggestion is that social norms should change in the direction of greater tolerance. That may happen in some aspects, although Anders Sandberg has an interesting argument that transparency may in fact make people more judgmental. But if the problem of making people perfect were so easily solved we wouldn't have spent thousands of years on it with very little progress.

I don't like the answer "It's here, deal with it." I'm sure we can do better than that. But these are genuinely tough questions. The start, I think, has to be building as much user control into technology design (and its defaults) as we can. That's going to require a lot of education, especially in Silicon Valley.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

February 25, 2011

Wartime economy

Everyone loves a good headline, and £27 billion always makes a *great* one. In this case, that is the sum that a report written by the security consultancy Detica, now part of BAE Systems, and issued by the Office of Cyber Security and Information Assurance (PDF) estimates cybercrime is costing the UK economy annually. The claim was almost immediately questioned by ZDNet's Tom Espiner, who promptly checked it out with security experts. Who complained that the report was full of "fake precision" (LSE professor Peter Sommer), "questionable calculations" (Harvard's Tyler Moore), and "nonsense" (Cambridge's Richard Clayton).

First, some comparisons.

Twenty-seven billion pounds (approximately $40 billion) is slightly larger than a year's worth of the International Federation of the Phonographic Industry's estimate of the cumulative retail revenue lost to piracy by the European creative industries from 2008 to 2015 (PDF) (total €240 billion, about £203 billion, over eight years, or £25.4 billion a year). It is roughly the estimated cost of the BP oil spill, the amount some think Facebook will be worth at an IPO, and noticeably less than Apple's $51 billion cash hoard. But: lots smaller than the "£40 billion underworld" The Times attributed to British gangs in 2008.

Several things baffle about this report. The first is that so little information is given about the study's methodology. Who did the researchers talk to? What assumptions did they make and what statistical probabilities did they assign in creating the numbers and charts? How are they defining categories like "online scams" or "IP theft" (they're clear about one thing: they're not including file-sharing in that figure)? What is the "causal model" they developed?

We know one person they didn't talk to: Computer Weekly notes the omission of Detective Superintendent Charlie McMurdie, head of the Metropolitan Police's Central e-Crime Unit, who you'd think would be one of the first ports of call for understanding the on-the-ground experience.

One issue the report seems to gloss over is how very difficult it is to define and categorize cybercrime. Last year, the Oxford Internet Institute conducted a one-day forum on the subject, out of which came the report Mapping and Measuring Cybercrime (PDF), published in June 2010. Much of this report is given over to the difficulty of such definitions; Sommer, who participated in the forum, argued that we shouldn't worry about the means of commission - a crime is a crime. More recently - perhaps a month ago - Sommer teamed up with the OII's Ian Brown to publish a report for an OECD project on future global shocks, Reducing Systemic Cybersecurity Risk (PDF). The authors' conclusion: "very few single cyber-related events have the capacity to cause a global shock". This report also includes considerable discussion of cybercrime in assessing whether "cyberwarfare" is a genuine global threat. But the larger point about both these reports is that they disclose their methodology in detail.

And as a result, they make much more modest and measured claims, which is one reason that critics have looked at the source of the OCSIA/Detica report - BAE - and argued that the numbers are inflated and the focus largely limited to things that fit BAE's business interests (that is, IP theft and espionage; the usual demon, abuse of children, is left untouched).

The big risk here is that this report will be used in determining how policing resources are allocated.

"One of the most important things we can do is educate the public," says Sommer. "Not only about how to protect themselves but to ensure they don't leave their computers open to be formed into botnets. I am concerned that the effect of all these hugely military organizations lobbying for funding is that in the process things like Get Safe Online will suffer."

There's a broader point that begins with a personal nitpick. On page four, the report says this: "...the seeds of criminality planted by the first computer hackers 20 years ago." Leaving aside the even smaller nitpick that the *real*, original computer hackers, who built things and spent their enormous cleverness getting things to work, date to 40 and 50 years ago, it is utterly unfair to compare today's cybercrime to the (mostly) teenaged hackers of 1990, who spent their Saturday nights in their bedrooms war-dialling sites and trying out passwords. They were the computer equivalent of joy-riders, caused little harm, and were so disproportionately the targets of freaked-out, uncomprehending law enforcement that the Electronic Frontier Foundation was founded to bring some sanity to the situation. Today's cybercrime underground is composed of professional criminals who operate in an organized and methodical way. There is no more valid comparison between the two than there is between Duke Nukem and al-Qaeda.

One is not a gateway to the other - but the idea that criminals would learn computer techniques and organized crime would become active online was repeatedly used as justification for anti-society legislation from cryptographic key escrow to data retention and other surveillance. The biggest risk of a report like this is that it will be used as justification for those wrong-headed policies rather than, as it might more rightfully be, as evidence of the failure of no fewer than five British governments to plan ahead on our behalf.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

January 21, 2011

Fogged

The Reform Club, I read on its Web site, was founded as a counterweight to the Carlton Club, where conservatives liked to meet and plot away from public scrutiny. To most of us, it's the club where Phileas Fogg made and won his bet that he could travel around the world in 80 days, no small feat in 1872.

On Wednesday, the club played host to a load of people who don't usually talk to each other much because they come at issues of privacy from such different angles. Cityforum, the event's organizer, pulled together representatives from many parts of civil society, government security, and corporate and government researchers.

The key question: what trade-offs are people willing to make between security and privacy? Or between security and civil liberties? Or is "trade-off" the right paradigm? It was good to hear multiple people saying that the "zero-sum" attitude is losing ground to "proportionate". That is, the debate is moving on from viewing privacy and civil liberties as things we must trade away if we want to be secure to weighing the size of the threat against the size of the intrusion. It's clear to all, for example, that one thing that's disproportionate is local councils' usage of the anti-terrorism aspects of the Regulation of Investigatory Powers Act to check whether householders are putting out their garbage for collection on the wrong day.

It was when the topic of the social value of privacy was raised that it occurred to me that probably the closest model to what people really want lay in the magnificent building all around us. The gentleman's club offered a social network restricted to "the right kind of people" - that is, people enough like you that they would welcome your fellow membership and treat you as you would wish to be treated. Within the confines of the club, a member like Fogg, who spent all day every day there, would have had, I imagine, little privacy from the other members or, especially, from the club staff, whose job it was to know what his favorite drink was and where and when he liked it served. But the club afforded members considerable protection from the outside world. Pause to imagine what Facebook would be like if the interface required each would-be addition to your friends list to be proposed and seconded and incomers could be black-balled by the people already on your list.

This sort of web of trust is the structure the cryptography software PGP relies on for authentication: when you generate your public key, you are supposed to have it signed by as many people as you can. Whenever someone wants to verify the key, they can look at the list of who has signed it for someone they themselves know and trust. The big question with such a structure is how you make managing it scale to a large population. Things are a lot easier when it's just a small, relatively homogeneous group you have to deal with. And, I suppose, when you have staff to support the entire enterprise.
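The basic check is simple enough to sketch. The toy fragment below (Python, with invented names) just asks whether any of a key's signers is already someone you trust; real PGP weighs signatures, trust levels, and key validity far more carefully, so treat this as an illustration of the idea rather than the algorithm.

```python
# Toy illustration of the web-of-trust check described above. All names invented.
my_trusted_signers = {"alice", "bob", "carol"}   # keys I have verified in person

# A published key: its owner plus the people who signed it at key-signing time.
candidate_key = {
    "owner": "dave",
    "signed_by": {"carol", "erin"},
}

def key_vouched_for(key, trusted_signers):
    """Accept the key if at least one of its signers is someone I already trust."""
    return bool(key["signed_by"] & trusted_signers)

print(key_vouched_for(candidate_key, my_trusted_signers))  # True: carol vouched for dave
```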

We talk a lot about the risks of posting too much information to things like Facebook, but that may not be its biggest issue. Just as traffic data can be more revealing than the content of messages, complex social linkages make it impossible to anonymize databases: who your friends are may be more revealing than your interactions with them. As governments and corporations talk more and more about making "anonymized" data available for research use, this will be an increasingly large issue. An example: an little-known incident in 2005, when the database of a month's worth of UK telephone calls was exported to the US with individuals' phone numbers hashed to "anonymize" them. An interesting technological fix comes from Microsoft' in the notion of differential privacy, a system for protecting databases both against current re-identification and attacks with external data in the future. The catch, if it is one, is that you must assign to your database a sort of query budget in advance - and when it's used up you must burn the database because it can no longer be protected.
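To make the query-budget idea concrete, here is a minimal sketch (Python, with illustrative parameter values; it is not Microsoft's implementation) of a counting query answered with Laplace noise, where every answer spends part of a fixed epsilon budget and the curator refuses to answer anything more once the budget is gone:

```python
import random

class PrivateCounter:
    """Answers counting queries with Laplace noise and a finite privacy budget."""

    def __init__(self, records, total_budget=1.0):
        self.records = records
        self.remaining = total_budget   # the query budget described above

    def noisy_count(self, predicate, epsilon=0.25):
        if epsilon > self.remaining:
            raise RuntimeError("budget exhausted: no further queries can be answered safely")
        self.remaining -= epsilon
        true_count = sum(1 for r in self.records if predicate(r))
        # The difference of two exponentials with rate epsilon is Laplace noise with
        # scale 1/epsilon -- enough for a counting query, which has sensitivity 1.
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        return true_count + noise

db = PrivateCounter([{"age": a} for a in (23, 35, 41, 58, 62)])
print(db.noisy_count(lambda r: r["age"] > 40))   # roughly 3, plus noise
```

With the figures above the database answers four queries and then shuts up - the software equivalent of burning the database.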

We do know one helpful thing: what price club members are willing to pay for the services their club provides. Public opinion polls are a crude tool for measuring what privacy intrusions people will actually put up with in their daily lives. A study by Rand Europe released late last year attempted to examine such things by framing them in economic terms. The good news is they found that you'd have to pay people £19 to get them to agree to provide a DNA sample to include in their passport. The weird news is that people would pay £7 to include their fingerprints. You have to ask: what pitch could Rand possibly have made that would make this seem worth even one penny to anyone?

Hm. Fingerprints in my passport or a walk across a beautiful, mosaic floor to a fine meal in a room with Corinthian columns, 25-foot walls of books, and a staff member who politely fails to notice that I have not quite conformed to the dress code? I know which is worth paying for if you can afford it.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

January 7, 2011

Scanning the TSA

There are, Bruce Schneier said yesterday at the Electronic Privacy Information Center mini-conference on the TSA (video should be up soon), four reasons why airport security deserves special attention, even though it directly affects a minority of the population. First: planes are a favorite terrorist target. Second: they have unique failure characteristics - that is, the plane crashes and everybody dies. Third: airlines are national symbols. Fourth: planes fly to countries where terrorists are.

There's a fifth he didn't mention but that Georgetown lawyer Pablo Molina and We Won't Fly founder James Babb did: TSAism is spreading. Random bag searches on the DC Metro and the New York subways. The TSA talking about expanding its reach to shopping malls and hotels. And something I found truly offensive, giant LED signs posted along the Maryland highways announcing that if you see anything suspicious you should call the (toll-free) number below. Do I feel safer now? No, and not just because at least one of the incendiary devices sent to Maryland state offices yesterday apparently contained a note complaining about those very signs.

Without the sign, if you saw someone heaving stones at the cars you'd call the police. With it, you peer nervously at the truck in front of you. Does that driver look trustworthy? This is, Schneier said, counter-productive because what people report under that sort of instruction is "different, not suspicious".

But the bigger flaw is cover-your-ass backward thinking. If someone tries to bomb a plane with explosives in a printer cartridge, missing a later attempt using the exact same method will get you roasted for your stupidity. And so we have a ban on flying with printer cartridges over 500g and, during December, restrictions on postal mail, something probably few people in the US even knew about.

Jim Harper, a policy scholar with the Cato Institute and a member of the Department of Homeland Security's Data Privacy and Integrity Advisory Committee, outlined even more TSA expansion. There are efforts to create mobile lie detectors that measure physiological factors like eye movements and blood pressure.

Technology, Lillie Coney observed, has become "like butter - few things are not improved if you add it."

If you're someone charged with blocking terrorist attacks you can see the appeal: no one wants to be the failure who lets a bomb onto a plane. Far, far better if it's the technology that fails. And so expensive scanners roll through the nation's airports despite the expert assessment - on this occasion, from Schneier and Ed Luttwak, a senior associate with the Center for Strategic and International Studies - that the scanners are ineffective, invasive, and dangerous. As Luttwak said, the machines pull people's attention, eyes, and brains away from the most essential part of security: watching and understanding the passengers' behavior.

"[The machine] occupies center stage, inevitably," he said, "and becomes the focus of an activity - not aviation security, but the operation of a scanner."

Equally offensive in a democracy, many speakers argued, is the TSA's secrecy and lack of accountability. Even Meera Shankar, the Indian ambassador, could not get much of a response to her complaint from the TSA, Luttwak said. "God even answered Job." The agency sent no representative to this meeting, which included Congressmen, security experts, policy scholars, lawyers, and activists.

"It's the violation of the entire basis of human rights," said the Stanford and Oxford lawyer Chip Pitts around the time that the 112th Congress was opening up with a bipartisan reading of the US Constitution. "If you are treated like cattle, you lose the ability to be an autonomous agent."

As Libertarian National Committee executive director Wes Benedict said, "When libertarians and Ralph Nader agree that a program is bad, it's time for our government to listen up."

So then, what are the alternatives to spending - so far, in the history of the Department of Homeland Security, since 2001 - $360 billion, not including the lost productivity and opportunity costs to the US's 100 million flyers?

Well, first of all, stop being weenies. The number of speakers who reminded us that the US was founded by risk-takers was remarkable. More people, Schneier noted, are killed in cars every month than died on 9/11. Nothing, Ralph Nader said, is spent on the 58,000 Americans who die in workplace accidents every year or the many thousands more who are killed by pollution or medical malpractice.

"We need a comprehensive valuation of how to deploy resources in a rational manner that will be effective, minimally invasive, efficient, and obey the Constitution and federal law," Nader said

So: dogs are better at detecting explosives than scanners. Intelligent profiling can whittle down the mass of suspects to a more manageable group than "everyone" in a giant game of airport werewolf. Instead, at the moment we have magical thinking, always protecting ourselves from the last attack.

"We're constantly preparing for the rematch," said Lillie Coney. "There is no rematch, only tomorrow and the next day." She was talking as much about Katrina and New Orleans as 9/11: there will always, she said, be some disaster, and the best help in those situations is going to come from individuals and the people around them. Be prepared: life is risky.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

December 24, 2010

Random acts of security

When I was in my 20s in the 1970s, I spent a lot of time criss-crossing the US by car. One of the great things about it, as I said to a friend last week, was the feeling of ownership that gave me wherever I was: waking up under the giant blue sky in Albuquerque, following the Red River from Fargo to Grand Forks, or heading down the last, dull hour of New York State Thruway to my actual home, Ithaca, NY, it was all part of my personal backyard. This, I thought many times, is my country!

This year's movie (and last year's novel) Up in the Air highlighted the fact that the world's most frequent flyers feel the same way about airports. When you've traversed the same airports so many times that you've developed a routine it's hard not to feel as smug as George Clooney's character when some disorganized person forgets to take off her watch before going through the metal detector. You, practiced and expert, slide through smoothly without missing a beat. The check-in desk staff and airline club personnel ask how you've been. You sit in your familiar seat on the plane. You even know the exact moment in the staff routine to wander back to the galley and ask for a mid-flight cup of tea.

Your enemy in this comfortable world is airport security, which introduces each flight by putting you back in your place as an interloper.

Our equivalent back then was the Canadian border, which we crossed in quite isolated places sometimes. The border highlighted a basic fact of human life: people get bored. At the border crossing between Grand Forks, ND and Winnipeg, Manitoba, for example, the guards would keep you talking until the next car hove into view. Sometimes that was one minute, sometimes 15.

We - other professional travelers and I - had a few other observations. If you give people a shiny, new toy they will use it, just for the novelty. One day when I drove through Lewiston-Queenston they had drug-sniffing dogs on hand to run through and around the cars stopped for secondary screening. Fun! I was coming back from a folk festival in a pickup truck with a camper on the back, so of course I was pulled over. Duh: what professional traveler who crosses the border 12 times a year risks having drugs in their car?

Cut to about a week ago, at Memphis airport. It was 10am on a Saturday, and the traffic approaching the security checkpoint was very thin. The whole-body image scanners - expensive, new, the latest in cover-your-ass-ness - are in theory only for secondary screening: you go through them if you alarm the metal detectors or are randomly selected.

How does that work? When there's little traffic everyone goes through the scanner. For the record, I opted out and was given an absolutely professional and courteous pat-down, in contrast to the groping reports in the media for the last month. Yes: felt around under my waistband and hairline. No: groping. You've got to love the Net's many charming inhabitants: when I posted this report to a frequent flyer forum a poster hazarded that I was probably old and ugly.

My own theory is simply that it was early in the day, and everyone was rested and fresh and hadn't been sworn at a whole lot yet. So no one was feeling stressed out or put-upon by a load of uppity, obnoxious passengers.

It seems clear, however, that if you wanted to navigate security successfully carrying items that are typically unwanted on a flight, your strategy for reducing the odds of attracting extra scrutiny would be fairly simple, although the exact opposite of what experienced (professional) travelers are in the habit of doing:

- Choose a time when it's extremely crowded. Scanners are slower than metal detectors, so the more people there are the smaller the percentage going through them. (Or study the latest in scanner-defeating explosives fashions.)

- Be average and nondescript, someone people don't notice particularly or feel disposed to harass when they're in a bad mood. Don't be a cute, hot young woman; don't be a big, fat, hulking guy; don't wear clothes that draw the eye: expensive designer fashions, underwear, Speedos, a nun's habit (who knows what that could hide? and anyway isn't prurient curiosity about what could be under there a thing?).

- Don't look rich, powerful, special, or attitudinous. The TSA is like a giant replication of Stanley Milgram's experiment. Who's the most fun to roll over? The business mogul or the guy just like you who works in a call center? The guy with the video crew spoiling for a fight, or the guy who treats you like a servant? The sexy young woman who spurned you in high school or the crabby older woman like your mean second-grade teacher? Or the wheelchair-bound or medically challenged who just plain make you uncomfortable?

- When you get in line, make sure you're behind one or more of the above eye-catching passengers.

Note to TSA: you think the terrorists can't figure this stuff out, too? The terrorist will be the last guy your agents will pick for closer scrutiny.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 26, 2010

Like, unlike

Some years back, the essayist and former software engineer Ellen Ullman wrote about the tendency of computer systems to infect their owners. The particular infection she covered in Close to the Machine: Technophilia and Its Discontents was databases. Time after time, she saw good, well-meaning people commission a database to help staff or clients, and then begin to use it to monitor those they originally intended to help. Why? Well, because they *can*.

I thought - and think - that Ullman was onto something important there, but that this facet of human nature is not limited to computers and databases. Stanley Milgram's 1961 experiments showed that humans under the influence of apparent authority will obey instructions to administer treatment that outside of such a framework they would consider abhorrent. This seems to me sufficient answer to Roger Ebert's comment that no TSA agent has yet refused to perform the "enhanced pat-down", even on a child.

It would almost be better if the people running the NHS Choices Web site had been infected with the surveillance bug because they would be simply wrong. Instead, the NHS is more complicatedly wrong: it has taken the weird decision that what we all want is to share with our Facebook friends the news that we have just looked at the page on gonorrhea. Or, given the well-documented privacy issues with Facebook's rapid colonization of the Web via the "Like" button, allow Facebook to track our every move whether we're logged in or not.

I can only think of two possibilities for the reasoning behind this. One is that NHS managers have little concept of the difference between their site, intended to provide patient information and guidance, and that of a media organization needing advertising to stay afloat. It's one of the truisms of new technologies that they infiltrate the workplace through the medium of people who already use them: email, instant messaging, latterly social networks. So maybe they think that because they love Facebook the rest of us must, too. My other thought is that NHS managers think this is what we want because their grandkids have insisted they get onto Facebook, where they now occupy their off-hours hitting the "like" button and poking each other and think this means they're modern.

There's the issue Tim Berners-Lee has raised, that Facebook and other walled gardens are dividing the Net up into incompatible silos. The much worse problem, at least for public services and we who must use them, is the insidiously spreading assumption that if a new technology is popular it must be used no matter what the context. The effect is about as compelling as a TSA agent offering you a lollipop after your pat-down.

Most likely, the decision to deploy the "Like" button started with the simple, human desire for feedback. At some point everyone who runs a Web site wonders what parts of the site get read the most...and then by whom...and then what else they read. It's obviously the right approach if you're a media organization trying to serve your readers better. It's a ludicrously mismatched approach if you're the NHS because your raison d'être is not to be popular but to provide the public with the services they need at the most vulnerable times in their lives. Your page on rare lymphomas is not less valuable or important just because it's accessed by fewer people than the pages on STDs, nor are you actually going to derive particularly useful medical research data from finding that people who read about lymphoma also often read pages on osteoporosis. But it's easy, quick, and free to install Google Analytics or Facebook Like, and so people do it without thought.

Both of these incidents have also exposed once and for all the limited value of privacy policies. For one thing, a patient in distress is not going to take time out from bleeding to read the fine print ("when you visit pages on our site that display a Facebook Like button, Facebook will collect information about your visit") or check for open, logged-in browser windows. The NHS wants its sites to be trusted; but that means more than simply being medically accurate; it requires implementing confidentiality as well. The NHS's privacy policy is meaningless if you need to be a technical expert to exercise any choice. Similarly, who cares what the TSA's privacy policy says if the simple desire to spend Christmas with your family requires you to submit to whatever level of intimate inspection the agent on the ground that day feels like dishing out? What privacy policy makes up for being left covered in urine spilled from your roughly handled urostomy bag? Milgram moments, both.

It's at this point that we need our politicians to act in our interests, because the thinking has to change at the top level.

Meantime, if you're traveling in the US this Christmas, the ACLU and Edward Hasbrouck have handy guides to your rights. But pragmatically, if you do get patted down and really want to make your flight, it seems like your best policy is to lie back and think of the country of your choice.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 12, 2010

Just between ourselves

It is, I'm sure, pure coincidence that a New York revival of Vaclav Havel's wonderfully funny and sad 1965 play The Memorandum was launched while the judge was considering the Paul Chambers "Twitter joke trial" case. "Bureaucracy gone mad," they're billing the play, and they're right, but what that slogan omits is that the bureaucracy in question has gone mad because most of its members don't care and the one who does has been shut out of understanding what's going on. A new language, Ptydepe, has been secretly invented and introduced as a power grab by an underling claiming it will improve the efficiency of intra-office communications. The hero only discovers the shift when he receives a memorandum written in the new language and can't get it translated due to carefully designed circular rules. When these are abruptly changed the translated memorandum restores him to his original position.

It is one of the salient characteristics of Ptydepe that it has a different word for every nuance of the characters' natural language - Czech in the original, but of course English in the translation I read. Ptydepe didn't work for the organization in the play because it was too complicated for anyone to learn, but perhaps something like it that removes all doubt about nuance and context would assist older judges in making sense of modern social interactions over services such as Twitter. Clearly any understanding of how people talk and make casual jokes was completely lacking yesterday when Judge Jacqueline Davies upheld the conviction of Paul Chambers in a Doncaster court.

Chambers' crime, if you blinked and missed those 140 characters, was to post a frustrated message about snowbound Doncaster airport: "Crap! Robin Hood airport is closed. You've got a week and a bit to get your shit together otherwise I'm blowing the airport sky high!" Everyone along the chain of accountability up to the Crown Prosecution Service - the airport duty manager, the airport's security personnel, the Doncaster police - seems to have understood he was venting harmlessly. And yet prosecution proceeded and led, in May, to a conviction that was widely criticized both for its lack of understanding of new media and for its failure to take Chambers' lack of malicious intent into account.

By now, everyone has been thoroughly schooled in the notion that it is unwise to make jokes about bombs, plane crashes, knives, terrorists, or security theater - when you're in an airport hoping to get on a plane. No one thinks any such wartime restraint need apply in a pub or its modern equivalent, the Twitter/Facebook/online forum circle of friends. I particularly like Heresy Corner's complaint that the judgement makes it illegal to be English.

Anyone familiar with online writing style immediately and correctly reads Chambers' Tweet for what it was: a perhaps ill-conceived expression of frustration among friends that happens to also be readable (and searchable) by the rest of the world. By all accounts, the judge seems to have read it as if it were a deliberately written personal telegram sent to the head of airport security. The kind of expert explanation on offer in this open letter apparently failed to reach her.

The whole thing is a perfect example of the growing danger of our data-mining era: that casual remarks are indelibly stored and can be taken out of context to give an utterly false picture. One of the consequences of the Internet's fundamental characteristic of allowing the like-minded and like-behaved to find each other is that tiny subcultures form all over the place, each with its own set of social norms and community standards. Of course, niche subcultures have always existed - probably every local pub had its own set of tropes that were well-known to and well-understood by the regulars. But here's the thing they weren't: permanently visible to outsiders. A regular who, for example, chose to routinely indicate his departure for the Gents with the statement, "I'm going out to piss on the church next door" could be well-known in context never to do any such thing. But if all outsiders saw was a ten-second clip of that statement and the others' relaxed reaction that had been posted to YouTube they might legitimately assume that pub was a shocking hotbed of anti-religious slobs. Context is everything.

The good news is that the people on the ground whose job it was to protect the airport read the message, understood it correctly, and did not overreact. The bad news is that when the CPS and courts did not follow their lead it opened up a number of possibilities for the future, all bad. One, as so many have said, is that anyone who now posts anything online while drunk, angry, stupid, or sloppy-fingered is at risk of prosecution - with the consequence of wasting huge amounts of police and judicial time that would be better spent spotting and stopping actual terrorists. The other is that everyone up the chain felt required to cover their ass in case they were wrong.

Chambers still may appeal to the High Court; Stephen Fry is offering to pay his fine (the Yorkshire Post puts his legal bill at £3,000), and there's a fund accepting donations.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 23, 2010

An affair to remember

Politicians change; policies remain the same. Or, if they don't, they return like the monsters in horror movies that end with the epigraph, "It's still out there..."

Cut to 1994, my first outing to the Computers, Freedom, and Privacy conference. I saw: passionate discussions about the right to strong cryptography. The counterargument from government and law enforcement and security service types was that yes, strong cryptography was a fine and excellent thing at protecting communications from prying eyes and for that very reason we needed key escrow to ensure that bad people couldn't say evil things to each other in perfect secrecy. The listing of organized crime, terrorists, drug dealers, and pedophiles as the reasons why it was vital to ensure access to cleartext became so routine that physicist Timothy May dubbed them "The Four Horsemen of the Infocalypse". Cypherpunks opposed restrictions on the use and distribution of strong crypto; government types wanted at the very least a requirement that copies of secret cryptographic keys be provided and held in escrow against the need to decrypt in case of an investigation. The US government went so far as to propose a technology of its own, complete with back door, called the Clipper chip.

Eventually, the Clipper chip was cracked by Matt Blaze, the needs of electronic commerce won out over the paranoia of the military, and restrictions on the use and export of strong crypto were removed.

Cut to 2000 and the run-up to the passage of the UK's Regulation of Investigatory Powers Act. Same Four Horsemen, same arguments. Eventually RIPA passed with the requirement that individuals disclose their cryptographic keys - but without key escrow. Note that it's just in the last couple of months that someone - a teenager - has gone to jail in the UK for the first time for refusing to disclose their key.

It is not just hype by security services seeking to evade government budget cuts to say that we now have organized cybercrime. Stuxnet rightly has scared a lot of people into recognizing the vulnerabilities of our infrastructure. And clearly we've had terrorist attacks. What we haven't had is a clear demonstration by law enforcement that encrypted communications have impeded the investigation.

A second and related strand of argument holds that communications data - that is, traffic data such as email headers and Web addresses - must be retained and stored for some lengthy period of time, again to assist law enforcement in case an investigation is needed. As the Foundation for Information Policy Research and Privacy International have consistently argued for more than ten years, such traffic data is extremely revealing. Yes, that's why law enforcement wants it; but it's also why the American Library Association has consistently opposed handing over library records. Traffic data doesn't just reveal who we talk to and care about; it also reveals what we think about. And because such information is of necessity stored without context, it can also be misleading. If you already think I'm a suspicious person, the fact that I've been reading proof-of-concept papers about future malware attacks sounds like I might be a danger to cybersociety. If you know I'm a journalist specializing in technology matters, that doesn't sound like so much of a threat.

And so to this week. The former head of the Department of Homeland Security, Michael Chertoff, at the RSA Security Conference compared today's threat of cyberattack to nuclear proliferation. The US's Secure Flight program is coming into effect, requiring airline passengers to provide personal data for the US to check 72 hours in advance (where possible). Both the US and UK security services are proposing the installation of deep packet inspection equipment at ISPs. And language in the UK government's Strategic Defence and Security Review (PDF) review has led many to believe that what's planned is the revival of the we-thought-it-was-dead Interception Modernisation Programme.

Over at Light Blue Touchpaper, Ross Anderson links many of these trends and asks if we will see a resumption of the crypto wars of the mid-1990s. I hope not; I've listened to enough quivering passion over mathematics to last an Internet lifetime.

But as he says it's hard to see one without the other. On the face of it, because the data "they" want to retain is traffic data and not content, encryption might seem irrelevant. But a number of trends are pushing people toward greater use of encryption. First and foremost is the risk of interception; many people prefer (rightly) to use secured https, SSH, or VPN connections when they're working over public wi-fi networks. Others secure their connections precisely to keep their ISP from being able to analyze their traffic. If data retention and deep packet inspection become commonplace, so will encrypted connections.

And at that point, as Anderson points out, the focus will return to long-defeated ideas like key escrow and restrictions on the use of encryption. The thought of such a revival is depressing; implementing any of them would be such a regressive step. If we're going to spend billions of pounds on the Internet infrastructure - in the UK, in the US, anywhere else - it should be spent on enhancing robustness, reliability, security, and speed, not building the technological infrastructure to enable secret, warrantless wiretapping.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 8, 2010

The zero effect

Some fifteen years ago I reviewed a book about the scary future of our dependence on computers. The concluding note: that criminals could take advantage of a zero-day exploit to disseminate a virus around the world that at a given moment would shut down all the world's computers.

I'm fairly sure I thought this was absurd and said so. Since when have we been able to get every computer to do anything on command? But there was always the scarier and less unlikely prospect that building computers into more and more everyday things would add more and more chances for code to increase the physical world's vulnerability to attack.

By any measure, the Stuxnet worm that has been dominating this week's technology news is an impressive bit of work (it may even have had a beta test). So impressive, in fact, that you imagine its marketing brochure said, like the one for the spaceship Heart of Gold in The Hitchhiker's Guide to the Galaxy, "Be the envy of other major governments."

The least speculative accounts, like those of Bruce Schneier, Business Standard, and Symantec, and that of long-time Columbia University researcher Steve Bellovin, agree on a number of important points.

First, whoever coded this worm was an extremely well-resourced organization. Symantec estimates the effort would have required five to ten people and six months - and, given that the worm is nearly bug-free, teams of managers and quality assurance folks. (Nearly bug-free: how often can you say that of any software?) In a paper he gave at Black Hat several years ago, Peter Gutmann documented the highly organized nature of the malware industry (PDF). Other security researchers have agreed: there is a flourishing ecosystem around malware that includes myriad types of specialist groups who provide all the features of other commercial sectors, up to and including customer service.

In addition, the writers were willing to use up three zero-day exploits (many reports say four, but Schneier notes that one has been identified as a previously reported vulnerability). This is expensive, in the sense that these vulnerabilities are hard to find and can only be used once (because once used, they'll be patched). You don't waste them on small stuff.

Plus, the coders were able to draw on rather specialised knowledge of the inner workings of Siemens programmable logic controller systems and gain access to the certificates needed to sign drivers. And the worm was both able to infect widely and target specifically. Interesting.

The big remaining question is what the goal was: send a message; do something to one specific, as yet unidentified, target; serve as a simple proof of concept? Whatever the purpose was, it's safe to say that this will not be the last piece of weapons-grade malware (as Bellovin calls it) to be unleashed on the world. If existing malware is any guide, future Stuxnets will be less visible, harder to find and stop, and written to more specific goals. Yesterday's teenaged bedroom hacker defacing Web pages has been replaced by financial criminals whose malware cleans other viruses off the systems it infects and steals very specifically useful data. Today's Stuxnet programmers will most likely be followed by more complex organizations with much clearer and more frightening agendas. They won't stop all the world's computers (because they'll need their own to keep running); but does that matter if they can disrupt the electrical grid and the water supply, or reroute trains and disrupt air traffic control?

Schneier notes that press reports incorrectly identified the Siemens systems Stuxnet attacked as SCADA (for Supervisory Control and Data Acquisition) rather than PLC. But that doesn't mean that SCADA systems are invulnerable: Tom Fuller, who ran the now-defunct Blindside project in 2006-2007 for the government consultancy Kable under a UK government contract, spotted the potential threats to SCADA systems as long ago as that. Post-Stuxnet, others are beginning to audit these systems and agree. An Australian audit of Victoria's water systems concluded that these are vulnerable to attack, and it seems likely many more such reports will follow.

But the point Bellovin makes that is most likely to be overlooked is this one: that building a separate "secure" network will not provide a strong defense. To be sure, we are adding new vulnerabilities to many pieces of infrastructure. Many security experts agree that those deploying wireless electrical meters and the "smart grid" have failed to understand the privacy and security issues these will raise.

The temptation to overlook Bellovin's point is going to be very strong. But the real-world equivalent is to imagine that because your home computer is on a desert island surrounded by a moat filled with alligators it can't be stolen. Whereas, the reality is that a family member or invited guest can still copy your data and make off with it or some joker can drop in by helicopter.

Computers are porous. Infrastructure security must assume that and limit the consequences.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. This blog eats all non-spam comments; I still don't know why.

September 24, 2010

Lost in a Haystack

In the late 1990s you could always tell when a newspaper had just gotten online because it would run a story about the Good Times virus.

Pause for historical detail: the Good Times virus (and its many variants) was an email hoax. An email message with the subject heading "Good Times" or, later, "Join the Crew", or "Penpal Greetings", warned recipients that opening email messages with that header would damage their computers or delete the contents of their hard drives. Some versions cited Microsoft, the FCC, or some other authority. The messages also advised recipients to forward the message to all their friends. The mass forwarding and subsequent complaints were the payload.

The point, in any case, is that the Good Times virus was the first example of mass social engineering that spread by exploiting not particularly clever psychology and a specific kind of technical ignorance. The newspaper staffers of the day were very much ordinary new users in this regard, and they would run the story thinking they were serving their readers. To their own embarrassment, of course. You'd usually see a retraction a week or two later.

Austin Heap, the progenitor of Haystack, software he claimed was devised to protect the online civil liberties of Iranian dissidents, seems more likely to have failed to understand what he was doing than to have been conducting an elaborate hoax. Either way, Haystack represents a significant leap upward in successfully taking mainstream, highly respected publications for a technical ride. Evgeny Morozov's detailed media critique underestimates the impact of the recession and staff cuts on an already endangered industry. We will likely see many more mess-equals-technology-plus-journalism stories because so few technology specialists remain in the post-recession mainstream media.

I first heard Danny O'Brien's doubts about Haystack in June, and his chief concern was simple and easily understood: no one was able to get a copy of the software to test it for flaws. For anyone who knows anything about cryptography or security, that ought to have been damning right out of the gate. The lack of such detail is why experienced technology journalists, including Bruce Schneier, generally avoided commenting on it. There is a simple principle at work here: the *only* reason to trust technology that claims to protect its users' privacy and/or security is that it has been thoroughly peer-reviewed - banged on relentlessly by the brightest and best and they have failed to find holes.

As a counter-example, let's take Phil Zimmermann's PGP, email encryption software that really has protected the lives and identities of far-flung dissidents. In 1991, when PGP first escaped onto the Net, interest in cryptography was still limited to a relatively small, though very passionate, group of people. The very first thing Zimmermann wrote in the documentation was this: why should you trust this product? Just in case readers didn't understand the importance of that question, Zimmermann elaborated, explaining how fiendishly difficult it is to write encryption software that can withstand prolonged and deliberate attacks. He was very careful not to claim that his software offered perfect security, saying only that he had chosen the best algorithms he could from the open literature. He also distributed the source code freely for review by all and sundry (who have to this day failed to find substantive weaknesses). He concludes: "Anyone who thinks they have devised an unbreakable encryption scheme either is an incredibly rare genius or is naive and inexperienced." Even the software's name played down its capabilities: Pretty Good Privacy.

When I wrote about PGP in 1993, PGP was already changing the world by up-ending international cryptography regulations, blocking mooted US legislation that would have banned the domestic use of strong cryptography, and defying patent claims. But no one, not even the most passionate cypherpunks, claimed the two-year-old software was the perfect, the only, or even the best answer to the problem of protecting privacy in the digital world. Instead, PGP was part of a wider argument taking shape in many countries over the risks and rewards of allowing civilians to have secure communications.

Now to the claims made for Haystack in its FAQ:

However, even if our methods were compromised, our users' communications would be secure. We use state-of-the-art elliptic curve cryptography to ensure that these communications cannot be read. This cryptography is strong enough that the NSA trusts it to secure top-secret data, and we consider our users' privacy to be just as important. Cryptographers refer to this property as perfect forward secrecy.

Without proper and open testing of the entire system - peer review - they could not possibly know this. The strongest cryptographic algorithm is only as good as its implementation. And even then, as Clive Robertson writes in Financial Cryptography, technology is unlikely to be a complete solution.
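For what it's worth, "forward secrecy" conventionally describes designs in which each session's key is derived from ephemeral key pairs that are thrown away afterwards, so that a later compromise of long-term keys cannot unlock recorded traffic. The sketch below (Python, using the third-party cryptography package) illustrates that general idea only; it says nothing about how, or whether, Haystack actually implemented it.

```python
# Illustration of ephemeral elliptic-curve Diffie-Hellman, the usual basis of
# forward secrecy. This is a concept sketch, not a description of Haystack.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates a fresh key pair for this session only.
alice = ec.generate_private_key(ec.SECP256R1())
bob = ec.generate_private_key(ec.SECP256R1())

# Each combines its own ephemeral private key with the other's ephemeral public key.
alice_secret = alice.exchange(ec.ECDH(), bob.public_key())
bob_secret = bob.exchange(ec.ECDH(), alice.public_key())
assert alice_secret == bob_secret

# Derive the session key, then discard the ephemeral keys; the forward-secrecy
# property depends on their never being stored anywhere.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"session").derive(alice_secret)
del alice, bob
```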

What a difference a sexy news hook makes. In 1993, the Clinton Administration's response to PGP was an FBI investigation that dogged Zimmermann for two years; in 2010, Hillary Clinton's State Department fast-tracked Haystack through the licensing requirements. Why such a happy embrace of Haystack rather than existing privacy technologies such as Freenet, Tor, or other anonymous remailers and proxies remains as a question for the reader.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

August 27, 2010

Trust the data, not the database

"We're advising people to opt out," said the GP, speaking of the Summary Care Records that are beginning to be uploaded to what is supposed to be eventually a nationwide database used by the NHS. Her reasoning goes this way. If you don't upload your data now you can always upload it later. If you do upload it now - or sit passively by while the National Health Service gets going on your particular area - and live to regret it you won't be able to get the data back out again.

You can find the form here, along with a veiled hint that you'll be missing out on something if you do opt out - like all those great offers of products and services companies always tell you you'll get if you sign up for their advertising. The Big Opt-Out Web site has other ideas.

The newish UK government's abrupt dismissal of the darling databases of last year has not dented the NHS's slightly confusing plans to put summary care records on a national system that will move control over patient data from your GP, who you probably trust to some degree, to...well, there's the big question.

In briefings for Parliamentarians conducted by the Open Rights Group in 2009, Emma Byrne, a researcher at University College, London who has studied various aspects of healthcare technology policy, commented that the SCR was not designed with any particular use case in mind. Basic questions that an ordinary person asks before every technology purchase - who needs it? for what? under what circumstances? to solve what problem? - do not have clear answers.

"Any clinician understands the benefits of being able to search a database rather than piles of paper records, but we have to do it in the right way," Fleur Fisher, the former head of ethics, science, and information for the British Medical Association said at those same briefings. Columbia University researcher Steve Bellovin, among others, has been trying to figure out what that right way might look like.

As comforting as it sounds to say that the emergency care team looking after you will be able to look up your SCR and find out that, for example, you are allergic to penicillin and peanuts, in practice that's not how stuff happens - and isn't even how stuff *should* happen. Emergency care staff look at the patient. If you're in a coma, you want the staff to run the complete set of tests, not look up in a database, see you're a diabetic and assume it's a blood sugar problem. In an emergency, you want people to do what the data tells them, not what the database tells them.

Databases have errors, we know this. (Just last week, a database helpfully moved the town I live in from Surrey to Middlesex, for reasons best known to itself. To fix it, I must write them a letter and provide documentation.) Typing and cross-matching blood drawn by you from the patient in front of you is much more likely to have you transfusing the right type of blood into the right patient.

But if the SCR isn't likely to be so much used by the emergency staff we're all told would? might? find it helpful, it still opens up much broader possibilities of abuse. It's this part of the system that the GP above was complaining about: you cannot tell who will have access or under what circumstances.

GPs do, in a sense, have a horse in this race, in that if patient data moves out of their control they have lost an important element of their function as gatekeepers. But given everything we know about how and why large government IT projects fail, surely the best approach is small, local projects that can be scaled up once they're shown to be functional and valuable. And GPs are the people at the front lines who will be the first to feel the effects of a loss of patient trust.

A similar concern has kept me from joining a study whose goals I support, intended to determine if there is a link between mobile phone use and brain cancer. The study is conducted by an ultra-respectable London university; they got my name and address from my mobile network operator. But their letter notes that participation means giving them unlimited access to my medical records for the next 25 years. I'm 56, about the age of the earliest databases, and I don't know who I'll be in 25 years. Technology is changing faster than I am. What does this decision mean?

There's no telling. Had they said I was giving them permission for five years and then would be asked to renew, I'd feel differently about it. Similarly, I'd be more likely to agree had they said that under certain conditions (being diagnosed with cancer, dying, developing brain disease) my GP would seek permission to release my records to them. But I don't like writing people blank checks, especially with so many unknowns over such a long period of time. The SCR is a blank check.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

August 6, 2010

Bride of Clipper

"It's the Clipper chip," said Ross Anderson, or more or less, "risen out of its grave trailing clanking chains and covered in slime." Anderson was talking about the National Strategy for Trusted Identities in Cyberspace, a plan hatched in the US and announced by cybersecurity czar Howard Schmidt in June.

The Clipper chip was the net.war in progress when I went to my first Computers, Freedom, and Privacy conference, the 1994 edition, held in Chicago. The idea behind Clipper was kind of cute: the government, in the form of the NSA, had devised a cryptographic chip that could be installed in any telecommunications device to encrypt and decrypt any communications it transmitted or received. The catch: the government would retain a master key to allow it to decrypt anything it wanted whenever it felt the need. Privacy advocates and civil libertarians and security experts joined to fight a battle royal against its adoption as a government standard. We'll never know how that would have come out because while passions were still rising a funny thing happened: cryptographer Matt Blaze discovered he could bypass the government's back door (PDF) and use the thing to send really encrypted communications. End of Clipper chip.

At least, as such.

The most important element of the Clipper chip, however - key escrow - stayed with us a while longer. It means what it sounds like: depositing a copy of your cryptographic key, which is supposed to be kept secret, with an authority. During the 1990s run of fights over key escrow (the US and UK governments wanted it; technical experts, civil libertarians, and privacy advocates all thought it was a terrible idea) such authorities were referred to as "trusted third parties" (TTPs). At one event Privacy International organised to discuss the subject, government representatives made it clear their idea of TTPs was banks. They seemed astonished to discover that in fact people don't trust their banks that much. By the time the UK's Regulation of Investigatory Powers Act was passed in 2000, key escrow had been eliminated.

But it is this very element - TTPs and key escrow - that is clanking along to drip slime on the NSTIC. The proposals are, of course, still quite vague, as the Electronic Frontier Foundation has pointed out. But the proposals do talk of "trusted digital identities" and "identity providers" who may be from the public or private sectors. They talk less, as the Center for Democracy and Technology has pointed out, about the kind of careful user-centric, role-specific, transactional authentication that experts like Jan Camenisch and Stefan Brands have long advocated. (Since I did that 2007 interview with him, Brands' company, Credentica, has been bought by Microsoft and transformed into its new authentication technology, U-Prove.) Have an identity ecosystem, by all means, but the key to winning public trust - the most essential element of any such system - will be ensuring that identity providers are not devised as Medium-sized Brothers-by-proxy.

Blaze said at the time that the Feds were pretty grown-up about the whole thing. Still, I suppose it was predictable that it would reappear. Shortly after the 9/11 attacks Jack Straw, then foreign minister, called those of us who opposed key escrow in the 1990s "very naïve". The rage over that kicked off the first net.wars column.

The fact remains that if you're a government and you want access to people's communications and those people encrypt those communications there are only two approaches available to you. One: ban the technology. Two: install systems that let you intercept and decode the communications at will. Both approaches are suddenly vigorously on display with respect to Blackberry devices, which offer the most secure mobile email communications we have (which is why businesses and governments love them so much for their own use).

India wants to take the second approach, but will settle for the first if Research in Motion doesn't install a server in India, where it can be "safely" monitored. The UAE, as everyone heard this week, wants to ban it starting on October 11. (I was on Newsnight Tuesday to talk about this with Kirsty Wark and Alan West.)

No one, not CDT, PI, or EFF, not even me, disputes that there are cases where intercepting and reading communications - wiretapping - is necessary in the interest of protecting innocent lives. But what key escrow and its latter variants enables, as Susan Landau, a security researcher and co-author of Privacy on the Line: The Politics of Wiretapping and Encryption, has noted, is covert wiretapping. Or, choose your own favorite adjective: covert, warrantless, secret, unauthorized, illegal... It would be wonderful to be able to think that all law enforcement heroes are noble, honorable, and incapable of abusing the power we give them. But history says otherwise: where there is no oversight, abuse follows. Judicial oversight of wiretapping requests is our bulwark against mass surveillance.

CDT, EFF, and others are collecting ideas for improving NSTIC, starting with extending the period for public comments, which was distressingly short (are we seeing a pattern develop here?). Go throw some words at the problem.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 18, 2010

Things I learned at this year's CFP

- There is a bill in front of Congress to outlaw the sale of anonymous prepaid SIMs. The goal seems to be some kind of fraud and crime prevention. But, as Ed Hasbrouck points out, the principal people likely to be affected are foreign tourists and the Web sites that sell prepaid SIMs to them.

- Robots are getting near enough in researchers' minds for them to be spending significant amounts of time considering the legal and ethical consequences in real life - not in Asimov's fictional world, where you could program in three safety laws and your job was done. Ryan Calo points us at the work of Stanford student Victoria Groom on human-robot interaction. Her dissertation research, not yet on the site, discovered that humans allocate responsibility for success and failure proportionately according to how anthropomorphic the robot is.

- More than 24 percent of tweets - and rising sharply - are sent by automated accounts, according to Miranda Mowbray at HP labs. Her survey found all sorts of strange bots: things that constantly update the time, send stock quotes, tell jokes, the tea bot that retweets every mention of tea...

- Google's Kent Walker, the 1997 CFP chair, believes that censorship is as big a threat to democracy as terrorism, and says that open architectures and free expression are good for democracy - and coincidentally also good for Google's business.

- Microsoft's chief privacy strategist, Peter Cullen, says companies must lead in privacy to lead in cloud computing. Not coincidentally, others at the conference noted that US companies are losing business to Europeans in cloud computing because EU law prohibits the export of personal data to the US, where data protection is insufficient.

- It is in fact possible to provide wireless that works at a technical conference. And good food!

- The Facebook Effect is changing the attitude of other companies about user privacy. Lauren Gelman, who helps new companies with privacy issues, noted that because start-ups all see Facebook's success and want to be the next 400 million-user environment, there was a strong temptation to emulate Facebook's behavior. Now, with the angry cries mounting from consumers, she has to spend less effort convincing them of the pushback they will get if they change their policies and defy users' expectations. Even so, it's important to ensure that start-ups include privacy in their budgets so that it doesn't become an afterthought. In this respect, she makes me realize, privacy in 2010 is at the stage that usability was in the early 1990s.

- All new program launches come through the office of the director of Yahoo!'s business and human rights program, Ebele Okabi-Harris. "It's very easy for the press to focus on China and particular countries - for example, Australia last year, with national filtering," she said, "but for us as a company it's important to have a structure around this because it's not specific to any one region." It is, she added later, a "global problem".

- We should continue to be very worried about the database state because the ID cards repeal act continues the trend toward data sharing among government departments and agencies, according to Christina Zaba from No2ID.

- Information brokers and aggregators, operating behind the scenes, are amassing incredible amounts of detail about Americans, and it can require a great deal of work to remove one's information from these systems. The main customers of these systems are private investigators, debt collectors, media, law firms, and law enforcement. The Privacy Rights Clearinghouse sees many disturbing cases, as Beth Givens outlined, as does Pam Dixon's World Privacy Forum.

- I always knew - or thought I knew - that the word "robot" was not coined by Asimov but by Karel Capek for his play R.U.R. (for "Rossum's Universal Robots"; coincidentally, I also know that playing a robot in it was Michael Caine's first acting job). But Twitterers tell me that this isn't quite right. The word is derived from the Czech word "robota", "compulsory work for a feudal landlord". And it was actually coined by Capek's older brother, Josef.

- There will be new privacy threats emerging from automated vehicles, other robots, and voicemail transcription services, sooner rather than later.

- Studying the inner workings of an organization like the International Civil Aviation Organization is truly difficult because the time scales - ten years to get from technical proposals to mandated standard, which is when the public becomes aware of them - are a profound mismatch for the attention span of media and those who fund NGOs. Anyone who feels like funding an observer to represent civil society at ICAO should get in touch with Edward Hasbrouck.

- A lot of our cybersecurity problems could be solved by better technology.

- Lillie Coney has a great description of deceptive voting practices designed to disenfranchise the opposition: "It's game theory run amok!"

- We should not confuse insecure networks (as in vulnerable computers and flawed software) with unsecured networks (as in open wi-fi).

- Next year's conference chairs are EPIC's Lillie Coney and Jules Polonetsky. It will be in Washington, DC, probably the second or third week in June. Be there!

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 30, 2010

Child's play

In the TV show The West Wing (Season 6, Episode 17, "A Good Day") young teens tackle the president: why shouldn't they have the right to vote? There's probably no chance, but they made their point: as a society we trust kids very little and often fail to take them or their interests seriously.

That's why it was so refreshing to read in 2008's Byron Review the recommendation that we should consult and listen to children in devising programs to ensure their safety online. Byron made several thoughtful, intelligent analogies: we supervise as kids learn to cross streets, we post warning signs at swimming pools but also teach them to swim.

She also, more controversially, recommended that all computers sold for home use in the UK should have Kitemarked parental control software "which takes parents through clear prompts and explanations to help set it up and that ISPs offer and advertise this prominently when users set up their connection."

The general market has not adopted this recommendation; but it has been implemented with respect to the free laptops issued to low-income families under Becta's £300 million Home Access Laptop scheme, announced last year as part of efforts to bridge the digital divide. The recipients - 70,000 to 80,000 so far - have a choice of supplier, of ISP, and of hardware make and model. However, the laptops must meet a set of functional technical specifications, one of which is compliance with PAS 74:2008, the British Internet safety standard. That means anti-virus, access control, and filtering software: NetIntelligence.

Naturally, there are complaints; these fall precisely in line with the general problems with filtering software, which have changed little since 1996, when the passage of the Communications Decency Act inspired 17-year-old Bennett Haselton to start Peacefire to educate kids about the inner workings of blocking software - and how to bypass it. Briefly:

1. Kids are often better at figuring out ways around the filters than their parents are, giving parents a false sense of security.

2. Filtering software can't block everything parents expect it to, adding to that false sense of security.

3. Filtering software is typically overbroad, becoming a vehicle for censorship.

4. There is little or no accountability about what is blocked or the criteria for inclusion.

This case looks similar - at first. Various reports claim that as delivered NetIntelligence blocks social networking sites and even Google and Wikipedia, as well as Google's Chrome browser because the way Chrome installs allows the user to bypass the filters.

NetIntelligence says the Chrome issue is only temporary; the company expects a fix within three weeks. Marc Kelly, the company's channel manager, also notes that the laptops that were blocking sites like Google and Wikipedia were misconfigured by the supplier. "It was a manufacturer and delivery problem," he says; once the software has been reinstalled correctly, "The product does not block anything you do not want it to." Other technical support issues - trouble finding the password, for example - are arguably typical of new users struggling with unfamiliar software and inadequate technical support from their retailer.

Both Becta and NetIntelligence stress that parents can reconfigure or uninstall the software, even if some are confused about how to do it. First, they must activate the software by typing in the code the vendor provides; that gets them password access to change the blocking list or uninstall the software.

The list of blocked sites, Kelly says, comes from several sources: the Internet Watch Foundation's list and similar lists from other countries; a manual assessment team also reviews sites. Sites that feel they are wrongly blocked should email NetIntelligence support. The company has, he adds, tried to make it easier for parents to implement the policies they want; originally social networks were not broken out into their own category. Now, they are easily unblocked by clicking one button.

The simple reaction is to denounce filtering software and all who sail in her - censorship! - but the Internet is arguably now more complicated than that. Research Becta conducted on the pilot group found that 70 percent of the parents surveyed felt that the built-in safety features were very important. Even the most technically advanced of parents struggle to balance their legitimate concerns in protecting their children with the complex reality of their children's lives.

For example: will what today's children post to social networks damage their chances of entry into a good university or a job? What will they find? Not just pornography and hate speech; some parents object to creationist sites, some to scary science fiction, others to Fox News. Yesterday's harmless flame wars are today's more serious cyber-bullying and online harassment. We must teach kids to be more resilient, Byron said; but even then kids vary widely in their grasp of social cues, common sense, emotional make-up, and technical aptitude. Even experts struggle with these issues.

"We are progressively adding more information for parents to help them," says Kelly. "We want the people to keep the product at the end. We don't want them to just uninstall it - we want them to understand it and set the policies up the way they want them." Like all of us, Kelly thinks the ideal is for parents to engage with their children on these issues, "But those are the rules that have come along, and we're doing the best we can."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 23, 2010

Death, where is thy password?

When last seen, our new widow was wrestling with her late husband's password, unable to get into the Microsoft Money files he used to manage their finances or indeed his desktop computer in general. Hours of effort from the best geekish minds (we are grateful to Drew and Peter) led nowhere. Eventually, we paid £199 to Elcomsoft (the company whose employee Dmitry Sklyarov was arrested in 2001 at Defcon for cracking Adobe eBook files) for its Advanced Office Password Recovery software and it found the password after about 18 hours of constrained brute-force attempts. That password, doctored in line with the security hint my friend had left behind, unlocked his desktop.

My widow had only one digit wrong in that password, by the way. Computers have no concept of "close enough".
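For the curious, here is a toy sketch of the kind of constrained search involved - emphatically not Elcomsoft's actual algorithm, just an illustration of how a near-miss guess can seed a search a human would never have the patience for and a computer never tires of. The passwords and the checking function are invented for the example.

```python
import string

def single_substitution_variants(guess, alphabet=string.ascii_letters + string.digits):
    """Yield every candidate that differs from `guess` in exactly one position."""
    for i, ch in enumerate(guess):
        for replacement in alphabet:
            if replacement != ch:
                yield guess[:i] + replacement + guess[i + 1:]

def crack(check_password, guess):
    """Try the guess itself, then every one-character variant of it.
    `check_password` stands in for whatever verifies a candidate."""
    if check_password(guess):
        return guess
    for candidate in single_substitution_variants(guess):
        if check_password(candidate):
            return candidate
    return None

if __name__ == "__main__":
    actual = "Tr0ub4dor3"      # the password nobody wrote down (made up)
    guessed = "Tr0ub4dor8"     # one digit wrong: close, but not close enough
    print(crack(lambda c: c == actual, guessed))   # finds Tr0ub4dor3 quickly
```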

But the fun was only beginning. It is a rarely discussed phenomenon of modern life that when someone close to you dies, alongside the memories and any property, real and personal, they bequeath you a full-time job. The best-arranged, most orderly financial affairs do not transfer themselves gently after the dying of the light.

For one thing, it takes endless phone calls to close, open, or change the names on accounts. Say an average middle-class American: maybe five credit card accounts, two bank accounts, a brokerage account, a couple of IRA accounts, and a 401(K) plan per job held? Plus mortgage, utilities (gas, electric, broadband, cellphone, TV cable), government agencies (motor vehicles, Social Security, federal and state tax), plus magazine/product/service subscriptions. Shall we guess 40 to 50 accounts?

All these organizations are, of course, aware that people die, and they have Procedures. What varies massively (from eavesdropping on some of those phone calls) is the behavior of the customer service people you have to talk to. In a way, this makes sense: customer service representatives are people, too (sometimes), and if you've ever had to tell someone that your <insert close relative here> just died unexpectedly you'll know that the reactions run the gamut from embarrassed to unexpectedly kind to abrupt to uncomfortably inquisitive to (occasionally) angry. That customer service rep isn't going to be any different. Unfortunately. Because you, the customer, are making your 11th call of the day, and it isn't getting any easier or more fun.

A desire to automate this sort of thing was often the reason given for the UK to bring in an ID card. Report once, update everywhere. It sounds wonderful (assuming they've got the right dead person). Although my suspicion is that what organizations do with the information will be as different then as it is now: some automatically close accounts and send a barcoded letter with a number to call if you want to take the account over; some just want you to spell the new name; a few actually try to help you while doing the job they have to do.

What hasn't been set up with death in mind, though, is online account access. I'm told that in the UK, where direct debits and standing orders have a long history, all automated payments are immediately cancelled when the account holder dies and must be actively reinstated if they are to continue. In the US, where automated payments basically arrived with the Internet, things are different: some (such as mortgage payments) may be arranged with your bank, but others may be run through a third-party electronic payment service. In the case of one such account, we discovered that although both my friend and his wife had individual logins she could not change his settings while logged in using her ID and password. In other words, she could not cancel the payments he'd set up.

Cue another password battle. Our widow had already supplied death certificate and confirmation that she was executor. The company accordingly reset his password for her. But using her computer instead of his to access the site and enter the changed password triggered the site's suspicions, and it demanded an answer to the ancillary security question: "What city was your mother born in?"

There turned out to be some uncertainty about that. And then how the right town was spelled. By which time the site had thrown a hissy fit and locked her out for answering incorrectly too many times. And this time customer service couldn't unlock it without an in-person office visit.

Who thinks to check when they're setting up an automated payment how the site will handle matters when you're dead or incapacitated? We all should - and the services should help us by laying this stuff out up front in the FAQs.

The bottom line: these services are largely new, and they're being designed primarily by younger people who are dismissive about the technical aptitude of older people. At every technical conference the archetypal uncomprehending non-technical user geeks refer to is "your grandmother" or "my mother". Yet it does not seem to occur to them that these are the people who, at the worst moment of their lives, are likely to have to take over and operate these accounts on someone else's behalf and they are going to need help.

Death's a bitch - and then you die.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Unfortunately, this blog eats non-spam comments and I don't know why.

February 19, 2010

Death doth make hackers of us all

"I didn't like to ask him what his passwords were just as he was going in for surgery," said my abruptly widowed friend.

Now, of course, she wishes she had.

Death exposes one of the most significant mismatches between security experts' ideas of how things should be done and the reality for home users. Every piece of advice they give is exactly the opposite of what you'd tell someone trying to create a disaster recovery plan to cover themselves in the event of the death of the family computer expert, finance manager, and media archivist. If this were a business, we'd be talking about losing the CTO, CIO, CSO, and COO in the same plane crash.

Fortunately, while he was alive, and unfortunately, now, my friend was a systems programmer of many decades of expertise. He was acutely aware of the importance of good security. And so he gave his Windows desktop, financial files, and email software fine passwords. Too fine: the desktop one is completely resistant to educated guesses based on our detailed knowledge of his entire life and partial knowledge of some of his other PINs and passwords.

All is not locked away. We think we have the password to the financial files, so getting access to those is a mere matter of putting the hard drive in another machine, finding the files, copying them, installing the financial software on a different machine, and loading them up. But it would be nice to have direct as-him access to his archive of back (and new) email, the iTunes library he painstakingly built and digitized, his Web site accounts, and so on. Because he did so much himself, and because his illness was an 11-day chase to the finish, our knowledge of how he did things is incomplete. Everyone thought there was time.

With backups secured and the financial files copied, we set to the task of trying to gain desktop access.

Attempt 1: ophcrack. This is a fine piece of software that's easy to use as long as you don't look at any of the detail. Put it on a CD, boot from said CD, run it on automatic, and you're fine. The manual instructions I'm sure are fine, too, for anyone who has studied Windows SAM files.

Ophcrack took a happy 4 minutes and 39 seconds to disclose that the computer has three accounts: administrator, my friend's user account, and guest. Administrator and guest have empty passwords; my friend's is "not found". But that's OK, said the security expert I consulted, because you can log in as administrator using the empty password and change the user account. Here is a helpful command. Sure. No problem.

Except, of course, that this is Vista, and Vista hides the administrator account to make sure that no brainless idiot accidentally gets into the administrator account and runs around the system creating havoc and corrupting files. By "brainless idiot" I mean: the user-owner of the computer. Naturally, my friend had left it hidden.

In order to unhide the administrator account so you can run the commands to reset my friend's password, you have to run the command prompt in administrator mode. Which we can't do because, of course, there are only two administrator accounts and one is hidden and the other is the one we want the password for. Next.

Attempt 2: Password Changer. Now, this is a really nifty thing: you download the software, use it to create a bootable CD, and boot the computer. Which would be fine, except that the computer doesn't like it because apparently command.com is missing...

We will draw a veil over the rest. But my point is that no one would advise a business to operate in this way - and now that computers are in (almost) every home, homes are businesses, too. No one likes to think they're going to die, still less without notice. But if you run your family on your computer you need a disaster recovery plan - fire, flood, earthquake, theft, computer failure, stroke, and yes, unexpected death:

- Have each family member write down their passwords. Privately, if you want, in sealed envelopes to be stored in a safe deposit box at the bank. Include: Windows desktop password, administrator password, automated bill-paying and financial record passwords, and the list of key Web sites you use and their passwords. Also the passwords you may have used to secure phone records and other accounts. Credit and debit card PINs. Etc.

- Document your directory structure so people know where the important data - family photos, financial records, Web accounts, email address books - is stored. Yes, they can figure it out, but you can make it a lot easier for them.

- Set up your printer so it works from other computers on the home network even if yours is turned off. (We can't print anything, either.)

- Provide an emergency access route. Unhide the administrator account.

- Consider your threat model.

Meanwhile, I think my friend knew all this. I think this is his way of taking revenge on me for never letting him touch *my* computer.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

January 1, 2010

Privacy victims

Frightened people often don't make very good decisions. If I were in charge of aviation security, I'd have been pretty freaked out by the Christmas Day crotch bomber - failure or no failure. Even so, like all of us Boxing Day quarterbacks, I'd like to believe I'd have had more sense than to demand that airline passengers stay seated and unmoving for an hour, laps empty.

But the locking-the-barn elements of the TSA's post-crotch rules are too significant to ignore: the hastily implemented rules were very specifically drafted to block exactly the attack that had just been attempted. Which, I suppose, makes sense if your threat model is a series of planned identical, coordinated attacks and copycats. But as a method of improving airport security it's so ineffective and irrelevant that even the normally rather staid Economist accused the TSA of going insane and Bruce Schneier called the new rules magical thinking.

Consider what actually happened on Christmas Day:

- Intelligence failed. Umar Farouk Abdulmutallab was on the watch list (though not, apparently, the no-fly list), and his own father had warned the US embassy.

- Airport screening failed. He got through with his chunk of explosive attached to his underpants and the stuff he needed to set it off. (As the flyer boards have noted, anyone flying this week should be damned grateful he didn't stuff it in a condom and stick it up his ass.)

- And yet, the plan failed. He did not blow up the plane; there were practically no injuries, and no fatalities.

That, of course, was because a heroic passenger was paying attention instead of snoozing and leaped over seats to block the attempt.

The logical response, therefore, ought to be to ask passengers to be vigilant and to encourage them to disrupt dangerous activities, not to make us sit like naughty schoolchildren being disciplined. We didn't do anything wrong. Why are we the ones who are being punished?

I have no doubt that being on the plane while the incident was taking place was terrifying. But the answer isn't to embark upon an arms race with the terrorists. Just as there are well-funded research labs churning out new computer viruses and probing new software for vulnerabilities, there are doubtless research facilities where terrorist organizations test what scanners can detect and in what quantity.

Matt Blaze has a nice analysis of why this approach won't work to deter terrorists: success (plane blown up) and failure (terrorist caught) are, he argues, equally good outcomes for the terrorist, whose goal is to sow terror and disruption. All unpredictable screening does is drive passengers nuts and, in some cases, put their health at risk. Passengers work to the rules. If there are no blankets, we wear warmer clothes; if there is no bathroom access, we drink less; if there is no in-flight entertainment, we rearrange the hours we sleep.

As Blaze says, what's needed is a correct understanding of the threat model - and as Schneier has often said, the most effective changes since 9/11 have been reinforcing the cockpit doors and the fact that passengers now know to resist hijackers.

Since the incident, much of the talk has been about whole-body scanners - "nudie scanners", Dutch privacy advocates have dubbed them - as if these will secure airplanes once and for all. I think if people think that whole-body scanners are the answer they have misunderstood the problem.

Or problems, because there is more than one. First: how can we make air travel secure from terrorists? Second: how can we make air travelers feel secure? Third: how can we accomplish those things while still allowing travelers to be comfortable, a specification which includes respecting their right to privacy and civil liberties? If your reaction to that last is to say that you don't care whose rights are violated and that all that matters is perfect security, I'm going to guess that: 1) you fly very infrequently; 2) you would be happy to do so chained to your seat naked with a light coating of Saran wrap; and 3) your image of the people who are threats is almost completely unlike your own.

It is particularly infuriating to read that we are privacy victims: that the opposition of privacy advocates to invasive practices such as whole-body scanners are the reason this clown got as close as he did. Such comments are as wrong-headed as Jack Straw claiming after 9/11 that opponents of key escrow were naïve.

The most rational response, it seems to me, is for TSA and airlines alike to solicit volunteers among their most loyal and committed passengers. Elite flyers know the rhythms of flights; they know when something is amiss. Train us to help in emergencies and to spot and deter mishaps.

Because the thing we should have learned from this incident is that we are never going to have perfect security: terrorists are a moving target. We need fallbacks, for when our best efforts fail.

The more airport security becomes intrusive, annoying, and visibly stupid, the more motive passengers will have to find workarounds and the less respect they will have for these authorities. That process is already visible. Do you feel safer now?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series. Readers are welcome to post here, at net.wars home, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

December 4, 2009

Which lie did I tell?


"And what's your mother's maiden name?"

A lot of attention has been paid over the years to the quality of passwords: how many letters, whether there's a sufficient mix of numbers and "special characters", whether they're obviously and easily guessable by anyone who knows you (pet's name, spouse's name, birthday, etc.), whether you've reset them sufficiently recently. But, as someone noted this week on UKCrypto, hardly anyone pays attention to the quality of the answers to the "password hint" questions sites ask so they can identify you when you eventually forget your password. By analogy, it's as though we spent all our time beefing up the weight, impenetrability, and lock quality on our front doors while leaving the back of the house accessible via two or three poorly fitted screen doors.

On most sites it probably doesn't matter much. But the question came up after the BBC broadcast an interview with the journalist Angela Epstein, the loopily eager first registrant for the ID card, in which she apparently mentioned having been asked to provide the answers to five rather ordinary security questions "like what is your favorite food". Epstein's column gives more detail: "name of first pet, favourite song and best subject at school". Even Epstein calls this list "slightly bonkers". This, the UKCrypto poster asked, is going to protect us from terrorists?

Dave Birch had some logic to contribute: "Why are we spending billions on a biometric database and taking fingerprints if they're going to use the questions instead? It doesn't make any sense." It doesn't: she gave a photograph and two fingerprints.

But let's pretend it does. The UKCrypto discussion headed into technicalities: has anyone studied challenge questions?

It turns out someone has: Mike Just, described to me as "the world expert on challenge questions". Just, who's delivered two papers on the subject this year, at the Trust (PDF) and SOUPS (PDF) conferences, has studied both the usability and the security of challenge questions. There are problems from both sides.

First of all, people are more complicated and less standardized than those setting these questions seem to think. Some never had pets; some have never owned cars; some can't remember whether they wrote "NYC", "New York", "New York City", or "Manhattan". And people and their tastes change. This year's favorite food might be sushi; last year's, chocolate chip cookies. Are you sure you remember accurately what you answered? With all the right capitalization and everything? Government services are supposedly thinking long-term. You can always start another Amazon.com account; but ten years from now, when you've lost your ID card, will these answers be valid?

This sort of thing is reminiscent of what biometrics expert James Wayman has often said about designing biometric systems to cope with the infinite variety of human life: "People never have what you expect them to have where you expect them to have it." (Note that Epstein nearly failed the ID card registration because of a burn on her finger.)

Plus, people forget. Even stuff you'd think they'd remember and even people who, like the students he tested, are young.

From the security standpoint, there are even more concerns. Many details about even the most obscure person's life are now public knowledge. What if you went to the same school for 14 years? And what if that fact is thoroughly documented online because you joined its Facebook group?

A lot depends on your threat model: your parents, hackers with scripted dictionary attacks, friends and family, marketers, snooping government officials? Just accordingly came up with three types of security attacks for the answers to such questions: blind guess, focused guess, and observation guess. Apply these to the often-used "mother's maiden name": the surname might be two letters long; it is likely one of the only 150,000 unique surnames appearing more than 100 times in the US census; it may be eminently guessable by anyone who knows you - or about you. In the Facebook era, even without a Wikipedia entry or a history of Usenet postings many people's personal details are scattered all over the online landscape. And, as Just also points out, the answers to challenge questions are themselves a source of new data for the questioning companies to mine.
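Some back-of-the-envelope arithmetic (mine, not Just's) shows how little protection the question offers even in the best case: an attacker who knows nothing but the census statistic quoted above faces at most about 17 bits of uncertainty, and a focused attacker with a hypothetical shortlist of plausible family names does far better - both a long way short of even a modest random password.

```python
import math

# Figures: the census statistic is quoted above; the shortlist size and the
# password comparison are my illustrative assumptions.
blind_pool = 150_000      # surnames appearing more than 100 times in the US census
focused_pool = 20         # hypothetical shortlist gleaned from Facebook, obituaries, etc.

print(f"blind guess:            ~{math.log2(blind_pool):.1f} bits")    # ~17.2 bits
print(f"focused guess:          ~{math.log2(focused_pool):.1f} bits")  # ~4.3 bits
print(f"random 8-char password: ~{8 * math.log2(62):.1f} bits")        # ~47.6 bits
```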

My experience from The Skeptic suggests that over the long term trying to protect your personal details by not disclosing them isn't going to work very well. People do not remember what they tell psychics over the course of 15 minutes or an hour. They have even less idea what they've told their friends or, via the Internet, millions of strangers over a period of decades or how their disparate nuggets of information might match together. It requires effort to lie - even by omission - and even more to sustain a lie over time. It's logically easier to construct a relatively small number of lies. Therefore, it seems to me that it's a simpler job to construct lies for the few occasions when you need the security and protect that small group of lies. The trouble then is documentation.

Even so, says Birch, "In any circumstance, those questions are not really security. You should probably be prosecuted for calling them 'security'."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

August 1, 2009

Unsustainability

Let's face it: Las Vegas ought not to exist. A city in the middle of the desert that shows off extravagant water fountains. (No matter how efficient these are, they must lose plenty of water to the 110F dry desert air.) Where in a time of energy crisis few can live without cars or air-conditioning and many shops and hotels air-condition to a climate approximating that of Britain in winter. A city that specializes in gigantic, all-night light displays. And, a city with so little respect for its own history that it tears itself down and rebuilds every five years, on average. (It even tore down the hotel that Elvis made famous and replaced it.)

In fact, of course, the Strip is all façade. Go a block east or west and look at the backs of the hotels and what you see is the rear side of a movie set.

There is of course a real Las Vegas away from the Strip that's cooler and much prettier, but much of the above still applies: it is a perfect advertisement for unsustainability. Which is why it seemed particularly apt this year as the location for the annual Black Hat and Defcon security/hacker conferences. Just as Las Vegas itself is an exemplar of the worst abuse of a fragile ecosystem, so increasingly do the technologies we use and trust daily.

If you're not familiar with these twin conferences, they're held on successive days in Las Vegas in late July. At Black Hat during the week, a load of (mostly) guys in (mostly) suits present their latest research into new security problems and their suggested fixes. On Thursday night, (mostly) the same crowd trade in their (mostly) respectable clothes for (mostly) cargo shorts and T-shirts and head for Defcon for the weekend to present (many of) the same talks over again to a (mostly) younger, wilder crowd. Black Hat has executive stationery for sale and sit-down catered lunches; Defcon has programmable badges, pizza in the hotel's food court, and is much, much cheaper.

It's noticeable that, after years when people have been arrested for or sued to prevent their disclosures, a remarkable number of this year's speakers took pains to emphasize the responsible efforts they'd made to contact manufacturers and industry associations and warn them about what they'd found. Some of the presentations even ended with, "And they've fixed it in the latest release." What fun is that?

The other noticeable trend this year was away from ordinary computer issues and into other devices. This was only to be (eventually) expected: as computers infiltrate all parts of our lives they're bringing insecurity along with them into areas where it pretty much didn't exist before. Electric meters: once mechanical devices that went round and round; now smart gizmos that could be remotely reprogrammed. Flaws in the implementation of SMS mean that phishing messages and other scams most likely lie in the future of our mobile phones.

Even such apparently stolid mechanisms as parking meters can be gamed. Know what's inside those things? Z80 chips! Yes, the heart of those primitive 1980s computers lives on in that parking meter that just clicked over to VIOLATION.

Las Vegas seems to go on as if the planet were not in danger. Similarly, we know - because we write and read it daily - that the Internet was built as a sort of experiment on underpinnings that are ludicrously, laughably wrongly designed for the weight we're putting on them. And yet every day we go on buying things with credit cards, banking, watching our governments shift everything online, all I suppose with the shared hope that it will all come right somehow.

You do wonder, though, after two days of presentations that find the same fundamental errors we've known about for decades: passwords submitted in plain text, confusion between making things easy for users and depriving them of valuable information to help them spot frauds. The failure, as someone said in the blur of the last few days, to teach beginning programmers about the techniques of secure coding. Plus, of course, the business urgency of let's get this thing working and worry about security later.

On the other hand, it was just as alarming to hear Robert Lentz, deputy assistant secretary of Defense, say it was urgent to "get the anonymity out of the network" and ensure that real-world and cyber identities converge with multifactor biometric identification in both logical and physical worlds. My laptop computer was perfectly secure against all the inquisitors at Black Hat because it never left my immediate possession and I couldn't connect to the wireless; but that's not how I want to live.

The hardest thing about security seems to be understanding when we really need it and how. But the thing about Vegas - as IBM's Jeff Jonas so eloquently explained at etech in 2008 - is that behind the Strip (which I always like to say is what you'd get if you gave a band of giant children an unlimited supply of melted plastic and bad taste) and its city block-sized casinos lies a security system so sophisticated that it doesn't interfere with customers' having a good time. Vegas, so fake in other ways, is the opposite of security theater. Whereas, so much of our security - which is often intrusive enough to feel real - might as well be the giant plastic Sphinx in front of the Luxor.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, to follow on Twitter or send email to netwars@skeptic.demon.co.uk.

July 24, 2009

Security for the rest of us


Many governments, faced with the question of how to improve national security, would do the obvious thing: round up the usual suspects. These would be, of course, the experts - that is, the security services and law enforcement. This exercise would be a lot like asking the record companies and film studios to advise on how to improve copyright: what you'd get is more of the same.

This is why it was so interesting to discover that the US National Academies of Science was convening a workshop to consult on what research topics to consider funding, and began by appointing a committee that included privacy advocates and usability experts, folks like Microsoft researcher Butler Lampson, Susan Landau, co-author of books on privacy and wiretapping, and Donald Norman, author of the classic book The Design of Everyday Things. Choosing these people suggests that we might be approaching a watershed like that of the late 1990s, when the UK and the US governments were both forced to understand that encryption was not just for the military any more. The peace-time uses of cryptography to secure Internet transactions and protect mobile phone calls from casual eavesdropping are much broader than crypto's war-time use to secure military communications.

Similarly, security is now everyone's problem, both individually and collectively. The vulnerability of each individual computer is a negative network externality, as NYU economist Nicholas Economides pointed out. But, as many asked, how do you get people to understand remote risks? How do you make the case for added inconvenience? Each company we deal with makes the assumption that we can afford the time to "just click to unsubscribe" or remember one password, without really understanding the growing aggregate burden on us. Norman commented that door locks are a trade-off, too: we accept a little bit of inconvenience in return for improved security. But locks don't scale; they're acceptable as long as we only have to manage a small number of them.

In his 2006 book, Revolutionary Wealth, Alvin Toffler comments that most of us, without realizing it, have a hidden third, increasingly onerous job, "prosumer". Companies, he explained, are increasingly saving money by having us do their work for them. We retrieve and print out our own bills, burn our own CDs, provide unpaid technical support for ourselves and our families. One of Lorrie Cranor's students did the math to calculate the cost in lost time and opportunities if everyone in the US read annually the privacy policy of each Web site they visited once a month. Most of these things require college-level reading skills; figure 244 hours per year per person, $3,544 each...$781 billion nationally. Weren't computers supposed to free us of that kind of drudgery? As everything moves online, aren't we looking at a full-time job just managing our personal security?
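For anyone who wants to check the arithmetic, the figures quoted above imply a valuation of time and an online population roughly as follows; the "implied" numbers are back-calculated by me to make the quoted figures line up, not taken from the study itself.

```python
# Quoted above: 244 hours per person per year, $3,544 per person, $781 billion nationally.
hours_per_person = 244
per_person_cost = 3_544
national_cost = 781e9

implied_hourly_value = per_person_cost / hours_per_person   # ~ $14.5 per hour
implied_population = national_cost / per_person_cost        # ~ 220 million people

print(f"implied hourly value of time: ${implied_hourly_value:.2f}")
print(f"implied online US population: {implied_population / 1e6:.0f} million")
```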

That, in fact, is one characteristic that many implementations of security share with welfare offices - and that is becoming pervasive: an utter lack of respect for the least renewable resource, people's time. There's a simple reason for that: the users of most security systems are deemed to be the people who impose it, not the people - us - who have to run the gamut.

There might be a useful comparison to information overload, a topic we used to see a lot about ten years back. When I wrote about that for ComputerActive in 1999, I discovered that everyone I knew had a particular strategy for coping with "technostress" (the editor's term). One dealt with it by never seeking out information and never phoning anyone. His sister refused to have an answering machine. One simply went to bed every day at 9pm to escape. Some refused to use mobile phones, others to have computers at home.

But back then, you could make that choice. How much longer will we be able to draw boundaries around ourselves by, for example, refusing to use online banking, file tax returns online, or participate in social networks? How much security will we be able to opt out of in future? How much do security issues add to technostress?

We've been wandering in this particular wilderness a long time. Angela Sasse, whose 1999 paper Users Are Not the Enemy talked about the problems with passwords at British Telecom, said frankly, "I'm very frustrated, because I feel nothing has changed. Users still feel security is just an obstacle there to annoy them."

In practice, the workshop was like the TV game Jeopardy: the point was to generate research questions that will go into a report, which will be reviewed and redrafted before its eventual release. Hopefully, eventually, it will all lead to a series of requests for proposals and some really good research. It is a glimmer of hope.

Unless, that is, the gloominess of the beginning presentations wins out. If you listened to Lampson, Cranor, and to Economides, you got the distinct impression that the best thing that could happen for security is that we rip out the Internet (built to be open, not secure), trash all the computers (all of whose operating systems were designed in the pre-Internet era), and start over from scratch. Or, like the old joke about the driver who's lost and asking for directions, "Well, I wouldn't start from here".

So, here's my question: how can we make security scale so that the burden stays manageable?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

July 17, 2009

Human factors

For the last several weeks I've been mulling over the phrase security fatigue. It started with a paper (PDF) co-authored by Angela Sasse, in which she examined the burden that complying with security policies imposes upon corporate employees. Her suggestion: that companies think in terms of a "compliance budget" that, like any other budget (money, space on a newspaper page), has to be managed and used carefully. And, she said, security burdens weigh differently on different people and at different times, and a compliance budget needs to comprehend that, too.

Some examples (mine, not hers). Logging onto six different machines with six different user IDs and passwords (each of which has to be changed once a month) is annoying but probably tolerable if you do it once every morning when you get to work and once in the afternoon when you get back from lunch. But if the machines all log you out every time you take your hands off the keyboard for two minutes, by the end of the day they will be lucky to survive your baseball bat. Similarly, while airport security is never fun, the burden of it is a lot less to a passenger traveling solo after a good night's sleep who reaches the checkpoints when they're empty than it is to the single parent with three bored and overtired kids under ten who arrives at the checkpoint after an overnight flight and has to wait in line for an hour. Context also matters: a couple of weeks ago I turned down a ticket to Court 1 at Wimbledon on men's semi-finals day because I couldn't face the effort it would take to comply with their security rules and screening. I grudgingly accept airport security as the trade-off for getting somewhere, but to go through the same thing for a supposedly fun day out?

It's relatively easy to see how the compliance budget concept could be worked out in practice in a controlled environment like a company. It's very difficult to see how it can be worked out for the public at large, not least because none of the many companies each of us deals with sees it as beneficial to cooperate with the others. You can't, for example, say to your online broker that you just can't cope with making another support phone call, can't they find some other way to unlock your account? Or tell Facebook that 61 privacy settings is too many because you're a member of six other social networks and Life is Too Short to spend a whole day configuring them all.

Bruce Schneier recently highlighted that last-referenced paper, from Joseph Bonneau and Soeren Preibusch at Cambridge's computer lab, alongside another by Leslie John, Alessandro Acquisti, and George Loewenstein from Carnegie-Mellon, to note a counterintuitive discovery: the more explicit you make privacy concerns the less people will tell you. "Privacy salience" (as Schneier calls it) makes people more cautious.

In a way, this is a good thing and goes to show what privacy advocates have been saying all along: people do care about privacy if you give them the chance. But if you're the owners of Facebook, a frequent flyer program, or Google, it means that it is not in your business interest to spell out too clearly to users what they should be concerned about. All of these businesses rely on collecting more and more data about more and more people. Fortunately for them, as we know from research conducted by Lorrie Cranor (also at Carnegie-Mellon), people hate reading privacy policies. I don't think this is because people aren't interested in their privacy. I think this goes back to what Sasse was saying: it's security fatigue. For most people, security and privacy concerns are just barriers blocking the thing they came to do.

But choice is a good thing, right? Doesn't everyone want control? Not always. Go back a few years and you may remember some widely publicized research that pointed out that too many choices stall decision-making and make people feel...tired. A multiplicity of choices adds weight and complexity to the decision you're making: shouldn't you investigate all the choices, particularly if you're talking about which of 56 mutual funds to add to your 401(k)?

It seems obvious, therefore, that the more complex the privacy controls offered by social networks and other services the less likely people are to use them: too many choices, too little time, too much security fatigue. In minor cases in real life, we handle this by making a decision once and sticking to it as a kind of rule until we're forced to change: which brand of toothpaste, what time to leave for work, never buy any piece of clothing that doesn't have pockets. In areas where rules don't work, the best strategy is usually to constrain the choices until what you have left is a reasonable number to investigate and work with. Ecommerce sites notoriously get this backwards: they force you to explore group by group instead of allowing you to exclude choices you'll never use.

How do we implement security and privacy so that they're usable? This is one of the great unsolved, under-researched questions in security. I'm hoping to know more next week.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

May 8, 2009

Automated systems all the way down

Are users getting better or worse?

At what? you might ask. Naturally: at being thorns in the side of IT security people. Users see security as damage, and route around it.

You didn't need to look any further than this week's security workshop, where this question was asked, to see this principle in action. The hotel-supplied wireless was heavily filtered: Web and email access only, no VPNs, "undesirable" sites blocked. Over lunch, the conversation: how to set up VPNs using port 443 to get around this kind of thing. The perfect balanced sample: everyone's a BOFH *and* a hostile user. Kind of like Jacqui Smith, who has announced plans to largely circumvent the European Court of Human Rights' ruling that Britain has to remove the DNA of innocent people from the database. Apparently, this government perceives European law as damage.

But the question about users was asked seriously. The workshop gathered security folks from all over to brainstorm and compare notes: what are the emerging security threats? What should we be worrying about? And, most important, what should people be researching?

Three working groups - smart environments, malware and fraud, and critical systems - came up with three different lists, mostly populated with familiar stuff - but the familiar stuff keeps going and getting worse. According to Symantec's latest annual report, spam, for example, was up 162 percent in 2008 over 2007, with a total of 349.6 billion messages sent - simply a staggering waste of resources. What has changed is targeting; new attacks are short-lived, small-distribution affairs - much harder to shut down.

Less familiar to me was the "patch window" problem, which basically goes like this: it takes 24 hours for 80 percent of Windows users to get a new patch from Windows Update. An attacker who downloads the patch as soon as it's available can quickly - within minutes - reverse-engineer it to find out what bug(s) it's fixing. Then the attacker has most of a day in which to exploit the bug. Last year, Carnegie-Mellon's David Brumley and others found a way to automate this process (PDF). An ironic corollary: the more bug-free the program, the easier a patch window attack becomes. Various solutions were discussed for this, none of them entirely satisfactory; the most likely was to roll out the patch locked, and distribute a key only after the download cycle is complete.
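A minimal sketch of that last idea - ship the patch locked, publish the key only once distribution is complete - might look like the following, using the third-party Python cryptography package for the symmetric encryption; the actual mechanism any vendor would use is not specified here, and the patch bytes are a placeholder.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Vendor side: encrypt the patch before the slow distribution cycle begins.
key = Fernet.generate_key()
locked_patch = Fernet(key).encrypt(b"...patch bytes...")

# The locked patch dribbles out to clients over the ~24-hour window; an
# attacker who grabs it early cannot diff it against the old binary to find
# the bug being fixed.

# Once distribution is complete, the tiny key is published and every client
# can unlock and apply the patch at roughly the same moment.
patch = Fernet(key).decrypt(locked_patch)
assert patch == b"...patch bytes..."
```

The point of the design is that the bulky, slow part (the patch itself) can take the full window to distribute, while the part that gives the attacker a head start (the key) arrives everywhere at effectively the same instant.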

But back to the trouble with users: systems are getting more and more complex. A core router now has 5,000 lines of code; an edge router 11,000. Someone has to read and understand all those lines. And that's just one piece. "Today's networks are now so complex we don't understand them any more," said Cisco's Michael Behrenger. Critical infrastructures need to be more like the iPhone, a complex system that nonetheless just about anyone can operate.

As opposed, I guess, to being like what most people have now: systems that are a mish-mash of strategies for getting around things that don't work. But I do see his point. Once you could debug even a large network by reading the entire configuration. Pause to remember the early days of Demon Internet, when the technical support staff would debug your connection by directly editing the code of the dial-up software we were all using, KA9Q. If you'd taken *those* humans out of the system, no one could have gotten online.

It's my considered view that while you can blame users for some things - the one in 12.5 million spam recipients Christian Kreibich said actually buys the pharma products so advertised springs to mind - blaming them in general is a lot like the old saw about how "only a poor workman blames his tools". It's more than 20 years since Donald Norman pointed out in The Design of Everyday Things that user error is often a result of poor system design. Yet a depressing percentage of security folks complaining about system complexity don't even know his name, and a failure to understand human factors is security's single biggest failure.

Joseph Bonneau made this point in a roundabout way by considering Facebook which, he said, really is reinventing the Web - not just in the rounded-corners sense, but in the sense of inventing its own protocols for things for which standards already exist. Plus - and more important for the user question - it's training users to do things that security people would rather they didn't, like click on emailed links without checking the URLs. "Social networks," he said, "are repeating all the Web's security problems - phishing, spam, 419 scams, identity theft, malware, cross-site scripting, click fraud, stalking...privacy is the elephant in the room." Worse, "They really don't yet have a business model, which makes dealing with security difficult."

It's a typical scenario in computing, where each new generation reinvents every wheel. And that's the trouble with automating everything, too. Have these people never used voice menus?

Get rid of the humans and replace them with automated systems that operate perfectly, great. But won't humans have to write the automated systems? No, automated systems will do that. And who will program those? Computers. And who...

Never mind.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to follow (and reply) on Twitter, post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 10, 2008

Data mining snake oil

The basic complaints we've been making for years about law enforcement's and government's desire to collect masses of data have primarily focused on the obvious set of civil liberties issues: the chilling effect of surveillance, the right of individuals to private lives, the risk of abuse of power by those in charge of all that data. On top of that we've worried about the security risks inherent in creating such large targets from which data will, inevitably, leak sometimes.

This week, along came the National Research Council to offer a new trouble with dataveillance: it doesn't actually work to prevent terrorism. Even if it did work, the tradeoff of the loss of personal liberties against the security allegedly offered by policies that involve tracking everything everyone does from cradle to grave was hard to justify. But if it doesn't work - if all surveillance all the time won't make us actually safer - then the discussion really ought to be over.

The NAS report, Protecting Individual Privacy in the Struggle Against Terrorists: A Framework for Assessment, makes its conclusions clear: "Modern data collection and analysis techniques have had remarkable success in solving information-related problems in the commercial sector... But such highly automated tools and techniques cannot be easily applied to the much more difficult problem of detecting and preempting a terrorist attack, and success in doing so may not be possible at all."

Actually, the many of us who have had our cards stopped for no better reason than that the issuing bank didn't like the color of the Web site we were buying from might question how successful these tools have been in the commercial sector. At the very least, it has become obvious to everyone how much trouble is being caused by false positives. If a similar approach is taken to all parts of everyone's lives instead of just their financial transactions, think how much more difficult it's going to be to get through life without being arrested several times a year.

The report again: "Even in well-managed programs such tools are likely to return significant rates of false positives, especially if the tools are highly automated." Given the masses of data we're talking about - the UK wants to store all of the nation's communications data for years in a giant shed, and a similar effort in the US would have to be many times as big - the tools will have to be highly automated. And - the report yet again - the difficulty of detecting terrorist activity "through their communications, transactions, and behaviors is hugely complicated by the ubiquity and enormity of electronic databases maintained by both government agencies and private-sector corporations." The bigger the haystack, the harder it is to find the needle.
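
To see why, it helps to do the arithmetic. The numbers below are invented for illustration - they are not the report's figures - but the base-rate problem they demonstrate holds for any plausible values:

```python
# Illustrative base-rate arithmetic: even a very accurate screening system,
# applied to a huge population in which the target behaviour is rare,
# produces mostly false positives. All numbers are assumptions.

population = 60_000_000        # roughly the UK population
real_suspects = 600            # assume 600 genuine suspects (one in 100,000)
sensitivity = 0.99             # assume the system flags 99% of real suspects
false_positive_rate = 0.001    # assume it wrongly flags 0.1% of innocents

flagged_guilty = real_suspects * sensitivity
flagged_innocent = (population - real_suspects) * false_positive_rate
precision = flagged_guilty / (flagged_guilty + flagged_innocent)

print(f"Correctly flagged suspects: {flagged_guilty:,.0f}")
print(f"Innocent people flagged:    {flagged_innocent:,.0f}")
print(f"Chance a flagged person is a real suspect: {precision:.2%}")
# With these assumptions, roughly 60,000 innocents are flagged for every
# 594 genuine suspects - under 1 percent of the flags are real.
```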

In a recent interview, David Porter, CEO of Detica, who has spent his entire career thinking about fraud prevention, said much the same thing. Porter's proposed solution - the basis of the systems Detica sells - is to vastly shrink the amount of data to be analyzed by throwing out everything we know is not fraud (or, as his colleague, Tom Black, said at the Homeland and Border Security conference in July, terrorist activity). To catch your hare, first shrink your haystack.

This report, as the title suggests, focuses particularly on balancing personal privacy against the needs of anti-terrorist efforts. (Although, any terrorist watching the financial markets the last couple of weeks would be justified in feeling his life's work had been wasted, since we can do all the damage that's needed without his help.) The threat from terrorists is real, the authors say - but so is the threat to privacy. Personal information in databases cannot be fully anonymized; the loss of privacy is real damage; and data varies substantially in quality. "Data derived by linking high-quality data with data of lesser quality will tend to be low-quality data." If you throw a load of silly string into your haystack, you wind up with a big mess that's pretty much useless to everyone and will be a pain in the neck to clean up.

As a result, the report recommends requiring systematic and periodic evaluation of every information-based government program against core values and proposes a framework for carrying that out. There should be "robust, independent oversight". Research and development of such programs should be carried out with synthetic data, not real data "anonymized"; real data should only be used once a program meets the proposed criteria for deployment and even then only phased in at a small number of sites and tested thoroughly. Congress should review privacy laws and consider how best to protect privacy in the context of such programs.

These things seem so obvious; but to get to this point it's taken three years of rigorous documentation and study by a 21-person committee of unimpeachable senior scientists and review by members of a host of top universities, telephone companies, and top technology companies. We have to think the report's sponsors, which include the National Science Foundation and the Department of Homeland Security, will take the results seriously. Writing for Cnet, Declan McCullagh notes that the similar 1996 NRC CRISIS report on encryption was followed by decontrol of the export and use of strong cryptography two years later. We can but hope.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

September 12, 2008

Slow news

It took a confluence of several different factors for a six-year-old news story to knock 75 percent off the price of United Airlines shares in under an hour earlier this week. The story said that United Airlines was filing for bankruptcy, and of course it was true - in 2002. Several media owners are still squabbling about whose fault it was. Trading was halted after that first hour by the systems put in place after the 1987 crash, but even so the company's shares closed 10 percent down on the day. Long-term it shouldn't matter in this case, but given a little more organization and professionalism that sort of drop provides plenty of opportunities for securities fraud.

The factor the companies involved can't sue: human psychology. Any time you encounter a story online you make a quick assessment of its credibility by considering: 1) the source; 2) its likelihood; 3) how many other outlets are saying the same thing. The paranormal investigator and magician James Randi likes to sum this up by saying that if you claimed you had a horse in your back yard he might want a neighbor's confirmation for proof, but if you said you had a unicorn in your back yard he'd also want video footage, samples of the horn, close-up photographs, and so on. The more extraordinary the claim, the more extraordinary the necessary proof. The converse is also true: the less extraordinary the claim and the better the source, the more likely we are to take the story on faith and not bother to check.

Like a lot of other people, I saw the United story on Google News on Monday. There's nothing particularly shocking these days about an airline filing for bankruptcy protection, so the reaction was limited to "What? Again? I thought they were doing better now" and a glance underneath the headline to check the source. Bloomberg. Must be true. Back to reading about the final in prospect between Andy Murray and Roger Federer at the US Open.

That was a perfectly fine approach in the days when all content was screened by humans and media were slow to publish. Even then there were mistakes, like the famous 1993 incident when a shift worker at Sky News saw an internal rehearsal for the Queen Mother's death on a monitor and mentioned it on the phone to his mother in Australia, who in turn passed it on to the media, which took it up and ran with it.

But now, in the time that thought process takes, daytraders have clicked in and out of positions and automated media systems have begun republishing the story. It was the interaction of several independently owned automated systems that made what ought to have been a small mistake into one that hit a real company's real financial standing - with that effect, too, compounded by automated systems. Logically, we should expect to see many more such incidents, because all over Web 2.0 we are building systems that talk to each other without human intervention or oversight.

A lot of the Net's display choices are based on automated popularity contests: on-the-fly generated lists of the current top ten most viewed stories, Amazon book rankings, Google's PageRank algorithm, which bumps to the top the sites with the most inbound links for a given set of search terms. That's no different from other media: Jacqueline Kennedy and Princess Diana were beloved of magazine covers for the most obvious sale-boosting reasons. What's different is that on the Net these measurements are made and acted upon instantaneously, and sometimes from very small samples, which is why in a very slow news hour on a small site a single click on a 2002 story seems to have bumped it up to the top, where Google spotted it and automatically inserted it into its feed.
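
A toy sketch of the mechanism - not, obviously, anything Google or the Tribune actually runs - shows how little it takes for a tiny sample to misfire:

```python
from collections import Counter

def top_stories(recent_clicks, n=5):
    """Rank stories purely by click counts in the most recent window -
    the kind of automated, on-the-fly popularity contest described above."""
    return Counter(recent_clicks).most_common(n)

# In a busy hour, thousands of clicks drown out any single anomaly.
busy_hour = ["election-2008"] * 4000 + ["hurricane-ike"] * 2500 + ["ua-bankruptcy-2002"]
print(top_stories(busy_hour))

# In a dead slot on a small site, one click on a six-year-old story is enough
# to put it at the top of the list, where a crawler may pick it up as "news".
slow_hour = ["ua-bankruptcy-2002"]
print(top_stories(slow_hour))
```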

The big issue, really - leaving aside the squabble between the Tribune and Google over whether Google should have been crawling its site at all - is the lack of reliable dates. It's always a wonder to me how many Web sites fail to anchor their information in time: the date a story is posted or a page is last updated should always be present. (I long, in fact, for a browser feature that would display at the top of a page the last date a page's main content was modified.)
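
Some of that information is already there for the asking when sites bother to supply it; here is a minimal sketch in standard-library Python, bearing in mind that plenty of servers simply don't send the header:

```python
import urllib.request

def last_modified(url):
    """Return the Last-Modified header for a page, if the server supplies one."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return resp.headers.get("Last-Modified", "not supplied")

# Example; substitute any URL you care about.
print(last_modified("https://www.example.com/"))
```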

Because there's another phenomenon that's insufficiently remarked upon: on the Internet, nothing ever fully dies. Every hour someone discovers an old piece of information for the first time and thinks it's new. Most of the time, it doesn't matter: Dave Barry's exploding whale is hilariously entertaining no matter how many times you've read it or seen the TV clip. But Web 2.0 will make endless recycling part of our infrastructure rather than a rare occurrence.

In 1998 I wrote that crude hacker defacement of Web sites was nothing to worry about compared to the prospect of the subtle poisoning of the world's information supply that might become possible as hackers became more sophisticated. This danger is still with us, and the only remedy is to do what journalists used to be paid to do: check your facts. Twice. How do we automate that?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

August 1, 2008

All paid up

"His checks keep bouncing because his signature varies," says a CIA operative (Sam Waterston) admiringly of the movie's retired spy hero Miles Kendig (Walter Matthau) in the 1980 movie Hopscotch. "He's a class act."

These days, Kendig would be using credit cards. And he'd be having the same problem: the part of his signature would be played by his usage patterns as seen by the credit card company's computers.

This would be doubly true if he used Amazon's Marketplace sellers. It seems - or so Barclaycard tells me every time they block my card - that putting several purchases through Amazon Marketplace and then, a few days later, buying something larger like a plane ticket or a digital recorder exactly fits one of the fraud patterns their computers are programmed to look for.

Buy a dozen items in a day on eBay (go on, I dare you), and your statement will show a dozen transactions - but they'll all be from Paypal. Buy a dozen items in a single shopping basket on Amazon, and you'll get a dozen transactions all from different unknown sellers. To the computer, what seems to you to be a single Amazon purchase looks exactly like someone testing the card with a dozen small transactions to see if it's a) live and b) possessed of available credit. Then, y'see, when the card has passed the test, the fraudster goes for the big one - that airplane ticket or digital recorder.
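
If you wanted to caricature the rule that keeps tripping me up - a burst of small charges followed shortly by a large one - it might look something like this. It's an entirely hypothetical sketch; real card-fraud systems are doubtless far more elaborate:

```python
from datetime import datetime, timedelta

def looks_like_card_testing(transactions, small=15.0, large=200.0,
                            min_small=6, window=timedelta(days=5)):
    """Flag a list of (timestamp, amount) pairs if a burst of small charges
    is followed within `window` by a large one - the pattern a fraudster
    testing a stolen card produces, and, unhelpfully, also the pattern an
    Amazon Marketplace basket plus a plane ticket produces."""
    txns = sorted(transactions)
    for i, (when, amount) in enumerate(txns):
        if amount >= large:
            recent_small = [a for t, a in txns[:i]
                            if a <= small and when - t <= window]
            if len(recent_small) >= min_small:
                return True
    return False

# A dozen small Marketplace purchases, then a plane ticket: card blocked.
txns = [(datetime(2008, 7, 1, 9 + h), 8.99) for h in range(12)]
txns.append((datetime(2008, 7, 4, 12, 0), 350.00))
print(looks_like_card_testing(txns))   # True
```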

It's not clear to me why Barclaycard's computer doesn't recognize this pattern as typical after the first outing or two. (I fly one route, but my Barclaycard will not buy me a plane ticket.) Nor is it clear to me why it doesn't occur to the Barclaycard computer that as frauds go buying a digital recorder or a plane ticket for delivery to the cardholder's name and address ranks as fairly incompetent. Why doesn't it check that point before causing trouble?

You might ask a similar question of one of my US cards, which trips the fraud meter any time it's used outside the US. Even though they know I live in...London.

This week Amazon announced that it's offering its payment system, including One-Click, to third party sellers as one of its Web services offerings.

Much of the early press coverage of Amazon's decision seems to be characterizing Amazon Checkout, along with Google Checkout, as a competitor to Paypal. In fact, things are more complicated than that. Paypal, before it was bought by eBay, was one of the oldest businesses on the Net. Its roots, which still show every time you go through the intricate procedure of opting to use a credit card instead of a bank transfer, are in making it possible for anyone to send cash to anyone with an email address. Its first competitor was Western Union; its long tail business opportunity was online sellers who couldn't get credit card authorizations because they were too small. For eBay, buying Paypal meant being able to integrate payments into its ecology with some additional control over fraud while making extra money off each transaction.

Paypal is being adopted as an alternative payment method by all sorts of third parties, and as much of a pain as Paypal is (it can't cope with multinational people and you cannot opt out of giving it a bank account to verify) this is useful for consumers. Its security is generally well regarded by both banks and credit card companies and surely it's better to store financial details with one known company than with dozens of less familiar ones you may only trade with once. Given the choice, I'd far rather that single account were with the much-pleasanter-to-use Amazon. It's clear, though, that if you're offering a platform for others to build businesses on, as Amazon is, payment services are an obvious tool you want to include. Most likely, just as many stores now display multiple credit and debit card logos, many Web sellers will offer users a choice among multiple payment aggregators. Who wants to call the whole thing off because you say Google and I say Paypal?

Unfortunately, none of this solves my actual problem, those damn fraud-detecting algorithms. If Amazon actually aggregated payments into a single transaction - which is what you imagine it's doing the first time you buy from Marketplace - and spat the money back out to the intended destinations, there'd be no problem. For you, that is; for Amazon, of course, it would raise a host of questions about whether it's a financial service, and how much responsibility it should assume for fraud. Those are, of course, very much the reasons why Paypal is so unpleasant - and yet also why it offers eBay buyers insurance.

What is clear is that this is yet another step that brings Amazon and eBay into closer competition with each other: they are increasingly alike. Amazon's recent quarterly statement notes that about 30 percent of its revenues come from Marketplace sellers - and that the profitability of a sale is roughly the same whether it's direct or indirect. On eBay 42 percent of items now are straightforward sales, not auctions, and the changes it's made that favor its biggest sellers are making it more Wal-Mart than flea market.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

July 25, 2008

Who?

A certain amount of government and practical policy is being made these days based on the idea that you can take large amounts of data and anonymize it so researchers and others can analyze it without invading anyone's privacy. Of particular sensitivity is the idea of giving medical researchers access to such anonymized data in the interests of helping along the search for cures and better treatments. It's hard to argue with that as a goal - just like it's hard to argue with the goal of controlling an epidemic - but both those public health interests collide with the principle of medical confidentiality.

The work of Latanya Sweeney was, I think, the first hint that anonymizing data might not be so straightforward; I've written before about her work. This week, at the Privacy Enhancing Technologies Symposium in Leuven, Belgium (which I regrettably missed), researchers Arvind Narayanan and Vitaly Shmatikov from the University of Texas at Austin won an award sponsored by Microsoft for taking the reidentification of supposedly anonymized data a step further.

The pair took a database released by the online DVD rental company Netflix last year as part of the $1 million Netflix Prize, a project to improve upon the accuracy of the system's predictions. You know the kind of thing, since it's built into everything from Amazon to Tivos - you give the system an idea of your likes and dislikes by rating the movies you've rented and the system makes recommendations for movies you'll like based on those expressed preferences. To enable researchers to work on the problem of improving these recommendations, Netflix released a dataset containing more than 100 million movie ratings contributed by nearly 500,000 subscribers between December 1999 and December 2005 with, as the service stated in its FAQ, all customer identifying information removed.

Maybe in a world where researchers only had one source of information that would be a valid claim. But just as Sweeney showed in 1997 that it takes very little in the way of public records to re-identify a load of medical data supplied to researchers in the state of Massachusetts, Narayanan and Shmatikov's work reminds us that we don't live in a world like that. For one thing, people tend disproportionately to rate their unusual, quirky favorites. Rating movies takes time; why spend it on giving The Lord of the Rings another bump when what people really need is to know about the wonders of King of Hearts, All That Jazz, and The Tall Blond Man with One Black Shoe? The consequence is that the Netflix dataset is what they call "sparse" - that is, few subscribers have very similar records.

So: how much does someone need to know about you to identify a particular user from the database? It turns out, not much. The key is the public ratings at the Internet Movie Database, which include dates and real names. Narayanan and Shmatikov concluded that 99 percent of records could be uniquely identified from only eight matching ratings (of which two could be wrong); for 68 percent of the records you only need two (and reidentifying the rest becomes easier). And of course, if you know a little bit about the particular person whose record you want to identify things get a lot easier - the three movies I've just listed would probably identify me and a few of my friends.
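
The principle of the attack is simple enough to sketch, though what follows is a toy illustration of the idea rather than the authors' actual algorithm or scoring function:

```python
def candidate_matches(dataset, observations, date_slack=14, allowed_errors=2):
    """dataset: {subscriber_id: {movie: (rating, day_number)}} - the
    "anonymized" release. observations: a few (movie, rating, day_number)
    triples gleaned from public sources such as IMDb reviews posted under a
    real name. Returns the subscribers consistent with the observations; in
    a sparse dataset, a handful of observations usually leaves one candidate."""
    matches = []
    for subscriber, ratings in dataset.items():
        hits = 0
        for movie, rating, day in observations:
            if movie in ratings:
                r, d = ratings[movie]
                if r == rating and abs(d - day) <= date_slack:
                    hits += 1
        if hits >= len(observations) - allowed_errors:
            matches.append(subscriber)
    return matches

# Tiny made-up example: two quirky ratings are enough to single someone out.
dataset = {
    "user_0412": {"King of Hearts": (5, 120), "All That Jazz": (4, 121)},
    "user_9931": {"The Lord of the Rings": (5, 98)},
}
print(candidate_matches(dataset, [("King of Hearts", 5, 118),
                                  ("All That Jazz", 4, 121)], allowed_errors=0))
# ['user_0412']
```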

Even if you don't care if your tastes in movies are private - and both US law and the American Library Association's take on library loan records would protect you more than you yourself would - there are a couple of notable things here. First of all, the compromise last week whereby Google agreed to hand Viacom anonymized data on YouTube users isn't as good a deal for users as they might think. A really dedicated searcher might well think it worth the effort to come up with a way to re-identify the data - and so far rightsholders have shown themselves to be very dedicated indeed.

Second of all, the Thomas-Walport review on data-sharing actually recommends requiring NHS patients to agree to sharing data with medical researchers. There is a blithe assumption running through all the government policies in this area that data can be anonymized, and that as long as they say our privacy is protected it will be. It's a perfect example of what someone this week called "policy-based evidence-making".

Third of all, most such policy in this area assumes it's the past that matters. What may be of greater significance, as Narayanan and Shmatikov point out, is the future: forward privacy. Once a virtual identity has been linked to a real-world identity, that linkage is permanent. Yes, you can create a new virtual identity, but any slip that links it to either your previous virtual or your real-world identity blows your cover.

The point is not that we should all rush to hide our movie ratings. The point is that we make optimistic assumptions every day that the information we post and create has little value and won't come back to bite us on the ass. We do not know what connections will be possible in the future.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

July 4, 2008

The new normal

The (only) good thing about a war is you can tell when it's over.

The problem with the "War on Terror" is that terrorism is always with us, as Liberty's director, Shami Chakrabarti, said yesterday at the Homeland and Border Security 08 conference. "I do think the threat is very serious. But I don't think it can be addressed by a war." Because, "We, the people, will not be able to verify a discernible end."

The idea that "we are at war" has justified so much post 9/11 legislation, from the ID card (in the UK) and Real ID (US) to the continued expansion of police powers.

How long can you live in a state of emergency before emergency becomes the new normal? If there is no end, when do you withdraw the latitude wartime gives a government?

Several of yesterday's speakers talked about preserving "our way of life" while countering the threat with better security. But "our way of life" is a moving target.

For example, Baroness Pauline Neville-Jones, the shadow security minister, talked about the importance of controlling the UK's borders. "Perimeter security is absolutely basic." Her example: you can't go into a building without having your identity checked. But it's not so long ago - within the 18 years I've been living in London - that you could do exactly that, even sometimes in central London. In New York, of course, until 9/11, everything was wide open; these days midtown Manhattan makes you wait in front of barriers while you're photographed, checked, and treated with great suspicion if the person you're visiting doesn't answer the phone.

Only seven years ago, flying did not involve two hours of standing in line. From January, tourists will have to register three days before flying to the US for pre-screening.

It's not clear how much would change with a Conservative government. "There is a very great deal by this government we would continue," said Neville-Jones. But, she said, besides tackling threats, whether motivated (terrorists) or not (floods, earthquakes), "we are also at any given moment in the game of deciding what kind of society we want to have and what values we want to preserve." She wants "sustainable security, predicated on protecting people's freedom and ensuring they have more, not less, control over their lives." And, she said, "While we need protective mechanisms, the surveillance society is not the route down which we should go. It is absolutely fundamental that security and freedom lie together as an objective."

To be sure, Neville-Jones took issue with some of the present government's plans - the Conservatives would not, she said, go ahead with the National Identity Register, and they favour "a more coherent and wide-ranging border security force". The latter would mean bringing together many currently disparate agencies to create a single border strategy. The Conservatives also favour establishing a small "homeland command for the armed forces" within the UK because, "The qualities of the military and the resources they can bring to complex situations are important and useful." At the moment, she said, "We have to make do with whoever happens to be in the country."

OK. So take the four core elements of the national security strategy according to Admiral Lord Alan West, a Parliamentary under-secretary of state at the Home Office: pursue, protect, prepare, and prevent. "Prevent" is the one that all this is about. If we are in wartime, and we know that any measure that's brought in is only temporary, our tolerance for measures that violate the normal principles of democracy is higher.

Are the Olympics wartime? Security is already in the planning stages, although, as Tarique Ghaffur pointed out, the Games are one of several big events in 2012. And some events like sailing and Olympic football will be outside London, as will 600 training camps. Add in the torch relay, and it's national security.

And in that case, we should be watching very closely what gets brought in for the Olympics, because alongside the physical infrastructure that the Games always leave behind - the stadia and transport - may be a security infrastructure that we wouldn't necessarily have chosen for daily life.

As if the proposals in front of us aren't bad enough. Take, for example, the clause of the counterterrorism bill (due for its second reading in the Lords next week) that would allow the authorities to detain suspects for up to 42 days without charge. Chakrabarti lamented the debate over this, which has turned into big media politics.

"The big frustration," she said, "is that alternatives created by sensible, proportionate means of early intervention are being ignored." Instead, she suggested, make the data legally collected by surveillance and interception admissible in fair criminal trials. Charge people with precursor terror offenses so they are properly remanded in custody and continue the investigation for the more serious plot. "That is a way of complying with ancient principles that you should know what you are accused of before being banged up, but it gives the police the time and powers they need."

Not being at war gives us the time to think. We should take it.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

June 27, 2008

Mistakes were made

This week we got the detail on what went wrong at Her Majesty's Revenue and Customs that led to the loss of those two CDs full of the personal details of 25 million British households last year with the release of the Poynter Review (PDF). We also got a hint of how and whether the future might be different with the publication yesterday of Data Handling: Procedures in Government (PDF), written by Sir Gus O'Donnell and commissioned by the Prime Minister after the HMRC loss. The most obvious message of both reports: government needs to secure data better.

The nicest thing the Poynter review said was that HMRC has already made changes in response to its criticisms. Otherwise, it was pretty much a surgical demonstration of "institutional deficiencies".

The chief points:


- Security was not HMRC's top priority.

- HMRC in fact had the technical ability to send only the selection of data that NAO actually needed, but the staff involved didn't know it.

- There was no designated single point of contact between HMRC and NAO.

- HMRC used insecure methods for data storage and transfer.

- The decision to send the CDs to the NAO was taken by junior staff without consulting senior managers - which under HMRC's own rules they should have done.

- The reason HMRC's junior staff did not consult managers was that they believed (wrongly) that NAO had absolute authority to access any and all information HMRC had.

- The HMRC staffer who dispatched the discs incorrectly believed the TNT Post service was secure and traceable, as required by HMRC policy. A different TNT service that met those requirements was in fact available.

- HMRC policies regarding information security and the release of data were not communicated sufficiently through the organization and were not sufficiently detailed.

- HMRC failed on accountability, governance, information security...you name it.

The real problem, though, isn't any single one of these things. If junior staff had consulted senior staff, it might not have mattered that they didn't know what the policies were. If HMRC had used proper information security and secure methods for data storage (that is, encryption rather than simple password protection), junior staff wouldn't have had the access needed to send the discs. If they'd understood TNT's services correctly, the discs wouldn't have gotten lost - or would at least have been traceable if they had.

The real problem was the interlocking effect of all these factors. That, as Nassim Nicholas Taleb might say, was the black swan.

For those who haven't read Taleb's The Black Swan: The Impact of the Highly Improbable, the black swan stands for the event that is completely unpredictable - because, like black swans until one was spotted in Australia, no such thing has ever been seen - until it happens. Of course, data loss is pretty much a white swan; we've seen lots of data breaches. The black swan, really, is the perfectly secure system that is still sufficiently open for the people who need to use it.

That challenge is what O'Donnell's report on data handling is about and, as he notes, it's going to get harder rather than easier. He recommends a complete rearrangement of how departments manage information as well as improving the systems within individual departments. He also recommends greater openness about how the government secures data.

"No organisation can guarantee it will never lose data," he writes, "and the Government is no exception." O'Donnell goes on to consider how data should be protected and managed, not whether it should be collected or shared in the first place. That job is being left for yet another report in progress, due soon.

It's good to read that some good is coming out of the HMRC data loss: all departments are, according to the O'Donnell report, reviewing their data practices and beginning the process of cultural change. That can only be a good thing.

But the underlying problem is outside the scope of these reports, and it's this government's fondness for creating giant databases: the National Identity Register, ContactPoint, the DNA database, and so on. If the government really accepted the principle that it is impossible to guarantee complete data security, what would they do? Logically, they ought to start by cancelling the data behemoths on the understanding that it's a bad idea to base public policy on the idea that you can will a black swan into existence.

It would make more sense to create a design for government use of data that assumes there will be data breaches and attempts to limit the adverse consequences for the individuals whose data is lost. If my privacy is compromised alongside 50 million other people's and I am the victim of identity theft, does it help me that the government department that lost the data knows which staff member to blame?

As Agatha Christie said long ago in one of her 80-plus books, "I know to err is human, but human error is nothing compared to what a computer can do if it tries." The man-machine combination is even worse. We should stop trying to breed black swans and instead devise systems that don't create so many white ones.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

June 13, 2008

Naked in plain sight

I couldn't have been more embarrassed than if the tall guy carrying a laptop had just told me I was wearing a wet T-shirt.

There I was, sitting in the Queen's club international press room. And there he was, the only other possessor of a red laptop in the entire building, showing me a screen full of a hotel reservation from a couple of months back, in full detail. With my name and address on it.

"If I can see it," he said in that maddening you-must-be-an-idiot IT security guy way, "so can everyone else."

DUH.

I took that laptop to Defcon!

(And nothing bad happened. That I know of. Yet.)

Despite the many Guardian readers who are convinced that I am technically incompetent because I've written pieces in which it seemed more entertaining to pretend to be so for dramatic effect, I am not an idiot. I'm not even technically incompetent, or not completely so. I am just, like most people, busy, and, like most people, the problem most of the time is to get my computers to work, not to stop them from working. And I fall into that shadowland of people who know just enough to want to run their computers their way but not enough to understand all the ramifications of what they're doing.

So, for example: file shares (not file-sharing, a different kettle of worms entirely). What you are meant to do, because you are an ignorant and brain-challenged consumer, is drop any files you need to share on the network into the Shared Documents folder. While it's no more secure than any other folder (and its name is eminently guessable by outside experts), the fact that you have to knowingly put files in it means that very little of your system is exposed.

I, of course, am far too grand (and perverse) to put up with Microsoft telling me how to organize my system, so of course I don't do things that way. Instead, I share specific directories using a structure I devised myself that is the same on all my machines. That's where I fouled up, of course. That laptop runs XP, and in XP, as I suppose I am the last to notice, the default settings have what's known as "simple file-sharing" turned on, so that if you share a directory it's basically open to all comers. XP warns you you're doing something risky; what it doesn't do is tell you in a simple way how to reduce the risk.

Yes, I tried to read the help files. They're impenetrable. Help files, like most of the rest of computing, separate into two types: either they're written for the completely naïve user, or they're written for the professional system administrator. Despite the fact that people like me are a growing class of users, we have to learn this stuff behind the bicycle shed from people randomly selected via Google.

This is what it should have said. Do one of the following two things: either set permissions so that only those users who have passwords on your system can access this directory, or stick a $ sign at the end of the directory name to make it hidden. If you do the latter, you will have to map the directory as a network drive on all the machines that want to use it. (I note that they seem to have improved things in Vista, which I will no doubt start using sometime around 2012.) I know Apple probably does this better and Linux is secured out the wazoo, but that's not the point: the point is that it's incredibly easy for moderately knowledgeable users to leave their systems with gaping wide open holes. What I would have liked them to do is offer me the option to view how my system looks to someone connecting from outside with no authentication. I feel sure this could be done.

The problem for Microsoft on this kind of thing is the same problem that afflicts everyone trying to do IT security: everything you do to make the system more secure makes it harder for users to make things work. In the case of the file shares, as long as your computer is at home sitting behind the kind of firewalled router the big ISPs supply, it's more important to grant access to other household members than it is to worry about outsiders. It's when you take that laptop out of the house...and the really awkward thing is that there isn't any really easy way to test for open shares within your own network if, like many people, you tend to use the same login ID and password on all your machines for simplicity's sake. Do friends let friends drive open shares?
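
The closest I can offer is a rough first pass: see which machines on your network even answer on the Windows file-sharing ports. That says nothing about whether any shares on them are actually open - for that you'd still need a proper share-enumeration tool - but it narrows down where to look. A sketch in standard-library Python; adjust the address range for your own network:

```python
import socket

def hosts_with_smb(prefix="192.168.1.", ports=(139, 445), timeout=0.3):
    """Return addresses in a /24 that accept connections on the SMB ports.
    Answering the port is not the same as exposing an open share, but it
    tells you which machines are worth a closer look."""
    found = []
    for i in range(1, 255):
        host = f"{prefix}{i}"
        for port in ports:
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    found.append(host)
                    break
            except OSError:
                continue
    return found

if __name__ == "__main__":
    print(hosts_with_smb())
```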

The security guys (really, the wi-fi suppliers and tech support), who were only looking around the network for open shares because they were bored, had a good laugh, especially when I told them who I write for (latest addition to the list: Infosecurity magazine!). And they obligingly produced some statistics. Out of the 60 to 100 journalists in the building using the wireless, three had open shares. One, they said, was way more embarrassing than mine, though they declined to elaborate. I think they were just being nice.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

May 23, 2008

The haystack conundrum

Early this week the news broke that the Home Office wants to create a giant database in which will be stored details of all communications sent in Britain. In other words, instead of data retention, in which ISPs, telephone companies, and other service providers would hang onto communications data for a year or seven in case the Home Office wanted it, everything would stream to a Home Office data center in real time. We'll call it data swallowing.

Those with long memories - who seem few and far between in the national media covering this sort of subject - will remember that in about 1999 or 2000 there was a similar rumor. In the resulting outraged media coverage it was more or less thoroughly denied and nothing had been heard of it since, though privacy advocates continued to suspect that somewhere in the back of a drawer the scheme lurked, dormant, like one of those just-add-water Martians you find in the old Bugs Bunny cartoons. And now here it is again in another leak that the suspicious veteran watcher of Yes, Minister might think was an attempt to test public opinion. The fact that it's been mooted before makes it seem so much more likely that they're actually serious.

This proposal is not only expensive, complicated, slow, and controversial/courageous (Yes, Minister's Fab Four deterrents), but risk-laden, badly conceived, disproportionate, and foolish. Such a database will not catch terrorists, because given the volume of data involved trying to use it to spot any one would-be evil-doer will be the rough equivalent of searching for an iron filing in a haystack the size of a planet. It will, however, make it possible for anyone trawling the database to make any given individual's life thoroughly miserable. That's so disproportionate it's a divide-by-zero error.

The risks ought to be obvious: this is a government that can't keep track of the personal details of 25 million households, which fit on a couple of CDs. Devise all the rules and processes you want, the bigger the database the harder it will be to secure. Besides personal information, the giant communications database would include businesses' communication information, much of it likely to be commercially sensitive. It's pretty good going to come up with a proposal that equally offends civil liberties activists and businesses.

In a short summary of the proposed legislation, we find this justification: "Unless the legislation is updated to reflect these changes, the ability of public authorities to carry out their crime prevention and public safety duties and to counter these threats will be undermined."

Sound familiar? It should. It's the exact same justification we heard in the late 1990s for requiring key escrow as part of the nascent Regulation of Investigatory Powers Act. The idea there was that if the use of strong cryptography to protect communications became widespread law enforcement and security services would be unable to read the content of the messages and phone calls they intercepted. This argument was fiercely rejected at the time, and key escrow was eventually dropped in favor of requiring the subjects of investigation to hand over their keys under specified circumstances.

There is much, much less logic to claiming that police can't do their jobs without real-time copies of all communications. Here we have real analogies: postal mail, which has been with us since 1660. Do we require copies of all letters that pass through the post office to be deposited with the security services? Do we require the Royal Mail's automated sorting equipment to log all address data?

Sanity has never intervened in this government's plans to create more and more tools for surveillance. Take CCTV. Recent studies show that despite the millions of pounds spent on deploying thousands of cameras all over the UK, they don't cut crime, and, more important, the images help solve crime in only 3 percent of cases. But you know the response to this news will not be to remove the cameras or stop adding to their number. No, the thinking will be like the scheme I once heard for selling harmless but ineffective alternative medical treatments, in which the answer to all outcomes is more treatment. (Patient gets better - treatment did it. Patient stays the same - treatment has halted the downward course of the disease. Patient gets worse - treatment came too late.)

This week at Computers, Freedom, and Privacy, I heard about the Electronic Privacy Information Center's work on fusion centers, relatively new US government efforts to mine many commercial and public sources of data. EPIC is trying to establish the role of federal agencies in funding and controlling these centers, but it's hard going.

What do these governments imagine they're going to be able to do with all this data? Is the fantasy that agents will be able to sit in a control room somewhere and survey it all on some kind of giant map on which criminals will pop up in red, ready to be caught? They had data before 9/11 and failed to collate and interpret it.

Iron filing; haystack; lack of a really good magnet.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

May 9, 2008

Swings and roundabouts

There was a wonderful cartoon that cycled frequently around computer science departments in the pre-Internet 1970s - I still have my paper copy - that graphically illustrated the process by which IT systems get specified, designed, and built, and showed precisely why and how far they failed the user's inner image of what it was going to be. There is a scan here. The senior analyst wanted to make sure no one could possibly get hurt; the sponsor wanted a pretty design; the programmers, confused by contradictory input, wrote something that didn't work; and the installation was hideously broken.

Translate this into the UK's national ID card. Consumers, Sir James Crosby wrote in March (PDF), want identity assurance. That is, they - or rather, we - want to know that we're dealing with our real bank rather than a fraud. We want to know that the thief rooting through our garbage can't use any details he finds on discarded utility bills to impersonate us, change our address with our bank, clean out our accounts, and take out 23 new credit cards in our name before embarking on a wild spending spree leaving us to foot the bill. And we want to know that if all that ghastliness happens to us we will have an accessible and manageable way to fix it.

We want to swing lazily on the old tire and enjoy the view.

We are the users with the seemingly simple but in reality unobtainable fantasy.

The government, however - the project sponsor - wants the three-tiered design that barely works because of all the additional elements in the design but looks incredibly impressive. ("Be the envy of other major governments," I feel sure the project brochure says.) In the government's view, they are the users and we are the database objects.

Crosby nails this gap when he draws the distinction between ID assurance and ID management:

The expression 'ID management' suggests data sharing and database consolidation, concepts which principally serve the interests of the owner of the database, for example, the Government or the banks. Whereas we think of "ID assurance" as a consumer-led concept, a process that meets an important consumer need without necessarily providing any spin-off benefits to the owner of any database.

This distinction is fundamental. An ID system built primarily to deliver high levels of assurance for consumers and to command their trust has little in common with one inspired mainly by the ambitions of its owner. In the case of the former, consumers will extend use both across the population and in terms of applications such as travel and banking. While almost inevitably the opposite is true for systems principally designed to save costs and to transfer or share data.

As writer and software engineer Ellen Ullman wrote in her book Close to the Machine, databases infect their owners, who may start with good intentions but are ineluctably drawn to surveillance.

So far, the government pushing the ID card seems to believe that it can impose anything it likes and if it means the tree collapses with the user on the swing, well, that's something that can be ironed out later. Crosby, however, points out that for the scheme to achieve any of the government's national security goals it must get mass take-up. "Thus," he writes, "even the achievement of security objectives relies on consumers' active participation."

This week, a similarly damning assessment of the scheme was released by the Independent Scheme Assurance Panel (PDF) (you may find it easier to read this clean translation - scroll down to policywatcher's May 8 posting). The gist: the government is completely incompetent at handling data, and creating massive databases will, as a result, destroy public trust in it and all its systems.

Of course, the government is in a position to compel registration, as it's begun doing with groups who can't argue back, like foreigners, and proposes doing for employees in "sensitive roles or locations, such as airports". But one of the key indicators of how little its scheme has to do with the actual needs and desires of the public is the list of questions it's asking in the current consultation on ID cards, which focus almost entirely on how to get people to love, or at least apply for, the card. To be sure, the consultation document pays lip service to accepting comments on any ID card-related topic, but the consultation is specifically about the "delivery scheme".

This is the kind of consultation where we're really damned if we do and damned if we don't. Submit comments on, for example, how best to "encourage" young people to sign up ("Views are invited particularly from young people on the best way of rolling out identity cards to them") - without saying how little you like the government asking how best to market its unloved policy to vulnerable groups - and when the responses are eventually released the government can say there are now no objectors to the scheme. Submit comments to the effect that the whole National Identity scheme is poorly conceived and inappropriate, and anything else you say is likely to be ignored on the grounds that they've heard all that before and it's irrelevant to the present consultation. Comments are due by June 30.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 28, 2008

Leaving Las Vegas

Las Vegas shouldn't exist. Who drops a sprawling display of electric lights with huge fountains and luxury hotels into the best desert scenery on the planet during an energy crisis? Indoors, it's Britain in mid-winter; outdoors you're standing in a giant exhaust fan. The out-of-proportion scale means that everything is four times as far away as you think, including the jackpot you're not going to win at one of its casinos. It's a great place to visit if you enjoy wallowing in self-righteous disapproval.

This all makes it the stuff of song, story, and legend and explains why Jeff Jonas's presentation at etech was packed.

The way Jonas tells it in his blog and at his presentation, he got into the gaming industry by driving through Las Vegas in 1989 idly wondering what was going on behind the scenes at the casinos. A year later he got the tiny beginnings of an answer when he picked up a used couch he'd found in the newspaper classified ads (boy, that dates it, doesn't it?) and found that its former owner played blackjack "for a living". Jonas began consulting to the gaming industry in 1991, helping to open Treasure Island, Bellagio, and Wynn.

"Possibly half the casinos in the world use technology we created," he said at etech.

Gaming revenues are now less than half of total revenues, he said, and despite the apparent financial win they might represent, problem gamblers are in fact bad for business. The goal is for people to have fun. And because of that, he said, a place like the Bellagio is "optimized for consumer experience over interference. They don't want to spend money on surveillance."

Jonas began with a slide listing some common ideas about how Las Vegas works, culled from movies like Ocean's 11 and the TV show Las Vegas. Does the Bellagio have a vault? (No.) Do casinos perform background checks on guests based on public records? (No.) Is there a gaming industry watch list you can put yourself on but not take yourself off? (Yes, for people who know they have a gambling addiction.) Do casinos deliberately hire ex-felons? (Yes, to rehabilitate them.) Do they really send private jets for high rollers? (Cue story.)

There was, he said, a casino high roller who had won some $18 million. A win like that is going to show up in a casino's quarterly earnings. So, yes, they sent a private jet to his town and parked a limo in front of his house for the weekend. If you've got the bug, we're here for you, that kind of thing. He took the bait, and lost $22 million.

Do they help you create cover stories? (Yes.) "What happens in Vegas stays in Vegas" is an important part of ensuring that people can have fun that does not come back to bite them when they go home. The casinos' problem is with identity, not disguises, because they are required by anti-money laundering rules to report it any time someone crosses the $10,000 threshold for cash transactions. So if you play at several different tables, then go upstairs and change disguises, and come back and play some more, they have to be able to track you through all that. ID, therefore, is extremely important. Disguises are welcome; fake ID is not.

Do they use facial recognition to monitor the doors to spot cheaters on arrival? (Well...)

Of course technology-that-is-indistinguishable-from-magic-because-it-actually-is-magic appears on every crime-solving TV show these days. You know, the stuff where Our Heroes start with a fuzzy CCTV image and they punch in on a tiny piece of it and blow it up. And then someone says, "Can you enhance that?" and someone else says, "Oh, yes, we have new software," and a second later a line goes down the picture filling in detail. And a second after that you can read the brand on the face of a wrist watch (Numb3rs) or the manufacturer's coding on a couple of pills (Las Vegas). Or they have a perfect matching system that can take a partial fingerprint lifted off a strand of hair or something and bang! the database can find not only the person's identity but their current home address and phone number (Bones). And who can ever forget the first episode of 24, when Jack Bauer, alarmed at the disappearance of his daughter, tosses his phone number to an underling and barks, "Find me all the Internet passwords associated with this phone number."

And yet...a surprising number of what ought to be the technically best-educated audience on the planet thought facial recognition was in operation to catch cheaters. Folks, it doesn't work in airports, either.

Which is the most interesting thing Jonas said: he now works for IBM (which bought his company) on privacy and civil liberties issues, including work on software to help the US government spot terrorists without invading privacy. It's an interesting concept, partly because security at airports and other locations is now so invasive. But also because if Las Vegas can find a way to deploy surveillance such that only the egregious problems are caught and everyone else just has a good time...why can't governments?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

February 29, 2008

Phormal ware

In the last ten days or so a stormlet has broken out about the announcement that BT, Carphone Warehouse, and TalkTalk, who jointly cover about 70 percent of British Internet subscribers, have signed up for a new advertising service. The supplier, Phorm (previously, 121Media), has developed Open Internet Exchange (OIX), a platform to serve up "relevant" ads to ISPs' customers. Ad agencies and Web sites also sign up to the service which, according to Phorm's FAQ, can serve up ads to any Web site "in the regular places the website shows ads". Partners include most British national newspapers, iVillage, and MGM OMD.

A brief chat with BT revealed that the service, known to consumers as Webwise, will apply only to BT's retail customers, not its wholesale division. Consumers will be able to opt out, and BT is planning an educational exercise to explain the service.

Obviously all concerned hope Webwise will be acceptable to consumers, but to make it a little more palatable, not opting out of it gets you warnings if you land on suspected phishing sites. I don't think improved security should, ethically, be tied to a person's ad-friendliness, but this is the world we live in.

"We've done extensive research with our customer base," says BT's spokesman, "and it's very clear that when customers know what is happening they're overwhelmingly in favor of it, particularly in terms of added security."

But the Net folk are suspicious folk, and words like "spyware" and "adware" are circling, partly because Phorm's precursor, 121Media, was blocked by Symantec and F-Secure as spyware. Plus, The Register discovered that BT had been sharing data with Phorm as long ago as last summer, and, apparently, lying about it.

Phorm's PR did not reply to a request for an interview, but a spokeswoman contacted briefly last week defended the company. "We are absolutely not and in no way an adware product at all."

The overlooked aspect: Phorm called in Privacy International's new commercial arm, 80/20, to examine its system.

PI's executive director, Simon Davies, one of the examiners, says, "Phorm has done its very best to eliminate and minimise the use of personal information and build privacy into the core of the technology. In that sense, it's a privacy-friendly technology, but that does not get us away from the intrusion aspect." In general, the principle is that ads shouldn't be served on an opt-out basis; users should have to opt in to receive them.

Tailoring advertising to the clickstream of user interests is of course endemic online now; it's how Google does AdSense, and it's why that company bought DoubleClick, which more or less invented the business of building up user profiles to create personalized ads. Phorm's service, however, does not build user profiles.

A cookie with a unique ID is stored on the user's system - but does not associate that ID with an individual or the computer it's stored on. Say you're browsing car sites like Ford and Nissan. The ISP does not give Phorm personally identifiable information like IP addresses, but does share the information that the computer this cookie is on is looking at car sites right now. OIX serves up car ads. The service ignores niche sites, secure sites (HTTPS), and low-traffic sites. Firewalling between Phorm and the ISP means that the ISP doesn't know and can't deduce the information that the OIX platform knows about what ads are being served. Nothing is stored to create a profile. What Phorm offers advertisers instead is the knowledge that they are serving ads that reflect users' interests in real time.
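
To make the distinction concrete, here is my own toy illustration of the idea being described - categorize the page being viewed right now, serve a matching ad, store nothing afterwards. It is emphatically not Phorm's actual code; every name in it is made up:

```python
# A toy version of "relevance without a profile": the only state is an
# opaque cookie ID; the category comes from the page being viewed right now,
# and nothing about the user is written down afterwards.

CATEGORY_KEYWORDS = {
    "cars":   ["ford", "nissan", "mpg", "hatchback"],
    "travel": ["flight", "hotel", "holiday"],
}

ADS = {
    "cars":   "Ad: 0% finance on new hatchbacks",
    "travel": "Ad: weekend city breaks from £99",
    None:     "Ad: generic house ad",
}

def categorize(page_text):
    words = page_text.lower().split()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in words for k in keywords):
            return category
    return None

def serve_ad(cookie_id, page_text):
    # cookie_id identifies a browser, not a person, and is not stored here
    return ADS[categorize(page_text)]

print(serve_ad("a1b2c3", "Review: the new Nissan hatchback does 55 mpg"))
```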

The difference to Davies is that Google, which came last in Privacy International's privacy rankings, stores search histories and browsing data and ties them to personal identifiers, primarily login IDs and IP addresses. (Next month, the Article 29 Group will report its opinion as to whether IP addresses are personal information, so we will know better then which way the cookie crumbles.)

"The potential to develop a profile covertly is extremely limited, if not eliminated," says Davies.

Phorm itself says, "We really think what our stuff does dispels the myth that in order to provide relevance you have to store data."

I hate advertising as much as the next six people. But most ISPs are operating on razor-thin margins if they make money at all, and they're looking at continuously increasing demand for bandwidth. That demand can only get worse as consumers flock to the iPlayer and other sources of streaming video. The pressure on pricing is steadily downward with people like TalkTalk and O2 offering free or extremely cheap broadband as an add-on to mobile phone accounts. Meanwhile, the advertising revenues go to everyone but them. Is it surprising that they'd leap at this? Analysts estimate that BT will pick up £85 million in the first year. Nice if you can get it.

We all want low-cost broadband and free content. None of us wants ads. How exactly do we propose all this free stuff is going to be paid for?

As for Phorm, it's going to take a lot to make some users trust them. I'd say, though, that the jury is still out. Sometimes people do learn from past mistakes.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

February 8, 2008

If you have ID cards, drink alcohol


One of the key identifiers of an addiction is that indulgence in it persists long after all the reasons for doing it have turned from good to bad.

A sobered-up Scottish alcoholic once told me the following exemplar of alcoholic thinking. A professor is lecturing to a class of alcoholics on the evils of drinking. To make his point, he takes two glasses, one filled with water, the other with alcohol. Into each glass he drops a live worm. The worm in the glass of water lives; the worm in the glass of alcohol dies.

"What," the professor asks, "can we learn from this?"

One of the alcoholics raises his hand. "If you have worms, drink alcohol."

In alcoholic thinking, of course, there is no circumstance in which the answer isn't "Drink alcohol."

So, too, with the ID card. The purpose as mooted between 2001 and 2004 was preventing benefit fraud and making life more convenient for UK citizens and residents. The plan promised perfect identification via the combination of a clean database (the National Identity Register) and biometrics (fingerprints and iris scans). The consultation document made a show of suggesting the cheaper alternative of a paper card with minimal data collection, but it was clear what they really wanted: the big, fancy stuff that would make them the envy of other major governments.

Opponents warned of the UK's poor track record with large IT projects, the privacy-invasiveness, and the huge amount such a system was likely to cost. Government estimates, now at £5.4 billion, have been slowly rising to meet Privacy International's original estimate of £6 billion.

By 2006, when the necessary legislation was passed, the government had abandoned the friendly "entitlement card" language and was calling it a national ID card. By then, also, the case had changed: less entitlement, more crime prevention.

It's 2008, and the wheels seem to be coming off. The government's original contention that the population really wanted ID cards has been shredded by the leaked documents of the last few weeks. In these, it's clear that the government knows the only way it will get people to adopt the ID card is by coercion, starting with the groups who are least able to protest by refusal: young people and foreigners.

Almost every element deemed important in the original proposal is now gone - the clean database populated through interviews and careful documentation (now the repurposed Department of Work and Pensions database); the iris scans (discarded); probably the fingerprints (too expensive except for foreigners). The one element that for sure remains is the one the government denied from the start: compulsion.

The government was always open about its intention for non-registration to become increasingly uncomfortable and eventually to make registration compulsory. But if the card is coming at least two years later than they intended, compulsion is ahead of schedule.

Of course, we've always maintained that the key to the project is the database, not the card. It's an indicator of just how much of a mess the project is that the Register, the heart of the system, was first to be scaled back because of its infeasibility. (I mean, really, guys. Interview and background-check the documentation of every one of 60 million people in any sort of reasonable time scale?)

The project is even fading in popularity with the very vendors who want to make money supplying the IT for it. How can you specify a system whose stated goals keep changing?

The late humorist and playwright Jean Kerr (probably now best known for her collection of pieces about raising five boys with her drama critic husband in a wacky old house in Larchmont, NY, Please Don't Eat the Daisies) once wrote a piece about the trials and tribulations of slogging through the out-of-town openings of one of her plays. In these pre-Broadway trial runs, lines get cut and revised; performances get reshaped and tightened. If the play is in trouble, the playwright gets no sleep for weeks. And then, she wrote, one day you look up at the stage, and, yes, the play is much better, and the performances are much better, and the audience seems to be having a good time. And yet - the play you're seeing on the stage isn't the play you had in mind at all.

It's one thing to reach that point in a project and retain enough perspective to be honest about it. It may be bad - but it isn't insane - to say, "Well, this play isn't what I had in mind, but you know, the audience is having a good time, and it will pay me enough to go away and try again."

But if you reach the point where the project you're pushing ahead clearly isn't any more the project you had in mind and sold hard, and yet you continue to pretend to yourself and everyone else that it is - then you have the kind of insanity problem where you're eating worms in order to prove you're not an alcoholic.

The honorable thing for the British government to do now is say, "Well, folks, we were wrong. Our opponents were right: the system we had in mind is too complicated, too expensive, and too unpopular because of its privacy-invasiveness. We will think again." Apparently they're so far gone that eating worms looks more sensible.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 23, 2007

Road block

There are many ways for a computer system to fail. This week's disclosure that Her Majesty's Revenue and Customs has played lost-in-the-post with two CDs holding the nation's Child Benefit data is one of the stranger ones. The Child Benefit database includes names, addresses, identifying numbers, and often bank details, on all the UK's 25 million families with a child under 16. The National Audit Office requested a subset for its routine audit; the HMRC sent the entire database off by TNT post.

There are so many things wrong with this picture that it would take a village of late-night talk show hosts to make fun of them all. But the bottom line is this: when the system was developed no one included privacy or security in the specification or thought about the fundamental change in the nature of information when paper-based records are transmogrified into electronic data. The access limitations inherent in physical storage media must be painstakingly recreated in computer systems or they do not exist. The problem with security is it tends to be inconvenient.

With paper records, the more data you provide the more expensive and time-consuming it is. With computer records, the more data you provide the cheaper and quicker it is. The NAO's file of email relating to the incident (PDF) makes this clear. What the NAO wanted (so it could check that the right people got the right benefit payments): national insurance numbers, names, and benefit numbers. What it got: everything. If the discs hadn't gotten lost, we would never have known.

Ironically enough, this week in London also saw at least three conferences on various aspects of managing digital identity: Digital Identity Forum, A Fine Balance, and Identity Matters. All these events featured the kinds of experts the UK government has been ignoring in its mad rush to create and collect more and more data. The workshop on road pricing and transport systems at the second of them, however, was particularly instructive. Led by science advisor Brian Collins, the most notable thing about this workshop is that the 15 or 20 participants couldn't agree on a single aspect of such a system.

Would it run on GPS or GSM/GPRS? Who or what is charged, the car or the driver? Do all roads cost the same or do we use differential pricing to push traffic onto less crowded routes? Most important, is the goal to raise revenue, reduce congestion, protect the environment, or rebalance the cost of motoring so the people who drive the most pay the most? The more purposes the system is intended to serve, the more complicated and expensive it will become, and the less likely it is to answer any of those goals successfully. This point has of course also been made about the National ID card by the same sort of people who have warned about the security issues inherent in large databases such as the Child Benefit database. But it's clearer when you start talking about something as limited as road charging.

For example: if you want to tag the car you would probably choose a dashboard-top box that uses GPS data to track the car's location. It will have to store and communicate location data to some kind of central server, which will use it to create a bill. The data will have to be stored for at least a few billing cycles in case of disputes. Security services and insurers alike would love to have copies. On the other hand, if you want to tag the driver it might be simpler just to tie the whole thing to a mobile phone. The phone networks are already set up to do hand-off between nodes, and tracking the driver might also let you charge passengers, or might let you give full cars a discount.
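
As a rough illustration of why the tag-the-car option creates a sensitive data store, here is a hypothetical Python sketch (the field names, tariffs, and charging rule are my assumptions, not any proposed scheme) of the movement records an on-board unit would have to report and a central server would have to retain in order to produce, and later defend, a bill.

```python
# Hypothetical sketch of the movement records a GPS-based road-pricing unit
# would have to report to a central billing server. Field names and tariffs
# are illustrative assumptions, not any proposed scheme.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TripSegment:
    vehicle_id: str   # identifies the car (or, in the phone variant, the driver)
    start: datetime
    end: datetime
    road: str         # differential pricing means the road itself is recorded
    miles: float

TARIFF_PENCE_PER_MILE = {"motorway": 2.0, "a_road": 5.0, "urban": 12.0}

def monthly_bill(segments):
    """Central-server step: turn a month of movement records into a bill.
    The same records that price the journey also describe where the car was,
    which is why they would have to be retained for billing disputes."""
    return sum(TARIFF_PENCE_PER_MILE.get(s.road, 5.0) * s.miles for s in segments)

segments = [
    TripSegment("ABC 123D", datetime(2007, 11, 1, 8, 5),
                datetime(2007, 11, 1, 8, 40), "urban", 6.2),
]
print(f"{monthly_bill(segments):.1f}p")  # 74.4p for one rush-hour trip
```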

The problem is that the discussion is coming from the wrong angle. We should not be saying, "Here is a clever technological idea. Oh, look, it makes data! What shall we do with it?" We should be defining the problem and considering alternative solutions. The people who drive most already pay most via the fuel pump. If we want people to drive less, maybe we should improve public transport instead. If we're trying to reduce congestion, getting employers to be more flexible about working hours and telecommuting would be cheaper, provide greater returns, and, crucially for this discussion, not create a large database system that can be used to track the population's movements.

(Besides, said one of the workshop's participants: "We live with the congestion and are hugely productive. So why tamper with it?")

It is characteristic of our age that the favored solution is the one that creates the most data and the biggest privacy risk. No one in the cluster of organisations opposing the ID card - No2ID, Privacy International, Foundation for Information Policy Research, or Open Rights Group - wanted an incident like this week's to happen. But it is exactly what they have been warning about: large data stores carry large risks that are poorly understood, and it is not enough for politicians to wave their hands and say we can trust them. Information may want to be free, but data want to leak.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 9, 2007

Watching you watching me

A few months ago, a neighbour phoned me and asked if I'd be willing to position a camera on my windowsill. I live at the end of a small dead-end street (or cul-de-sac) that ends in a wall about shoulder height. The railway runs along the far side of the wall, and parallel to it and further away is a long street with a row of houses facing the railway. The owners of those houses get upset because graffiti keeps appearing alongside the railway where they can see it and covers flat surfaces such as the side wall of my house. The theory is that kids jump over the wall at the end of my street, just below my office window, either to access the railway and spray paint or to escape after having done so. Therefore, the camera: point it at the wall and watch to see what happens.

The often-quoted number of times the average Londoner is caught on camera per day is scary: 200. (And that was a few years ago; it's probably gone up.) My street is actually one of those few that doesn't have cameras on it. I don't really care about the graffiti; I do, however, prefer to be on good terms with neighbours, even if they're all the way across the tracks. I also do see that it makes sense at least to try to establish whether the wall downstairs is being used as a hurdle in the getaway process. What is the right, privacy-conscious response to make?

I was reminded of this a few days ago when I was handed a copy of Privacy in Camera Networks: A Technical Perspective, a paper published at the end of July. (We at net.wars are nothing if not up-to-date.)

Given the amount of money being spent on CCTV systems, it's absurd how little research there is covering their efficacy, their social impact, or the privacy issues they raise. In this paper, the quartet of authors – Marci Lenore Meingast (UC Berkeley), Sameer Pai (Cornell), Stephen Wicker (Cornell), and Shankar Sastry (UC Berkeley) – are primarily concerned with privacy. They ask a question every democratic government deploying these things should have asked in the first place: how can the camera networks be designed to preserve privacy? For the purposes of preventing crime or terrorism, you don't need to know the identity of the person in the picture. All you want to know is whether that person is pulling out a gun or planting a bomb. For solving crimes after the fact, of course, you want to be able to identify people – but most people would vastly prefer that crimes were prevented, not solved.

The paper cites model legislation (PDF) drawn up by the Constitution Project. Reading it is depressing: so many of the principles in it are such logical, even obvious, derivatives of the principles that democratic governments are supposed to espouse. And yet I can't remember any public discussion of the idea that, for example, all CCTV systems should be accompanied by identification of and contact information for the owner. "These premises are protected by CCTV" signs are everywhere; but they are all anonymous.

Even more depressing is the suggestion that the proposals for all public video surveillance systems should specify what legitimate law enforcement purpose they are intended to achieve and provide a privacy impact assessment. I can't ever remember seeing any of those either. In my own local area, installing CCTV is something politicians boast about when they're seeking (re)election. Look! More cameras! The assumption is that more cameras equals more safety, but evidence to support this presumption is never provided and no one, neither opposing politicians nor local journalists, ever mounts a challenge. I guess we're supposed to think that they care about us because they're spending the money.

The main intention of Meingast, Pai, et al, however, is to look at the technical ways such networks can be built to preserve privacy. They suggest, for example, collecting public input via the Internet (using codes to identify the respondents on whom the cameras will have the greatest impact). They propose an auditing system whereby these systems and their usage are reviewed. As the video streams become digital, they suggest using layers of abstraction of the resulting data to limit what can be identified in a given image. "Information not pertinent to the task in hand," they write hopefully, "can be abstracted out leaving only the necessary information in the image." They go on into more detail about this, along with a lengthy discussion of facial recognition.
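
Their layers-of-abstraction idea can be pictured with a small hypothetical Python sketch (the fields and the single layer shown are my invention, not the authors' implementation) in which a crime-prevention feed keeps only the time, place, and action of an event and drops the identifying data altogether.

```python
# Hypothetical sketch of abstracting a camera detection record into layers,
# in the spirit of the paper's proposal; fields and layers are assumptions.

def full_record(frame_id, timestamp, location, action, face_crop):
    """Everything the camera could capture about one event."""
    return {
        "frame": frame_id,
        "time": timestamp,
        "location": location,
        "action": action,     # e.g. "person climbing wall"
        "face": face_crop,    # identifying data
    }

def abstract_for_prevention(record):
    """Crime-prevention layer: keep only what is needed to raise an alert.
    Identity ("face") and the raw frame reference are abstracted out."""
    return {key: record[key] for key in ("time", "location", "action")}

event = full_record(42, "2007-07-30T23:10", "wall at end of street",
                    "person climbing wall", face_crop=b"\x89PNG...")
print(abstract_for_prevention(event))
```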

The most depressing thing of all: none of this will ever happen, and for two reasons. First, no government seems to have the slightest qualm of conscience about installing surveillance systems. Second, the mass populace don't seem to care enough to demand these sorts of protections. If these protections are to be put in place at all, it must be done by technologists. They must design these systems so that it's easier to use them in privacy-protecting ways than to use them in privacy-invasive ways. What are the odds?

As for the camera on my windowsill, I told my neighbour after some thought that they could have it there for a maximum of a couple of weeks to establish whether the end of my street was actually being used as an escape route. She said something about getting back to me when something or other happened. Never heard any more about it. As far as I am aware, my street is still unsurveilled.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 12, 2007

The permission-based society

It was Edward Hasbrouck who drew my attention to a bit of rulemaking being proposed by the Transportation Security Administration. Under current rules, if you want to travel on a plane out of, around, into, or over the US you buy a ticket and show up at the airport, where the airline compares your name and other corroborative details to the no-fly list the TSA maintains. Assuming you're allowed onto the flight, unbeknownst to you, all this information has to be sent to the TSA within 15 minutes of takeoff (before takeoff if it's a US domestic flight; after, if it's an international flight heading for the US).

Under the new rules, the information will have to arrive at the TSA 72 hours before the flight takes off – after all, most people have finalised their travel plans by that time, and only 7 to 10 percent of itineraries change after that – and the TSA has to send back an OK to the airline before you can be issued a boarding pass.

There's a whole lot more detail in the Notice of Proposed Rulemaking, but that's the gist. (They'll be accepting comments until October 22, if you would like to say anything about these proposals before they're finalised.)

There are lots of negative things to say about these proposals – the logistical difficulties for the travel industry, the inadequacy of the mathematical model behind this (which at the public hearing the ACLU's Barry Steinhardt compared to trying to find a needle in a haystack by pouring more hay on the stack), and the privacy invasiveness inherent in having the airlines collect the many pieces of data the government wants and, not unnaturally, retaining copies while forwarding it on to the TSA. But let's concentrate on one: the profound alteration such a scheme will make to American society at large. The default answer to the question of whether you had the right to travel anywhere, certainly within the confines of the US, has always been "Yes". These rules will change it to "No".

(The right to travel overseas has, at times, been more fraught. The folk scene, for example, can cite several examples of musicians who were denied passports by the US State Department in the 1950s and early 1960s because of their left-wing political beliefs. It's not really clear to me why the US wanted to keep people whose views it disapproved of within its borders but some rather hasty marriages took place in order to solve some of these immigration problems, though everyone's friends again now and it's fresh passports all round.)

Hasbrouck, Steinhardt, and EFF founder John Gilmore, who sued the government over the right to travel anonymously within the US, have all argued that the key issue here is the right to assemble guaranteed in the First Amendment. If you can't travel, you can't assemble. And if you have to ask permission to travel, your right of assembly is subject to disruption at any time. The secrecy with which the TSA surrounds its decision-making doesn't help.

Nor does the amount of personal data the TSA is collecting from airline passenger name records. The Identity Project's recent report on the subject highlights that these records may include considerable detail: what books the passenger is carrying, what answers they give when asked where they've been or are going, names and phone numbers given as emergency contacts, and so on. Despite the data protection laws, it isn't always easy to find out what information is being stored; when I made such a request of US Airways last year, the company refused to show me my PNR from a recent flight and gave as the reason: "Security." Civilisation as we know it is at risk if I find out what they think they know about me? We really are in trouble.

In Britain, the chief objections to the ID card and, more important, the underlying database, have of course been legion, but they have generally focused on the logistical problems of implementing it (huge cost, complex IT project, bound to fail) and its general privacy-invasiveness. But another thing the ID card – especially the high-tech, biometric, all-singing, all-dancing kind – will do is create a framework that could support a permission-based society in which the ID card's interaction with systems is what determines what you're allowed to do, where you're allowed to go, and what purchases you're allowed to make. There was a novel that depicted a society like this: Ira Levin's This Perfect Day, in which these functions were all controlled by scanner bracelets and scanners everywhere that lit up green to allow or red to deny permission. The inhabitants of that society were kept drugged, so they wouldn't protest the ubiquitous controls. We seem to be accepting the beginnings of this kind of life stone-cold sober.

American children play a schoolyard game called "Mother, May I?" It's one of those games suitable for any number of kids, and it involves a ritual of asking permission before executing a command. It's a fine game, but surely it isn't how we want to live.


Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

September 21, 2007

The summer of lost hats

I seem to have spent the summer dodging in and out of science fiction novels featuring four general topics: energy, security, virtual worlds, and what someone at the last conference called "GRAIN" technologies (genetic engineering, robotics, AI, and nanotechnology). So the summer started with doom and gloom and got progressively more optimistic. Along the way, I have mysteriously lost a lot of hats. The phenomena may not be related.

I lost the first hat in June, a Toyota Motor Racing hat (someone else's joke; don't ask) while I was reading the first of many very gloomy books about the end of the world as we know it. Of course, TEOTWAWKI has been oft-predicted, and there is, as Damian Thompson, the Telegraph's former religious correspondent, commented when I was writing about Y2K, a "wonderful and gleeful attention to detail" in these grand warnings. Y2K was a perfect example: a timetable posted to comp.software.year-2000 had the financial system collapsing around April 1999 and the cities starting to burn in October…

Energy books can be logically divided into three categories. One, apocalyptics: fossil fuels are going to run out (and sooner than you think), the world will continue to heat up, billions will die, and the few of us who survive will return to hunting, gathering, and dying young. Two, deniers: fossil fuels aren't going to run out, don't be silly, and we can tackle global warming by cleaning them up a bit. Here. Have some clean coal. Three, optimists: fossil fuels are running out, but technology will help us solve both that and global warming. Have some clean coal and a side order of photovoltaic panels.

I tend, when not wracked with guilt for having read 15 books and written 30,000 words on the energy/climate crisis and then spent the rest of the summer flying approximately 33,000 miles, toward optimism. People can change – and faster than you think. Ten years ago, you'd have been laughed off the British Isles for suggesting that in 2007 everyone would be drinking bottled water. Given the will, ten years from now everyone could have a solar collector on their roof.

The difficulty is that at least two of those takes on the future of energy encourage greater consumption. If we're all going to die anyway and the planet is going inevitably to revert to the Stone Age, why not enjoy it while we still can? All kinds of travel will become hideously expensive and difficult; go now! If, on the other hand, you believe that there isn't a problem, well, why change anything? The one group who might be inclined toward caution and saving energy is the optimists – technology may be able to save us, but we need time to create and deploy it. The more careful we are now, the longer we'll have to do that.

Unfortunately, that's cautious optimism. While technology companies, who have to foot the huge bills for their energy consumption, are frantically trying to go green for the soundest of business reasons, individual technologists don't seem to me to have the same outlook. At Black Hat and Defcon, for example (lost hats number two and three: a red Canada hat and a black Black Hat hat), among all the many security risks that were presented, no one talked about energy as a problem. I mean, yes, we have all those off-site backups. But you can take out a border control system as easily with an electrical power outage as you can by swiping an infected RFID passport across a reader to corrupt the database. What happens if all the lights go out, we can't get them back on again, and everything was online?

Reading all those energy books changes the lens through which you view technical developments somewhat. Singapore's virtual worlds are a case in point (lost hat: a navy-and-tan Las Vegas job): everyone is talking about what kinds of laws should apply to selling magic swords or buying virtual property, and all the time in the back of your mind is the blog posting that calculated that the average Second Life avatar consumes as much energy as the average Brazilian. And emits as much carbon as driving an SUV for 2,000 miles. Bear in mind that most SL avatars aren't fired up that often, and the suggestion that we could curb energy consumption by having virtual conferences instead of physical ones seems less realistic. (Though we could, at least, avoid airport security.) In this, as in so much else, the science fiction writer Vernor Vinge seems to have gotten there first: his book Marooned in Real Time looks at the plight of a bunch of post-Singularity augmented humans knowing their technology is going to run out.

It was left to the most science fictional of the conferences, last week's Center for Responsible Nanotechnology conference (my overview is here) to talk about energy. In wildly optimistic terms: technology will not only save us but make us all rich as well.

This was the one time all summer I didn't lose any hats (red Swiss everyone thought was Red Cross, and a turquoise Arizona I bought just in case). If you can keep your hat while all around you everyone is losing theirs…

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

August 10, 2007

Wall of sheep

Last week at Defcon my IM ID and just enough of the password to show they knew what it was appeared on the Wall of Sheep. This screen projection of the user IDs, partial passwords, and activities captured by the installed sniffer inevitably runs throughout the conference.

It's not that I forgot the sniffer was there, or didn't know the risk of logging onto an IM client unencrypted over a Wi-Fi hot spot (at a hacker conference!); it's that I had forgotten the client was set to log in automatically whenever it could. Easily done.

It's strange to remember now that once upon a time this crowd – or at least, type of crowd – was considered the last word in electronic evil. In 1995 the capture of Kevin Mitnick made headlines everywhere because he was supposed to be the baddest hacker ever. Yet other than gaining online access and free phone calls, Mitnick is not known to have ever profited from his crimes – he didn't sell copied source code to its owners' competitors, and he didn't rob bank accounts. We would be grateful – really grateful – if Mitnick were the worst thing we had to deal with online now.

Last night, the House of Lords Science and Technology Committee released its report on Personal Internet Security. It makes grim reading even for someone who's just been to Defcon and Black Hat. The various figures the report quotes, assembled after what seems to have been an excellent information-gathering process (that means, they name-check a lot of people I know and would have picked for them to talk to), are pretty depressing. Phishing has cost US banks around $2 billion, and although the UK lags well behind – £33.5 million in bank fraud in 2006 – here, too, it's on the rise. Team Cymru found (PDF) that on IRC channels dedicated to the underground you could buy credit card account information for between $1 (basic information on a US account) and $50 (full information for a UK account); $1,599,335.80 worth of accounts was for sale on a single IRC channel in one day. Those are among the few things that can be accurately measured: the police don't keep figures breaking out crimes committed electronically; there are no good figures on the scale of identity theft (interesting, since this is one of the things the government has claimed the ID card will guard against); and no one's really sure how many personal computers are infected with some form of botnet software – and available for control at four cents each.

The House of Lords recommendations could be summed up as "the government needs to do more". Most of them are unexceptional: fund more research into IT security, keep better statistics. Some measures will be welcomed by a lot of us: make banks responsible for losses resulting from electronic fraud (instead of allowing them to shift the liability onto consumers and merchants); criminalize the sale or purchase of botnet "services" and require notification of data breaches. (Now I know someone is going to want to say, "If you outlaw botnets, only outlaws will have botnets", but honestly, what legitimate uses are there for botnets? The trick is in defining them to include zombie PCs generating spam and exclude PCs intentionally joined to grids folding proteins.)

Streamlined Web-based reporting for "e-crime" could only be a good thing. Since the National High-Tech Crime Unit was folded into the Serious Organised Crime Agency there is no easy way for a member of the public to report online crime. Bringing in a central police e-crime unit would also help. The various kite mark schemes – for secure Internet services and so on – seem harmless but irrelevant.

The more contentious recommendations revolve around the idea that we the people need to be protected, and that it's no longer realistic to lay the burden of Internet security on individual computer users. I've said for years that ISPs should do more to stop spam (or "bad traffic") from exiting their systems; this report agrees with that idea. There will likely be a lot of industry ink spilled over the idea of making hardware and software vendors liable if "negligence can be demonstrated". What does "vendor" mean in the context of the Internet, where people decide to download software on a whim? What does it mean for open source? If I buy a copy of Red Hat Linux with a year's software updates, that company's position as a vendor is clear enough. But if I download Ubuntu and install it myself?

Finally, you have to twitch a bit when you read, "This may well require reduced adherence to the 'end-to-end' principle." That is the principle that holds that the network should carry only traffic, and that services and applications sit at the end points. The Internet's many experiments and innovations are due to that principle.

The report's basic claim is this: criminals are increasingly rampant and increasingly rapacious on the Internet. If this continues, people will catastrophically lose confidence in the Internet. So we must improve security by making the Internet safer. Couldn't we just make people safer by telling them to stop using it? That's what people tell you to do when you're going to Defcon.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

August 3, 2007

The house always wins

Las Vegas really is the perfect place to put a security conference: don't security people always feel like an island of sanity surrounded by lunatic gamblers? Although, equally, it's probably true that Las Vegas casinos have some of the smartest security in the world when it comes to making sure that the house will always win.

A repeated source of humor this week at Black Hat has been the responses from various manufacturers when they're told that their systems are in fact hackable. My favorite was the presentation explaining how to hack the RDS-TMC radio service that delivers information about upcoming traffic jams and other disruptions to in-car satellite navigation systems. The industry's response to the news that Italian guys could effectively control traffic was pretty much that even if it was possible, which they seemed inclined to doubt, it would take a lot of knowledge, and anyway, it's illegal…

Adam Laurie got a similar response from RFID people when he showed you could in fact crack one of those all-singing, all-dancing new e-passports and, more than that, that you can indeed clone those supposedly "unique" RFID chips with a device small enough that you could pick up the information you need just standing next to someone in an elevator. (What a Las Vegas close-up magician could do with one of those…)

The industry's response to the news that Laurie could clone ID tags was to complain triumphantly that Laurie's clones "don't have the same form factor". You're an RFID chip reader. What do you see?
"I believe in full disclosure," said Laurie. "They must know you can program in any ID you want." But that's not what they tell the public.

And then there's mobile phone malware, which according to F-Secure's Mikko Hypponen is about where PC viruses were ten years ago. We have, he figures, a chance to stop them now, so we won't wind up ten years from now with all the same security risks that we face with PCs. Some of the biggest manufacturers have joined the Trusted Computing Group (an effort to secure computer systems that unfortunately has the problem that it treats the user as a potentially hostile invader).

But viruses and other bad things spread a lot faster between mobile phones because they are specifically designed for…communication. The average smartphone has Bluetooth, infrared, USB, and its network connection, and each of those is a handy way of getting a virus into the phone, not to mention MMS, user downloads, and memory card slots. And, in future, probably WLAN, email, SMS, and even P2P. This is the bad side of having phones that can run third-party applications and that are designed to be, damn it, communications devices.

Viruses that spread by Bluetooth are particularly entertaining because of the way Bluetooth's software handles incoming connections. Say a nearby phone tries to send your phone a virus. Your phone puts up a message asking you to confirm that you want to accept it. You click No. The message instantly reappears (viruses don't like to take no for an answer). There is in fact a simple solution: walk out of range. But most users don't know to do this, and in the meantime until they say Yes, their phone is unusable. The first virus to appear in the wild, 2004's Cabir, spreads very easily if users do something risky – like turn on their phone.

This is obviously a design problem caused by a failure of imagination, even though anti-virus companies such as Kaspersky have been warning for at least a decade that as the computing power of mobile phones increased they would become vulnerable to the same problems as desktop computers.

By far the vast majority of mobile phone malware is written for Symbian phones, by the way. Palm, Windows Mobile, and other operating systems barely figure in F-Secure's statistics. Trojans are the biggest threat, and the biggest way phones get infected is user downloads.

It would not noticeably ruin the user experience for mobile phone manufacturers to change the way Bluetooth handles such incoming requests.
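
To picture what such a change might look like (a sketch of my own, not any vendor's actual fix; the names and timings are assumptions), the phone could simply remember a refusal for a while and stop re-prompting for transfers from the same device:

```python
# Hypothetical sketch of less naggy Bluetooth transfer handling: once the user
# declines a sender, further requests from that device are dropped silently
# for a cooling-off period. Names and timings are illustrative assumptions.
import time

DECLINE_COOLDOWN_SECONDS = 600
_declined = {}  # remote device address -> time of last refusal

def should_prompt(remote_addr, now=None):
    """Prompt the user only if they haven't recently refused this sender."""
    now = time.time() if now is None else now
    last_refusal = _declined.get(remote_addr)
    return last_refusal is None or now - last_refusal > DECLINE_COOLDOWN_SECONDS

def record_refusal(remote_addr, now=None):
    _declined[remote_addr] = time.time() if now is None else now

# A nearby phone keeps retrying: the first attempt prompts, later ones don't.
print(should_prompt("00:11:22:33:44:55", now=0))   # True - ask the user once
record_refusal("00:11:22:33:44:55", now=1)
print(should_prompt("00:11:22:33:44:55", now=5))   # False - the refusal sticks
```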

It took the Meet the Feds panel to regain a sense of proportion. The most a mobile phone virus can do to a new phone equipped with a mobile wallet is steal your money and send out text messages to all your contacts that will alienate them forever, leaving you with a ruined life. (Take comfort from the words of the novelist Edward Whittemore, in his book Sinai Tapestry: "No one was safe, and there was no security – just life itself.")

Bad security is still bad security, and "the Feds" sure do a lot of it, and the rather stolid face they present to the public pushes us to regard them as comical. But they're gambling with far bigger consequences than any of us, as Chris Marshall of the NSA reminded everyone. He was out to dinner with his counterparts from a variety of countries, and they were discussing what "homeland security" really means. The representative from New Zealand spoke up: he has children living in New Zealand, Australia, the US, and France, where he also has grandchildren.

"Homeland security," he said simply, "is where my children are."

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

February 9, 2007

Getting out the vote

Voter-verified paper audit trails won't save us. That was the single clearest bit of news to come out of this week's electronic voting events.

This is rather depressing, because for the last 15 years it's looked as though VVPAT (as they are euphoniously calling it) might be something everyone could compromise on: OK, we'll let you have your electronic voting machines as long as we can have a paper backup that can be recounted in case of dispute. But no. According to Rebecca Mercuri in London this week (and others who have been following this stuff on the ground in the US), what we thought a paper trail meant is definitely not what we're getting. This is why several prominent activist organisations have come out against the Holt bill HR811, introduced into Congress this week, despite its apparent endorsement of paper trails.

I don't know about you, but when I imagined a VVPAT, what I saw in my mind's eye was something like an IBM punch card dropping individually into some kind of display where a voter would press a key to accept or reject. Instead, vendors (who hate paper trails) are providing cheap, flimsy, thermal paper in a long roll with no obvious divisions to show where individual ballots are. The paper is easily damaged, it's not clear whether it will survive the 22 months it's supposed to be stored, and the mess is not designed to ease manual recounts. Basically, this is paper that can't quite aspire to the lofty quality of a supermarket receipt.

The upshot is that yesterday you got a programme full of computer scientists saying they want to vote with pencils and paper. Joseph Kiniry, from University College, Dublin, talked about using formal methods to create a secure system – and says he wants to vote on paper. Anne-Marie Oostveen told the story of the Dutch hacker group who bought up a couple of Nedap machines to experiment on and wound up publicly playing chess on them – and exposing their woeful insecurity – and concluded, "I want my pencil back." And so on.

The story is the same in every country. Electronic voting machines – or, more correctly, electronic ballot boxes – are proposed and brought in without public debate. Vendors promise the machines will be accurate, reliable, secure, and cheaper than existing systems. Why does anyone believe this? How can a voting computer possibly be cheaper than a piece of paper and a pencil? In fact, Jason Kitcat, a longtime activist in this area, noted that according to the Electoral Commission the costs of the 2003 pilots were astounding – in Sheffield, £55 per electronic vote, and that's with suppliers waiving some charges they didn't expect either. Bear in mind, also, that the machines have an estimated life of only ten years.

Also the same: governments lack internal expertise on IT, basically because anyone who understands IT can make a lot more money in industry than in either government or the civil service.

And: everywhere vendors are secretive about the inner workings of their computers. You do not have to be a conspiracy theorist to see that privatizing democracy has serious risks.

On Tuesday, Southport LibDem MP John Pugh spoke of the present UK government's enchantment with IT. "The procurers who commission IT have a starry-eyed view of what it can do," he said. "They feel it's a very 'modern' thing." Vendors, also, can be very persuasive (I'd like to see tests on what they put in the ink in those brochures, personally). If, he said, Bill Gates were selling voting machines and came up against Tony Blair, "We would have a bill now."

Politicians are, probably, also the only class of people to whom quick counts appeal. The media, for example, ought to love slow counts that keep people glued to their TV sets, hitting the refresh button on their Web browsers, and buying newspapers throughout. Florida 2000 was a media bonanza. But it's got to be hard on the guys who can't sleep until they know whether they have a job next month.

I would propose the following principles to govern the choice of balloting systems:

- The mechanisms by which votes are counted should be transparent. Voters should be able to see that the vote they cast is the vote they intended to cast.

- Vendors should be contractually prohibited from claiming the right to keep secret their source code, the workings of their machines, or their testing procedures, and they should not be allowed to control the circumstances or personnel under which or by whom their machines are tested. (That's like letting the psychic set the controls of the million-dollar test.)

- It should always be possible to conduct a public recount of individual ballots.

Pugh made one other excellent point: paper-based voting systems are mature. "The old system was never perfect," he said, but over time "we've evolved a way of dealing with almost every conceivable problem." Agents have the right to visit every polling station and watch the count, recounts can consider every single spoiled ballot. By contrast, electronic voting presumes everything will go right.

Guys, it's a computer. Next!

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

January 26, 2007

Vote early, vote often...

It is a truth that ought to be universally acknowledged that the more you know about computer security the less you are in favor of electronic voting. We thought – optimists that we are – that the UK had abandoned the idea after all the reports of glitches from the US and the rather indeterminate results of a couple of small pilots a few years ago. But no: there are plans for further trials for the local elections in May.

It's good news, therefore, that London is to play host to two upcoming events to point out all the reasons why we should be cautious. The first, February 6, is a screening of the HBO movie Hacking Democracy, a sort of documentary thriller. The second, February 8, is a conference bringing together experts from several countries, most prominently Rebecca Mercuri, who was practically the first person to get seriously interested in the security problems surrounding electronic voting. Both events are being sponsored by the Open Rights Group and the Foundation for Information Policy Research, and will be held at University College London. Here is further information and links to reserve seats. Go, if you can. It's free.

Hacking Democracy (a popular download) tells the story of Bev Harris (http://www.blackboxvoting.org) and Andy Stephenson. Harris was minding her own business in Seattle in 2000 when the hanging chad hit the Supreme Court. She began to get interested in researching voting troubles, and then one day found online a copy of the software that runs the voting machines provided by Diebold, one of the two leading manufacturers of such things. (And, by the way, the company whose CEO vowed to deliver Ohio to Bush.) The movie follows this story and beyond, as Harris and Stephenson dumpster-dive, query election officials, and document a steady stream of glitches that all add up to the same point: electronic voting is not secure enough to protect democracy against fraud.

Harris and Stephenson are not, of course, the only people working in this area. Among computer experts such as Mercuri, David Chaum, David Dill, Deirdre Mulligan, Avi Rubin, and Peter Neumann, there's never been any question that there is a giant issue here. Much argument has been spilled over the question of how votes are recorded; less so around the technology used by the voter to choose preferences. One faction – primarily but not solely vendors of electronic voting equipment – sees nothing wrong with Direct Recording Electronic, machines that accept voter input all day and then just spit out tallies. The other group argues that you can't trust a computer to keep accurate counts, and that you have to have some way for voters to check that the vote they thought they cast is the vote that was actually recorded. A number of different schemes have been proposed for this, but the idea that's catching on across the US (and was originally promoted by Mercuri) is adding a printer that spits out a printed ballot the voter can see for verification. That way, if an audit is necessary there is a way to actually conduct one. Otherwise all you get is the machine telling you the same number over again, like a kid who has the correct answer to his math homework but mysteriously can't show you how he worked the problem.

This is where it's difficult to understand the appeal of such systems in the UK. Americans may be incredulous – I was – but a British voter goes to the polls and votes on a small square of paper with a stubby little pencil. Everything is counted by hand. The UK can do this because all elections are very, very simple. There is only one election – local council, Parliament – at a time, and you vote for one of only a few candidates. In the US, where a lemon is the size of an orange, an orange is the size of a grapefruit, and a grapefruit is the size of a soccer ball, elections are complicated and on any given polling day there are a lot of them. The famous California governor's recall that elected Arnold Schwarzenegger, for example, had hundreds of candidates; even a more average election in a less referendum-happy state than California may have a dozen races, each with six to ten candidates. And you know Americans: they want results NOW. Like staying up for two or three days watching the election returns is a bad thing.

It is of course true that election fraud has existed in all eras; you can "lose" a box of marked paper ballots off the back of a truck, or redraw districts according to political allegiance, or "clean" people off the electoral rolls. But those types of fraud are harder to cover up entirely. A flawed count in an electronic machine run by software the vendor allows no one to inspect just vanishes down George Orwell's memory hole.

What I still can't figure out is why politicians are so enthusiastic about all this. Yes, secure machines with well-designed user interfaces might get rid of the problem of "spoiled" and therefore often uncounted ballots. But they can't really believe – can they? – that fancy voting technology will mean we're more likely to elect them? Can it?

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

December 29, 2006

Resolutions for 2007

A person can dream, right?

- Scrap the UK ID card. Last week's near-buried Strategic Action Plan for the National Identity Scheme (PDF) included two big surprises. First, that the idea of a new, clean, all-in-one National Identity Register is being scrapped in favor of using systems already in use in government departments; second, that foreign residents in the UK will be tapped for their biometrics as early as 2008. The other thing that's new: the bald, uncompromising statement that it is government policy to make the cards compulsory.

No2ID has pointed out the problems with the proposal to repurpose existing systems, chiefly that they were not built to do the security the legislation promised. The notion is still that everyone will be re-enrolled with a clean, new database record (at one of 69 offices around the country), but we still have no details of what information will be required from each person or how the background checks will be carried out. And yet, this is really the key to the whole plan: the project to conduct background checks on all 60 million people in the UK and record the results. I still prefer my idea from 2005: have the ID card if you want, but lose the database.

The Strategic Action Plan includes the list of purposes of the card; we're told it will prevent illegal immigration and identity fraud, become a key "defence against crime and terrorism", "enhance checks as part of safeguarding the vulnerable", and "improve customer service".

Recall that none of these things was the stated purpose of bringing in an identity card when all this started, back in 2002. Back then, first it was to combat terrorism, then it was an "entitlement card" and the claim was that it would cut benefit fraud. I know only a tiny mind criticizes when plans are adapted to changing circumstances, but don't you usually expect the purpose of the plans to be at least somewhat consistent? (Though this changing intent is characteristic of the history of ID card proposals going back to the World Wars. People in government want identity cards, and try to sell them with the hot-button issue of the day, whatever it is.)

As far as customer service goes, William Heath has published some wonderful notes on the problem of trust in egovernment that are pertinent here. In brief: trust is in people, not databases, and users trust only systems they help create. But when did we become customers of government, anyway? Customers have a choice of supplier; we do not.

- Get some real usability into computing. In the last two days, I've had distressed communications from several people whose computers are, despite their reasonable and best efforts, virus-infected or simply non-functional. My favourite recent story, though, was the US Airways telesales guy who claimed that it was impossible to email me a ticket confirmation because according to the information in front of him it had already been sent automatically and bounced back, and they didn't keep a copy. I have to assume their software comes with a sign that says, "Do not press this button again."

Jakob Nielsen published a fun piece this week, a list of top ten movie usability bloopers. Throughout movies, computers only crash when they're supposed to, there is no spam, on-screen messages are always easily readable by the camera, and time travellers have no trouble puzzling out long-dead computer systems. But of course the real reason computers are usable in movies isn't some marketing plot by the computer industry but the same reason William Goldman gave for the weird phenomenon that movie characters can always find parking spaces in front of their destination: it moves the plot along. Though if you want to see the ultimate in hilarious consumer struggles with technology, go back to the 1948 version of Unfaithfully Yours (out on DVD!) starring Rex Harrison as a conductor convinced his wife is having an affair. In one of the funniest scenes in cinema, ever, he tries to follow printed user instructions to record a message on an early gramophone.

- Lose the DRM. As Charlie Demerjian writes, the high-def wars are over: piracy wins. The more hostile the entertainment industries make their products to ordinary use, the greater the motivation to crack the protective locks and mass-distribute the results. It's been reasonably argued that Prohibition in the US paved the way for organized crime to take root because people saw bootleggers as performing a useful public service. Is that the future anyone wants for the Internet?

Losing the DRM might also help with the second item on this list, usability. If Peter Gutmann is to be believed, Vista will take a nosedive in that direction because of embedded copy protection requirements.

- Converge my phones. Please. Preferably so people all use just the one phone number, but all routing is least-cost to both them and me.

- One battery format to rule them all. Wouldn't life be so much easier if there were just one battery size and specification, and to make a bigger battery you'd just snap a bunch of them together?

Happy New Year!

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 6, 2006

A different kind of poll tax

Elections have always had two parts: the election itself, and the dickering beforehand (and occasionally afterwards) over who gets to vote. The latest move in that direction: at the end of September the House of Representatives passed the Federal Election Integrity Act of 2006 (H.R. 4844), which from 2010 will prohibit election officials from giving anyone a ballot who can't present a government-issued photo ID whose issuing requirements included proof of US citizenship. (This lets out driver's licenses, which everyone has, though I guess it would allow passports, which relatively few have.)

These days, there is a third element: specifying the technology that will tabulate the votes. Democracy depends on the voters' being able to believe that what determines the election is the voters' choices rather than the latter two.

The last of these has been written about a great deal in technology circles over the last decade. Few security experts are satisfied with the idea that we should trust computers to do "black box voting," where they count the votes and simply tell us the results. Even fewer security experts are happy with the idea that so many politicians around the world want to embrace: Internet (and mobile phone) voting.

The run-up to this year's mid-term US elections has seen many reports of glitches. My favorite recent report comes from a test in Maryland, where it turned out that the machines under test did not communicate with each other properly when the touch screens were in use. If they don't communicate correctly, voters might be able to vote more than once. Attaching mice to the machines solves the problem – but the incident is exactly the kind of wacky glitch that's familiar from everyday computing life and that can take absurd amounts of time to resolve. Why does anyone think that this is a sensible way to vote? (Internet voting has all the same risks of machine glitches, and then a whole lot more.)

The 2000 US Presidential election isn't as famous for the removal of a few hundred thousand voters from the electoral rolls in Florida as it is for hanging chad – but it's worth reading or watching more on the subject. Of course, wrangling over who gets to vote didn't start then. Gerrymandering districts, fighting over giving the right to vote to women, slaves, felons, expatriates…

The latest twist in this fine, old activity is the push in the US towards requiring Voter ID. Besides the federal bill mentioned above, a couple of dozen states have passed ID requirements since 2000, though state courts in Missouri, Kentucky, Arizona, and California are already striking them down. The target here seems to be that bogeyman of modern American life, illegal immigrants.

Voter ID isn't so obviously a poll tax. After all, this is just about authenticating voters, right? Every voter a legal voter. But although these bills generally include a requirement to supply a voter ID free of charge to people too poor to pay for one, the supporting documentation isn't free: try getting a free copy of your birth certificate, for example. The combination of the costs involved in that aspect plus the effort involved in getting the ID is a burden that falls disproportionately on the usual already disadvantaged groups (the same ones stopped from voting in the past by road blocks, insufficient provision of voting machines in some precincts, and indiscriminate cleaning of the electoral rolls). Effectively, voter ID creates an additional barrier between the voter and the act of voting. It may not be the letter of a poll tax, but it is the spirit of one.

This is in fact the sort of point that opponents are making.

There are plenty of other logistical problems, of course, such as: what about absentee voters? I registered in Ithaca, New York, in 1972. A few months before federal primaries, the Board of Elections there mails me a registration form; returning it gets me absentee ballots for the Democratic primaries and the elections themselves. I've never known whether my vote is truly anonymous, nor whether it's actually counted. I take those things on trust, just as, I suppose, the Board of Elections trusts that the person sending back these papers is not some stray British person who does my signature really well. To insert voter ID into that process would presumably require turning expatriate voters over to, say, the US embassies, which are familiar with authentication and with checking identity documents.

Given that most countries have few such outposts, the barriers to absentee voting would be substantially raised for many expatriates. Granted, we're a small portion of the problem. But there's a direct clash between the trend to embrace remote voting - the entire state of Oregon votes by mail – and the desire to authenticate everyone.

We can fix most of the voting-technology problems by requiring voter-verifiable, auditable paper trails, as Rebecca Mercuri began pushing for all those years ago (a position most computer scientists now share), and there seem to be substantial moves in that direction as states test the electronic equipment and scientists find more and more serious potential problems. Twenty-seven states now have laws requiring paper trails. But how we control who votes is the much more difficult and less talked-about frontier.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

September 15, 2006

Mobile key infrastructure

Could mobile phones be the solution to online security problems? Fred Piper posed this question yesterday to a meeting of the UK branch of the Information Systems Security Association (something like half of whom he'd taught at one point or another).

It wasn't that Piper favored the idea. He doesn't, he said, have a mobile phone. He was put off the whole idea long ago by a TV ad that said the great thing about a mobile phone was that when you left the office it went with you. He doesn't want to be that available. (This is, by the way, an old concern. I have a New Yorker cartoon from the 1970s that shows a worried, harassed-looking businessman walking down the street being followed by a ringing telephone on a very long cord.)

But from his observation, mobile phones (PPT) are quietly sneaking their way into the security chain without anyone's thinking too much or too deeply about it. This trend he calls moving from two-factor authentication to two-channel authentication. You can see the sense of it. You want to do some online banking, so for extra security your bank could, in response to your entering your user name and password, send a code to your previously registered mobile phone, which you then type into the Web site (PDF) as an extra way of proving you're you.
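
For the technically curious, here is a minimal sketch of that two-channel flow in Python. The function names and the SMS-sending hook are my own hypothetical placeholders, not anything Piper described or any real bank's API; it's just to show where the second channel slots in.

```python
# Sketch of "two-channel" login: channel 1 is the web password check,
# channel 2 is a one-time code sent to the user's registered phone.
# All names here are illustrative placeholders.
import secrets
import time

PENDING = {}  # user -> (code, expiry); a real system would keep this server-side

def start_login(user: str, send_sms) -> None:
    """Password check on channel 1 succeeded; push a one-time code over channel 2."""
    code = f"{secrets.randbelow(10**6):06d}"      # random 6-digit code
    PENDING[user] = (code, time.time() + 300)     # valid for five minutes
    send_sms(user, f"Your login code is {code}")  # hypothetical SMS gateway call

def finish_login(user: str, submitted: str) -> bool:
    """Accept the login only if the code matches and hasn't expired."""
    code, expiry = PENDING.pop(user, (None, 0.0))
    return (code is not None
            and time.time() < expiry
            and secrets.compare_digest(code, submitted))
```

The point of the second channel is simply that a phisher who has captured the password still has to intercept or extract the code from the phone in time, which raises the cost of the attack.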

One reason things are moving in this direction is that even though security is supposed to be getting better in some ways it's actually regressing. For one thing, these days impersonating someone is easier than cracking the technology – so impersonation has become the real threat.

For another thing, there are traditionally three factors that may be used in building an authentication system: something you know (a PIN or credit card number), something you have (a physical credit, ATM, or access card), or something you are (a personal characteristic such as a biometric). In general, good security requires at least two such factors. That way, if one factor is compromised the security system is weakened but not broken altogether.

But despite the encryption protecting credit card details online, you are not required to present the physical card, so most of the time our online transactions rely for authentication on a single factor: something we know. The upshot is that credit cards online are no longer as secure as they are in the physical world, where they rely on two factors: the physical card and something you know (the PIN or the exact shape of your signature). "The credit card number has become an extended password," he said.

Mobile phones have some obvious advantages. Most people have one, so you're not asking them to buy special readers, as you would have to if you wanted to use a smart card as an authentication token. To the consumer, using a mobile phone for authentication seems like a free lunch. Most people, once they have one, carry it everywhere. So you're not asking them to keep track of anything more than they already do. The channel – that is, the connection to the mobile phone – is owned by known entities and already secured by them. And mobile phones are intelligent devices (even if the people speaking into them on the Tube are not).

In addition, if you compare the cost of using mobile phones as a secure channel to exchange one-time passwords for specific sessions to the cost of setting up a public key infrastructure to do the same thing, it's clearly cheaper and less unwieldy.

There are some obvious disadvantages, too. There are black holes with no coverage. Increasingly, mobile phones will be multi-network devices. They will be able to communicate over the owned, relatively secure channel – but they will also be able to use insecure channels such as wi-fi. In addition, Bluetooth can add more risks.

Another possibility that occurs to me is that if mobile phones start being used in bank authentication systems we will see war-dialling of mobile phone numbers and phishing attacks on a whole new scale. Yes, such an attack would require far greater investment than today's phishing emails, but the rewards could be worth it. In a different presentation at the same meeting, Mike Maddison, a consultant with Deloitte, presented the results of surveys it's conducted of three industry sectors: financial services, telecommunications and media, and life sciences. All three say the same thing: attacks are becoming more sophisticated and more dangerous, and the teenaged hacker has been largely replaced by organised crime.

Piper was not proposing a "Mobile Key Infrastructure" as a solution. What he was suggesting is that phones are already being used in this way, and security professionals should be thinking about what it means and where the gotchas are going to be. In privacy circles, we talk a lot about mission creep. In computer software we talk about creeping featurism. I don't know if security folks have a standard phrase for what we're talking about here. But it seems to me that if you're going to build a security infrastructure it ought to be because you had a plan, not because a whole bunch of people converged on it.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

August 18, 2006

Travel costs

I've never been much for conspiracy theories – in general, I tend to believe that I'm not important enough to be worth conspiring against – but if I were, this week would be a good time to start. On Monday, they began lifting the draconian baggage restrictions imposed late last week. On Wednesday, we began seeing stories questioning the plausibility, chemistrywise, of the plot as we have been told it so far. On Thursday, the Guardian published a package of stories outlining the wonderful increased surveillance we have in store. Repeat after me: the timing is just coincidence. It is sheer paranoia to attribute to conspiracy what can be accounted for by coincidence.

Right.

One of the things I meant to mention in last week's net.wars but forgot is the US's new rules on passenger data, which require airlines to submit passenger records before the plane takes off instead of, as formerly, afterwards. Ed Hasbrouck has a helpful analysis of these new rules, their problems, and their probable costs. (Has anyone calculated the lost-productivity cost of the hours spent in airport security?) The EU now wants to adopt those rules for its ownself, a sad reversal from the notion that the EU might decline to provide passenger data to a country that has so little privacy protection.

One possibility that's been raised on both sides of the Atlantic is a "trusted passenger" scheme, whereby frequent travelers can register to be fast-tracked through the airport. In a sense, most airports already have the beginnings of such a scheme: frequent flyers. As a Gold Preferred US Airways Dividend Miles member, you use the first-class check-in, and in some airports are even sped through security via a special line. Do I love it? You betcha. Do I think it's good security? No. If I were a terrorist wanting to get some of my cellmates onto planes to wreak havoc, I would have them flying all over the place building up a stainless profile until everyone trusted them. Only then would they be ready for activation. Obviously the scheme the security services have in mind will be more sophisticated and involve far more background checking, but the problem of the sleeper remains. It's like people who used to talk about gaming the system by getting a "dope-dealer's haircut" before traveling internationally: short, neat, and business-like. That will be the "terrorist's travel identity": suit, tie, briefcase, laptop, frequent flyer gold status, documented blameless existence.

The UK is also talking about "positive profiling" (although Statewatch notes that no explicit references to this appear in the joint press statement), which I guess is supposed to be more sophisticated than "Let's strip-search all the Asian passengers." The now-MP-formerly-one-of-my-favorite-actresses Glenda Jackson has published a fairly cogent set of counter-arguments, though I'll note picayunally that the algorithm for picking passengers to search randomly had better be less clearly visible than just picking every third passenger in the queue. (You must immediately report anyone who asks to change places with you!) The Home Secretary, John Reid, has said that such profiling will not target racial or religious groups but will be based on biometrics – fingerprints, iris scans. We hope Reid is aware of the years of research into fingerprints (DOC) attempting to prove that you could identify criminality in a fingerprint.

Closer to net.wars' heart is ministers' intention to make the Web hostile to terrorists. For example: by blocking Web sites that incite acts of terrorism or contain instructions on how to make a bomb. Aside from the years of evidence that blocking does not work, it's hard to see how you can get rid of bomb-making instructions, such as they are, without also getting rid of pretty much any Web site devoted to chemistry or safety. Though if you're an arts-educated politician who is proud of knowing little of science, that may seem like a perfectly reasonable thing to do. Show me someone who's curious, who wants to know how things work, who likes to try making things and soldering things, and playing with electrical circuits, and I'll show you a dangerous specimen.

But beyond that, I'll bet professional terrorists do not learn how to make bombs by reading Wikipedia or by typing "how make bomb" into Google.

I'm not sure how you make the Web hostile to terrorists without making it hostile to everyone. If you really want to make the Web hostile, the simplest way is simply to limit, by government fiat, the speed of the connection anyone is allowed to buy. Shove us all back to dial-up, and not only does the Web become hostile for terrorists trying to find information on how to make bombs, but you've pretty much solved music/video file-trading, too. Bonus!

We hear quoted a lot, now, the American master-of-all-trades Benjamin Franklin who probably said, "Those who would give up essential liberty to purchase a little temporary safety, deserve neither liberty nor safety." But the liberties people deem essential seem to be narrowing, and no one wants to believe that safety is temporary. No plane full of passengers declines screening, saying, "We'll take our chances."

If there's a conspiracy, I guess that means we're in on it.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

August 11, 2006

Any liquids discovered must be removed from the passenger

The most interesting thing I heard anyone say yesterday came from Simon Sole, of Exclusive Analysis, a risk assessment company specializing in the Middle East and Africa, in an interview on CNBC. He said that it's no longer possible to track terrorists by following the money, because terrorism is now so cheap they don't need much: it cost, he said, only $4,000 to crash two Russian planes. That is an awfully long way from the kind of funding we used to hear the IRA had.

It is also clear that the arms race of trying to combat terrorism (by throwing out toothpaste in Grand Forks, North Dakota?) is speeding up. Eighteen years ago, the Lockerbie crash was caused by a bomb in checked baggage; that sparked better baggage screening. Five years ago, planes were turned into bombs by hijackers armed with small knives. The current chaos is being caused by a chemistry plot: carry on ordinary-looking items that aren't weapons in themselves and mix them into something dangerous. Banning people from carrying liquids is, obviously, a lot more complicated than banning knives when you already have metal detectors. Lacking scanning technology sophisticated enough to distinguish dangerous liquids from safe ones, banning specific liquids requires detailed, item-by-item searching. Which is why, presumably, the UK (unlike the US) has banished all hand luggage to the hold: maybe they just don't have the staffing levels to search everything in a reasonable amount of time.

The UK has always hated cabin baggage anyway, and the CAA has long restricted it to 6kg, where the US has always been far more liberal. So yesterday's extremity is the kind of panic response someone might make in a crisis when they already think carrying a laptop is a sign of poor self-control (a BA staffer once said that to me). Ban everything! Yes, even newspapers! Go!

Passengers in and leaving the US can still carry stuff on, just not liquids or gels. You may regard this as reckless or enlightened, depending on your nationality or point of view. Rich pickings in the airport garbage tonight.

The Times this morning speculates that the restrictions on hand luggage may become permanent. When you read a little more closely, they mean the restrictions on liquids, not necessarily all hand luggage. If I ran an airline, I'd be campaigning pretty heavily against yesterday's measures. For one thing, it's going to kill business travel, the industry's most profitable segment. If you can't read or use your laptop in-flight or while waiting in the airport club before departure, you're losing many hours of productivity. Plus, imagine being cabin staff trying to control a plane full of hundreds of bored, hungry, frustrated people, some of them adults.

Security expert Bruce Schneier links to a logical reason why the restrictions should be temporary: blocking a particular method only works until the terrorists switch to something else. Schneier also links to a collection of weapons made in prison under the most unpromising of circumstances with the most limited materials. View those, and you know the truth: it will never be possible to secure everything completely. Anything we do to secure air travel is a balance of trade-offs.

One reason people are so dependent on their carry-on luggage is that airline travel is uncomfortable and generally unpleasant. Passengers carry bottles of water because aircraft air is dry. I carry milk on selected flights because I hate synthetic creamer, and dried fruit and biscuits because recent cutbacks mean on some flights you go hungry. I also carry travel chopsticks (easier to eat with than a plastic fork), magazines to read and discard, and headphones with earplugs in them, and I never check anything because I hate waiting for hours for luggage that may be lost, stolen, or tampered with. Better service would render a lot of this unnecessary.

I know these measures are for our own good: we don't want planes falling out of the sky and killing people. (Though we seem perfectly inured to the many thousands more deaths every year from car crashes.) But the extremity of the British measures seems like punishment: if you are so morally depraved as to want to travel by air, you deserve to be thoroughly miserable while doing it. In fact, part of air travel's increasingly lousy service is the cost of security. We lose twice.

I read – somewhere – about a woman of a much earlier generation who expressed sadness at the thought that today's weary wanderers would never know "the pleasure of traveling". We know the pleasure of being somewhere else. But we do not know the pleasure of the process of getting there as they did in a time when you were followed by trunks that were packed by servants, who came along to ensure that you were comfortable and your needs catered to. Of course, you had to be rich to afford all that. I bet it wasn't so much fun for the servants, or for the starving poor stuffed into the ship's hold. Even so, yesterday I thought about that a lot, and with the same kind of sadness.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

August 4, 2006

Hard times at the identity corral

If there is one thing we always said about the ID card, it's that it was going to be tough to implement. About ten days ago, the Sunday Times revealed how tough: manufacturers are oddly uneager to bid to make something that a) the Great British Public is likely to hate, and b) they're not sure they can manufacture anyway. That suggests (even more strongly than before) that in planning the ID card the government operated like an American company filing a dodgy patent: if we specify it, they will come.

I sympathize with IBM and the other companies, I really do. Anyone else remember 1996, when nearly all the early stories coming out of the Atlanta Olympics prominently blamed IBM for every logistical snafu? Some really weren't IBM's fault (such as the traffic jams). Given the many failures of UK government IT systems, being associated with the most public, widespread, visible system of all could be real stock market poison.

But there's a secondary aspect to the ID card that I, at least, never considered before. It's akin to the effect often seen in the US when an amendment to the Constitution is proposed. Even if it doesn't get ratified in enough states – as, for example, the Equal Rights Amendment did not – the process of considering it often inspires a wave of related legislation. The fact that ID cards, biometric identifiers, and databases are being planned and thought about at such a high level seems to be giving everyone the idea that identity is the hammer for every nail.

Take, for example, the announcement a couple of days ago of NetIDme, a virtual ID card intended to help kids identify each other online and protect them from the pedophiles our society apparently now believes are lurking behind every electron.

There are a lot of problems with this idea, worthy though the intentions behind it undoubtedly are. For one thing, placing all your trust in an ID scheme like this is a risk in itself. To get one of these IDs, you fill out a form online and then a second one that's sent to your home address and must be counter-signed by a professional person (how like a British passport) and a parent if you're under 18. It sounds to me as though this system would be relatively easy to spoof, even if you assume that no professional person could possibly be a bad actor (no one has, after all, ever fraudulently signed passports). For another thing, no matter how valid the ID is when it's issued, in the end it's a computer file protected by a password; it is not physically tied to the holder in any way, any more than your Hotmail ID and password are. For a third, "the card removes anonymity," the father who designed the card, Alex Hewitt, told The Times. But anonymity can protect children as well as crooks. And you'd only have to infiltrate the system once to note down a long list of targets for later use.

But the real kicker is in NetIDme's privacy policy, in which the fledgling company makes it absolutely explicit that the database of information it will collect to issue IDs is an asset of a business: it may sell the database, the database will be "one of the transferred assets" if the company itself is sold, and you explicitly consent to the transfer of your data "outside of your country" to wherever NetIDme or its affiliates "maintain facilities". Does this sound like child safety to you?

But NetIDme and other such systems – fingerprinting kids for school libraries, iris-scanning them for school cafeterias – have the advantage that they can charge for their authentication services. Customers (individuals, schools) have at least some idea of what they're paying for. This is not true for the UK's ID card, whose costs and benefits are still unclear, even after years of dickering over the legislation. A couple of weeks ago, it became known that as of October 5 British passports will cost £66, a 57 percent increase that No2ID attributes in part to the costs of infrastructure needed for ID cards but not for passports. But if you believe the LSE's estimates, we're not done yet. The most recent government estimate is that an ID card/passport will cost £93, up from £85 at the time of the LSE report. So, a little quick math: the LSE report also guessed that entry into the national register would cost £35 to £40 with a small additional charge for a card, so revising that gives us a current estimate of £38.15 to £43.60 for registration alone. If no one can be found to make the cards but the government tries to forge ahead with the database anyway, it will be an awfully hard sell. "Pay us £40 to give us your data, which we will keep without any very clear idea of what we're going to do with it, and in return maybe someday we'll sell you a biometric card whose benefits we don't know yet." If they can sell that, they may have a future in Alaska selling ice boxes to Eskimos.
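
For anyone who wants to check that quick math, here is the back-of-the-envelope recomputation, on my own assumption that the LSE's registration range simply scales by the same roughly 9 percent rise as the headline card/passport estimate (£85 to £93):

```python
# Rough check of the revised registration estimate. Assumption (mine, not
# the LSE's): scale the £35-£40 range by the same proportional rise as the
# headline figure, £85 -> £93, which rounds to about 9 percent.
lse_low, lse_high = 35.0, 40.0      # LSE estimate for entry into the register, GBP
old_total, new_total = 85.0, 93.0   # government card/passport estimate then and now, GBP

uplift = round((new_total - old_total) / old_total, 2)   # ~0.09
print(f"{lse_low * (1 + uplift):.2f} to {lse_high * (1 + uplift):.2f}")
# prints: 38.15 to 43.60
```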

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

June 16, 2006

Security vs security, part II

It's funny. Half the time we hear that the security of the nation depends on the security of its networks. The other half the time we're being told by governments that if the networks are too secure the security of the nation is at risk.

This schizophrenia was on display this week in a ruling by the US Court of Appeals in the District of Columbia, which ruled in favor of the Federal Communications Commission: yes, the FCC can extend the Communications Assistance for Law Enforcement Act to VoIP providers. Oh, yeah, and other people providing broadband Internet access, like universities.

Simultaneously, a clutch of experts – to wit, Steve Bellovin (Columbia University), Matt Blaze (University of Pennsylvania), Ernest Brickell (Intel), Clinton Brooks (NSA, retired), Vinton Cerf (Google), Whitfield Diffie (Sun), Susan Landau (Sun), Jon Peterson (NeuStar), and John Treichler (Applied Signal Technology) – released a paper explaining why requiring voice over IP to accommodate wiretapping is dangerous. Not all of these folks are familiar to me, but the ones who are could hardly be more distinguished, and it seems to me that when experts on security, VoIP, Internet protocols, and cryptography all get together to tell you there's a problem, you (as in the FCC) should listen. Together, this week they released Security Implications of Applying the Communications Assistance to Law Enforcement Act to Voice over IP (PDF), which carefully documents the problems.

First of all – and they of course aren't the only ones to have noticed this – the Internet is not your father's PSTN. On the public switched telephone network, you have fixed endpoints, you have centralized control, and you have a single, continuously open circuit. The whole point of VoIP is that you take advantage of packet switching to turn voice calls into streams of data that are more or less indistinguishable from all the other streams of data whose packets are flying alongside. Yes, many VoIP services give you phone numbers that sound the same as geographically fixed numbers – but the whole point is that neither caller nor receiver need to wait by the phone. The phone is where your laptop is. Or, possibly, where your secretary's laptop is. Or you're using Skype instead of Vonage because your contact also uses Skype.

Nonetheless, as the report notes, the apparent simplicity of VoIP, its design that makes it look as though it functions the same as old-style telephones, means that people wrongly conclude that anything you can do on the PSTN you should be able to do just as easily with VoIP.

But the real problems lie in security. There's no getting round the fact that when you make a hole in something you've made a hole through which stuff leaks out. And where in the PSTN world you had just a few huge service providers and a single wire you could follow along and place your wiretap wherever was most secure, in the VoIP world you have dozens of small providers, and an unpredictable selection of switching and routing equipment. You can't be sure any wiretap you insert will be physically controlled by the VoIP provider, which may be one of dozens of small operators. Your targets can create new identities at no cost faster than you can say "pre-pay mobile phone". You can't be sure the signals you intercept can be securely transported to Wiretap Central. The smart terminals we use have a better chance of detecting the wiretap – which is both good and bad, in terms of civil liberties. Under US law, you're supposed to tap only the communications pertaining to the court authorization; difficult to do because of the foregoing. And then, there's a hole, as the IETF observed in 2000, which could be exploited by someone else. Whom do you fear more will gain access to your communications: government, crook, hacker, credit reporting agency, boss, child, parent, or spouse? Fun, isn't it?

And then there's the money. American ISPs can look forward to the cost of CALEA with all the enthusiasm that European ISPs had for data retention. Here, the government helpfully provided its own data: a VoIP provider paid $100,000 to a contractor to develop its CALEA solution, plus a monthly fee of $14,000 to $15,000 and, on top of that, $2,000 for each intercept.

Two obvious consequences. First: VoIP will be primarily sold by companies overseas into the US because in general the first reason people buy VoIP is that it's cheap. Second: real-time communications will migrate to things that look a lot less like phone calls. The report mentions massively multi-player online role-playing games and instant messaging. Why shouldn't criminals adopt pink princess avatars and kill a few dragons while they plot?

It seems clear that all of this isn't any way to run a wiretap program, though even the report (two of whose authors, Landau and Diffie, have written a history of wiretapping) allows that governments have a legitimate need to wiretap, within limits. But the last paragraph sounds like a pretty good way to write a science fiction novel. In fact, something like the opening scenes of Vernor Vinge's new Rainbows End.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).