" /> net.wars: July 2009 Archives

July 24, 2009

Security for the rest of us


Many governments, faced with the question of how to improve national security, would do the obvious thing: round up the usual suspects. These would be, of course, the experts - that is, the security services and law enforcement. This exercise would be a lot like asking the record companies and film studios to advise on how to improve copyright: what you'd get is more of the same.

This is why it was so interesting to discover that the US National Academies of Science was convening a workshop to consult on what research topics to consider funding, and began by appointing a committee that included privacy advocates and usability experts, folks like Microsoft researcher Butler Lampson, Susan Landau, co-author of books on privacy and wiretapping, and Donald Norman, author of the classic book The Design of Everyday Things. Choosing these people suggests that we might be approaching a watershed like that of the late 1990s, when the UK and the US governments were both forced to understand that encryption was not just for the military any more. The peace-time uses of cryptography to secure Internet transactions and protect mobile phone calls from casual eavesdropping are much broader than crypto's war-time use to secure military communications.

Similarly, security is now everyone's problem, both individually and collectively. The vulnerability of each individual computer is a negative network externality, as NYU economist Nicholas Economides pointed out. But, as many asked, how do you get people to understand remote risks? How do you make the case for added inconvenience? Each company we deal with assumes we can afford the time to "just click to unsubscribe" or to remember just one more password, without really understanding the growing aggregate burden on us. Norman commented that door locks are a trade-off, too: we accept a little bit of inconvenience in return for improved security. But locks don't scale; they're acceptable as long as we only have to manage a small number of them.

In his 2006 book, Revolutionary Wealth, Alvin Toffler observes that most of us, without realizing it, hold a hidden and increasingly onerous third job: "prosumer". Companies, he explains, are increasingly saving money by having us do their work for them. We retrieve and print out our own bills, burn our own CDs, provide unpaid technical support for ourselves and our families. One of Lorrie Cranor's students did the math to calculate the cost, in lost time and opportunity, if everyone in the US read, once a year, the privacy policy of every Web site they visit at least once a month. Most policies require college-level reading skills; figure 244 hours per person per year, $3,544 each...$781 billion nationally. Weren't computers supposed to free us of that kind of drudgery? As everything moves online, aren't we looking at a full-time job just managing our personal security?
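
That estimate is easy to reproduce as back-of-the-envelope arithmetic. A minimal sketch follows; the online-population figure is my assumption, chosen to roughly match the column's numbers, not necessarily the study's exact input.

```python
# Back-of-the-envelope version of the privacy-policy reading cost.
# The population figure below is an illustrative assumption, not
# necessarily the study's exact input.

hours_per_person_per_year = 244          # reading time per person
value_of_an_hour = 3544 / 244            # implied hourly value, ~$14.52
us_online_population = 220_000_000       # assumed US Internet users

cost_per_person = hours_per_person_per_year * value_of_an_hour
national_cost = cost_per_person * us_online_population

print(f"per person: ${cost_per_person:,.0f}/year")          # $3,544
print(f"nationally: ${national_cost / 1e9:.0f} billion")    # ~$780 billion
```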

That, in fact, is one characteristic that many implementations of security share with welfare offices - and that is becoming pervasive: an utter lack of respect for the least renewable resource, people's time. There's a simple reason for that: the users of most security systems are deemed to be the people who impose them, not the people - us - who have to run the gauntlet.

There might be a useful comparison to information overload, a topic that got a lot of attention about ten years back. When I wrote about it for ComputerActive in 1999, I discovered that everyone I knew had a particular strategy for coping with "technostress" (the editor's term). One dealt with it by never seeking out information and never phoning anyone. His sister refused to have an answering machine. One simply went to bed every day at 9pm to escape. Some refused to use mobile phones, others to have computers at home.

But back then, you could make that choice. How much longer will we be able to draw boundaries around ourselves by, for example, refusing to use online banking, file tax returns online, or participate in social networks? How much security will we be able to opt out of in future? How much do security issues add to technostress?

We've been wandering in this particular wilderness a long time. Angela Sasse, whose 1999 paper Users Are Not the Enemy talked about the problems with passwords at British Telecom, said frankly, "I'm very frustrated, because I feel nothing has changed. Users still feel security is just an obstacle there to annoy them."

In practice, the workshop was like the TV game show Jeopardy!: the point was to generate research questions that will go into a report, which will be reviewed and redrafted before its eventual release. With luck, it will eventually lead to a series of requests for proposals and some really good research. It is a glimmer of hope.

Unless, that is, the gloominess of the opening presentations wins out. If you listened to Lampson, Cranor, and Economides, you got the distinct impression that the best thing that could happen for security is that we rip out the Internet (built to be open, not secure), trash all the computers (whose operating systems were all designed in the pre-Internet era), and start over from scratch. Or, as in the old joke about the lost driver asking for directions: "Well, I wouldn't start from here."

So, here's my question: how can we make security scale so that the burden stays manageable?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

July 17, 2009

Human factors

For the last several weeks I've been mulling over the phrase security fatigue. It started with a paper (PDF) co-authored by Angela Sasse, in which she examined the burden that complying with security policies imposes upon corporate employees. Her suggestion: that companies think in terms of a "compliance budget" that, like any other budget (money, space on a newspaper page), has to be managed and used carefully. And, she said, security burdens weigh differently on different people and at different times, and a compliance budget needs to account for that, too.

Some examples (mine, not hers). Logging onto six different machines with six different user IDs and passwords (each of which has to be changed once a month) is annoying but probably tolerable if you do it once every morning when you get to work and once in the afternoon when you get back from lunch. But if the machines all log you out every time you take your hands off the keyboard for two minutes, by the end of the day they will be lucky to survive your baseball bat. Similarly, while airport security is never fun, the burden of it is a lot less to a passenger traveling solo after a good night's sleep who reaches the checkpoints when they're empty than it is to the single parent with three bored and overtired kids under ten who arrives at the checkpoint after an overnight flight and has to wait in line for an hour. Context also matters: a couple of weeks ago I turned down a ticket to Court 1 at Wimbledon on men's semi-finals day because I couldn't face the effort it would take to comply with their security rules and screening. I grudgingly accept airport security as the trade-off for getting somewhere, but to go through the same thing for a supposedly fun day out?
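
A crude way to picture the idea: treat each user as holding a finite stock of compliance effort that every security demand draws down, weighted by context. Here is a toy sketch of that model; the class, the effort values, and the multipliers are all invented for illustration, not anything from Sasse's paper.

```python
# Toy model of a "compliance budget": each security demand draws down
# a finite stock of user goodwill, weighted by context. All names and
# numbers here are invented for illustration.

class ComplianceBudget:
    def __init__(self, capacity: float = 100.0):
        self.remaining = capacity

    def demand(self, effort: float, context_multiplier: float = 1.0) -> bool:
        """Spend compliance effort; return False once the user gives up
        (i.e., starts working around the policy)."""
        self.remaining -= effort * context_multiplier
        return self.remaining > 0

user = ComplianceBudget()
# A morning log-in across six systems is tolerable...
for _ in range(6):
    user.demand(effort=5)
# ...but aggressive two-minute timeouts charge the same cost all day,
# and a stressful context (tired, time pressure) multiplies it.
for _ in range(10):
    still_complying = user.demand(effort=5, context_multiplier=2.0)
print("still complying?", still_complying)   # False: budget exhausted
```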

It's relatively easy to see how the compliance budget concept could be worked out in practice in a controlled environment like a company. It's very difficult to see how it can be worked out for the public at large, not least because none of the many companies each of us deals with sees it as beneficial to cooperate with the others. You can't, for example, say to your online broker that you just can't cope with making another support phone call, can't they find some other way to unlock your account? Or tell Facebook that 61 privacy settings is too many because you're a member of six other social networks and Life is Too Short to spend a whole day configuring them all.

Bruce Schneier recently highlighted the paper behind that last example, from Joseph Bonneau and Soeren Preibusch at Cambridge's Computer Laboratory, alongside another by Leslie John, Alessandro Acquisti, and George Loewenstein at Carnegie Mellon, to note a counterintuitive discovery: the more explicit you make privacy concerns, the less people will tell you. "Privacy salience" (as Schneier calls it) makes people more cautious.

In a way, this is a good thing, and it goes to show what privacy advocates have been saying all along: people do care about privacy if you give them the chance. But if you're the owner of Facebook, a frequent-flyer program, or Google, it is not in your business interest to spell out too clearly to users what they should be concerned about. All of these businesses rely on collecting more and more data about more and more people. Fortunately for them, as we know from research conducted by Lorrie Cranor (also at Carnegie Mellon), people hate reading privacy policies. I don't think this is because people aren't interested in their privacy. I think this goes back to what Sasse was saying: it's security fatigue. For most people, security and privacy concerns are just barriers blocking the thing they came to do.

But choice is a good thing, right? Doesn't everyone want control? Not always. Go back a few years and you may remember some widely publicized research that pointed out that too many choices stall decision-making and make people feel...tired. A multiplicity of choices adds weight and complexity to the decision you're making: shouldn't you investigate all the choices, particularly if you're talking about which of 56 mutual funds to add to your 401(k)?

It seems obvious, therefore, that the more complex the privacy controls offered by social networks and other services the less likely people are to use them: too many choices, too little time, too much security fatigue. In minor cases in real life, we handle this by making a decision once and sticking to it as a kind of rule until we're forced to change: which brand of toothpaste, what time to leave for work, never buy any piece of clothing that doesn't have pockets. In areas where rules don't work, the best strategy is usually to constrain the choices until what you have left is a reasonable number to investigate and work with. Ecommerce sites notoriously get this backwards: they force you to explore group by group instead of allowing you to exclude choices you'll never use.
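
The contrast is easy to see in code. A toy sketch, with invented product data: a standing exclusion rule gets you to a shortlist in one pass, where group-by-group browsing makes you visit every category.

```python
# Toy contrast between exclusion-based narrowing and category browsing.
# The catalogue data is invented for illustration.

catalogue = [
    {"name": "jeans A", "pockets": True,  "category": "trousers"},
    {"name": "dress B", "pockets": False, "category": "dresses"},
    {"name": "coat C",  "pockets": True,  "category": "outerwear"},
    {"name": "skirt D", "pockets": False, "category": "skirts"},
]

# A standing rule ("never buy clothing without pockets") prunes the
# whole catalogue in one pass, leaving a manageable shortlist:
shortlist = [item for item in catalogue if item["pockets"]]
print([item["name"] for item in shortlist])      # ['jeans A', 'coat C']

# Group-by-group browsing, by contrast, forces a visit to every
# category even when most of its contents violate the rule:
by_category = {}
for item in catalogue:
    by_category.setdefault(item["category"], []).append(item)
print(len(by_category), "categories to browse")  # 4
```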

How do we implement security and privacy so that they're usable? This is one of the great unsolved, under-researched questions in security. I'm hoping to know more next week.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

July 10, 2009

The public interest

It's not new for journalists to behave badly. Go back to 1930s films like The Front Page (1931) or Mr Smith Goes to Washington (1939), and you'll find behavior (thankfully, fictional) as bad as this week's Guardian story that the News of the World paid out £1 million to settle legal cases that would otherwise have revealed that its staff journalists were in the habit of hiring private investigators to hack into people's phone records and voice mailboxes.

The story's roots go back to 2006, when the paper's Royal editor, Clive Goodman, was arrested and subsequently jailed for illegally intercepting phone calls. The paper's then editor, Andy Coulson, resigned, and the Press Complaints Commission concluded that the paper's executives did not know what Goodman was doing. Five months later, Coulson became director of communications for the Tory party.

There are so many cultural failures here that you almost don't know where to start counting. The first and most obvious is the failure of a newsroom to obey the dictates of common sense, decency, and the law. That particular failure is the one garnering the most criticism, and yet it seems to me the least surprising, especially for one of Britain's most notorious tabloids. Journalists have competed for stories big enough to sell papers since the newspaper business was founded; the biggest rewards generally go to the ones who expose the stories their subjects least wanted exposed. It's pretty sad if any newspaper's journalists think the public interest argument is as strong for listening to Gwyneth Paltrow's voice mail as it was for exposing MPs' expenses, but that leads to the second failure: celebrity culture.

This one is more general: none of this would happen if people didn't flock to buy stories about intimate celebrity details. And newspapers are desperate for sales.

The third failure is specific to politicians: under the rubric of "giving people a second chance", Tory leader David Cameron continues to defend Coulson, who continues to claim he didn't know what was going on. Either Coulson did know, in which case he was condoning it, or he didn't, in which case he had only the shakiest grasp of his newsroom. The latter is the same kind of failure that has bred journalistic fraud at other papers and magazines: surely any editor now ought to be paying attention to sourcing. Either way, Coulson does not come off well, and neither does Cameron. It would be more tolerable if Cameron would simply say outright that he doesn't care whether Coulson is honorable or not because he's effective at the job Cameron is paying him for.

The fourth failure is of course the police, the Press Complaints Commission, and the Information Commissioner, all of whom seem to have given up rather easily in 2007.

The final failure is also general: more and more intimate information about each of us is held in databases whose owners may have incentives (legal, regulatory, commercial) to keep them secure but which are of necessity accessible to minions whose risks and rewards are different. The weakest link in security is always the human factor, and the problem of insiders who can be bribed or conned into giving up confidential information they shouldn't is as old as the hills, whether it's a telephone company employee, a hotel chambermaid, or a former Royal nanny. Seemingly we have learned little or nothing in the 20 years since Kevin Mitnick popularized the term "social engineering", or since Squidgygate, when various Royals' private phone conversations were published. At least some ire should be directed at the phone companies involved, whose staff apparently find it easy to cite the Data Protection Act when refusing to help legitimate account holders, but difficult to resist illegitimate blandishments.

This problem is exacerbated by what University College London security researcher Angela Sasse calls "security fatigue". Gaining access to targets' voice mail was probably easier than you might think, given that many people never change the default PIN on their phones. Either your private investigator turned phone hacker tries the default PIN or, as Sophos senior fellow Graham Cluley suggests, convinces the phone company to reset the PIN to the default. Yes, it's stupid not to change the default password on your phone. But with so many passwords and PINs to manage and only so much tolerance for dealing with security, it's an easy oversight. Sasse's paper (PDF) fleshing out this idea proposes that companies should think in terms of a "compliance budget" for employees. But this will be difficult to apply to consumers, since no one company we interact with will know the size of the compliance burden each of us is carrying.
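
The provider-side fix is cheap to state, if not to deploy: refuse to leave an account on a known default. A minimal sketch of the kind of check a provider could run at activation; the default list and the rules are invented for illustration.

```python
# Sketch of a default/weak-PIN check a voicemail provider could enforce
# at activation. The DEFAULT_PINS list is hypothetical.

DEFAULT_PINS = {"0000", "1234", "1111", "3333", "9999"}

def pin_is_acceptable(pin: str) -> bool:
    """Reject factory defaults and trivially guessable PINs."""
    if pin in DEFAULT_PINS:
        return False
    if len(set(pin)) == 1:                # all one digit, e.g. "7777"
        return False
    digits = [int(d) for d in pin]
    if all(b - a == 1 for a, b in zip(digits, digits[1:])):
        return False                      # ascending runs like "2345"
    return True

assert not pin_is_acceptable("1234")
assert pin_is_acceptable("2740")
```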

Get the Press Complaints Commission to do its job properly by all means. And stop defending the guy who was in charge of the newsroom while all this snooping was going on. Change a culture that thinks that "the public interest" somehow expands to include illegal snooping just because someone is famous.

But bear in mind that, as Privacy International has warned all along, this kind of thing is going to become endemic as Britain's surveillance state continues to develop. The more our personal information is concentrated into large targets guarded by low-paid staff, the more openings there will be for those trying to perpetrate identity fraud or blackmail, snoop on commercial competitors, sell stories about celebrities and politicians, and pry into the lives of political activists.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or email netwars@skeptic.demon.co.uk.

July 3, 2009

What's in an assigned name?

There's a lot I didn't know at the time about the founding of the Internet Corporation for Assigned Names and Numbers, but I do remember the spat that preceded it. Until 1998, the domain name system (DNS) and the Internet Assigned Numbers Authority (IANA) were both effectively managed by one man, Jon Postel, who by all accounts was a thoughtful and careful steward and an important contributor to much of the engineering that underpins the Internet even now. Even before he died in October 1998, however, plans were underway to create a successor organization to take over the names and numbers functions.

The first proposal was to turn these bits of management over to the International Telecommunication Union, and a memorandum of understanding was drawn up that many, especially within the ITU, assumed would pass unquestioned. Instead, there was much resentment and many complaints that important stakeholders (consumers, most notably) had been excluded. Eventually, ICANN was created under the auspices of the US Department of Commerce, intended to become independent once it had fulfilled certain criteria. We're still waiting.

As you might expect, the US under Bush II wasn't all that interested in handing off control. The US government had some support in this, in part because many in the US seem to have difficulty accepting that the Internet was not actually built by the US alone. So alongside the US government's normal resistance to relinquishing control was an endemic sense that it would be "giving away" something the US had created.

All that aside, the biggest point of contention was not ICANN's connection to the US government, however much those outside the US might like to see it cut. Nor was it the assignment of numbers, which, since numbers are how computers find each other, is arguably the most important bit of the whole thing. It wasn't even, or at least not completely, the money (PDF), staggering as it is that ICANN expects to rake in $61 million in revenue this year as its cut of domain name registrations. No, of course it was the names that are meaningful to people: who should be allowed to have what?

All this background is important because on September 30 the joint project agreement with the DoC under which ICANN operates expires, and all these debates are being revisited. Surprisingly little has changed in the arguments about ICANN since 1998. Michael Froomkin argued in 2000 (PDF) that ICANN bypassed democratic control and accountability. Many critics have argued in the intervening years that ICANN needs to be reined in: its mission kept to a narrow focus on the DNS, and its structure made transparent and accountable and kept free not only of US government interference but of other governments' as well.

Last month, the Center for Democracy and Technology published its comments to that effect. Last year, and in 2006, former elected ICANN board member Karl Auerbach argued similarly, with much more discussion of ICANN's finances, which he regards as a "tax". There may be even more to those finances than was obvious then: ICANN's new public dashboard has revealed that the organization lost $4.6 million on the stock market last year, an amount reporter John Levine equates to the 20-cent fee from 23 million domain name registrations. As Levine asks, if they could afford to lose that amount, then they didn't need the money - so why did they collect it from us? There seems to be no doubt that ICANN can keep growing in size and revenues by creating more top-level domains, especially as it expands into long-mooted non-ASCII internationalized domain names (IDNs).

Arguments about money aside, the fact is that we have not progressed much, if at all, since 1998. We are asking the same questions and having the same arguments. What is the DNS for? Should it be a directory, a handy set of mnemonics, a set of labels, a zoning mechanism, or a free-for-all? Do languages matter? Early discussions included the notion that there would be thousands, even tens of thousands, of global top-level domains. Why shouldn't Microsoft, Google, or the Electronic Frontier Foundation operate their own registries? Is managing the core of the Internet an engineering, legal, or regulatory problem? And, latterly, given the success and central role of search engines, do we need the DNS at all? Personally, I lean toward the view that the DNS has become less important than it was, as many services (Twitter, instant messaging, VOIP) do not depend on users typing domain names. Even the Web needs it less than it did. But if what really matters about the DNS is giving people names they can remember, then from the user's point of view it matters little how many top-level domains there are. The domain info.microsoft is no less memorable than microsoft.info or microsoft.com.
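
The division of labor in question (names for people, numbers for machines) is visible from any scripting language. A quick illustration using Python's standard library:

```python
# Names are for people; computers find each other by number.
# Resolving a memorable name into the addresses machines actually use:
import socket

for family, _, _, _, sockaddr in socket.getaddrinfo("microsoft.com", 443):
    print(family.name, sockaddr[0])   # address family and IP, one per record
```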

What matters is that the Internet continues to function and that anyone can reach any part of it. The unfortunate thing is that none of these discussions has solved the problems we really have. Four years after the secured version of DNS (DNSsec) was developed to counteract security threats, such as DNS cache poisoning, that had been mooted for many more years than that, it is still barely deployed.
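
Whether a given zone has deployed DNSsec is easy to probe: a signed zone publishes DNSKEY records. A minimal sketch using the third-party dnspython package; the zones queried are just examples, and the presence of DNSKEY records is a necessary sign of signing, not proof of a valid chain of trust.

```python
# Sketch: check whether a zone publishes DNSKEY records, the visible
# sign of a DNSsec-signed zone. Requires the third-party dnspython
# package (pip install dnspython). A real check would also validate
# the chain of trust up to the root.
import dns.resolver

def zone_looks_signed(zone: str) -> bool:
    try:
        dns.resolver.resolve(zone, "DNSKEY")   # raises NoAnswer if none
        return True
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False

for zone in ("ietf.org", "example.com"):
    print(zone, "signed" if zone_looks_signed(zone) else "unsigned")
```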

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.