The weakest link
There's an interesting effort at a pair of Newcastle universities to design ways to get people to make better security choices. Some of this work is complex mathematical modeling, but some is psychology, inspired by the kind of thinking that has been popularized in the 2009 book Nudge, by Richard Thaler and Cass Sunstein. The Choice Architectures project is trying to create the evidence-based cybersecurity equivalent of making it easier to grab the vegetables than reach for the desserts in a school cafeteria.
There's a lot to like about this approach. It's only logical that people will make better decisions if the easiest choice is also the secure choice. Hardly anyone who doesn't have to encrypts email, because it's a pain. At least partly for security reasons, my copy of Firefox is loaded up with Adblock Plus, Ghostery, and NoScript, and the result is frequent reloads and workarounds, because some - even many - sites don't work until one or more of those features is turned back on. No one normal chooses to live like this.
One of the techniques that interests project researcher Lynne Coventry is an approach, familiar from health contexts, for turning intentions into behavior: identify the behavior you want to change; identify the trigger situations; and work out a consequence or substitute. Say the behavior you want to change is choosing easily cracked passwords. If you're asked for a password when you're in a hurry, you pick something familiar and quick just to get the task done, and people rarely experience enough personal damage to scare them off the habit. But it's fixable: draw up a list of strong passwords in advance to choose from when you're pressed, or use a password generator.
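For what it's worth, here's a minimal sketch of the kind of generator meant here, using Python's standard secrets module; the 16-character length and the character set are my illustrative assumptions, not anything the project prescribes.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Draw up a short list in advance to pick from when you're pressed for time.
if __name__ == "__main__":
    for _ in range(5):
        print(generate_password())
```

The point isn't the code; it's that the strong choice becomes the quick choice.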
At a workshop about a month ago, Coventry had a group think up security scenarios for such interventions. This is when it became clear to me that the real problem is not us, the users, the people who are constantly being told we are the weakest link. It's *them*, the people who design systems.
For example, one idea concerned how people take care of the keys to the most complex and expensive computer networks individuals own: their cars. The latest thing in keyless entry systems spares you even the arduous labor of pushing a button on an electronic key; the mere presence of the key is enough to unlock the car. To be fair, the point of such systems is to eliminate many of the familiar problems with keys: they get lost, stolen, copied, cloned, and forgotten. But, as so often, the new technology introduces new risks: the modern car thief brings a signal booster instead of a wire hanger. So we were mulling: is the solution to teach people to buy RFID-blocking wallets? If the desirable behavior is for people to change the administrative password on their wifi routers, do we scare them (evil hackers might pwn you) or get ISPs to reward them with extra bandwidth? Are we after consciousness-raising, carrots, sticks, or some combination of all three?
There are, of course, orthogonal solutions to these things, such as hard-wiring your machines or buying a bicycle. But the common underlying problem is that consumers often can't behave securely because of system design. When open wifi became a problem, manufacturers began supplying pre-configured boxes with complex passwords printed on their sides. Nothing similar has yet happened (though it may) with the administrative passwords for those same routers, which are still shipped with known default settings. These are just two examples among many of situations where we as consumers are being "educated" to pick up slack that more thoughtful design would have avoided entirely.
The broader question, of course, which other RISCS projects tackle, is what *are* the better decisions we should make? So much of modern security advice is folk knowledge, built up by long practice and habit but originally conceived for situations very different from the one we're in now. While looking up the router administrator password issue, I yet again encountered the advice to change passwords every 30 days. Long-time Purdue security professor Gene Spafford noted in 2006 that this advice had been conceived 30 years earlier for mainframes, based on a very specific threat model: a calculation of how long the machines of the day would take to crack a password by brute force. What does this have to do with today's data breaches, rainbow tables, and phishing emails?
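Very little - and even the original arithmetic has expired. Here's a rough, back-of-the-envelope sketch of the kind of brute-force calculation that threat model rested on; the guess rates are my illustrative assumptions, not Spafford's figures.

```python
# Rough crack-time arithmetic behind the old 30-day rule: an attacker
# trying guesses_per_second candidates needs, on average, half the
# keyspace (alphabet_size ** length / 2) to hit a brute-forced password.

SECONDS_PER_DAY = 86_400

def days_to_crack(alphabet_size: int, length: int, guesses_per_second: float) -> float:
    average_guesses = (alphabet_size ** length) / 2
    return average_guesses / guesses_per_second / SECONDS_PER_DAY

# Illustrative rates only: thousands of guesses a second for 1970s-era
# hardware versus billions for a modern password-cracking rig.
print(days_to_crack(26, 8, 1e4))   # 8 lowercase chars, old hardware: ~120 days
print(days_to_crack(26, 8, 1e10))  # the same password today: about ten seconds
```

On the old assumptions, rotating every 30 days kept you ahead of the attacker; on today's hardware the same password falls in seconds, and no rotation schedule touches breaches or phishing at all.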
The appalling thing is that copyright is starting to surface as an impediment to some choices. You can work freely on a 50-year-old car, but not so much a modern one, as manufacturers use the Digital Millennium Copyright Act to block such activities - even for farmers with large, expensive equipment. As a result, a number of states are considering Fair Repair laws - lest you think copyright is all about abstractions. On what Chris Preimesberger in eWeek dubs the Internet of Other People's Things, this is all going to be so, so much worse, in so many ways. At the moment, the smart decision is not to buy - or rent - "smart" things. In the near future, "smart" things with built-in flaws will be the path of least resistance. How do we make good decisions then?
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.