The pet rock manifesto
I understand why government doesn't listen to security experts on topics where their advice conflicts with the policies it likes. For example: the Communications Capabilities Development Programme, where experts like Susan Landau, Bruce Schneier, and Ross Anderson have all argued persuasively that a hole is a hole and creating a vulnerability to enable law enforcement surveillance is creating a vulnerability that can be exploited by...well, anyone who can come up with a way to use it.
All of that is of a piece with recent UK and US governments' approach to scientific advice in general, as laid out in The Geek Manifesto, the distillation of Mark Henderson's years of frustration serving as science correspondent at The Times (he's now head of communications for the Wellcome Trust). Policy-based evidence instead of evidence-based policy, science cherry-picked to support whatever case a minister has decided to make, the role of well-financed industry lobbyists - it's all there in that book, along with case studies of the consequences.
What I don't understand is why government rejects experts' advice when there's no loss of face involved, and where the only effect on policy would be to make it better, more relevant, and more accurately targeted at the problem it's trying to solve. Especially *this* government, which in other areas has come such a long way.
Yet this is my impression from Wednesday's Westminster eForum on the UK's Cybersecurity strategy (PDF). Much was said - for example, by James Quinault, the director of the Office of Cybersecurity and Information Assurance - about information and intelligence sharing and about working collaboratively to mitigate the undeniably large cybersecurity threat (even if it's not quite as large as BAE Systems Detica's seemingly-pulled-out-of-the-air £27 billion would suggest; Detica's technical director, Henry Harrison, didn't exactly defend that number, but said no one has come up with a better estimate for the £17 billion that report attributed to cyberespionage).
It was John Colley, the managing director EMEA for (ISC)2, who said it: in a meeting he attended late last year with, among others, James Brokenshire MP, Minister for Crime and Security at the Home Office, shortly before the publication of the UK's four-year cybersecurity strategy (PDF), he asked who the document's formulators had talked to among practitioners, "the professionals involved at the coal face". The answer: well, none. GCHQ wrote a lot of it (no surprise, given the frequent, admittedly valid, references to its expertise and capabilities), and some of the major vendors were consulted. But the actual coal face guys? No influence. "It's worrying and distressing," Colley concluded.
Well, it is. As was Quinault's response when I caught him to ask whether he saw any conflict between the government's policies on CCDP and surveillance back doors built into communications equipment versus the government's goal of making Britain "one of the most secure places in the world to do business". That response was, more or less precisely: No.
I'm not saying the objectives are bad; but besides the issues raised when the document was published, others were highlighted Wednesday. Colley, for example, noted that for information sharing to work it needs two characteristics: it has to go both ways, and it has to take place inside a network of trust - and GCHQ doesn't usually share much. In addition, it's more effective, according to both Colley and Stephen Wolthusen, a reader in mathematics at Royal Holloway's Information Security Group, to share successes rather than problems - which means you need to be able to phone the person who has had your problem to get details. And really, so much still comes down to human factors and very basic things, like changing the default passwords on Internet-facing devices. This is the stuff the coalface guys see every day.
Recently, I interviewed nearly a dozen experts of varying backgrounds about the future of infosecurity; the piece is due to run in Infosecurity Magazine sometime around now. What seemed clear from that exercise is that in the long run we would all be a lot more secure, a lot more cheaply, if we planned ahead based on what we have learned over the past 50 years. For example: before rolling out wireless smart meters all over the UK, don't implement remote disconnection. Don't connect to the Internet legacy systems such as SCADA that were never designed with remote access in mind and whose security until now has depended on restricting physical access. Don't implant medical devices in people's chests without studying the security risks. Stop, in other words, making the same mistakes over and over again.
The big, upcoming issue, Steve Bellovin writes in Privacy and Cybersecurity: the Next 100 Years (PDF), a multi-expert document drafted for the IEEE, is burgeoning complexity. Soon, we will be surrounded by sensors, self-driving cars, and the 2012 version of pet rocks. Bellovin's summation, "In 20 years, *everything* will be connected...The security implications of this are frightening." And, "There are two predictions we can be quite certain about: there will still be dishonest people, and our software will still have some bugs." Sounds like a place to start, to me.
Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.