The last couple of years have seen growing awareness that modern cars are becoming as security-challenged as the other newly electronified systems: smart meters, smart TVs, SCADA, and everything else that will eventually make up the Internet of Things. This week, a group of academics and other concerned parties gathered in Oxford to hash out the shape of the nascent monster approaching on wheels. Officially, the main event was autonomous vehicles, but the reality is that current cars are already computer networks on wheels, with all the security issues that implies. Two days later, the story broke about the vulnerabilities in the Nissan Leaf. Last week Volvo recalled 59,000 vehicles because of a software flaw that caused engines to shut down.
The two scariest sources of trouble that emerged: human nature and what one speaker called the "supply mesh" (because "chain" is too simple). Human nature we see on the roads every day; the fact that humans and computers have different expectations and foibles may necessitate, as some suggest, segregating them from each other, something we're still struggling to do for bicycles.
Google dominates the AV headlines to such an extent that some people apparently wonder why Paul Newman bothers with his Oxford-based Mobile Robotics Group. It's disturbing: a game-changing global market is being mentally awarded to a single company before the first product has even launched. Motivations matter a lot in new technology design. European researchers tend to talk about convoys, fuel savings, and better use of road space. Newman imagines cars sharing data and doing collaborative mapping. This week, Mother Jones imagined the end of parking. There is more than one possible future besides optimizing for monetizing passengers by displaying ads, favoring sites that pay for the privilege, selling paid apps, and comprehensively tracking data, as Tom Slee writes in What's Yours Is Mine.
The between-sessions schmoozing at an event like this throws up all sorts of possibilities. For example: public discomfort might preclude sending unmanned 18-wheeler trucks careening around the I-95 or the M1. But maybe there'd be less of a problem with putting one guy in charge of a convoy of four, much like a Clear Channel disk jockey overseeing a dozen stations from a studio in Atlanta. Your quad-trucker would act as a security guard for the trucks and their contents, and apply human judgement and intelligence when needed. One speaker produced a fine example of when that might be: a burning oil tanker truck by the roadside. It wouldn't trigger any of an autonomous vehicle's alarm systems - no people to worry about, road clear, good visibility (for now). Although this specific case could be handled with outside thermometers and sensors that sample the air for particulates and other indicators of smoke, the point stands that automated vehicles can only be programmed with what their developers foresee. Humans have life experience (although also variable stupidity). By the time an autonomous vehicle has collected enough data from daily runs to gain the equivalent, it may have been blown up.
But all that is still in the prototype stage. The immediate security issue is today's increasingly automated and wireless-equipped cars; the AVs built on top of them can't be more secure than they are. Every industry seems to have to learn separately that when you add wireless communications you must change your threat model. And the supply chain affects all cars now, not just AVs in a future we have time to prepare for.
Probably most people's image of where cars come from is the old assembly line. Of course, it's far more complicated than that: as in the computer industry itself, original equipment manufacturers (OEMs) buy pieces from Tier 1 suppliers, who in turn build systems out of components sourced from Tier 2 suppliers. All these players are densely interconnected (the "mesh") and, as an added complication, the lower-tier suppliers are often bigger than the car companies themselves. So people build subsystems and then systems and then cars out of components whose security they can't inspect or test, and whose manufacturers they simply have to trust. A trust chain isn't so bad when the guy on the factory line can look at the T-joint he's installing and say, "This one doesn't look so good - send it for testing", but is potentially disastrous when you're talking about embedded systems whose vulnerabilities may not appear for a year or two but will be catastrophic when they do.
Business nature also adds risk. One of the most disturbing bits of news is VW's response to Birmingham University researcher Flavio Garcia's paper on cracking the Megamos Crypto immobilizer system in use in many cars. Lawsuits from VW - more recently famous for gaming its emission control systems' test results - kept Garcia's work gagged for two years before he was cleared to publish the weaknesses he found. Car manufacturers didn't take safety seriously before Ralph Nader in the 1960s; a similar force is needed for security today. Playing Matt Blaze's Security Problem Excuse Bingo is a lot less fun when the object under discussion is a two-ton killer cyborg.
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.