" /> net.wars: December 2014 Archives


December 26, 2014

The ugly, the bad, and the good: 2014 in review

Ugly: the year in privacy.

From the Target data breach to the Sony hack, from the continuing Snowden revelations to the UK's power grab, from Facebook's manipulation of its users' newsfeeds to the Samaritans' ham-fisted attempt at leveraging social media, and finally from the UK government's early, disastrous stab at reusing medical data to Heartbleed and the holes in the Internet's basic infrastructure, it's been a rough, rough year.

Back in January at the Computers, Freedom, and Data Protection conference, Mireille Hildebrandt asked whether people would rather have privacy or the right to privacy. Without trustworthy technical tools and infrastructure we can't have the former, and the latter is theoretical fantasy. Each of those cases highlights a different problem: vulnerabilities introduced down the chain of suppliers (Target); the deep dependence we all have on third parties to protect the information we have no choice but to give them (Sony, eBay, government health services).

Bad: mobile phone obliviousness is reaching absurd proportions.

A couple of weeks ago a friend and I watched with some bemusement as a young woman walked up the busiest section of Pancras Road, where it runs alongside London's Kings Cross railway station. While traffic flowed around and behind her, she walked slower and slower as a taxi came up behind her, slowed, and inched forward, the driver doubtless fuming in legitimate frustration. She was, of course, engrossed in whatever was happening on the screen of her mobile phone, which was more real to her than the large, life-threatening chunks of metal around her. Similarly, it's now routine to see swathes of passengers blocked on the stairs exiting underground stations by people who can't wait two more steps to check their email.

Worse, this behavior is spreading to cyclists: on a busy Cambridge street in July the cyclist in front of me couldn't hear my shouted question because she was listening to music over headphones. In many jurisdictions, blocking off your hearing is illegal for car drivers. For cyclists it's suicidal, and you see it even in London.

The mindset in which our personal bubbles are preferable to the physical world around us is a real and serious threat to our social fabric, and it's one that's likely to spread through the vector of "smart city" and "smart home" technologies. How cool is it to have a smartphone app that will remotely control your home's heating system so you can come home from a two-week vacation to a warm house? It makes sense if your nearest neighbor is five miles away, as in parts of the American West. But when your next-door neighbor is only a few feet away...trading those sorts of favors is how a local cohesive community is built. Costs like these are rarely factored in when we think about new technologies. I remember hearing on a radio program once that air conditioning killed much of the social fabric of the American South: people gave up sitting on their breezy porches and chatting with neighbors passing by in favor of shutting the windows and staying indoors. Similarly, remote garage door openers (drive up, open door, drive in, close door, enter house through garage) mean many American suburban neighbors never talk to each other. Granted, many urban dwellers don't either...but carry this lack of cooperation over into areas where shared resources are an issue - such as the increasingly tight spaces on airplanes - and we're going to be in real trouble.

Good: there is still hope.

Despite all the above, the increasingly acrimonious disputes over network neutrality, and the inroads into physical-world privacy being set up by developments such as Google's purchase of Nest, there are still those who are pushing back against government and corporate capture of the Internet. Danny O'Brien, now at the Electronic Frontier Foundation, highlighted some of the hard-working civil society folk last year; others have spoken at Eva Pascoe's Cybersalon series. Still others are trying to rebuild the Internet infrastructure so we can rely on it more safely. Finally, efforts to return control over personal data to its owner, such as personal data stores, offer the possibility of a much safer and more resilient way of doing things. (Obdisclosure: I do some work for Mydex.)

None of these efforts will be enough by themselves. As all these "smart" technologies spread outwards into the rest of our lives via ubiquitous sensors feeding the cloud, the risks inherent in more of the same will continue to rise. As Cory Doctorow argues in Wired this week, the restrictions on technology that are today merely inconvenient will, in a few years, place fundamental limitations on our physical autonomy. The principle that anything in your machine that you can't control can hurt you will be infinitely more meaningful when the machine in question is a chip that allows your limbs, your vision, or your brain to function.

So, for 2015: I wish you computers, freedom, and privacy - and the will to fight for all three.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 19, 2014

Losing it

Alone among the coverage I've seen of the Sony hack, Gizmodo last week pointed out the impact on the company's many thousands of employees. Here are people who did nothing wrong - unless you consider allowing personal information to pass over company email wrong - but whose medical records, social security numbers, and home addresses are now strewn across the Internet, making them targets for all sorts of thieves, identity fraudsters, and attackers. The scope for harm here dwarfs the extra amount that Sony will have to pay (in both money and personal grovelling) to secure the services of the stars who were dissed in the more gossipy leaked emails. The latter is embarrassment that will pass; the former may reverberate through the lives of the employees and their families for decades.

It is because of those employees - not the shareholders, management, prospective audiences, or talent - that I have to take issue with all the celebrities lambasting Sony for pulling the movie at issue, THE INTERVIEW, from release. No, the threat of bombing theatres was not particularly credible, but there is plenty of scope for retaliation against those employees. Sony is in the position of a company whose headquarters has been captured with all the employees inside held hostage. First, you do your best to get out of, or at least calm, the crisis. Then you think what to do longer-term. The movie and its release date are what Alfred Hitchcock used to call a MacGuffin - utterly irrelevant in itself, but the thing everyone fights over that drives the plot.

The news is changing too fast to keep up with. The latest, as I type, is that the FBI has formally accused North Korea of perpetrating the hack, and that the US is consulting Japan and China on what to do next. Books will be written explaining what really happened; it's going to be a while before we know the detail.

Nonetheless, there are already lessons for companies and individuals to apply to their own arrangements. One has to do with the way we think about risk. Humans are notoriously poor at quantifying this, and the Sony hack shows this perfectly.

A commenter to Bruce Schneier's blog posting on this topic pointed out a telling quote from Sony security head Jason Spaltro. In a 2005 interview, Spaltro outlined his thoughts about the tradeoffs a business must make between security and profitability. To wit:

Spaltro offers a hypothetical example of a company that relies on legacy systems to store and manage credit card transactions for its customers. The cost to harden the legacy database against a possible intrusion could come to $10 million, he says. The cost to notify customers in case of a breach might be $1 million. With those figures, says Spaltro, "it's a valid business decision to accept the risk" of a security breach. "I will not invest $10 million to avoid a possible $1 million loss," he suggests.

Earlier in the article, Spaltro also said: "We're trying to remain profitable for our shareholders, and we literally could go broke trying to cover for everything. So, you make risk-based decisions: What're the most important things that are absolutely required by law?"

Three things have gone wrong here. First: Spaltro is the company's head of security, not its business manager or CEO. Second: Spaltro is quantifying the business's risk by measuring its level of compliance with Sarbanes-Oxley. He is not measuring it against the kinds of threats Sony might expect to encounter. Third: there were many more threats than he thought, threats that could cost the company far more than he likely imagined. Maybe we *shouldn't* be sending security people to business school!
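The flaw in Spaltro's arithmetic is worth spelling out. The standard way to weigh a security investment is to compare the cost of the fix against the *expected* loss - the probability of a breach times its full impact - and his quoted figures only count the cost of notifying customers. A minimal sketch, using the $10 million and $1 million figures from the interview but entirely hypothetical probabilities and impact numbers, shows how the decision flips once the impact is estimated honestly:

```python
# Sketch of the expected-loss arithmetic behind Spaltro's quoted tradeoff.
# The $10M hardening cost and $1M notification cost come from the 2005
# interview; the breach probabilities and the larger impact figure are
# hypothetical, chosen only to illustrate how the comparison changes.

def expected_loss(probability: float, impact: float) -> float:
    """Expected cost of a breach: chance it happens times what it costs."""
    return probability * impact

hardening_cost = 10_000_000    # quoted cost to harden the legacy database
notification_cost = 1_000_000  # quoted cost to notify customers of a breach

# Spaltro's implicit model: the only loss is notification, so even a
# certain breach (p = 1.0) costs less than the fix.
assert expected_loss(1.0, notification_cost) < hardening_cost

# A fuller model counts stolen intellectual property, leaked employee
# data, lawsuits, and remediation - the kinds of losses the Sony hack
# actually produced. With a hypothetical $500M impact, even a 5% annual
# breach chance makes hardening the cheaper option.
assert expected_loss(0.05, 500_000_000) > hardening_cost
```

The point is not the particular numbers, which are invented, but that leaving most of the impact out of the model makes almost any security spending look irrational.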

Some of his points, such as the one about the counter-productiveness of requiring overly complex passwords, were not necessarily wrong. Again, it comes back to the threat model. If an attacker has no access to your offices, not even by Webcam or keylogger, writing passwords on post-it notes doesn't matter; it's insiders who will see those. The better choice is to avoid both threats by using a password manager.

The Sony hack shows just how many losses can face a single hacked company: it had valuable intellectual property representing hundreds of millions of dollars of investment; confidential employee information; sensitive customer and partner information; and the film industry equivalent of the diplomatic cables published by Wikileaks. Each poses its own threat to the company's future viability. The irony is that the company's own leaked emails provide the best evidence that it was a soft target, taken down by a crude, bug-ridden attack.

The fact that until now companies that have suffered data breaches have recovered without much damage may have bred some complacency. Target, the most notorious consumer data breach to date, is not suffering from a lack of customers; Sony's previous hacks left it standing. The costs of customer data breaches really fall on the financial industries if customers switch back to paying cash. The depth of the Sony hack should show companies that they have much more to lose than they think. As much, in fact, as their security personnel have been telling them all these years. Let's hope they react by improving their security rather than threatening the media for reporting on their breach.



December 13, 2014

Telescope

One of the more intriguing legal cases going on at the moment is Microsoft's effort to keep from handing over the contents of emails stored on its Dublin-based servers. The company has already provided some data relating to the account, but has drawn the line at the emails themselves. The company bases its argument on a parallel: that the US would be outraged if a German court ordered a German bank's US-based branch to hand over documents stored by an American customer in one of its safety deposit boxes. The whole thing may come down to an interpretation of what exactly Congress meant in the Electronic Communications Privacy Act, a debate of necessity left to the experts - lawyers.

The question of jurisdiction is one techies have long liked to play with. In the early 1990s, there was a notion that the Principality of Sealand, which consists of a former World War II defensive platform, might be a suitable data haven. The resulting effort, HavenCo, soon vanished. Despite the subversive geek logic, it always seemed a fragile idea, dependent on a few people (prospectively central points of failure) and too easy to cut off or simply raid.

At the time, the Internet was fairly simple, with typically only two locations involved. The earliest online case of jurisdictional conflict was in 1993, when a postal inspector based in Tennessee dialed into a bulletin board system belonging to California residents Robert and Carleen Thomas and then prosecuted them for obscenity under Tennessee's community standards. They were tried, convicted, and lost on appeal in 1996.

Today's Internet is far more complex. A single Web page may be assembled out of data held on myriad servers in multiple countries, none of them in the same country as either the official service provider or the requesting user. Who gets precedence: the nationality, residency, or location of the user, the location of the service provider's headquarters, or the location of the server on which the data is held? No option is without problems.

Technical people seem unhappy about the idea that the location of the data is the determining factor, largely because companies make these decisions for all sorts of reasons that have nothing to do with the law. When it comes to serving up data quickly and securely, geography and physics are more important factors. Reliability often demands that data be backed up in multiple locations, not necessarily all in one country. Users often don't know where their data is held and, in the interests of ease of use, shouldn't have to - but that ignorance leaves them in a state of legal uncertainty. For most individuals this may not matter; but for some, and for businesses, it surely will.

In that sense - rather than the sense of what might win in court - it is a more usable argument if we say that the reason Microsoft should resist is not so much that the data is located in Dublin but that its Irish users are protected by EU laws regarding privacy and data protection and that US authorities wishing access should therefore get a warrant under existing arrangements such as MLAT. The US's PATRIOT Act, however, would require Microsoft's Irish subsidiary to hand over the data.

I have argued previously that users need better mental models for understanding where their data is and which nations' law enforcement have access rights. For them, determining jurisdiction by the national origin of the service provider is a much simpler option. You don't necessarily know at any single moment where Facebook is storing the data that makes up your profile - it may even be split among servers in multiple countries simultaneously - but you do know that Facebook is a US company. If you read net.wars or follow the work of independent privacy advocates, you know that under the FISA Amendments Act (2008) there is no protection from US government warrants. Separately, the US generally discriminates against foreigners: rights in the US are for citizens.

The US is, of course, not the only country now demanding extraterritorial jurisdiction. This was one of the broadest extensions of surveillance powers in the UK's DRIP Act (2014). Under DRIPA, the UK claims the right to compel disclosure even where no party has a UK connection, even while admitting this clause is likely unenforceable.

Alternatively, awarding jurisdiction to the nation under whose flag the service provider originates places the burden on the company's users and lets the company preempt the policies of democratically elected governments. We know already how this works out: Google, Facebook, and other US-based data-driven companies have argued vehemently against EU data protection law.

Finally, if the governing jurisdiction is the country of origin of the users, then the hosting company must contend with myriad jurisdictions and inconsistent data usage policies, easier for large companies than small ones. Basing jurisdiction on nationality is even worse: does anyone think it should be Microsoft's or Google's business to demand a copy of each new user's passport or birth certificate as part of the signup process?

I'm not sure what common sense would dictate in this situation. Caught between two competing legal systems, Microsoft doesn't have much choice.


December 5, 2014

Collaborators

"How can we get people to care about this?"

The person asking this question was in the audience for a screening of CitizenFour that was held in Birmingham earlier this week (find a UK screening or read filmmaker Laura Poitras's interview with Democracy Now.)

It's a sadly legitimate question. Unlike the US itself, where the president commissioned and published a review, or Germany, which objected strenuously to having its leader surveilled, and the EU in general, which the revelations spurred to take data protection reform more seriously, Britain has done...not much. There's been no Parliamentary debate; only the Guardian has taken up the cause; and outside of civil society groups public protest was minimal.

When you ask people about this, the responses that come up most often are that "they" already know everything about us anyway or a shrugging-shoulders "I have nothing to hide". The former seems more an expression of powerlessness than of cynicism. The latter is absurd. Everyone has something they would rather everybody didn't know, whether it's the details of their finances, their daughter's abortion, or their shady past selling incense and love beads out of a garishly painted school bus back in 1968. Even David Blunkett, the politician who pushed hardest for a national identity card in the early 2000s on the basis that "no one has anything to fear from being correctly identified" (PDF), turned out to have a pretty big secret he didn't like seeing made public. A few years later, there was Blunkett, speaking in favor of privacy in the 2010 film Erasing David.

It seems to me that people care about privacy in specific but inconsistent ways. On social media they post pictures of their kids but not their bank statements. They object strenuously to unwanted ads arriving in their email and SMS inboxes, telesales calls, and what a friend calls "God botherers" ringing their doorbells. Many would still balk at being required to carry identification. Yet somehow, the mass deployment of CCTV cameras throughout the UK has been met with complacency or even enthusiasm, even in schools. Yes, as a result of a range of US policies from fingerprinting foreigners at the border to the PATRIOT Act, some people refuse to travel to the US, but I suspect that for the general public the rising cost of air fares is a bigger issue.

A friend claims that the thing that typically really sparks a public response in Britain is a tragedy involving a teenaged girl. There seems to be some legitimacy to this idea: Milly Dowler certainly focused public attention on the media and phone hacking; suicides by young, female Ask.fm users spurred public concern about cyber bullying. Lacking such a poster child who has been deeply and specifically damaged by NSA/GCHQ surveillance and being oddly unwilling to deliberately create one...then what?

The most obvious deterrent historical example, the Third Reich, has, with time, lost a lot of its power. There are plenty of people in the US and Europe who no longer believe that such horrors are a realistic contemporary likelihood. This was another question raised in Birmingham: what kind of dystopia could we be heading towards? I passed over Orwell in favor of the less well-known This Perfect Day by Rosemary's Baby author Ira Levin. The book's events take place in a permission-based society in which everyone must touch a bracelet to a scanner and wait for it to wink green whenever they want to do anything; I think of it often when I tap my Oyster card. Behind the scenes, an elite group of programmers run everything, including remote islands where malcontents are safely segregated from the compliant masses. The questioner thought more likely something like Dave Eggers' recent novel, The Circle. In that book, the constantly monitored characters learn to see privacy as theft, and a near-lone holdout commits suicide to escape the drones following him. Either way, both books have their elites and therefore resonate with recent revelations that in "God Mode" Uber employees (and NSA operatives) have browsed their organization's databases for, more or less, fun.

Such revelations tend, however, to damage only the services themselves: people cancel their Uber accounts or decide to avoid the US without necessarily taking on board any larger principle about data collection and protesting changing laws and norms. Near the end of CitizenFour, Jacob Appelbaum comments that "What we used to call liberty and freedom we now call privacy...and now people are saying that privacy is dead." Most people do not see that connection. What they see - and one can hardly blame them for this - is the service, discount, or travel destination they want and that is being marketed to them. Giving up data to get those seems a small, painless thing. If "they" know everything about us, it's because we've colluded in making it easy for "them".

My best guess is that the first step is to give people effective alternatives. Only then can we tell if people really do care about privacy.

