" /> net.wars: May 2014 Archives


May 30, 2014

Software is forever

The end of official support for Windows XP has occasioned a lot of unsympathetic comments like: Windows 7 (and 8) has fundamentally better built-in security, you should have switched long ago anyway; they gave you years of notice; sheesh, they supported it for 13 years; nothing lasts forever.

The notable dissenter, whom I encountered at the event launching Trustwave's 2014 report, was Matt Palmer, chair of the Channel Islands Information Security Forum, who argued instead that the industry needs a fundamental rethink: "Very few organizations, small or large, can afford to turn over their software estate on a three-to-five-year basis," he said, going on to ask: "Why are they manufacturing software and only supporting it for a short period?"

In other words, as he put it more succinctly afterwards: we need to stop thinking of software as temporary.

This resonates strongly with anyone who remembers that exactly this short-term attitude - that software was temporary - was the cause of the Y2K problem. For those who came in late or believe that the moon landings were faked: despite much media silliness (I remember being asked if irons might be affected), Y2K was a genuine problem. It affected many types of both visible and invisible software, in some cases trivially, in others seriously. The root cause was that throughout most of the second half of the 20th century coders saved on precious memory by using two-digit fields to indicate the year. Come 2000, such software couldn't distinguish 1935 from 2035: disambiguation required four-digit fields. "Nothing happened" because coder-millennia were spent fixing code. Remediating Y2K cost an estimated $100 billion in the US alone, and all because coders in the 1950s, 1960s, 1970s, 1980s, and even some of the 1990s did not believe their software would still be in use come January 1, 2000. The date of the earliest warning not to think like that? A 1979 paper by Bob Bemer.
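The two-digit trap is easy to reproduce. Here is a minimal sketch of "windowing", one common Y2K remediation; the pivot year of 50 is an illustrative choice, not a standard, and real projects chose windows to suit their own data:

```python
# A two-digit year field loses the century: a stored "35" could mean
# 1935 or 2035. Windowing picks a pivot and expands two-digit years
# relative to it. The pivot of 50 here is an arbitrary example.

def expand_year(yy: int, pivot: int = 50) -> int:
    """Expand a two-digit year to four digits using a fixed window."""
    if not 0 <= yy <= 99:
        raise ValueError("expected a two-digit year")
    return (1900 if yy >= pivot else 2000) + yy

# The ambiguity the fix resolves: both values fit the same old field.
print(expand_year(35))  # 2035 under this window
print(expand_year(62))  # 1962 under this window
```

The catch, of course, is that a windowing fix only defers the problem by a century's worth of window; only widening the field to four digits removes the ambiguity for good.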

Thirteen years ago, when XP was launched, computer hardware was still relatively expensive. It was before cheap netbooks, let alone tablets or smartphones. But unlike other high-cost items at the time, most people knew that computers were advancing very quickly, and that the machine they bought was more likely to become obsolete than wear out. At that time you bought a new computer because you got frustrated when your old machine couldn't keep up with the demands of the new things you wanted to be able to do.

That is not true any more except at the high end. My current desktop machine dates to 2007, although it's had facelifts since then: a couple of bigger hard drives and more memory. The depth of the change in attitudes became apparent in 2012, when a double power outage blatted the graphics card. Even technical people couldn't see any reason to replace anything more than just that graphics card. Ten years ago, everyone would have been telling me to replace the whole machine because it was so outdated. Even now, although I'm sure I'd be impressed by the increased speed of a new one, the old one has no apparent limitations. So when the Windows 8 upgrade dingus says I need a new PC, I beg to differ.

Tablets and smartphones, which are still changing and adding capabilities at a rapid pace, are a different story. For now: the reality is that these segments of the industry, too, will mature and slow down when they reach the point where most people find their existing equipment is adequate for their needs.

You can still argue that XP is dead, get over it. But in the longer term living with old software is where we're going. This is particularly the case in regard to the "Internet of Things" vendors are so eager to build.

A few weeks ago, I was at an entertaining lunchtime demonstration by NCC Group that made this point nicely. The team reprogrammed hotel door locks using an NFC-enabled smartphone, attacked broadband routers and Homeplugs, and turned a smart TV into an audio/video surveillance device.

The group listed four main issues:
- Embedded software designers still assume that only machines will communicate with their devices; they don't plan for malicious humans and therefore tend to think security does not matter.
- Embedded software designers still think "security by obscurity" works.
- Vulnerabilities are likely to persist for many years, since even if firmware updates become available, no one will risk bricking their TV or car by applying them. The Homeplugs, for example, which carry networking around the house via the electrical wiring, all have the same default password, which you can only change via an elaborate procedure that the manufacturer warns could make the network inoperable. What's that line you always hear on TV? Oh, yes: don't try this at home.
- Interoperability always trumps security.

And this: analysts are predicting 20 billion Internet of Things devices by 2020.

People expect to measure the lives of refrigerators, thermostats, cars, or industrial systems in decades, not months or years. Even if you want to say it's unreasonable and stupid that people and companies still have old XP boxes running specialized, irreplaceable applications today, one day soon it's your attitude that will be unreasonable. Software has a much longer lifespan than its coders like to think about, and this will be increasingly true.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 23, 2014


"Your printer is probably full of vulnerabilities," an interviewee said to me cheerfully this week. His company sells network security scanning software he thought I should test on my home network. I wouldn't be at all surprised, although he thought I would be.

This is the current normal: everyone's network is full of vulnerabilities. Even if you patch everything and tie up the network so effectively that none of the computers can communicate with each other, you still have all those human weak links.

A certain amount of the coverage of this week's announcement of the eBay data breach has focused on the length of time it took the company to realize its systems had been breached - they're talking intrusion in February or March, discovery in May. The 2014 Trustwave Global Security Report, also released this week, suggests that eBay's time to discovery was about average (ouch). In the 691 investigations Trustwave conducted in 2013, the median number of days it took companies to detect intrusions was 87. The good news is that the time lag has substantially decreased since 2012. The bad news is that an attacker can bed in very thoroughly and steal massive amounts of information in two months. At least eBay detected the intrusion itself; 71 percent of compromises were discovered by a third party - and in those cases both detection and containment took longer. The 2014 Verizon Data Breach Investigations Report, released a few weeks back, points out a more depressing disparity: the time to intrusion is measured in minutes; the time to detection is measured in days.

That's to go with all the other disparities. Attackers are better funded, are better at sharing information - and only have to find one hole. Experience, as the breach at eBay, one of the first handful of ecommerce sites, shows, doesn't count if you don't keep up. Which eBay clearly hadn't: reports say that customer information such as names, addresses, phone numbers, and dates of birth was all stored unencrypted, and The Register also questions its methods of protecting passwords. The affected 145 million of eBay's 233 million customers now need to change their passwords (or delete their accounts) and wait to see how the rest of their information is misused. The one bit of entertainment really isn't worth the trouble: for a modest 1.45 bitcoin ($770) you can buy a fake copy of the customer database. Somewhere else, doubtless, the real thing is being sold and parceled into other services and profiles in a shadowy imitation of the legal advertising and profiling industry.

The fallout from this breach will be long-running as the stolen information radiates outwards and is matched to databases copied in other breaches and used to craft better and more persuasive scams. It is a massive resource for those who want to perpetrate identity theft, and there is nothing any of eBay's customers could have done to protect themselves: we have no right to audit the company's security arrangements. Our only option would have been to use an old-style accommodation address for all transactions and lie about everything else. The truly outrageous thing is that eBay still has not officially notified its customers.

Target's CEO resigned - but will eBay's? Lawmakers are not helping as much as they should. The state of European data protection reform is still uncertain. Yesterday, the US House of Representatives passed, 303 to 121, a weakened version of the Freedom Act - so weak that its original sponsors were disappointed, and tech-savvy civil society organizations that originally supported it, such as CDT, EFF, and Access Now, all disclaimed it. It now goes to the Senate, where we can only hope it gets fixed. And even if it does, there will be no reprieve for non-Americans.

This is also the current normal: the vast majority of us are being extensively profiled and surveilled by three separate sectors, all extremely well-funded: the commercial advertising and marketing industry; official state-sponsored security agencies; and criminal enterprises. From the last 11 months of revelations from the Snowden documents we know that the first of those - advertising - provides opportunities for the second to exploit. The NSA has been found exploiting advertiser-placed cookies and availing itself of user data collected by companies such as Facebook and Google. Despite the lack - one hopes - of formal agreements to collaborate, these three sectors magnify each other's efforts. Both security services and criminals exploit the vulnerabilities in computer systems; in some cases we know the NSA has acted to create them.

Most of these sectors' surveillance is not active or targeted at us as individuals. Instead, it's what the STRINT workshop in March was trying to fix: passive pervasive monitoring that leaves each of us randomly vulnerable in ways we can't predict. To understand your security risk, first you have to understand the threat model: who are your attackers, and what do they want? George Orwell posited the state as Big Brother. At CFP 2000, Neal Stephenson posited commercial companies as Little Brothers. Here in 2014, the risk we face is all of those - and more.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 16, 2014

Memory hole

Some years ago, a long-time member of the WELL became the CEO of a public company. He promptly, to some amusement among those with functioning memories and archived conference topics, deleted his old postings, particularly the ones that might be embarrassing if unexpectedly exported to a newspaper or the Web.

He could do this for two reasons. The first is that although the WELL's original design did not include the ability to delete posts, its interface was open enough that an early user had written a deletion tool and made it available to the community at large. The second is that the WELL's owners and members respected the system's motto, then and now: "You own your own words".

The ethos of modern data-driven companies is rather different. You do not have - or, until this week, did not have - any ability to control what information about you pops up in search results or in what order.

On Tuesday, the European Court of Justice issued a ruling that will change the way Internet search engines balance the public interest and personal privacy. The case dates to 2011, when Costeja Gonzalez filed a complaint with the Spanish data protection regulator that Google searches on his name raised links to a 1998 newspaper notice of long-resolved debts. Writing for CNN, Paul Bernal outlines the background.

It is a confusing and messy judgment whose full implications will take time to reveal themselves. So far, there's a pronounced tendency for Americans to see it as outrageous censorship and Europeans to cheer the privacy-protecting aspects. The New York Times takes a pretty measured view. Caspar Bowden's Storify has many more links to background and legal precedents.

Some of the points made by the outraged:
- It's a blow to freedom of expression. See for example Jimmy Wales, the founder of Wikipedia, who calls it wide-sweeping Internet censorship.
- This will disproportionately benefit the rich and powerful, who will use it to erase things they do not want reported. The BBC reports early unsavory requests; Google reports a deluge.
- Haven't they ever heard of the Streisand Effect? (They mean the Scientology effect.)
- This will kill the open Internet.
- It's Orwell's memory hole.

In response:
- Google is not the Internet. It is a business, not a public-interest body in need of protection.
- Search engines are not the only way to find information. We should be teaching less lazy alternatives; it is dangerous for society's access to information to be solely mediated by (foreign) businesses.
- Serious researchers do not rely solely on search engines. The historical record is intact; what's being choked off is the interstate highway accessing it. A pause to balance competing values is no bad thing.
- There have always been limits to freedom of expression, even in the US. You can say "Fire" quietly to your neighbor in a crowded theater but you can't falsely shout it. Google makes its money being a megaphone in that scenario.
- EU and US laws on privacy and data protection have been at odds for nearly 20 years, and will diverge further with the EU's data protection reform package. Companies like Google are at the forefront of lobbying against it.
- Google is also rich and powerful.
- As Jonathan Zittrain writes, maybe this is a moment of opportunity: there are alternatives to the binary simplicity of publish/delete. Google once had, he reminds us, a comment feature allowing users to add context, arguably a better solution.
- Post-Laurence Godfrey notice-and-takedown rules have not killed the open Internet, despite encouraging providers to delete first and consider later. What if there is a conflict, and removing information helps one person but harms another? How do we ensure correct identification? In USA Today, EPIC's Marc Rotenberg argues that regulating Google's business practices is not the same as regulating the Internet and praises the court for distinguishing between news organizations and search engines.

This seems largely right. When a news organization is challenged about the truth or invasiveness of a story, its representatives can appear in court and explain the process by which the story was discovered, decided upon, commissioned, researched, edited, and published. On Google's behalf, Eugene Volokh has argued that the company has First Amendment free speech rights in the ordering of search results - as a defense against antitrust accusations of favoring its own services. But its algorithms are trade secrets, and no one outside can audit their decisions - a key issue in how this ruling plays out since there will be no way to audit what information is being exiled or why.

Especially since Big (and Open) Data is about to become the basis for many, many black-box decisions, the conversation about how to enforce accountability on giant businesses whose missions are not the public interest seems an essential one to have, however messy the reason.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 9, 2014

The cost of surveillance

A friend - a cultured friend with a fine understanding of the Internet - commented recently that maybe mass surveillance wasn't such a bad idea if it meant people not getting blown up. It seems worthwhile to document here what I should have said in case I need it again.

Everyone supports the goal that people should not get blown up. But.

For now, we will make some stipulations. Let's ignore that being under constant observation would likely mean greater social conformity; he granted that. In arguing that this might not be a bad thing, he cited the honesty-box experiment conducted in 2006 at the University of Newcastle, where an honor system was in place: you took your coffee and dropped the suggested amount in an honesty box. For ten weeks, psychologists Melissa Bateson, Daniel Nettle, and Gilbert Roberts added a picture over the box that alternated weekly between a flower and a pair of eyes. The result: the eyes picture correlated with 2.76 times the money collected (PDF).

My suspicion is that ten weeks isn't long, and that it's possible that after a longer period of habituation the effect would fade, like clicking "OK". Call it a quibble, and let's say for the purposes of this discussion that we don't care about chilling effects, despite their impact on things like investigative journalism; useful blogs like Groklaw; Internet use, free speech, and religious practice in targeted populations; and public debate.

We didn't discuss the cost of the surveillance infrastructure. Given that any society has limited resources, tradeoffs must be made. The billions spent on spying are billions unavailable for health care, education, or closing the poverty gap that helps create angry, alienated radicals. Granted, costs are dropping on a micro level, but not on a macro level. As John Mueller and Mark G. Stewart point out in a January 2014 paper studying the 215 program (PDF), even if you minimize concerns about privacy and civil liberties, the program that collects and analyzes all that metadata would likely fail a cost-benefit test. Besides the direct cost, which the NSA does not disclose, the huge indirect costs include opportunity costs as the FBI chases down the millions of useless leads churned out of the data. Working from the publicly available information on the 53 terrorist plots that have come to light in the US since 9/11, involving under 100 suspected terrorists, the paper notes, "Overall, where the plots have been disrupted, the task was accomplished by ordinary policing methods. The NSA programs scarcely come up at all." Using generous assumptions, the authors estimate that the program would be cost-effective only if its full cost is less than $33.3 million a year, out of the NSA's annual budget of $10 billion.

But let's say that either we don't care or that it's worth it. Let's also ignore the statistic Bruce Schneier likes to cite about the 500 extra car deaths a year when people avoid flying because of post-9/11 airport security.

And, just for grins, let's assume that the spy agency itself is trustworthy and would never abuse the data in its control.

What, then, is wrong with surveillance?

A platform built for spying can be coopted by others for their own purposes. When you create the mechanisms that could underpin a police state you are creating a system that will be dangerous if one ever arises.

More immediately, having spent today listening to some quite terrifying real scenarios about cyberattacks mounted by criminals and nation-states, I think we must take seriously the danger that such a platform can be penetrated and subverted. The old threat model was blackmail: your secrets could be used against you and anyone you worked for. Today, our vulnerabilities are our children and loved ones, whose details on social networks can be linked, matched, and studied to craft utterly convincing spear-phishing messages that a specialist working for a security company may open - and with it a wormhole into products entering critical infrastructure. A spy platform is a spy platform. If attackers can't break in from the outside, insiders can be bribed, blackmailed, or coerced - or the attackers can grow their own specialists to deploy and serve as moles.

Spy systems tend to expand: function creep, one of those irregular verbs again. I collect data to catch terrorists; you use it to find criminals; he uses it to find litterbugs.

In her 2008 paper, Technological Due Process, Danielle Citron studied what happens when algorithms make decisions about people's lives. One of the underpinnings of democracy - law - loses out because of the difficulty of translating human-written law into binary code (as Ellen Ullman discussed in Close to the Machine).

Finally, the IF in "if it meant people not getting blown up" is significant. Everything I've seen suggests that even if such a system may work someday, it does not work now, as noted above. Millions of us; a tiny number of terrorists who become expert at evading surveillance. Much easier to watch us than catch them.

Now toss back in all the stuff we left out: the cost to democracy, the chilling effect on free association, freedom of expression, even search terms. Still worth it?

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 2, 2014

Rumors at 11

A couple of days ago, The Atlantic pronounced that Twitter is dying. In response, the Washington Post reminded us that it can't be dying because it already died back in 2009. Slate explains that on the contrary, Twitter is about to get a lot bigger.

Journalists! Repeat after me the screenwriter William Goldman's famous line: "Nobody knows anything". Dave Swarbrick survived his obit; why shouldn't Twitter?

The basis of the bang-you're-dead story is, of course, Twitter's most recent quarterly results, which showed a substantial increase in revenues, but a continuing slowdown in the growth of the user base. The shares promptly plunged, because, well, because the price depends on believing that the system will be a giant money-spinner one day, not on anything pesky and old-fashioned like profits. Meanwhile Forbes tells us not to worry about the expiration on Monday of the insiders' share lock-up, which soon-to-be ex-TWOPpers would call hanging a lampshade on it.

I suspect that what The Atlantic's Adrienne LaFrance and Robinson Meyer actually mean is that they don't find Twitter as engaging as they used to. Well, neither do I, but in this case three don't make a trend, and there's a simple reason for that: journalists are not Twitter's target audience. Nor should we be if the company wants to survive and prosper. No one ever made (much) money targeting journalists, who are notoriously reluctant to pay for anything and notoriously likely to skip like butterflies on to the next shiny thing as soon as someone tells them what it is. Sure, in its early years Twitter benefited enormously from the fact that it was practically perfect for journalists, who want quick, short hits of information, curated links to interesting stuff, and a way to push their own work. I spent years begging PR people to send press releases on the backs of postcards and was always laughed at - until along came Twitter, which is exactly that, but the new-technology buzz made it seem to them like a great idea instead of a stupid one.

The unique selling points Twitter started with - mobile integration, real-time public messaging - are no longer unique. So there's no question that if it wants to grow to be a sustainable, mass-market company, Twitter will have to change, just like Facebook, Google, and umpteen others before it. To anyone who's used Twitter for a long time - I registered back in 2008 - it's clear that it's in the process of doing just that. And I suspect the company will be happy to leave its early adopters and even promoters behind if in return it picks up a much larger, truly mass audience. Of course, it may not succeed; the Net is littered with the shriveled husks of formerly vibrant communities whose appeal baffles all but a fraction of their most habituated users.

That said, I'm happy to join the throng and voice my particular frustrations. Twitter has never been a good Web-based experience. The Web version is slow to load, clunky to operate, and completely unsuitable for running multiple accounts. It is particularly poor at handling the volume of information you need to process if you're following more than a handful of people.

I didn't start to enjoy Twitter until I found a desktop client I liked: Tweetdeck, which showed me separate accounts, lists, groups, and hashtag searches in adjoining columns, and allowed me to post the same tweet simultaneously to Twitter, LinkedIn, and Facebook. Then Twitter bought Tweetdeck and began slowly closing out the ecology of third-party desktop clients that fueled its growth in its early years. To take only one example among many, new "token limits" killed off MetroTwit just last month. Meanwhile, Twitter has killed off the desktop (Air), Android, and iPhone versions of TweetDeck. The Web version has some of the features I miss, but it still feels clunky compared to the old desktop client. So I have a weird and rickety setup that involves reading Twitter via a version of TweetDeck so ancient that it can't post or search; post to Twitter via one of two browsers (signed into two different accounts); access it via SMS or Twicca when I'm away from my desk; read only a sliver of what I used to; and enjoy it way less.

Yes: a reasonable person changes alongside the service. An unreasonable person complains that Microsoft is antisocial for not providing security patches for XP and longs for the reading efficiency of Usenet. It's clear that many other people want the features that don't interest me: easier ways to post pictures and video, recommendations for celebrities to follow, and so on. For me, Twitter is stagnant - but that's because *I* am. Changes made to find the mass market are nothing new: they are the same kinds of changes that drove me away from Google in 2010, and would have driven me away from Facebook if I'd ever really embraced it. A *rational* person realizes that they are atypical of the company's desired customers and either decides they'd be better off as a shareholder than a user or pesters the coolest of their friends to find out what's new and fun. Life off-screen, maybe?

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.