" /> net.wars: September 2013 Archives


September 27, 2013


Until a call came from Sky News, I had failed to notice that today is Google's 15th birthday. Sky wanted someone to take a negative view, since they already had a positive one. (Yes, this is how television works...) I see no reason not to oblige.

But first, a bit of history. It is hard to convey to people now what a wonderful discovery Google was in 1998. As a friend said a couple of years ago, "We thought we'd love it forever." The competitors of the day - of which the best was Altavista - had bought into the peculiarly 1997 belief that you had to be a portal. So they were competent search engines but their home pages were filled with clutter - weather reports, news headlines, banner ads - that buzzed in your head and made it hard to think what you wanted to ask them. And then, suddenly there was Google: slick, fast, more accurate, and above all, clean. That white page with nothing but a search box, that loaded in a couple of seconds (we were still on dial-up modems back then, you know, and paying by the minute for phone access). It practically had a halo around its URL.

What no one could tell at that stage and for some years afterwards was how Google might make money. Now we know: it's an ad agency. The more we remember to think of it that way instead of as a search engine, a navigational tool, or an email service, the better. Ad agencies seek to manipulate us into buying things; these days that means piling up and crunching through tons of personal data. They are also, Mad Men notwithstanding, unglamorous.

Getting from the halo around the URL to the taken-for-granted demon of 2013 took many steps. Perhaps the first hint of the data-driven future came in 2001, when Google acquired and compiled a nearly complete Usenet archive. In 2004, when it went public, financial analysts saw it as a risky investment because, they thought, rivals would find it easy to poach Google's customers by providing better search results (ha!). Even then, Google's biggest challenge seemed likely to be retaining trust, but in 2006, when privacy advocates were beginning to attack the company in earnest, their alarm seemed premature - though much less so by 2008, when Chrome was launched. By then Google's growing piles of user data were becoming a much clearer concern.

In retrospect, Google's tenth birthday probably came at the peak of public affection for the company and its services. To mark the occasion the BBC asked this: would Google ever be hated the way Microsoft is? Remember what 2008 was like: the iPhone was a year old, there were no tablets; Facebook had 100 million users; and millions of people cursed Microsoft every time their system crashed. It was the latter that made me think Google was safe from that kind of hatred. Where you only notice Microsoft when something fails, Google grabs your attention by giving you things you want: search results, images, video clips, a successful arrival at journey's end. Microsoft did traditional marketing through TV, PR, and product reviews. Google marketed itself by creating cool, new services that everyone wanted to write about and use. Who cares that it's an ad agency; it's given us Google Earth!

It was 2009 when the really controversial stuff began. Though there were cries of unhappiness over the Usenet archive's transformation of supposedly ephemeral postings into universally accessible personal history, Usenet was a minority interest. The newly arriving services hit the mainstream: Street View (defining question: "Am I in it?") and the copyright case over Google Books. Four years later, that case is still percolating through the courts. (Side note on double standards: Aaron Swartz copies a couple of million articles from JSTOR for no commercial purpose that anyone's aware of and gets threatened with a dozen or two years in jail; Google copies 20 million books and puts them online and gets politely sued. Just saying.)

And so the answer to the BBC's question of 2008, at least in my case, turned out to be: yes, in 2010. Literally, yelling hatred at the computer. I wish I could say it was the privacy and dominance issues. The proximate cause was Google Instant, which, like Altavista in 1997, is like having a very loud, obnoxious drunk in a pub shouting attempts to finish my sentences while I'm trying to explain something urgent and complicated. Everything since has just made the decision to transfer elsewhere seem lucky: streamlined privacy policies, the Federal Trade Commission's $22.5 million fine for overriding the Do Not Track setting in Apple's Safari browser, and this summer's many revelations about the prying habits of the NSA and GCHQ and the consequent risks of giant data stores (pause to praise Google's transparency report).

A quick one-time visit to Google's home page shows a noisy cartoon congratulating itself. That's fine. It can go on without me. I deal with it by talking about it.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Stories about the border wars between cyberspace and real life are posted throughout the week at the net.wars Pinboard - or follow on Twitter.

September 20, 2013

The opposite of zemblanity

A lawyer walks into a bar.

A corporate lawyer looks around for unrecognized liabilities.

A commercial lawyer wonders if the bar's owner wants to sell or expand.

An academic lawyer considers whether the laws that apply to the bar are appropriately framed.

An academic lawyer who goes to gikii starts speculating about the laws that will be needed in ten years' time when the bar is staffed by robots whose embedded scanners collate customers' brain structures, which they then print out on 3D organ printers to implant in hungry zombie kittens.

Like We Robot, gikii is lawyers riffing about the future, mixing law, technology, science fiction, and pop culture. Founded, as Richard Fisher writes in New Scientist, by Lilian Edwards and Andres Guadamuz, gikii is a safe space for speculation that, as Edwards put it earlier this week, would get you giggled at elsewhere.

Science fiction is often rightly talked about as the literature of ideas; what I hadn't realized until first Computers, Freedom, and Privacy and then We Robot and gikii entered my consciousness is that law is where ideas and real life collide first. It doesn't take a lawyer to spot the clash between a user's casual reference to their mobile phone (owned by work, pwned by Apple) as "mine" and the legal reality, but it does take one (Andrew Adams) to trace the parallels with prior concepts of "possession" back to medieval times. Limited rights of ownership apply in all sorts of cases: airspace, land, intellectual property. To an American looking to unlock a phone in order to use it with a different network carrier, the discovery of how limited those rights are is zemblanity.

Zemblanity, introduced by Chris Marsden, co-author of Regulating Code, was new to me. It means the opposite of serendipity. Serendipity is an accidental lucky discovery; it's Charles Schulz unexpectedly finding a warm puppy, or searching the Internet for an old high school friend and finding he lives a few streets away. By contrast, zemblanity is an accidental unlucky discovery: in Marsden's example, the realization that on the Internet you are never alone - ever. Or the rediscovery of how much geography matters: a woman with a spade cuts off Armenia for five hours in 2011; the NSA taps cables; and you could probably still wreck an awful lot of the Internet by simply buying ten backhoes and deploying them strategically.

Among other speculations brought to me by gikii this week:

- However bad Internet security is now, it's nothing to what will soon be coming our way via the "Internet of Things" (Miranda Mowbray). Internal body sensors, water pumps, and wide-open serial port servers are as inviting as a wallet left abandoned on a beach. But advanced persistent threats, despite the attention-grabbing incidents, are too expensive for all but nation-states and unlikely to be turned against individuals (unless you're a known dissident caught in the crosshairs).

- The big block to widespread adoption of open access is not money or even entrenched journal business models but academics' need to have their work published in the right journals to gain citations and promotions.

- The reason both sides claimed they won in the Supreme Court decision in AMP v. Myriad Genetics is that the court ruled both that you can't patent DNA and that you can patent complementary DNA. This messy, muddled, possibly business-driven decision led Ray Corrigan to ask, "How can we begin to inject a modicum of scientific and technical literacy into the courts and legislatures?"

- If robots are evil, rather than kill us they might prefer to over-charge and price-fix us (Salil Mehra).

- Disney princesses want privacy, autonomy, and control over their identity (Paul Bernal). Since their stories are all based on much older folk sources, what this reflects is not an obsession particular to Walt Disney and the corporation he left behind but a preoccupation of the human race dating back to ancient times.

- How do we extend the robot exclusion standard, first to the complex uses of third-party content for which it's already inadequate (Thomas Höppner) and then to the Internet of Things, whose connected objects will need to carry memory and history (Lachlan Urquhart)?

- Given 3D organ printing, how long until someone starts printing out human steak?

And finally:

- In all the discussions about cyberbullying, an item left out, said Andy Phippen, pouring some reality and research on the present government's we-must-protect-the-children filtering demands, is the growing number of teenaged girls who deliberately use the Internet as a form of self-harm and attention-getting. "There are all sorts of issues going on here," he said - many of them far more complex and sad than most of us realize. Such as, for example, the primary school child who, when asked what he'd seen on the Internet that upset him most, responded: "When my Dad told me on Facebook he didn't want to see me any more." As Phippen said, there is no law or technology that will fix that particular social problem.
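One item in the list above deserves a moment's unpacking for non-technical readers: the robot exclusion standard is nothing more than a plain-text robots.txt file that crawlers voluntarily honor, which is part of why it's already inadequate for the uses Höppner described. A minimal sketch using Python's standard-library parser (the file contents and bot name here are invented for illustration):

```python
from urllib.robotparser import RobotFileParser

# A minimal, invented robots.txt: each record names a crawler
# (User-agent) and lists the paths it asks that crawler not to fetch.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Compliance is voluntary: the parser only reports what the site asked for.
print(parser.can_fetch("AnyBot", "https://example.com/private/page"))  # False
print(parser.can_fetch("AnyBot", "https://example.com/public/page"))   # True
```

That is the entire mechanism: a polite request, with no memory, no history, and no enforcement - hence the question of how it could possibly stretch to cover connected objects.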


September 13, 2013

Hillside views

The statue of liberty that presides over the city of Budapest from a hill all to herself is, I learned from my local host, a divisive figure. Put in place after World War II, she faces east, arms outstretched and holding a palm leaf I couldn't see through the haze. She is welcoming the Soviets, who liberated Hungary from the Nazis. For many, including my host, part of whose family is Jewish, she is a reminder of the many relatives the liberation saved from certain death. But: she also represents Soviet rule, and for those who lost their liberty, property, and farms, she is a reminder of a different era of enslavement. After Hungary's second liberation, a number of people wanted her taken down.

My host wants her kept because she is part of history. I liked her initially as a liberating woman; after his explanation I appreciated her added value as a metaphor for mixed blessings. Democracy is a process, not a state; from the frying pan into the fire, to be sure, but the fire is still closer to safe ground.

I've been trying to imagine what a liberating statue to the folks currently plotting technical ways to turn back surveillance would look like if they're successful. It's one of those unfortunate things about the Internet: colonization is easily hidden, and celebration doesn't look like much. In the physical world, you know if you enter a different country, and the statue on the hill can inspire even before you know its background.

My presence in Budapest was as a new advisory board member for Trust in Digital Life, a collaboration of companies and universities to create technologies that enable us to protect ourselves against the many different types of cyberthreats. Two discussions in particular were provocative. The first came from Erik van Zuuren, a Deloitte risk services director, who outlined Belgian efforts to create electronic IDs for government transactions; the second was a discussion of work-in-progress studying ways to improve app privacy.

Like Britain since the Government Digital Service, Belgian electronic ID efforts have focused on practical applications rather than writing plans. The country had services and it had a national ID card that everyone over 12 must carry; so the only issue was how to add electronic authentication to ensure that any user conducting an electronic transaction had a legal mandate to do so. (So much easier, as van Zuuren openly acknowledged, when you don't have to have democratic discussions about these things.)

The system also must recognize the different roles a single individual may have: you may be a person, a parent, and a civil servant, and each role is entitled to access and act upon different information. As such systems expand, the risks mount: at the design stage, the risks of fraud may be very small; but by the time you're using it to maintain birth records and authorize civic projects worth millions of euros, tracking back the source of errors and fraud is essential. There is, therefore, a real need for authoritative sources and carefully managed, highly specific authorizations. But the real headache comes when dealing across national borders: users need to be uniquely authenticated, but some countries do not allow service providers to ask for official identifiers such as social security numbers. In Hungary, a law bars linking together the country's three identifiers - tax, government, and health. How to ensure cross-border interoperability and certainty of claims is a genuine question. The EU is 20 years old now, yet member countries have no common principles in this area.

The other case seems at first more trivial: how do you ensure that you have a real sense of what the spy in your pocket is doing? The information apps currently give you about what they will do is static, coarse-grained, technical, and inappropriately timed. The project accordingly set out to test how people would respond if presented with privacy ratings when considering which apps to download and try.

You can see the need. The research just keeps coming about the sensitivity of information stored on mobile phones. At Carnegie-Mellon, researchers analyzed call and text logs (PDF) and found that just the frequency, length, and timing of calls can classify the phone owner's contacts with 90.5 percent accuracy. Other studies have highlighted the extra privacy risks posed by free apps over their paid fellows: paid apps are less likely to rely on advertising revenue and hence on ad libraries, which are largely invisible back ends to most consumers. Using paid apps, however, requires surrendering one's real-world identity to Google (or Apple), a trade-off that's not easy to quantify.

The research project found that while users will not necessarily choose the app with the best privacy ratings, they will examine the ones with higher privacy ratings first.

TDL's overall goal is to ensure that the various ideas its members consider and propose end up as real-world projects, not just papers. You have to hope they do; they won't be as much to look at as that statue on the hill, but right now we need function first.


September 6, 2013

Snooping as a service

We've known for years that the early 1990s crypto wars were at best only partially won and at worst completely lost. The key point: whether government could continue to control the deployment of strong cryptography, first via the International Traffic in Arms Regulations preventing export, and second through key escrow, ensuring that any deployed cryptographic systems had a government-accessible back door. When, in 1991, Phil Zimmermann wrote PGP and it got uploaded onto the Internet, it was widely believed that both prongs were doomed. Later, Zimmermann commented that the three-letter agencies could have utterly discredited his work by walking up to him on a stage, shaking his hand, and giving him a medal. Instead, the FBI investigated him for an intimidating while.

The initial problem was and remains that using crypto directly requires minute attention to detail on the part of consumers. Accordingly, for it to protect the many varieties of sensitive information we send across the Internet it has to be built into systems where it becomes invisible. Very few consumers would care to manually attach emissions control systems to their cars or motorbikes every time they set out; very few who are not in hostile situations will trouble to download public keys, check who's signed them, and so on. We use crypto if it's hidden inside mobile phones, Web browsers, and VPNs. Just as we don't inspect the quality of the locks on bank vaults or our front doors, we don't inspect the crypto system. Even if we wanted to, very few of us are qualified.
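As a small illustration of how completely the crypto can disappear into the plumbing: in Python, one call gives you browser-grade TLS defaults - certificate chains checked, hostnames verified - without the user ever touching a key. This is a sketch of the standard library's default behavior, not of any particular product mentioned here:

```python
import ssl

# create_default_context() bakes in the choices most users never make
# consciously: require a valid certificate chain and verify the hostname.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: certificates are checked
print(ctx.check_hostname)                    # True: hostnames are verified
```

The point is exactly that nobody inspects this, any more than they inspect the lock on a bank vault; the defaults are the security, for better or worse.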

The later problem was what Adi Shamir - the "S" in RSA - calls his third law: cryptography is typically bypassed, not penetrated. We've had plenty of examples of this, from the DigiNotar incident and the attacks on other certificate authorities to the fact that Tor, intended to provide secure, anonymous browsing, is not really safe if your adversary is large enough.

So, we have to trust the companies who deploy security to make smart choices on our behalf. Many of them can't actually do this: they are no more qualified than we are because they are us. If one of the benefits of the Internet is that it enables anyone to create a business on it, one of the downsides is that the "anyone" may have no clue how to protect its customers. They buy in the security they need from the experts - vendors who eagerly reassure all sides that they've got it all under control. When Mikko Hypponen broke ranks last year to admit that his company, F-Secure, one of the longest-serving anti-virus vendors, had no hope of detecting today's most sophisticated viruses (like Stuxnet), it was a watershed moment. Another came earlier this year, when the attacks on news organizations made plain that against a really determined elite attacker who wants to penetrate your organization in particular, you are basically screwed. What we know now is that we're doubly screwed because the people who are supposed to be protecting us are a fundamental part of the problem. What were all those complaints of going dark about? Camouflage?

As yesterday's revelations make plain, our trust model is entirely broken. You have GCHQ and the NSA funding research to improve cybersecurity; then you have them paying to keep it from getting too good. The former ZDNet UK editor Rupert Goodwins points out that this shouldn't really be news; for decades, software from Crypto A.G. included a back door granting the NSA full access. I understand that if I bore the weight of a nation's security on my shoulders I, too, would think what I did was vitally important. Would I think it was more important than every other national interest and every part of the social compact? I hope not.

Bruce Schneier recommends appointing a special prosecutor with no ties and up-ending the secrecy in which the NSA operates. The latter is similar to comments made by the investigative journalist Duncan Campbell last July: "The wraps should come off. We can have more trust if we get these programs out in the open," he said. "The walls of secrecy have to come down. We are an adult society. We have learned that terrorists are among us." In other words: we can be grown-up about joining in the discussion about what kinds of surveillance are needed; if you trust us we can trust you. Instead, we're inside Philip K. Dick's 1977 novel, A Scanner Darkly, in which the narcs pay the protagonist to spy on himself.

Schneier's solution is to call for engineers to reclaim the Internet by redesigning it. In my daily inbox, I see signs that people are already at their drawing boards. Meantime, since we're already paying for all this surveillance, we might get some use out of it. Why shouldn't the NSA run a helpful answering service? Things like, "Could you give me the PIN for my bank card?" And, "When's my mother-in-law's birthday again?" And, "What did I say in the email I sent on July 12, 1992?" Goodwins suggested calling it "Snooping as a service". Snaas. Pronounced Snazz. It could be a winner.
