" /> net.wars: January 2018 Archives


January 26, 2018

Bodies in the clouds

This year's Computers, Privacy, and Data Protection conference had the theme "The Internet of Bodies". I chaired the "Bodies in the Clouds" panel, which was convened by Lucie Krahulcova of Access Now, and this is something like what I may have said to introduce it.

The notion of "cyberspace" as a separate space derives from the early days of the internet, when most people outside of universities or large science research departments had to dial up and wait while modems mated to get there. Even those who had those permanent connections were often offline in other parts of their lives. Crucially, the people you met in that virtual land were strangers, and it was easy to think there were no consequences in real life.

In 2013, New America Foundation co-founder Michael Lind called cyberspace an idea that makes you dumber the moment you learn of it and begged us to stop believing the internet is a mythical place that governments and corporations are wrongfully invading. While I disagreed, I can see that those with no memory of those early days might see it that way. Today's 30-year-olds were 19 when the iPhone arrived, 18 when Facebook became a thing, 16 when Google went public, and eight when Netscape IPO'd. They have grown up alongside iTunes, digital maps, and GPS, surrounded online by everyone they know. "Cyberspace" isn't somewhere they go; online is just an extension of their phones or laptops.

And yet, many of the laws that now govern the internet were devised with the separate space idea in mind. "Cyberspace", unsurprisingly, turned out not to be exempt from the laws governing consumer fraud, copyright, defamation, libel, drug trafficking, or finance. Many new laws passed in this period are intended to contain what appeared to legislators with little online experience to be a dangerous new threat. These laws are about to come back to bite us.

At the moment there is still *some* boundary: we are aware that map lookups, video sites, and even Siri requests require online access to answer, just as we know when we buy a device like a "smart coffee maker" or a scale that tweets our weight that it's externally connected, even if we don't fully understand the consequences. We are not puzzled by the absence of online connections as we would be if the sun disappeared and we didn't know what an eclipse was.

Security experts had long warned that traditional manufacturers were not grasping the dangers of adding wireless internet connections to their products, and in 2016 they were proved right, when the Mirai botnet harnessed video recorders, routers, baby monitors, and CCTV cameras to deliver monster attacks on internet sites and service providers.

For the last few years, I've called this the invasion of the physical world by cyberspace. The cyber-physical construct of the Internet of Things will pose many more challenges to security, privacy, and data protection law. The systems we are beginning to build will be vastly more complex than the systems of the past, involving many more devices, many more types of devices, and many more service providers. An automated city parking system might have meters, license plate readers, a payment system, middleware gateways to link all these, and a wireless ISP. Understanding who's responsible when such systems go wrong or how to exercise our privacy rights will be difficult. The boundary we can still see is vanishing, as is our control over it.

For example, how do we opt out of physical tracking when there are sensors everywhere? It's clear that the Cookie Directive approach to consent won't work in the physical world (though it would give a new meaning to "no-go areas").

Today's devices are already creating new opportunities to probe previously inaccessible parts of our lives. Police have asked for data from Amazon Echos in an Arkansas murder case. In Germany, investigators used the suspect's Apple Health app while re-enacting the steps they believed he took and compared the results to the data the app collected at the time of the crime to prove his guilt.

A friend who buys and turns on an Amazon Echo is deemed to have accepted its privacy policy. Does visiting their home mean I've accepted it too? What happens to data about me that the Echo has collected if I am not a suspect? And if it controls their whole house, how do I get it to work after they've gone to bed?

At Privacy Law Scholars in 2016, Andrea Matwyshyn introduced a new idea: the Internet of Bodies, the theme of this year's CPDP. As she spotted then, the Internet of Bodies makes us dependent for our bodily integrity and ability to function on this hybrid ecosystem. At that first discussion of what I'm sure will be an important topic for many years to come, someone commented, "A pancreas has never reported to the cloud before."

A few weeks ago, a small American ISP sent a letter to warn a copyright-infringing subscriber that continuing to attract complaints would cause the ISP to throttle their bandwidth, potentially interfering with devices requiring continuous connections, such as CCTV monitoring and thermostats. The kind of conflict this suggests - copyright laws designed for "cyberspace" touching our physical ability to stay warm and alive in a cold snap - is what awaits us now.

Illustrations: Andrea Matwyshyn.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 19, 2018


"Regulatory oversight is going to be inevitable," Adam Kinsley, Sky's director of policy, predicted on Tuesday. He was not alone in saying this is the internet's direction of travel, and we shouldn't feel too bad about it. "Regulation is not inherently bad," suggested Facebook's UK public policy manager, Karim Palant.

The occasion was the Westminster eForum's seminar on internet regulation (PDF). The discussion focused on the key question, posed at the outset by digital policy consultant Julian Coles: who is responsible, and for what? Free speech fundamentalists find it easy to condemn anything smacking of censorship. Yet even some of them are demanding proactive removal of some types of content.

Two government initiatives sparked this discussion. The first is the UK's Internet Safety Strategy green paper, published last October. Two aspects grabbed initial attention: a levy on social media companies and age verification for pornography sites, now assigned to the British Board of Film Classification to oversee. But there was always more to pick at, as Evelyn Douek helpfully summarized at Lawfare. Coles' question is fundamental, and 2018 may be its defining moment.

The second, noted by Graham Smith, was raised by the European Commission at the December 2017 Global Internet Forum, and aims to force technology companies to take down extremist content within one to two hours of posting. Smith's description: "...act as detective, informant, arresting officer, prosecutor, defense, judge, jury, and prison warder all at once." Open Rights Group executive director Jim Killock added later that it's unreasonable to expect technology companies to do the right thing perfectly within a set period at scale, making no mistakes.

As Coles said - and as Old Net Curmudgeons remember - the present state of the law was largely set in the mid-to-late 1990s, when the goal of fostering innovation led both the US Congress (via Section 230 of the Communications Decency Act, 1996) and the EU (via the Electronic Commerce Directive, 2000) to hold that ISPs are not liable for the content they carry.

However, those decisions also had precedents of their own. The 1991 US case Cubby v. CompuServe ended in CompuServe's favor, holding it not liable for defamatory content posted to one of its online forums. In 2000, the UK's Godfrey v. Demon Internet successfully applied libel law to Usenet postings, ultimately creating the notice and takedown rules we still live by today. Also crucial in shaping those rules was Scientology's actions in 1994-1995 to remove its top-level secret documents from the internet.

In the simpler landscape when these laws were drafted, the distinction between access providers and content providers was cleaner. Before then, the early online services - CompuServe, AOL, and smaller efforts such as the WELL and CIX - were hybrids, social media platforms by a different name: they provided both access and a platform for content providers, who curated user postings and chat.

Eventually, when social media were "invented" (Coles's term; more correctly, when everything migrated to the web), today's GAFA (or, in the US, FAANG) inherited that freedom from liability. GAFA/FAANG straddle that briefly sharp boundary between pipes and content like the dead body on the Quebec-Ontario boundary sign in the Canadian film Bon Cop, Bad Cop. The vertical integration that is proceeding apace - Verizon buying AOL and Yahoo!; Comcast buying NBC Universal; BT buying TV sports rights - is setting up the antitrust cases of 2030 and ensuring that the biggest companies - especially Amazon - play many roles in the internet ecosystem. They might be too big for governments to regulate on their own (see also: paying taxes), but public and advertisers' opinions are joining in.

All of this history has shaped the status quo that Kinsley seems to perceive as somewhat unfair when he noted that the same video that is regulated for TV broadcast is not for Facebook streaming. Palant noted that Facebook isn't exactly regulation-free. Contrary to popular belief, he said, many aspects of the industry, such as data and advertising, are already "heavily regulated". The present focus, however, is content, a different matter. It was Smith who explained why change is not simple: "No one is saying the internet is not subject to general law. But if [Kinsley] is suggesting TV-like regulation...where it will end up is applying to newspapers online." The Authority for Television on Demand, active from 2010 to 2015, already tested this, he said, and the Sun newspaper got it struck down. TV broadcasting's regulatory regime was the exception, Smith argued, driven by spectrum scarcity and licensing, neither of which applies to the internet.

New independent Internet Watch Foundation chair Andrew Puddephatt listed five key lessons from the IWF's accumulated 21 years of experience: removing content requires clear legal definitions; independence is essential; human analysts should review takedowns, which have to be automated for reasons of scale; outside independent audits are also necessary; companies should be transparent about their content removal processes.

If there is going to be a regulatory system, this list is a good place to start. So far, it's far from the UK's present system. As Killock explained, PIPCU, CTRIU, and Nominet all make censorship decisions - but transparency, accountability, oversight, and the ability to appeal are lacking.

Illustrations: "Escher background" (from Discarding Images, Boccaccio, "Des cleres et nobles femmes" (French version of "De mulieribus claris"), France ca. 1488-1496, BnF, Français 599, fol. 89v).


January 12, 2018

Local heroes

leonia-nj.pngClashes between local jurisdictions and the internet are not new. Currently, the biggest and furthest-reaching such case is the Microsoft Ireland case, more properly United States v. Microsoft Corp. That case, having moved through the lower courts, is due to be argued in the Supreme Court on February 27. The case tests the question of who has jurisdiction over data stored by a company from one nation on servers situated in a different country, and represents both a who's-in-charge dispute and a clash of competing values for privacy, particularly given the US's attitude toward non-citizens.

Such disputes are happening at all scales and in new and interesting ways. This week, Leonia, New Jersey, population under 10,000 and situated an interstate's breadth from the George Washington Bridge into Manhattan, made itself semi-famous by declaring 60 of its side streets off-limits to people using them as a GPS-recommended cut-through. The applicable $200 fine appears less intended to punish drivers than to push GPS app developers to remove the streets from their recommendations and think more closely about how much traffic they're sending down streets that weren't built for the load. Residents will have identifying yellow hang tags.

The comments under CBS's story about this plan are typically polarized. About two-thirds seem to think this is unconstitutional ("the 'everything I don't like clause', I think?" a lawyer friend quips) or a Democratic plot to bar people they don't like, along with a bunch of braggadocio about their own reaction if stopped (as if). The rest are generally more sympathetic but advocate instead putting up signs that say "No Through Traffic", installing speed bumps, and/or lowering the posted speed limit to 25 or even 20 miles per hour.

Hiding among them is the occasional poster with actual local knowledge who has seen the congestion for themselves. OpenStreetMap's view of the area (above) shows the problem exactly: there's a straight line through Leonia while the interstates make a loop around the town to approach the bridge. So the trip through Leonia is geographically shorter - and in rush hour probably feels quicker, even if the reality is only a couple of hundred seconds. Most commuters would rather feel they were moving at will instead of trapped at others' mercy. Leonia, which Wikipedia says was formed in 1894, probably negotiated intensely for that loop when the interstates were being built, to ensure that the vast majority of all that traffic would bypass their town. Before apps, that would have been true; only inveterate map examiners or people who lived, or had lived, locally would have found the shortcut. Now that GPS has eliminated the need to learn geography, hordes can be directed to it.

The results are not so different from 2016's skirmishes with Pokémon Go: apps centralize geography that we, the people, experience locally. Neither GPS vendors nor the Pokémon people send out mass consultations beforehand. Abrupt changes in the amount of traffic careening through your neighborhood are direct consequences of the "Ask forgiveness, not permission" style of doing business. As software's effects increasingly become physical, there will be more and more of these conflicts, and they're not easily solved because probably all of us would like to have the option of taking the cut-through for ourselves, even if we righteously think that everyone else should stay on the interstate and spare Leonia's roads the wear and tear and its residents the annoyance.

I'm reliably advised that if there were a relevant law to bar Leonia from making this rule it would probably be the Dormant Commerce Clause, which (says Wikipedia; I am not a lawyer) leaves states free to pass legislation pertaining to federal commerce laws on any point on which these are silent as long as the state law does not discriminate against or impose a burden on interstate commerce. Case law from the trucking industry balances safety against the burden on trucking companies. The Leonia law applies to all non-residents; in fact, the most likely people to be affected are commuters living in New Jersey. The town's mayor cited, without naming it, a US Supreme Court case that upheld a town's right to control access to roads as long as residents and emergency vehicles are not denied access. As a best guess, he means 1981's Memphis v. Greene, which upheld Memphis, Tennessee's right to close a portion of a street to control traffic and promote children's safety, which had been challenged on grounds of racial discrimination.

Does every local area have to correct every app and service, or do startups have to keep abreast of the desires of millions of local areas? Whichever way that goes, how can it be enforced?

Figure it takes 20 years for an issue like this to arrive at top-level courts (the idea that data in a remote foreign location should be beyond the reach of national law dates to the early 1990s). On that basis, we'll be watching clashes between app developers sending people careening around the landscape and local residents develop until about 2025. San Francisco is already beginning to restrict delivery robots and require them to have permits and human chaperones. Autonomous vehicles with identical mapping systems should nicely explode the problem, although we scarcely need them when Uber all on its own has done so much to rile regulators worldwide.

Illustrations: The geography of Leonia, NJ (OpenStreetMap).


January 5, 2018

Dangerous corner

It's a sign of the times that the most immediately useful explanation of the Intel CPU security flaws announced this week is probably the one at Lawfare. There, Nicholas Weaver explains the problems that Meltdown and Spectre create, and gives a simple, practical immediate workaround for unnerved users: install an ad blocker to protect yourself from JavaScript exploits embedded in the vulnerable advertising supply chain.

There is, of course, plenty of other coverage. On Twitter, New York Times cybersecurity reporter Nicole Perlroth has an explanatory stream. The Guardian says it's the worst CPU bug in history, Ars Technica has a reasonably understandable technical explanation covering the basics, and Bruce Schneier is collecting further links. At his blog, Steve Bellovin reminds us that security is a *systems* property - that is, that a system is not secure unless every one of its components is, even the ones we've never heard of.

The gist: the Meltdown flaw affects almost all Intel processors back to 1995. The Spectre bug affects billions of processors: all CPUs from Intel, AMD, and ARM. The workaround - not really a solution - is operating system patches that will replace the compromised hardware functions and therefore slow performance somewhat. The longer-term solution to the latter is to redesign processors, though since Perlroth's posting CERT appears to have changed its recommended solution from replacing the processors to patching the operating system. Because the problem is in the underlying hardware, no operating system escapes the consequences.

More entertaining is Thomas Claburn's SorryWatch-style analysis of Intel's press release. If you have a few minutes, you may like to use Claburn's translation to play Matt Blaze's Security Problem Excuse Bingo. It's also worth citing Blaze's comment on Meltdown and Spectre: "Meltdown and Spectre are serious problems. I look forward to seeing the innovative ways in which their impact will be both wildly exaggerated and foolishly dismissed over the coming weeks."

Doubtless there's a lot more to learn about the flaws and their consequences. Desktop operating systems and iPhones/iPads will clearly have fixes. What's less clear is what will happen with operating systems with less active update teams, such as older versions of Android, which are rarely, if ever, updated. Other older software matters, too: as we saw last year with WannaCry, there are a noticeable number of people still running Windows XP. For some of those, upgrading isn't really possible because the software that runs on those machines is itself irreplaceable. Those machines should not be connected to the internet, but as we wrote in 2014 when Microsoft discontinued all support for XP, software is forever. We must assume that there are systems lurking in all sorts of places that will never be updated, though they may migrate if and when their underlying hardware eventually fails. How long does it take to replace 20 years of processors?

The upshot is that we must take the view that we should have taken all along: nothing is completely secure. The Purdue professor Gene Spafford summed this up years ago: "The only truly secure system is one that is powered off, cast in a block of concrete, and sealed in a lead-lined room with armed guards - and even then I have my doubts."

Since a computer secured in line with Spafford's specifications is unusable, clearly we must make compromises.

One of the problems that WannaCry made clear is the terrible advice that emerges in the wake of a breach. Everyone's first go-to piece of advice is usually to change your password. But this would have made no difference in the WannaCry case, where the problem was aging systems exploited by targeted malware. After the system's been replaced or remediated (Microsoft did release, exceptionally, a security patch for XP on that occasion), *then*, sure, change your password. It would make no difference in this case, either, since to date in-the-wild exploitation of these bugs is not known, and Spectre in particular requires a high level of expertise and resources to exploit.

The most effective course would be to replace our processors. This is obviously not so easy, since at least one of the flaws affects all Intel processors back to 1995 (so...Windows 95, anyone?) and many ARM, AMD, and even some Qualcomm processors as well. The problem appears to have been, as Thomas Claburn explains in an update, that processor designers made incorrect assumptions about the safety of the "speculative execution" aspect of the design. They thought it was adequately fenced off; Google's researchers beg to differ. Significantly, they appear not to have revisited their original decision since 1995, despite the clearly escalating threats and attacks since then.

The result of the whole exercise is to expose two truths that have been the subject of much denial. The first is to provide yet more evidence that security is a market failure. Everything down to the basics needs to be reviewed and rethought for an era in which we must design everything on the assumption that an adversary is waiting to attack it. The second is that we, the users, need to be careful about the assumptions we make about the systems we use. The most dangerous situation is to believe you are secure, because it's then that you unknowingly take the biggest risks.

Illustrations: Satan inside.
