" /> net.wars: December 2016 Archives


December 30, 2016

Leap second

fire.jpg"Fuck you, 2016," many people wrote on Twitter a couple of days ago in response to the news that Carrie Fisher had died. "Worst year ever," wrote veteran TV writer Ken Levine, in a fine display of chronocentricity. Worse than 1919? 1348? 72,000 B.C.?

That said, it has been a dramatic watershed year in many obvious ways. To focus on the net.warsish ones...

This was the year when the long-predicted threat that billions of connected devices would gang up on our global internet became real. Brian Krebs' summary captures the security disasters of the year nicely: billions of accounts hacked, and the Mirai botnet, which conscripted not only avoidable purchases, like "smart" thermostats and children's toys, but unavoidable bits of infrastructure, like network routers, into weapons aimed at the rest of the internet.

It's also - somewhat related - the year in which it became plain that Silicon Valley's ask-for-forgiveness-not-permission culture is physically dangerous. In the past, we've seen that habit of mind displayed rather frequently: Google, when it scanned millions of books without asking publishers or authors first, and when sending out cars to create Street View. Facebook, when its plan to merge its WhatsApp subsidiary's user data with its own set off a skirmish with the EU. It's bad enough when the consequences of this mindset involve a bait-and-switch approach to people's privacy, but much worse when they're applied to catching cartoon monsters in real places, or to two-ton chunks of metal tearing through a landscape filled with pedestrians and cyclists. Uber, however, insists its self-driving cars need no permits because they come equipped with humans ready to take over the wheel. Eventually, Uber took its cars away to more-welcoming (and emptier) Arizona.

Readers of the most interesting book I read in 2016, Version Control, might note the prescience, although its author, Dexter Palmer, presumed that the parties behaving badly would likely be individuals rather than large, money-losing companies whose owners think defying regulators is cool. The Imperial College professor Chris Hankin has made the point that in the cyber-physical systems developing now, security and safety are merging, with consequences for both sides. This is therefore the turning point after which it is impossible to allow technology companies to continue living in their product liability vacuum.

This is not the first year in which one nation tried to interfere with another's elections, but it's certainly the most cyberspatial.

GCHQ got the surveillance powers it and Theresa May wanted, as did the FBI. Assisted by the Chaos Communication Congress, travel data privacy expert Edward Hasbrouck demonstrated the fraud risks associated with Passenger Name Record data.

The US is starting to ask arriving foreign travelers for their social media pages. Yes, it's voluntary (for now), but anyone arriving in a foreign country, particularly one as fussy about foreigners as the US, assumes that everything on the form is legally required, and calling attention to yourself by being a refusenik is behavior reserved for the privileged. My personal, uninformed guess is that the responses will be used in a pilot program to prove the potential. The idea is wrong-headed in many ways, not least because it chills the right of association embedded in the First Amendment. But soon every country will be doing this, and the only people who will escape are the multi-citizenship "elites" everyone is supposed to despise now.

The first sign of the potential for in-home spying arrived in the form of law enforcement's desire for data collected by the voice-driven Amazon Echo home automation gadget. Like the social media request above, this is the beginning of a predictable but new privacy battle. No one buys an Echo so that the metadata it collects showing when they're home, or the commands they send to Amazon's server, can become part of their permanent data shadow, any more than they curated their Facebook page with an eye to its inspection by immigration. There are two collisions brewing here: first, the obvious privacy issue; second, the business challenge if people balk at buying devices that can be coopted against them. In both this and the social media/immigration case, the risk for the future is that we may lose the right to be unnetworked (which, along with "moral crumple zones", is one of this year's better concepts). It's entirely believable that governments of the kind we see today in the UK and US will extend "Nothing to hide, nothing to fear" to "What are they hiding if they don't have a robot butler?"

Beyond that: ad fraud really took off; Russia was caught - and has now admitted - systematically doping hundreds of athletes; in retaliation for that exposure, the Russian hacking team Fancy Bear published the details of Western athletes' Therapeutic Use Exemption forms; Uber (again) spied on its customers; the UK still thinks it might like to withdraw from the European Convention on Human Rights; ICANN became independent; the FBI and Apple fought over encryption...

And still, after all that, come Saturday night you're going to have to take an extra breath before you're done with 2016. Fifteen years of discussions on, and our clocks still just about chime with astrophysics. Happy leap second, everyone.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


December 23, 2016

Christmas comes but once a year

Previously on surveillance versus the Court of Justice of the European Union...

...in a case brought by Digital Rights Ireland, the CJEU ruled that the EU's Data Retention Directive was invalid, thereby voiding the supporting national laws across 27 member countries...

...some of which reacted by dropping their data retention programs...but...

...the UK waited three months and then enacted the Data Retention and Investigatory Powers Act in a crazed seven-day squish before dispersing for summer vacation...after which...

...the MPs Tom Watson (Labour, West Bromwich East) and David Davis (Conservative, Haltemprice and Howden) filed a legal challenge with the Court of Appeal, which in turn referred it to the CJEU...

...and while that was pending, the UK voted to leave the EU and David Davis was appointed the minister in charge of leaving, and therefore had to withdraw from the suit...

...on December 8 the Investigatory Powers Act became law...

...and on December 31 DRIPA will automatically be repealed.

Two weeks after the Investigatory Powers Act received Royal Assent, the CJEU has ruled in the Watson case, which was backed by the Open Rights Group, Privacy International, Liberty, and the Law Society. The court struck down the UK's approach again. ORG has a helpful summary of the key points of the ruling, but if you've been following all this you already know the gist. Blanket retention of mass communications data is disproportionate; surveillance should be targeted at the subjects of investigations; individuals should be told after their surveillance ends; independent authorization is essential; access should be allowed only for purposes of investigating serious crime. The court also noted that data retention impinges upon freedom of expression and that retained data must remain within the EU. Chris Pounder explains that the "serious crime" bit is because "national security" is not within the CJEU's competence, but the security agencies do (legally) assist law enforcement. Pounder suggests that if a similar case found its way to the European Court of Human Rights, that court would follow CJEU's analysis.

An optimist looks at this ruling and thinks, Surely the UK must be getting it by now.

A pessimist wonders if the present government figures it can just run out the "Brexit" [ugh] clock. Let's say everyone gears up in January to file a fresh legal challenge to the IPA. Based on our admittedly small sample of rulings, it appears that it takes a case two years to get from filing to CJEU ruling, about the length of time allotted between triggering Article 50 and completed EU exit.

The realist...doesn't know. David Anderson, the Independent Reviewer of Terrorism Legislation, sees the judgement as one of a series of "marked and consistent differences of opinion between the European Courts and the British judges", calling both sides of that difference "equally legitimate". Anderson does, however, say that exiting the EU wouldn't resolve the problem posed by the General Data Protection Regulation, which passed this spring and will come into force in 2018, definitely before Britain's two Article 50 years are up. Britain won't be able to ignore the new law, which is stringent about banning data-sharing with third-party countries lacking comparable protection, unless it wants to ensure British businesses can't trade with the EU. Pounder takes a slightly different view: he thinks the Investigatory Powers Act can be made to conform, given the right textual changes to codes of practice and the Judicial Commissioner procedures. However, he, too, points out the problem posed by GDPR: with the UK outside the EU, GCHQ can't monitor French communications in bulk for the French intelligence services or share bulk data on Europeans with US authorities (his examples).

Like so many other things in contemporary public life, the two sides of this issue seem increasingly polarized. Anderson notes that all three reviews of the draft Investigatory Powers bill - the Intelligence and Security Committee of Parliament, the Joint Committee, and Anderson's own - agreed that bulk data retention was proportionate. The Home Office has said it will continue making those arguments to the UK's Court of Appeal (the one that referred the case to the CJEU for clarification in the first place). Neither the European Courts nor civil society will accept this claim.

In an otherwise obnoxious New Yorker piece arguing that Edward Snowden doesn't belong in a class with Pentagon Papers whistleblower Daniel Ellsberg (despite Ellsberg's own frequently stated opinion that he does), Malcolm Gladwell unearths two illuminating quotes from Ellsberg's 2002 autobiography. In them, Ellsberg talks about the wealth of information that becomes available to those entering government once they've obtained the necessary security clearance. For about two weeks, Ellsberg wrote, you will feel foolish about everything you said and thought before you had this access; after that, "it will have become very hard for you to learn from anybody who doesn't have these clearances".

To me, this explains a lot, particularly why these back-and-forth law-and-practice versus constitution-and-principles disputes will never stop. Still: it's nice to end 2016 on a good note.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


December 16, 2016

Facts are scarred

The late Simon Hoggart once wrote that the reason to take the trouble to debunk apparently harmless paranormal beliefs was this: they were "background noise, interfering with the truth". There were, at the time, many people who said that *of course* they did not take seriously the astrology column they read every morning. It was "just for fun".

And that was probably true, or mostly true. I do think, in that humorless Skeptic way, that these things can establish an outpost of uncertainty in your brain. But so many trends and habits have led to the current bust-up over fake news that it's hard to pick just one. The most wide-ranging discussion of this is over at science fiction writer Charlie Stross's blog.

Stross's main argument concerns Twitter and Facebook: what succeeds on platforms that have nothing to sell but eyeballs to advertisers is emotional engagement. Shock, fear, anger, horror, excitement: these sell, and they have nothing to do with reason or facts. Stross argues that these factors were less significant in traditional media because there was a limited supply of advertising space and higher barriers to entry. I'm not so sure; it has often seemed to me that the behavior on social media is just the democratization of tactics pioneered by Rupert Murdoch and the Daily Mail.

The fact that a small group of teens in a Macedonian town can make money pushing out stories they know are fake that may influence how millions of Americans think as they go to the polls on Election Day...that's new.

That those teens and other unscrupulous people are funded by large American businesses via advertising networks...that's also new.

The spotlight has unearthed some truly interesting things. In the Observer, Carole Cadwalladr used Google's autocomplete search suggestions to uncover what she describes as a vast, three-dimensional factless parallel universe that is gradually colonising the web. In a follow-up, she says Google refused to discuss matters but has quietly made some adjustments. Gizmodo reported that Google's top hit in response to the question "Did the Holocaust really happen" is a link to the white supremacist neo-Nazi site Stormfront; Wikipedia's explanation of Holocaust denialism is hit number four. Google told Gizmodo that while it's "saddened" that hate groups still exist, it won't remove the link.

It's always a mistake to attribute a large phenomenon to a single cause. There are many motives for creating individual fake news stories: money, like the Macedonian teens; interference with the US election, as US intelligence agencies say was intended; hijinks; political activism. It is exactly the same pattern we've seen with computer hacking: there are tools to commit news hacking at all levels, from script kiddies to state-sponsored, high-level experts, and motives to match.

The strategy behind the political aspect of this was clearly outlined in the 2011 documentary Astroturf Wars. In touring American Tea Party country, Australian filmmaker Taki Oldham found right-wing experts teaching conference attendees how to game online ratings and reputation systems to ensure that the material they liked rose to the top and material they didn't (such as any documentary made by Michael Moore) sank into invisibility. People didn't even have to read or view the material, the trainer said, showing them how to use keywords and other metadata to "give our ideals a fighting chance".

Now, five years later, everyone is sorry. Or at least, sorry enough to be hatching various schemes: flagging, labelling, rating, fact-checking, and so on. Eventually these first steps will probably be gamed, too, and we'll have to rethink, but at least there are first steps.

What really needs to change, however, is the thinking of the people who own and deploy these systems. As cyberspace continues to bleed into the physical world, thinking through consequences before building and deployment becomes increasingly important, and it's something Silicon Valley in particular is notorious for avoiding. "Do first, ask permission later," they say. So this week Uber sent some unlicensed self-driving cars onto the streets of San Francisco. Accidents ensued. Uber bizarrely says the presence of humans in the cars means the company doesn't need permits, and attributes the accidents to "driver error". Well, was the car self-driving or not? Was this a real launch or a marketing stunt gone wrong? Does the company have no lawyers who worry about liability?

You may love Uber or avoid them based on stories like that one or this week's other news, in which a former forensic investigator for the company accused its staff of spying on customers. Either way, the we-are-above-the-law attitude is clear, and Uber is only the latest example.

Fake news is distributed by computers working exactly the way they are supposed to when large automated systems replace a previously labor-intensive business. The problem is not that today's ad agencies are robots that don't care about us. It's not the robots who don't care whether a story might persuade someone that Hillary Clinton is doing bad things via a pizzeria, and that they should therefore load up the assault rifle and go "self-investigate". The problem is that these systems are humans all the way down.

The problem is the humans who programmed the robots. First, we need *them* to care.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


December 9, 2016

Retro

This week, on one of the mailing lists I read, someone asked for recommendations for organizations to which they could make charitable donations as part of their holiday deliberations. The criteria: organizations involved in protecting online rights that can make progress on solving problems such as security, privacy, cyberbullying, fake news, and hate crimes.

The problem isn't finding organizations interested in these issues. There are plenty. To name a few... In the US: American Civil Liberties Union; Electronic Frontier Foundation; Electronic Privacy Information Center. In the UK: Open Rights Group; Privacy International; Liberty; Index on Censorship. The problem is the *progress* bit. This stuff is really, really hard, and the last month in particular has been dispiriting.

There is the passage of the UK's Investigatory Powers Act and the FBI's new hack-any-computer powers (for values of "computer" that include cars, Barbie dolls, and streetlights, one presumes). And the Digital Economy bill is whizzing along. These bills have stuff in them that the above-named folks have been fighting for 25 years; in fact, parts of the Investigatory Powers Act were the stuff of the very first net.wars column, back in November 2001. Progress in this context means continuing to find the will to keep fighting the same battles against new waves of opponents.

A day later, I read at Light Blue Touchpaper that under cover of Brexit distraction - so useful for so many things! - the UK government is overriding people's decision to opt out of sharing their health data whenever the Department of Health rules that the data has been anonymized. This means re-fighting a large part of the care.data fiasco, as medConfidential writes.

Meanwhile, as Kevin Marks pointed out a month or so back (right before I was going to!), the web is becoming increasingly unreadable. Marks's triumph was tracing the reasons back to a school of design that thinks - presumably because they're all 17 and have perfect eyesight - that contrast is tiring, and eyes want a kinder, gentler, less functional reading experience. Actually, what's tiring is all those glaring white backgrounds. There's a reason why accountants' paper is buff or pale green: it's so that people staring at all those tiny numbers all day don't get headaches. For me, the most comfortable reading on-screen is bold, white letters on a black background. I believe that's the right way round: the letters, the star attractions, are the things that should be luminous.

The present style, however, calls for skinny, grey type. I have partially solved this by adding the "Page Colors and Fonts Buttons" extension to Firefox. This gives me the discretionary power to selectively override designers' illegible choices. But I shouldn't have to. Design schools should be teaching their students the lessons the industry learned during the early 1990s push towards software usability: human eyes don't resolve blue particularly well, so it's not a good color for extended batches of text; small type is harder to read; pale type is harder to read; grey type is harder to read. Instead, what we have here is a failure to communicate.
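For the record, this complaint can be quantified. The W3C's WCAG 2.0 guidelines define a contrast ratio between text and background colors, with 4.5:1 as the minimum for normal body text. Here is a short Python sketch of that arithmetic (mine, not anything from Marks or the Firefox extension):

# WCAG 2.0 contrast arithmetic: quantifies why pale grey type on a
# white background is hard to read. (Illustration only.)

def relative_luminance(rgb):
    """WCAG 2.0 relative luminance of an sRGB color given as 0-255 ints."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors; WCAG wants >= 4.5 for body text."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(contrast_ratio((0, 0, 0), (255, 255, 255)))        # black on white: 21.0
print(contrast_ratio((170, 170, 170), (255, 255, 255)))  # #aaa grey on white: ~2.3

Black on white (or my preferred white on black) scores the maximum 21:1; the fashionable light grey (#aaaaaa) on white manages roughly 2.3:1, well under the 4.5:1 floor.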

Also a failure, according to Filippo Valsorda: PGP, for many of the same reasons I complained about in 2011. There are just too many picky details to get right. Making it work today, when almost everyone does their email on more than one device and has to choose between spreading copies of their private keys (bad hygiene) or being locked out of encrypted mail much of the time (ditto), is nearly impossible. Besides, he writes, it's not clear that long-term keys are a good match for the threat model. He's moving on to other techniques, and I suspect anyone who's serious about their personal security will eventually follow suit. However, one note: PGP's creation was the result of two threat models: first, protecting individual privacy; second, rumors that the US government would ban the domestic use of strong cryptography. It's arguable that it worked better to counter the second of those than the first.

Apparently inspired by Pando's Paul Carr, the Come to Satan website is gleefully monitoring the computing press for previously anti-Trump Silicon Valley CEOs who are back-tracking on all the never-Trump stuff they said during the campaign. This site feels like the Net circa 2000; put it alongside ancestors Suck.com and FuckedCompany and it would feel right at home.

Finally, The Register reports that XP is still in use in the NHS and will continue to be so into 2017. I'm not surprised; as I wrote in 2014, software is forever. The situation reminds me of a friend who commented that when he reached the age of 47 it dawned on him that things in his life that he'd thought were "penciled in" were in fact permanent conditions. Life - and time - has a way of sneaking up on you like that, like it did to the developers writing software with two-digit years in the 1950s, or will do to the inventors today hatching the Internet of Things with a cheerful "isn't security Someone Else's Problem?" disposition. Yesterday's mistakes are ready and waiting to plague us.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


December 2, 2016

Routers behaving badly

Late on Saturday night, a small laptop started having trouble connecting. This particular laptop sometimes has these issues, which I put down to the peculiarities of running wired ethernet into it via a USB converter. But the next day I realized that the desktop was timing out on some connections, and one of the other laptops was refusing to connect to the internet at all. An unhappy switch somewhere in the middle? Or perhaps a damaged cable? The wireless part of the network, which I turned on as a test, worked much better, which lent credence to the cable idea.

By Monday morning, I had concluded the thing to do was to restart the main router. Things were fine after that. On Tuesday morning, some bounced emails from my server alerted me to the fact that my IP address had been placed on one of the three blacklists Spamhaus consults. It was only then that I realized my router was one of the ones affected by the port 7547 bug. If my network had been spewing botnet messages, the router was infected.
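(An aside, since it's a useful trick: checking whether an IP address is on a DNS-based blacklist is just a DNS lookup of the reversed octets under the list's zone. A minimal sketch, assuming Spamhaus's public zen.spamhaus.org zone - not necessarily the exact list involved here:)

# Minimal DNSBL check: an IP is listed if reversing its octets under the
# blacklist's DNS zone resolves. (Spamhaus may refuse queries routed
# through large public resolvers, so results can vary.)
import socket

def is_listed(ip, zone="zen.spamhaus.org"):
    """Return True if `ip` appears on the given DNS blacklist."""
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)   # listed IPs resolve to 127.0.0.x
        return True
    except socket.gaierror:           # NXDOMAIN means not listed
        return False

print(is_listed("203.0.113.1"))  # substitute your own public IP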

I found the patched firmware on the Zyxel site, read several sets of incomplete instructions (the ones you need are now here), and patched the router. GRC informed me the port, which had previously tested as open, was now closed. But did that mean, as the SANS instructions suggested it might, that the router was still infected, or not?
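(The GRC test boils down to attempting a TCP connection to port 7547 from outside your own network. A minimal sketch of the same probe follows - the address below is a placeholder from the documentation range, so substitute your router's public IP. Note the limits: a closed port means the door is shut now, not that nothing already came through it.)

# Probe TCP port 7547 (the TR-069 remote-management port) from OUTSIDE
# the network under test; run it from a VPS or a phone off the wifi.
import socket

def port_open(host, port=7547, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:   # refused, timed out, or unreachable
        return False

print(port_open("203.0.113.1"))  # placeholder; use your router's public IP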

And now, a new problem: I couldn't log into the router to change its password (a consequence of not having the right instructions). Another symptom of infection, or a bungle in the vendor's patch? This was going to mean more effort than I had time for: a factory reset and complete reconfig. Fortunately, I had a spare, already-configured router to swap in, which is what I did.

Yes, I made a mistake: I should have tested the port, reset it, tested it again so I'd know whether it was infected, disconnected it, changed the password, and *then* patched it and tested it a third time. I plead that good, step-by-step instructions were hard to come by. The assumption is that the only people who are doing this kind of thing are those who already know how to do it.

The proximate cause of this particular bug is that the manufacturer of my router - Zyxel - made the bizarre assumption that these routers would mostly be installed by ISPs, and that therefore they should contain a facility for remote management so the ISP could push out updates as needed. There are, of course, many ways Zyxel could have done this. In particular, they could have left the port closed by default. They didn't.

They have now, of course.

Worse, on the manual page for the remote management functions, the instructions clearly say you can disable them by clicking on a radio button labeled "disable". That button is not present on any of my remote management screens. So I can't tell whether those functions are still listening on standard interfaces like www, telnet, ftp, and so on.

This is the future we're facing. I bought the router in good faith from my ISP, which is a small, knowledgeable network consultancy run by two people I actually know personally to be smart and hard-working. They recommended it as a good, reliable router when I was switching to fiber. I did the right things: I configured the firewall to block all unnecessary ports, and changed its admin password. Reliable it has been, but neither they nor I could have guessed at its future as a security hole. So the problem, soon to be exacerbated by the Internet of Things, is not just that ignorant people buy poor-quality devices that prove to be a danger to themselves and others, but that knowledgeable people who take care to lock things down are being actively prevented from doing any better.

In the world the Investigatory Powers Act made legal this week, GCHQ has the power to discover a hole like that, exploit it for their own purposes, and keep it secret. They could even, as Kieren McCarthy writes at The Register, order Zyxel to create the vulnerability and, again, keep it secret.

As we've seen, secrets like these get out. Time was when the enterprising would-be hacker had to dive into dumpsters to locate admin passwords and equipment manuals. Today, we all know all this information is easily findable on the internet.

Meanwhile, I still don't know what to do about my Zyxel router, which I'd like to put back into place because the other router is less reliable. Factory reset, full reconfig, sure. But then what? How do I know whether I can trust it? What other flaws are lurking in the gap between what its manual says and what its interface actually enables? It's easy enough to avoid most aspects of the Internet of Things. Just. Don't. Buy. Stupid. Gadgets. But the only way for me not to have a router is to choose isolation and sign off the internet.

And then what would I write net.wars about?

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.