" /> net.wars: July 2017 Archives


July 28, 2017

Virgin passwords

A couple of weeks ago, I tried to set up online access to a bank account. User ID must be nine to 20 characters, and if it's only nine characters, it can't be all numbers. Password must be at least eight characters, one must be a number, it must be different from the user ID, it cannot contain three or more of the same characters next to each other, and it's case sensitive. I don't know how you react to instructions like that, but instantly my mind's a blank. Which is why I have a personal algorithm for generating passwords. I think websites with directions as complex as this should provide a sample that shows, rather than tells, people what they mean. Isn't that the first rule of storytelling?

If you're going to have that many rules, I think you might as well just post a password generator on your site that spits out, say, ten conforming passwords and let people pick one. In this case, the site then asked me to answer five secondary security challenge questions...which are likely to be the weakest point for attackers to exploit, because no one ever insists that the questions - or the answers - be unique to the site, or imposes complexity rules on the replies; these answers are notoriously easy for attackers to leverage.
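For what it's worth, here is a minimal sketch, in Python, of what such a generator might look like if it enforced the bank's rules as I've described them (at least eight characters, at least one number, no run of three identical characters, different from the user ID). The 12-character length, the character set, and the function names are my own choices for illustration, not anything the bank specifies:

    import secrets
    import string

    # Candidate alphabet: letters and digits (an assumption; the bank's allowed set isn't stated).
    ALPHABET = string.ascii_letters + string.digits

    def conforms(password, user_id):
        """Check the candidate against the rules paraphrased from the bank's instructions."""
        if len(password) < 8:
            return False
        if not any(c.isdigit() for c in password):
            return False
        if password == user_id:
            return False
        # No three or more of the same character next to each other.
        if any(password[i] == password[i + 1] == password[i + 2]
               for i in range(len(password) - 2)):
            return False
        return True

    def suggest(user_id, count=10, length=12):
        """Generate `count` random passwords that satisfy the rules."""
        suggestions = []
        while len(suggestions) < count:
            candidate = "".join(secrets.choice(ALPHABET) for _ in range(length))
            if conforms(candidate, user_id):
                suggestions.append(candidate)
        return suggestions

    if __name__ == "__main__":
        for pw in suggest(user_id="exampleuser9"):
            print(pw)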

The BBC, when it (offensively) started requiring logins a few weeks ago, wanted: at least eight characters, at least one letter, at least one number or symbol. My house number and postcode perfectly fit that template. Is that secure? Do I care if it is? What is the worst thing an attacker will do with that account? Watch a TV show I disapprove of?

The WELL (you can see I've been taking notes) also has rules, but in an unusual bit of friendliness offers a tradeoff: longer but simpler, or shorter but more complex. That's about the most "empowered" I've ever felt about a password. Bonus question: which is harder to type on a mobile phone? Typing that prompts a thought: smart phones could make this easier by implementing a password keyboard that expands to fill most of the screen and includes numbers and special characters. My guess: something like that will never happen. And it will be blamed on us.

I was planning to write about this...sometime, but then this week Charles Arthur's The Overspill sent me to Troy Hunt's discussion of the evolution of password guidance. Much of what he says there is sensible advice that, if adopted, will make life easier for users; he draws on recent guidance offered by the US National Institute of Standards and Technology, the UK's National Cyber Security Centre, and Microsoft. Of the three, I'm most familiar with the NCSC guidance because it was based on research from the Research Institute for the Science of Cyber Security (RISCS), where I'm the in-house writer. Angela Sasse, who leads RISCS, has been writing about the problems with the usability of passwords since 1999, when she published Users Are Not the Enemy. Eighteen years later, many people are still asking users to do cognitively impossible things and then blaming them when they can't. That governments are adopting a saner approach is a big step forward.

For that sort of reason, one section of Hunt's discussion strikes me as unworkable and user-unfriendly: his proposed restriction on using any password that has *ever* been captured in a data breach. Commenters point out the processing overheads involved in comparing all those passwords, and the article doesn't remind sysadmins to hash and salt stored passwords securely. But the issue that occurred to me is: *in what context*?
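Purely as an illustration of the mechanics, and not Hunt's actual implementation, a server-side check against a breach corpus might look something like the sketch below, assuming (my assumption) that the breached passwords are held locally as a file of SHA-1 hashes, one per line; the file name and format are hypothetical. Note that such a lookup table is separate from the site's own credential store, which should still be hashed with a slow, salted algorithm such as bcrypt:

    import hashlib

    def load_breached_hashes(path="breached-sha1.txt"):
        # Hypothetical corpus: one uppercase SHA-1 hash per line.
        with open(path) as f:
            return {line.strip().upper() for line in f if line.strip()}

    def previously_breached(candidate, breached_hashes):
        # True if the candidate password appears in the breach corpus.
        digest = hashlib.sha1(candidate.encode("utf-8")).hexdigest().upper()
        return digest in breached_hashes

    # Usage sketch, at registration or password-change time:
    # breached = load_breached_hashes()
    # if previously_breached(new_password, breached):
    #     ...ask the user to choose something else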

It matters greatly if my banking password is in all the hacker dictionaries; it matters much less if the password I have to create in order to read articles on a random publication's website has been captured. Surveillance capitalists seeking to monetize tracking the articles I read don't need my password to do it. There's a real argument that sites should stop insisting on superfluous password protection.

What I'd like to see is context-sensitive password requirements. If they insist, a user ID should be sufficient for just reading articles. When the stakes associated with your account rise - say you want to post comments - then you create a password. If you store sensitive personal or financial details, only then are you reminded to create something stronger. Save the effort for where it's needed, just as you wouldn't post a cheap metal dinner fork in as many layers of protective wrapping as a 1984 Taylor guitar. Some online banks do this: logging in with just a user ID and password lets you see your account, but if you want to perform financial transactions you have to use their two-factor authentication.
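As a rough sketch of the idea - not anything these sites actually implement, as far as I know - the tiers might be expressed as a simple policy table; the action names and credential levels here are invented for illustration:

    # Hypothetical policy: stronger credentials are demanded only as the stakes rise.
    POLICY = {
        "read_articles": "user_id",
        "post_comments": "user_id+password",
        "view_account": "user_id+password",
        "financial_transaction": "user_id+password+second_factor",
    }

    def required_credentials(action):
        # Unknown actions default to the strongest requirement.
        return POLICY.get(action, "user_id+password+second_factor")

    print(required_credentials("read_articles"))          # user_id
    print(required_credentials("financial_transaction"))  # user_id+password+second_factor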

Returning to Hunt, the other problem with his proposal is that it will soon hit the limits of available resources. The universe of password strings is finite; the universe of passwords people can remember or type accurately is vastly smaller. Smaller still is the reservoir of user patience. Only someone who followed all of Hunt's other recommendations - especially the one about allowing pasting and password managers - could realistically implement the virgin passwords rule.


Illustrations: Lorrie Cranor's bad password fabric; Angela Sasse.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 21, 2017

The long arm of the law

It's a little over three years since the European Court of Justice held in Google Spain v AEPD and Mario Costeja González that in some circumstances Google must honor requests to deindex search hits based on a person's name. In data protection terms, the court ruled that Google is a data controller, and as such subject to the EU's data protection laws. At the time, the decision outraged many American internet experts, who saw (and still see) it as an unacceptable abrogation of free speech. European privacy advocates were more likely to see it as a court balancing two fundamental rights - the right to privacy and the right to freedom of expression - against each other and finding a compromise. Some also argued that the decision redressed the imbalance of power between individuals and a large corporation, which led them to ask in puzzlement: isn't self-reinvention an American tradition?

In these cases, the main search engines - which means, uniquely, Google, because it has a 91% share of the European search engine market - sit right in the crosshairs. (As Bas van der Beld writes at Search Engine Land, Europe really needs competition.) Because of that level of dominance, what Google's algorithm chooses to display, and in what order, has a profound effect not only on businesses but on individuals.

Once the court ruled that Google met the legal definition of a data controller, to a non-lawyer the rest appears to follow. That key decision is, nonetheless, controversial: the court's own advocate-general, Niilo Jääskinen, had advised otherwise, a recommendation the court chose to ignore. However, part of the advocate-general's argument rested on the fact that the right to be forgotten was not, at the time, part of the law. As of May 25, 2018, it will be.

It is unhelpful to talk about this in black-and-white terms as censorship. Unlike the UK's current relentless pursuit of restricting access to various types of material, the court did not rule that the underlying material should be removed, blocked, or made subject to other types of access restrictions. There is also no prohibition on having the pages in question pop up in response to other types of searches - that is, not on the person's name. It's also unhelpful to paint the situation as one that aids wealthy criminals and corrupt politicians to hide evidence of their misdeeds: Google, as already noted, has rejected nearly 60% of the requests it's received on, among other grounds, the basis of public interest. That said, transparency will continue to be crucial to ensuring that the system isn't abused in that way.

After the inevitable teething problems while Google struggled with a rush of pent-up demand, things went somewhat quiet, although the right to be forgotten did get some airplay as an element of the General Data Protection Regulation, which was passed last year. This year, however, the French data protection watchdog, the Commission Nationale de l'Informatique et des Libertés (CNIL), kicked the issue back into the rotation. Instead of removing these hits only from the search results seen by visitors based in the EU (as the link above shows, Google has deindexed 43.2% of the requests it has received), CNIL told Google they must be removed from all its sites worldwide. The ruling has been appealed, and this week it was announced that the case will be, as Ars Technica's Kelly Fiveash writes, heard by the European Court of Justice, which is expected to decide whether such results should be delisted globally, on a country-by-country basis, or across the EU.

Each of these options poses problems. Deindexing search results country-by-country, or even across the EU, is easily circumvented. Deindexing them globally raises the question of jurisdictional boundaries, previously seen most notably in the area of surveillance legislation and access to data on foreign servers. Like the issues of who gets to see whose data on distant servers, the question of how companies obey deindexing rulings is just one of the long series of demarcation disputes that will extend through most of our lifetimes as governments squabble over how far across the world their authority extends.

A second big issue - which Jääskinen raised in his opinion - is devolving responsibility for the decision of what to remove into the hands of private companies. The problems with this prospect also feature in the UK discussions about getting social media companies to "do more" to remove extremist material, hate speech, and other undesirable content. Obviously the alternative, in which the government makes these decisions, is even worse. Jääskinen also suggested that the volume of requests would become unmanageable. Experience over the last three years indicates otherwise: Google's transparency report, linked above, shows a spike at the beginning followed by a dramatic drop-off and a relatively flat trajectory thereafter.

Costeja was a messy and controversial decision. The ECJ's decision to hear this case gives it a chance to review and revise its thinking. However, it will not be able to solve the fundamental problems: the power struggle between global data services and national governments and the direct clash between European fundamental privacy rights and the US's First Amendment. Most likely, it will contain something to offend everyone.

Illustrations: European Court of Justice (Cédric Puisney); Niilo Jääskinen.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 14, 2017

The harvest

In 1957, the BBC's flagship current affairs program, Panorama, broadcast a story about that year's extraordinarily bountiful spaghetti harvest, attributed to the "virtual disappearance of the spaghetti weevil" (it says here in Wikipedia). It was, of course, an April 1 hoax, and apparently up there with the 1938 War of the Worlds radio broadcast if it's still being pulled out in 2017 as a pertinent precursor to a knotty modern problem, as Baroness Patience Wheatcroft did yesterday at a Westminster Forum discussion of fake news (PDF). In any case, it appears that national unfamiliarity with that foreign muck called pasta meant that many people believed it and called in asking how to grow their own spaghetti trees.

Parts of the discussion proceeded along familiar lines. Some things pretty much everyone agreed on. Such as: fake news is not new. Skeptics have been fighting this stuff for years. There has long been much more money in publishing stories promoting miracles than there ever will be in debunking them. Even if belief in spaghetti trees has died in the face of greater familiarity with the product, hoaxes are perennially hard to kill. In 1862 Mark Twain found that out, and in the 1980s so did science fiction critic David Langford.

Everyone also converged on a consistent meaning of "fake news", even though really it's a spectrum whose boundaries are as smudged as Wimbledon's baselines this week. People publish stories that aren't true for all kinds of reasons - satire, parody, public education, journalistic incompetence - but the ones everyone is exercised about are stories that are intentionally false and are distributed for political or financial gain. The discussion left a slight gap there, in that doing so just for the lulz doesn't have a fully political purpose and yet is a very likely scenario. But close enough.

Skeptics' experience shows that every strategy you adopt for identifying genuine information will be emulated by others seeking to promote its opposite: you have scientists, they have scientists. We know this from the history of Big Tobacco and Big Oil. This week, Google was accused of funding research favorable to its interests in copyright, antitrust law, privacy, and information security, a report Google calls misleading.

Similar problems apply to the item everyone thought had to form part of the solution: teach digital literacy. Many suggested it should form part of the primary school curriculum, and sure, let's go for it, but human beings teach these things. Given that political polarization has reached the point where Fox News viewers and New York Times readers cannot agree on even the most basic of facts about, say, climate change or American health care, what principles do you give kids by which to determine whom to believe? What does a creationist teach kids about judging science stories? Wikipedia ought to be the teacher's friend because its talk pages lay out in detail how every page was built and curated; instead, for years many have told kids to avoid "unreliable" Wikipedia in favor of using a search engine to find better information. The result: they trust Google without understanding how it works.

A more subtle problem of provenance was raised by Matt Tee, the CEO of the Independent Press Standards Organisation, who said that on social media platforms, particularly Facebook, all news stories look alike, no matter where they're from. More startling was Adblock Plus's Laura Sophie Dornheim's claim that ad blockers can help by interfering with the business model of clickbait farms. To an audience seeking solutions but to whom the loss of advertising revenue was an important part of the problem, she was a disturbing bit of precipitate.

Inevitably there was discussion of regulation. Leaving aside whether these companies are platforms, publishers, or some kind of hybrid, the significant gap in this and most other discussions is the how. Whatever image we may have in our minds, for the foreseeable future this won't be solved by computers. Instead, as Annalee Newitz recently reported in Ars Technica, the world's social media content raters are humans, many of them in countries like India - where Adrian Chen and Ciaran Cassidy followed a two-week rating training course - and the Philippines. Observes an unidentified higher-up, "You definitely need man behind the machines."

This is what efforts to control fake news - a vastly more complex problem - will also look like. GAFAT et al. may be forced to hire expensive journalists and scholars to figure out what the rules for identifying fake news should be, but ultimately these rules will be put into practice by an army of subcontractors far removed from the "us" who are being protected from it. There are bound to be unintended consequences.

Fake news is yet another way that our traditional democratic values are under threat. Even small terrorist attacks have provided justification for putting into place a vast surveillance framework that's chipped away at our values of privacy and the right to travel freely. Everyone yesterday was conscious of the threat to freedom of expression that attempts to disappear fake news may represent. But, like computer security, fake news is an arms race: those intent on financial gain and political disruption will attempt to turn every new system to their advantage. Computer scientists cannot solve today's security problems without consulting many other disciplines; the same will prove true of the journalists, media professionals, and scholars who are fretting about our very human tendency to go "Ooh, shiny!" at entertaining lies while putting off reading sober truths.


Illustrations: Spaghetti harvest; wheat weevil (Sarefo).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 7, 2017

Finding harm

"Where's the harm?" has long been a hugely difficult question to answer in privacy matters. Many answers sound vague and conspiracist. Knowing they're being watched causes people to censor themselves. It chills free speech. Isn't it enough to say it's creepy?

Last week, in a US court case based on a complaint brought in 2012 in San Jose, California, US district judge Edward J. Davila ruled in favor of Facebook, though he left some claims open for the plaintiffs to amend and resubmit. The complaint raised two main points. First, that by tracking them online when they weren't logged into the site Facebook violated the US's wiretap laws. Second, that in doing so Facebook improperly profited from the collection and use of their information. Davila ruled that the plaintiffs had not made the case that they had lost the chance to sell the information themselves or that its value had dropped because Facebook had collected it. Further, they had failed to show that Facebook had illegally intercepted their communications, and anyway they could have taken steps to protect themselves. In the judgment, Davila writes:

Plaintiffs have not established that they have a reasonable expectation of privacy in the URLs of the pages they visit. Plaintiffs could have taken steps to keep their browsing histories private. For instance, as Facebook explained in its privacy policy, "[y]ou can remove or block cookies using the settings in your browser." MTD 6. Similarly, users can "take simple steps to block data transmissions from their browsers to third parties," such as "using their browsers in 'incognito' mode" or "install[ing] plugin browser enhancements."

I am not a lawyer, and can't test these arguments' legal soundness. But the case seems to violate common sense in a number of ways.

For one thing, it sounds like the judge, based in San Jose, has bought the oh-so-Silicon-Valley idea that consumers can take care of themselves. They can - but there are limits. For every person who regards installing a browser plugin as routine, there are many more who haven't a clue where to start. Putting the burden on them to figure out privacy settings, read terms and conditions, and so on is effectively setting industry norms at opt-out. The reality is that the rampant data-collection business models of social media companies are directly opposed to what many consumers would choose if they had more effective choices.

What's more disturbing is the presumption that they should have known to do this. The judge appears to believe the expectation of privacy - in your own home, in front of your own computer - needs to be established. How many of us think of visiting a website as a public act to be captured by anyone who's interested? Since Edward Snowden's 2013 revelations, the internet as a giant surveillance platform has certainly received greater publicity, but that still doesn't make it right (and certainly not in 2012). One hopes the EU would view this case differently.

Even more disturbing is the spread of this "new normal" to institutions that really ought to know better. On July 1, the BBC - which at one point seemed like the opposition to commercial media - began requiring individual logins in order to view or listen to content on its site. Previously, anyone geolocated in the British Isles was simply asked to verify that they had a TV license. Now, it appears that the BBC has jumped on the data collection and personalization bandwagon. You can lie, of course, about your name, your date of birth, your gender, and your postcode - but anyone who has broadband will have their IP address fed through anyway, and they do ask you to confirm your email address (which I suppose you could set to one you use for nothing else). But the whole thing is still wholly unfair: anyone who has a TV license is already paying for the BBC, including the streaming services. Where do they come off demanding that in addition I pay them with my data, while offering no opt-out alternative?

Returning to the judge, however, his basic contention is that the plaintiffs failed to show the harm, which he defined in purely economic terms. It's just that pure economics are utterly irrelevant: Facebook's business is exploiting haystacks; it doesn't sell individual needles on the open market.

So what is the harm? Whom does it hurt if Facebook tracks people around the web who believe they are safely logged out of its service? What damage is caused by the discovery that companies can collect the information you enter in web forms whether or not you ever submit it? (Facebook has been doing the same thing for some years now.)

The harm, to answer the judge, lies in a general sense of unfairness: these systems work because they rely on defying users' mental models of how the world works. There's only so many times you can do this before users learn to distrust everything they see. Society-wide distrust seems plenty harmful to me.


Illustrations: Facebook (Simon Steinberger); Judge Edward J. Davila.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.