" /> net.wars: October 2017 Archives


October 27, 2017

The opposite of privilege

A couple of weeks ago, Cybersalon held an event to discuss modern trends in workplace surveillance. In the middle, I found myself reminding the audience, many of whom were too young to remember, that 20 or so years ago mobile phones were known locally as "poserphones": until quite recently they had been expensive enough that they were still associated with rich businessmen who wanted to show off their importance.

The same poseurship today looks like this: "I'm so grand I don't carry a mobile phone." In a sort of rerun of the 1997 anti-internet backlash, kicked off by Clifford Stoll's Silicon Snake Oil, articles and postings are appearing everywhere about how the techies of Silicon Valley are disconnecting themselves and removing technology from the local classrooms. Granted, this has been building for a while: in 2014 the New York Times reported that Steve Jobs didn't let his children use iPhones or iPads.

It's an extraordinary inversion in a very short time. However, the notable point is that the people profiled in these stories have the agency to make this decision and not suffer for it. In April, Congressman Jim Sensenbrenner (R-WI) claimed airily that "Nobody has to use the internet", a statement easily disputed. A similar argument can be made about related technology such as phones and tablets: it's perfectly reasonable to say you need downtime or that you want your kids to have a solid classical education with plenty of practice forming and developing long-form thinking. But the option to opt out depends on a lot of circumstances outside most people's control. You can't, for example, disconnect your phone if your zero-hours contract specifies you will be dumped if you don't answer when they call, nor if you're in a high-urgency occupation like law, medicine, or journalism; nor can you do it if you're the primary carer for anyone else. For a homeless person, a mobile phone may be the only hope of finding a job or a place to live.

Battery concerns being what they are, I've long had the habit of turning off wifi and GPS unless I'm actively using them. Yet as Transport for London increasingly seeks to use passenger data to understand passenger flow through the network and within stations, people who do not carry data-generating devices are arguably anti-social because they are refusing to contribute to improving the quality of the service. A similar argument has been made in the past with reference to NHS data, suggesting that patients who declined to share their data didn't deserve care.

Today's employers, as Cybersalon highlighted and as speakers have previously pointed out at the annual Health Privacy Summit, may learn an unprecedented amount of intimate information about their employees via efforts like wellness programs and the data those capture from devices like Fitbits and smart watches. At Cornell, Karen Levy has written extensively about the because-safety black box monitoring coming to what historically has been the most independent of occupations, truck driving. At Middlesex, Phoebe Moore is studying the impact of workplace monitoring on white collar workers. How do you opt out of monitoring if doing so means "opting out" of employment?

Your voice may be captured by the waiting speech-driven device in your friend's car or home; ever tried asking someone to turn off Alexa-Siri-OKGoogle while you're there?

For these reasons, publicly highlighting your choice to opt out reads as, "Look how privileged I am", or some much more compact and much more offensive term. This will be even more true soon, when opting out will require vastly more effort than it does now and there will be vastly fewer opportunities to do it. Even today, someone walking around London has no choice about how many CCTV cameras capture them in motion. You can ride anonymously on the tube and buses as long as you are careful to buy, and thereafter always top up, your Oyster smart card with cash. But the latest in facial recognition can identify people in the backgrounds of photos, making it vastly harder to know which of the sidewalk-blockers around you snapping pictures of each other on their phones may capture and upload you as well, complete with time and location.

It's clear "normal" people are beginning to know this. This week, in a supermarket well outside of London, I was mocking a friend for paying for some groceries by tapping a credit card. "Cash," I said. "What's wrong with nice, anonymous cash?" "It took 20 seconds!" my friend said. The aging cashier regarded us benignly. "They can still track you by the mobile phones you're carrying," she said helpfully. Touché.

Illustrations: George Orwell's house at 22 Portobello Road; Cybersalon (Phoebe Moore, center).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 20, 2017

Risk profile

So here is this week's killer question: "Are you aware of any large-scale systems employing this protection?"

It's a killer question because this was the answer: "No."

Rewind. For as long as I can remember - and I first wrote about biometrics in 1999 - biometrics vendors have claimed that these systems are designed to be privacy-protecting. The reason, as I was told for a Guardian article on fingerprinting in schools in 2006, is that these systems don't store complete biometric images. Instead, when your biometric is captured - whether that's a fingerprint to pay for a school lunch or an iris scan for some other purpose - the system samples points in the resulting image and deploys some fancy mathematics to turn them into a "template", a numerical value that is what the system stores. The key claim: there is no way to reverse-engineer the template to derive the original image, because the template doesn't contain enough information.
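To make the claim concrete, here is a toy sketch in Python - emphatically not any vendor's actual algorithm - of the kind of reduction involved: feature points sampled from an image are quantized onto a coarse grid, and only the resulting list of numbers is stored. The function name and grid scheme are invented for illustration.

```python
# A toy illustration - not any vendor's actual scheme - of reducing a
# biometric image to a numeric "template": sample feature points,
# quantize them onto a coarse grid, and store only the cell indices.

from typing import List, Tuple

def make_template(feature_points: List[Tuple[float, float]],
                  grid: int = 16) -> List[int]:
    """Quantize (x, y) feature points (e.g. fingerprint minutiae,
    normalized to [0, 1)) and return sorted grid-cell indices.
    The image cannot simply be read back out of this list - the
    basis of the vendors' irreversibility claim."""
    return sorted({int(x * grid) * grid + int(y * grid)
                   for x, y in feature_points})

# Example: five minutiae found in one scan
print(make_template([(0.12, 0.80), (0.33, 0.41), (0.55, 0.90),
                     (0.61, 0.07), (0.98, 0.46)]))
# -> [28, 86, 142, 145, 247]: these numbers are stored, not the image
```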

The claim sounds plausible to anyone used to one-way cryptographic hashes, or who is used to thinking about compressed photographs and music files, where no amount of effort can restore Humpty-Dumpty's missing data. And yet.

Even at the time, some of the activists I interviewed were dubious about the claim. Even if it was true in 1999, or 2003, or 2006, they argued, it might not be true in the future. Plus, in the meantime these systems were teaching kids that it was OK to use these irreplaceable iris scans, fingerprints, and so on for essentially trivial purposes. What would the consequences be someday in the future when biometrics might become a crucial element of secure identification?

Well, here we are in 2017, and biometrics are more widely used, even though not as widely deployed as vendors might have hoped in 1999. (There are good reasons for this, as James L. Wayman explained in a 2003 interview for New Scientist: deploying these systems is much harder than anyone ever thinks. The line that has always stuck in my mind: "No one ever has what you think they're going to have where you think they're going to have it." His example was the early fingerprint system he designed that was flummoxed on the first day by the completely unforeseen circumstance of a guy who had three thumbs.)

So-called "presentation attacks" - for example, using high-resolution photographs to devise a spoof dummy finger - have been widely discussed already. For this reason, such applications have a "liveness" test. But it turns out there are other attacks to be worried about.

This week, at a symposium on privacy, surveillance, and biometrics held by the European Association for Biometrics, I discovered that Andrew Clymer, who said in 2003 that, "Anybody who says it is secure and can't be compromised is silly", was precisely right. As Marta Gomez-Barrero explained, in 2013 she published a successful attack on these templates that she called "hill climbing". Essentially, this is an iterative attack. Say you have a database of stored templates for an identification system; a newly presented image is compared with the database looking for a match. In a hill-climbing attack, you generate synthetic templates, run them through the comparator, and then apply a modification scheme to the synthetic templates until you get a match. The reconstructions Gomez-Barrero showed aren't always perfect - the human eye may see distortions - but to the biometrics system it's the same face. You can fix the human problem by adding some noise to the image. The same is true of iris scans (PDF), hand shapes, and so on.
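As a rough illustration of the idea - a simplified sketch, not Gomez-Barrero's published method - the attack needs nothing more than the comparator's similarity score: keep any random tweak to a synthetic template that raises the score, and repeat until the matcher accepts it. All names, parameters, and the stand-in scoring function here are invented.

```python
# A simplified sketch of hill climbing - not Gomez-Barrero's published
# method. The attacker treats the matcher as a black box that returns a
# similarity score, and keeps any random tweak that raises that score.

import random

def match_score(candidate, stored):
    """Stand-in for the black-box comparator: higher means more similar.
    A real attack would query the deployed matching system instead."""
    return -sum((c - s) ** 2 for c, s in zip(candidate, stored))

def hill_climb(stored, dims=32, accept=-0.01, max_iters=200_000):
    candidate = [random.random() for _ in range(dims)]
    best = match_score(candidate, stored)
    for _ in range(max_iters):
        tweaked = list(candidate)
        i = random.randrange(dims)
        tweaked[i] += random.uniform(-0.05, 0.05)  # small random modification
        score = match_score(tweaked, stored)
        if score > best:                           # keep improvements only
            candidate, best = tweaked, score
        if best >= accept:                         # matcher would say "same"
            break
    return candidate, best

target = [random.random() for _ in range(32)]      # never seen by the attacker
synthetic, score = hill_climb(target)
print(round(score, 4))  # close to 0: the matcher now accepts the fake
```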

Granted, someone wishing to conduct this attack has to have access to that database, but given the near-daily headlines about breaches, this is not a comforting thought.

Slightly better is the news that template protection techniques do exist; in fact, they've been known for 10 to 15 years and are the subject of ISO standard 24745. Simply encrypting the data doesn't help as much as you might think, because every attempted match requires the template to be decrypted. And unprotected templates create two further problems. First, just like reused passwords, biometric templates are vulnerable to cross-matching, which allows an attacker to extract more information. Second, if the data is available on the internet - this is especially applicable to face-based systems - an attacker can test for template matches.
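The reused-passwords analogy is easy to make concrete. In this hypothetical sketch, an attacker holding two leaked databases of unprotected templates links accounts simply by looking for templates close enough to have come from the same finger; all the data, names, and the threshold are invented for illustration.

```python
# A toy sketch of the cross-matching risk: if the same unprotected
# template (or one very close to it) turns up in two leaked databases,
# the attacker can link the accounts - exactly as with reused passwords.

def distance(a, b):
    """Euclidean distance between two template vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

bank_leak = {"acct-1041": [0.12, 0.80, 0.33], "acct-2210": [0.61, 0.07, 0.98]}
gym_leak  = {"member-77": [0.13, 0.79, 0.34], "member-90": [0.44, 0.52, 0.21]}

THRESHOLD = 0.05  # two captures of the same finger are rarely identical
for acct, t1 in bank_leak.items():
    for member, t2 in gym_leak.items():
        if distance(t1, t2) < THRESHOLD:
            print(f"{acct} and {member} are probably the same person")
```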

It was at this point that someone asked the question we began with: are these protection schemes being used in large-scale systems? And...Gomez-Barrero said: no. Assuming she's right, this is - again - one of those situations where no matter how carefully we behave we are at the mercy of decisions outside our control that very few of us even know are out there waiting to cause trouble. It is market failure in its purest form, right up there with Equifax, which none of us chooses to use but which still inflicted intimate exposure on hundreds of millions of people; and the port 7547 bug, which showed you can do everything right in buying network equipment and still get hammered.

It makes you wonder: when will people learn that you can't avoid problems by denying there's any risk? Biometric systems are typically intended to handle the data of millions of people in sensitive applications such as financial transactions and smartphone authentication. Wouldn't you think security would be on the list of necessary features?


Illustrations: A 1930s FBI examiner at work (via FBI); James Wayman; Marta Gomez-Barrero.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 13, 2017

Cost basis

There's plenty to fret about in the green paper released this week outlining the government's Internet Safety Strategy (PDF) under the Digital Economy Act (2017). The technical working group is predominantly made up of child protection folks, with just one technical expert and no representatives of civil society or consumer groups. It lacks definitions: what qualifies as "social media"? And issues discussed here before persist, such as age verification and the mechanisms to implement it. Plus there are picky details, like requiring parental consent for the use of information services by children under 13, which apparently fails to recognize how often parents help their kids lie about their ages. However.

The attention-getting item we hadn't noticed before is the proposal of an "industry-wide levy which could in the future be underpinned with legislation" in order to "combat online harms". This levy is not, the paper says, "a new tax on social media" but instead "a way of improving online safety that helps businesses grow in a sustainable way while serving the wider public good".

The manifesto commitment on which this proposal is based compares this levy to those in the gambling and alcohol industries. The Gambling Act 2005 provides for legislation to support such a levy, though to date the industry's contributions, most of which go to GambleAware to help problem gamblers, are still voluntary. Similarly, the alcohol industry funds the Drinkaware Trust.

The problem is that these industries aren't comparable in business model terms. Alcohol producers and retailers make and sell a physical product. The gambling industry's licensed retailers also sell a product, whether it's physical (lottery tickets or slot machine rolls) or virtual (online poker). Either way, people pay up front and the businesses pay their costs out of revenues. When the government raises taxes or adds a levy or new restriction that has to be implemented, the costs are passed on directly to consumers.

No such business model applies in social media. Granted, the profits accruing to Facebook and Google (that is, Alphabet) look enormous to us, especially given the comparatively small amounts of tax they pay to the UK - 5% of UK profits for Facebook and a controversial but unclear percentage for Alphabet. But no public company adds costs without planning how to recoup them, so then the question is: how do companies that offer consumers a pay-with-data service do that, given that they can't raise prices?

The first alternative is to reduce costs. The problem is how. Reducing staff won't help with the kinds of problems we're complaining about, such as fake news and bad behavior, which require humans to solve. Machine learning and AI are not likely to improve enough to provide a substitute in the near term, though no doubt the companies hope they will in the longer term.

The second is to increase revenues, which would mean either raising prices to advertisers or finding new ways to exploit our data. The need to police user behavior doesn't seem like a hot selling point to convince advertisers that it's worth paying more. That leaves the likelihood that applying a levy will create a perverse incentive to gather and crunch yet more user data. That does not represent a win; nor does it represent "taking back control" in any sense.

It's even more unclear who would be paying the levy. The green paper says the intention is to make it "proportionate" and ensure that it "does not stifle growth or innovation, particularly for smaller companies and start-ups". It's not clear, however, that the government understands just how vast and varied "social media" are. The term includes everything from the services people feel they have little choice about using (primarily Facebook, but also Google to some extent) to the web boards on news and niche sites, to the comments pages on personal blogs, to long-forgotten precursors of the web like Usenet and IRC. Designing a levy to take account of all business models and none while not causing collateral damage is complex.

Overall, there's sense in the principle that industries should pay for the wider social damage they cause to others. It's a long-standing approach for polluters, for example, and some have suggested there's a useful comparison to make between privacy and the environment. The Equifax breach will be polluting the privacy waters for years to come as the leaked data feeds into more sophisticated phishing attacks, identity fraud, and other widespread security problems. Treating Equifax the way we treat polluters makes sense.

It's less clear how to apply that principle to sites that vary from self-expression to publisher to broadcaster to giant data miners. Since the dawn of the internet, any time someone's created a space for free expression, someone else has come along and colonized a corner of it where people could vent and be mean and unacceptable; 4chan has many ancestors. In 1994, Wired captured an early example: The War Between alt.tasteless and rec.pets.cats. Those Usenet newsgroups created revenue for no one, while Facebook and Google have enough money to be the envy of major governments.

Nonetheless, that doesn't make them fair targets for every social problem the government would like to dump off onto someone else. What the green paper needs most is a clear threat model, because it's only after you have one that you can determine the right tools for solving it.


Illustrations: Social network diagram.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 6, 2017

Send lawyers, guns, and money

There are many reasons why, Bryan Schatz finds at Mother Jones, people around Las Vegas disagree with President Donald Trump's claim that now is not the time to talk about gun control. The National Rifle Association probably agrees; in the past, it's been criticized for saving its public statements for proposed legislation and staying out of the post-shooting - you should excuse the expression - crossfire.

Gun control doesn't usually fit into net.wars' run of computers, freedom, and privacy subjects. There are two reasons for making an exception now. First, the discovery of the Firearm Owners Protection Act, which prohibits the creation of *any* searchable registry of firearms in the US. Second, the rhetoric surrounding gun control debates.

To take the second first: in a civil conversation on the subject, it was striking that the arguments we typically use to protest knee-jerk demands for ramped-up surveillance legislation in response to atrocious incidents are the same ones used to oppose gun control legislation. Namely: don't pass, out of fear, bad laws that do not make us safer; tackle underlying causes such as mental illness and inequality; put more resources into law enforcement/intelligence. In the 1990s crypto wars, John Perry Barlow deliberately and consciously adapted the NRA's slogan to create "You can have my encryption algorithm...when you pry my cold, dead fingers from my private key".

Using the same rhetoric doesn't mean both sides are right or both wrong: we must decide on evidence. Public debates over surveillance do typically feature evidence about the mathematical underpinnings of how encryption works, the day-to-day realities of intelligence work, and so on. The problem with gun control debates in the US is that evidence from other countries is automatically written off as irrelevant and, as with copyright reform, lobbying money hugely distorts the debate.

The other issue touches directly on privacy. Soon after the news of the Las Vegas shooting broke, a friend posted a link to the 2016 GQ article Inside the Federal Bureau of Way Too Many Guns. In it, writer Jeanne Marie Laskas pays a comprehensive visit to Martinsburg, West Virginia, where she finds a "low, flat, boring building" with a load of shipping containers kept out in the parking lot so the building's floors don't collapse under the weight of the millions of gun purchase records they contain. These are copies of federal form 4473, which is filled out at the time of a gun purchase and retained by the retailer. If a retailer goes out of business, the forms it holds are shipped to the tracing center. When a law enforcement officer anywhere in the US finds a gun at a crime scene, this is where they call to trace it. The kicker: all those records are eventually photographed and stored on microfilm. Miles and miles of microfilm. Charlie Houser, the tracing center's head, has put enormous effort into making his human-paper-microfilm system as effective and efficient as possible; it's an amazing story of what humans can do.

Why microfilm? Modern gun control began in 1968, five years after the assassination of President John F. Kennedy. Even at that moment of national grief and outrage, the only way President Lyndon B. Johnson could get the Gun Control Act passed was to agree not to include a clause he wanted that would have set up a national gun registry to enable speedy tracing. In 1986, the NRA successfully lobbied for the Firearm Owners Protection Act, which prohibits the creation of *any* registry of firearms. What you register can be found and confiscated, the reasoning apparently goes. So, while all the rest of us engaged in every other activity - getting health care, buying homes, opening bank accounts, seeking employment - were being captured, collected, profiled, and targeted, the one group whose activities are made as difficult to trace as possible is...gun owners?

It is to boggle.

That said, the reasons why the American gun problem will likely never be solved include the already noted effect of lobbying money and, as E.J. Dionne Jr., Norman J. Ornstein, and Thomas E. Mann discuss in the Washington Post, the non-majoritarian democracy the US has become. Even though majorities in both major parties favor universal background checks and most Americans want greater gun control, Congress "vastly overrepresents the interests of rural areas and small states". In the Senate that's by design, to ensure nationwide balance: the smallest and most thinly populated states have the same number of senators - two - as the biggest, most populous states. In the House, the story is more about gerrymandering and redistricting. Our institutions, they conclude, are not adapting to rising urbanization: 63% in 1960, 84% in 2010.

Besides those reasons, the identification of guns with personal safety endures, chiefly in states where at one time it was true.

A month and a half ago, one of my many conversations around Nashville went like this, after an opening exchange of mundane pleasantries:

"I live in London."

"Oh, I wouldn't want to live there."

"Why?"

"Too much terrorism." (When you recount this in London, people laugh.)

"If you live there, it actually feels like a very safe city." Then, deliberately provocative, "For one thing, there are practically no guns."

"Oh, that would make me feel *un"safe."

Illustrations: Las Vegas strip, featuring the Mandalay Bay; an ATF inspector checks up on a gun retailer.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.