October 14, 2022


A while back, I was trying to get a friend to install the encrypted messaging app Signal.

"Oh, I don't want another messaging app."

Well, I said, it's not *another* messaging app. Use it to replace the app you currently use for texting (SMS) and it will just sit there showing you your text messages. But whenever you encounter another Signal user those messages will be encrypted. People sometimes accepted this; more often, they wanted to know why I couldn't just use WhatsApp, like their school group, tennis club, other friends... (Well, see, it may be encrypted, but it's still owned by the Facebook currently known as Meta.)

This week I learned that soon I won't be able to make this argument any more, because...Signal will be dropping SMS support for Android users sometime in the next few months. I don't love either the plan or the vagueness of its timing. (For reasons I don't entirely understand, this doesn't apply to the nether world of iPhone users.)

The company's blog posting lists several reasons. Apparently the app's SMS integration is confusing to many users, who are unclear about when their messages are encrypted and when they're not. Whether this is true is disputed in the forum thread discussing the decision. On the bah! side is "even my grandmother can use it" (snarl); on the other, the valid evidence of the many questions users have posted about this over the years in the support forums. Maybe solvable with some user interface tweaks?

Second, the pricing differential between texting and Signal messages, which transit the Internet as data, has reversed since Signal began. Where data plans used to be rare and expensive, and SMS texts cheap or bundled with phone service, today data plans are common, and SMS has become expensive in some parts of the world. There, the confusion between SMS and Signal messaging really matters. I can't argue with that except to note that equally it's a problem that does *not* apply in many countries. Again, perhaps solvable with user settings...but it's fair enough to say that supporting this may not be the best use of Signal's limited resources. I don't have insight into the distribution of Signal's global user base, and users in other countries are likely to be facing bigger risks than I am.

Third is sort of a purity argument: it's inherently contradictory to include an insecure protocol in an app intended to protect security and privacy. "Inconsistent with our values." The forum discussion is split on this. While many agree with this position, many of the rest of us live in a world that includes lots of people who do not use, and do not want to use (see above), Signal, and it is vastly more convenient to have a single messaging app that handles both.

Signal may not like to stress this aspect, but one problem with trusting an encrypted messaging app in the first place is that the privacy and security are only as good as your correspondents' intentions. Maybe all your contacts set their messages to disappear after a week, password-protect and encrypt their message database, and assign every contact an alias. Or, maybe they don't password-protect anything, never delete anything, and mirror the device to three other computers, all of which they leave lying around in public. You cannot know for sure. So a certain level of insecurity is baked into the most secure installations no matter what you do. I don't see SMS as the biggest problem here.

I think this decision is going to pose real, practical problems for Signal in terms of retaining and growing its user base; it surely does not want the app's presence on a phone to become governments' watch-this-person flag. At least in Western countries, SMS is inescapable. It would be better if two-factor authentication used a less hackable alternative, but at the moment SMS is the widespread vector of corporate choice. We consumers don't actually get to choose to dump it until they do. A switch is apparently happening very slowly behind the scenes in the form of RCS, which I don't even know if my aged phone supports. In the meantime, Signal becomes the "another messaging app" we began with - and historically, diminished convenience has been one of the biggest blocks to widespread adoption of privacy-enhancing technologies.

Signal's decision raises the possibility that we are heading into a time where texting people becomes far more difficult. It may become like the early days, when you could only text people using the same phone company as you - for example, Apple has yet to adopt RCS. Every new contact will have to start with a negotiation by email or phone: how do I text you? In *addition* to everything else.

The Internet isn't splintering (yet); email may be despised, but every service remains interoperable. But the mobile world looks like breaking into silos. I have family members who don't understand why they can't send me iMessages or FaceTime me (no iPhone?), and friends I can't message unless I want to adopt WhatsApp or Telegram (groan - another messaging app?).

Signal may well be right that this move is a win for security, privacy, and user clarity. But for communication? In *this* house, it's a frustrating regression.

Illustrations: Midjourney's rendering of "railway signal tracks crossing".

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 9, 2022

The lost penguin

One of the large, ignored problems of cybersecurity is that every site, every supplier, every software coder, every hardware manufacturer makes decisions as if theirs were the only rules you ever have to observe.

The last couple of weeks I've been renewing my adventures with Linux, which started in 2016, and continued later that year and again in 2018 and, undocumented, in 2020. The proximate cause this time was the release of Ubuntu 22.04. Through every version back to 14.04 I've had the same long-running issue: the displays occasionally freeze for no consistent reason, and the only way out is a cold boot. Would this time be the charm?

Of course, the first thing that happened was that trying to upgrade the system in place failed. This isn't my first rodeo (see 2016, part II), and so I know that unpicking and troubleshooting a failure often takes longer than doing a clean install. I had an empty hard drive at the ready...

All the good things I said about Ubuntu installation in 2018 are still true: Canonical and the open source community have done a very good job of building a computer-in-a-box. It installed and it worked, although I hate the Gnome desktop it ships with.


Everything is absolutely fine unless, as I whined in 2018, you want to connect to some Windows machines. For that, you must download and install Samba. When it doesn't work, Samba is horrible, and grappling with it revives all my memories of someone telling me, the first time I heard of Linux, that "Linux is as user-friendly as a cornered rat."

Last time round, I got the thing working by reading lots of web pages and adding more and more stuff to the config file until it worked. This was not necessarily a good thing, because in the process I opened more shares than I needed to, and because the process was so painful I never felt like going back to put in a few constraints. Why would I care? I'm one person with a very small (wired) computer network, and it's OK if the machines see more of each other's undergarments than is strictly necessary.

Since then, the powers that code have been diligently at work to make the system more secure. So to stop people from doing what I did, they have tweaked Samba so that by default it's not possible to share your Home directory. Their idea is that you'll have a Public directory that is the only thing you share, and any file that's in it is there because you made a conscious decision to put it there.

I get the thinking, but I don't want to do things their way, I want to do things my way. And my way is that I want to share three directories inside the Home directory. Needless to say, I am not the only recalcitrant person, and so people have published three workarounds. I did them all. Result: my Windows machines can now access the directories I wanted to share on the Ubuntu machine. And: the Ubuntu machine is less secure for a value of security that isn't necessarily helpful in a tiny wired home network.
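For the record, the workaround boils down to adding an explicit stanza per directory to /etc/samba/smb.conf - this is a minimal sketch with invented user and share names, not my actual configuration:

```ini
; /etc/samba/smb.conf -- one stanza per directory shared from under Home
; (illustrative names; adjust the user, path, and permissions to taste)
[projects]
   path = /home/wendyg/projects
   browseable = yes
   read only = no
   valid users = wendyg
```

The syntax can be checked with testparm, and the changes take effect after restarting the service (on Ubuntu, `sudo systemctl restart smbd`).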

That was only half the problem.

Ubuntu can see there's a Windows network, and it will even sometimes list the machines correctly, but ask it to access one of them, and it draws a blank. Almost literally a blank: it just hangs there going, "Opening <machine name>" until you give up and hit Cancel. Someone has wrapped a towel around its head, apparently thinking, like the Bugblatter Beast of Traal, that if it can't see you, you can't see it. I now see that this is exactly the same analogy, in almost the identical words, that I used in 2018. I swear I typed it all new this time.

That someone appears to be Microsoft. The *other* problem, it turns out, is that Microsoft also wanted to improve security, and so it's made it harder to open Windows 10 machines to networking with interlopers such as people who run Ubuntu. I forget now the incantation I had to wave over it to get it to cooperate, but the solution I found only worked to admit the Ubuntu shares, not open up the Windows ones.

Seems to me there's two problems here.

One is the widening gap between consumer products and expert computing. The reality of mass adoption confirms that consumer computing has in fact gotten much easier over time. But the systems we rely on are more sophisticated and complex, and they're meeting more sophisticated and complex needs - and doing anything outside that mainstream has accordingly become much harder, requiring a lot of knowledge, training, patience, and expertise. I fall right into that gap (which is why my website has no Javascript and I'm afraid to touch the blogging software that powers net.wars). In 2016, Samba just worked.

The other, though, is a problem I've touched on before: decisions about product security are made in silos without considering the wider ecosystem and differing contexts in which they're used. Microsoft's or Apple's answer to the sort of connection problem I have is: buy our stuff. The open source community's reaction isn't much different. Which leaves me...wanting to bang all their heads together.

Illustrations: Little penguin swimming (via Calistemon at Wikimedia).


June 3, 2022

Nine meals from anarchy*

The untutored, asked how to handle a food crisis, are prone to suggest that everyone grow vegetables in their backyards. They did it in World War II!

I fell into this trap once myself, during the 2008 financial crisis, when I heard a Russian commentator explain on US talk radio that Americans could never survive because we were too soft and individualistic, whereas Russians were used to helping each other out in hard times, living together in cramped conditions, and working around shortages. Nonsense, I thought. Americans are quite capable of turning off their TVs, getting up off their couches, and doing useful stuff when they need to. Wishing for a Plan B, I thought of the huge backyard some Pennsylvania friends had, which backed onto three more similarly-sized backyards, and imagined a cooperative arrangement in which one family kept chickens and another grew some things, and a third grew some complementary things...and they swapped and so on.

My Pennsylvania friends were not impressed. "Is this a joke?"

"It's a Plan B! It's good to have a Plan B!"

It's not a Plan B.

A couple of years ago, at the annual conference convened by the Cybernetics Society, I learned it wasn't even a Plan Y.

"It's subsistence farming," Kate Cooper explained as part of her talk on food security. The grueling full-time unpredictability of that is what most of us gave up in favor of selecting items off grocery store shelves once or twice a week.

The point about subsistence farming is that it's highly unreliable, highly individual, and doesn't scale to the levels required for a modern society, still less for a densely populated modern British society that imports almost all its food. Yes, people were encouraged to grow vegetables in World War II, but although the net effect was good for morale and for helping people better understand the foods they eat, it doesn't help anyone understand the food system and its scale and complexity. Basically, in terms of the problem of feeding the nation, it was a rounding error. Worth doing, but not a solution.

Cooper is the executive director of the Birmingham Food Council, a community interest company that grew out of efforts to think about the future of Birmingham. "It's our job to be an exemplar of how to think about the food system," she explained.

Two years later, with stories everywhere about escalating food prices and dangerous shortages, the interdependencies that underlie our food supply are being exposed by the intermingling of three separate crises, each of which would be bad on its own: the pandemic, Russia's invasion of Ukraine, and climate change. A source I can't recall calls this constellation a "polycrisis" - multiple simultaneous crises that interact to make them all worse. Plus, while the present government doesn't admit it, *Brexit* has added substantially to the challenges of maintaining the UK's increasingly brittle, highly complex system that few of us understand by fracturing trade relationships and pushing workers out of the industry.

As part of its research, the Council created The Game, a scenario-based role-playing game for decision makers, food sector leaders, researchers, and other policy influencers in which teams of four to six are put in charge of a city and must maintain the residents' access to enough safe and nutritious food.

I felt better about my own level of ignorance when I learned that one player's idea for combating shortages was to grow potatoes along the A38, a major route that runs from Bodmin, Cornwall, to Mansfield, Nottinghamshire. No idea of scale, you see, or of the toxins passing automobiles deposit in the soil. (To say nothing of the inefficiencies of trying to farm a plot of land that's 292 miles long and a few hundred yards wide...) Another player wanted to get the national government to send in the army. Also not helping...but they were not alone, as many players found it difficult to feed their populations. People who had played The Game were better prepared when the pandemic began forcing lockdowns and hourly changes to the food system. "Nothing had surprised [the people who had played The Game]", she said. Even so, the lockdowns showed the fragility of the food system and how powerless local officials are to do anything about it.

There are options at the national level. If you are lucky enough to have a government that has both the resources and the will to plan for the future, you can create buffer stocks to tide you through a crisis. You need a plan to rotate and resupply since some things (grain) store much better than others (fresh produce). Cooper has a simple plan for deciding which foodstuffs should be stored and which not: is it subject to VAT? That would lead to storing essentials - the healthy, nutritious stuff - and not candy, alcohol, caffeine, sugar, potato chips. Cooper calls those "drug foods", and notes that over 50% of most household budgets are spent on them, 6% of the potato crop goes to making Walker's potato chips, and a 2012 estimate found that Coca Cola's global consumption of water was enough to meet the annual daily needs of more than 2 billion people.

"Is this a sensible use of increasingly scarce land and water?" she asked.

Put like that, what can you say?

Illustrations: Kate Cooper. *Quote attributed to Alfred Henry Lewis, 1906.


April 1, 2022


"The airline probably needed to do a better job to make sure its pilots understood exactly what to do in case the aircraft was performing in a unique, unusual way, and how to get out of the problem," former National Transportation Safety Board chair Mark Rosenker tells CBS News in the recent documentary Downfall: The Case Against Boeing (directed by Rory Kennedy, written by Mark Bailey and Keven McAlester, and streaming on Netflix). He then downplays the risk to passengers: "Certainly in the United States they understand how to operate this aircraft."

Rosenker was speaking soon after the 2018 Lion Air crash.

Three oh-my-god wrong things here: the smug assumption that *of course* American personnel are more competent than their Indonesian counterparts (see also contemporaneous articles dissing Indonesia's airline safety record); the presumption that a Boeing aircraft is safe and the crash a non-recurring phenomenon; and the logical sequitur that it must be the pilot's fault. All that went largely unchallenged until the Ethiopian Airlines crash, 19 weeks later. Even then, numerous countries grounded the plane before the US finally followed suit - and even *then* it was ordered by the president, not the Federal Aviation Administration. The FAA's regulatory failure needs its own movie.

As we all now know, a faulty angle-of-attack sensor sent bad data to the aircraft's Maneuvering Characteristics Augmentation System, software intended to stabilize the plane. The pilot did his best in an impossible situation. Even after that became clear, Boeing still blamed the crew for not turning off MCAS. The reason: Boeing didn't tell them it was there. In Congressional testimony, the hero of the Hudson, Captain Sully Sullenberger, summed it up thusly: "We shouldn't expect pilots to have to compensate for flawed designs."

This blame game was a betrayal. One reason aviation is so safe is that all sides have understood that every crash damages everyone. The industry therefore embraced extensive cross-collaboration in which everyone is open about the causes of failures and shares solutions. Blame destroys that culture.

All of this could be a worked example in Jessie Singer's recent book There Are No Accidents: The Deadly Rise of Injury and Disaster - Who Profits and Who Pays the Price. Of course unintended injuries happen, but calling them "accidents" removes culpability and stops us from thinking too much about larger causes. "Accident" means: "nothing to see here".

With the 737 MAX, as press articles suggested at the time and the documentary shows, that larger cause was the demise of Boeing's pride-of-America safety-first engineering culture, which rewarded employees for reporting problems. The rot began in 1997, when a merger meant new bosses from McDonnell Douglas arrived, and, former quality manager John Barnett tells the camera, "Everything you've learned for 30 years is now wrong." Value for shareholders replaced safety-first. Employees were thinned. Planes were made of cheaper materials. Headquarters left Seattle, where engineering was based, for Chicago. The culture of safety gave way to a culture of concealment.

Aviation learned early the importance of ergonomic design to avoid pilot error. This is where the documentary is damning: Boeing's own emails show the company knew pilots needed training for MCAS and never provided it, even when directly asked - by Lion Air itself, in 2017. Boeing executives mocked them for asking, even though its own risk assessments predicted a 737 MAX crash every fifteen years. Boeing bet it could fix, test, and implement MCAS before it caused more trouble. It was wrong.

A fully-loaded plane crash makes headlines and sparks protests and Congressional investigations. Most of the "accidents" Singer writes about, however - traffic crashes, house fires, falls, drownings, and the nearly 840,000 opioid deaths classed as "unintentional injury by drug poisoning" since 1999 (see also Alex Gibney's Crime of the Century) - near-invisibly kill in a statistical trickle. One such was her best friend, killed when a car hit his bike. All these are "accidents" caused by human error. But even with undercounts of everything from shootings to medical errors, "accidents" were the third leading cause of death in the US in 2019, behind heart disease and "malignant neoplasms" (cancer), and ahead of cerebrovascular disease, chronic lower respiratory disease, Alzheimer's, and diabetes. We research all those *and* covid-19, which was number three in 2020. Why not "accidents"? (Note: this all skews American; other wealthy countries are safer.)

Singer's argument resonates because during my ten years as the in-house writer for RISCS, then-director Angela Sasse argued repeatedly that users will do the secure thing if it's the easiest path to follow, and "user errors" are often failed security policies. Sometimes, fixes seem tangential, such as lessening worker stress by hiring more staff, updating computer systems, or ensuring better work-life balance, which may improve security because tired, stressed workers make more mistakes.

Singer argues that the human errors that cause "accidents" are predictable and preventable, and surviving them is a "marker of privilege". Across the US, she finds poverty correlated with "accidental" death and wealth with safety. The pandemic made this explicit. But Singer reminds us that the same forces frame people crossing the street as "jaywalkers" and blame workers killed on factory lines for not following posted rules. Each time, the less powerful are framed as the cause of their own demise. And so it required that second 737 MAX crash and 157 more deaths to ground that plane.

Illustrations: The Boeing 737 MAX (Boeing).


February 25, 2022

Irreparable harm

The anti-doping systems in sports have long intrigued me as a highly visible example of a failed security system. The case of Kamila Valieva at the recent Winter Olympics provides yet another example.

I missed the event itself because: I don't watch the Olympics. The corruption in the bidding process, documented by Andrew Jennings in 1992, was the first turn-off. Joan Ryan's 1995 Little Girls in Pretty Boxes, which investigated abuse in women's gymnastics and figure skating, made the tiny teens in both sports uncomfortable to watch. Then came the death of German luger Nodar Kumaritashvili in a training run at the 2010 Vancouver Winter Olympics. The local organizing committee had been warned that the track was dangerous as designed, and did - nothing. Care for the athletes is really the bottom line.

Anti-doping authorities have long insisted that athletes are responsible for every substance found in their bodies. However, as a minor, Valieva is subject to less harsh rules. When the delayed results of a December test emerged, the Russian Anti-Doping Agency determined that she should be allowed to compete. The World Anti-Doping Agency, the International Olympic Committee, and the International Skating Union all appealed the decision. Instead, the Court of Arbitration for Sport upheld RUSADA's decision, writing in its final report: "...athletes should not be subject to the risk of serious harm occasioned by anti-doping authorities' failure to function effectively at a high level of performance and in a manner designed to protect the integrity of the operation of the Games". In other words, the lab and the anti-doping authorities should have gotten all this resolved out of the world's sight, before the Games began, and because Valieva was a leading contender for the gold medal, denying her the right to compete could do her "irreparable harm".

The overlooked nuance here appears to be that Valieva had been issued a *provisional* suspension. As a "Protected Person" - that is, a child - she does not have to meet the same threshold of proof that adults do. The CAS judges accepted the possibility that, as her legal team argued, her positive test could have been due to contamination or accidental ingestion, as her grandfather used this medication. If you take the view that further investigation may eventually exonerate her, but too late for this Olympics, they have a point. If you take the strict view that the fight against doping requires hard lines to be drawn, then she should have been sent home.

But. But. But. On her doping control form, Valieva had acknowledged taking two more heart medications that aren't banned: L-carnitine, and hypoxen. Why is a 15-year-old testing positive for *three* heart medications? I don't care that two of them are legal.

Similarly: why is RUSADA involved when it's still suspended following Russia's state-sponsored doping scandal, which still has Russian athletes competing under the flag of the Russian Olympic Committee in a pretense that Russia is being punished?

Skating experts have had a lot to say about Valieva's coaches. We know from gymnastics as well as figure skating that the way women's bodies mature through their teens puts the most difficult - and most exciting - acrobatics out of reach. That reality has led to young female athletes being put on strict diets and doped with puberty blockers to keep them pre-pubescent. In her book, Ryan advocated age restrictions and greater oversight for both gymnastics and figure skating. Reviews complained that her more than 100 interviews with current and former gymnasts did not include the world's major successes, but that's the point: the 0.01% for whom the sport brings stardom are not representative. At Slate, Rita Wenxin Wang describes the same "culture of child abuse" Ryan described 25 years ago, pinpointing in particular Valieva's coach, Eteri Tutberidze, whose work with numerous young winning Russian teens won her Russia's Order of Honour from Vladimir Putin in 2018.

At The Ringer, Michael Baumann reports that Tutberidze burns through young skaters at a frantic pace; they wow the world for two or three years, and vanish. That could help explain CAS's conviction that this medal shot was irreplaceable.

At the Guardian, former gymnast Sarah Clarke calls out the IOC for its failure to protect Valieva. Clarke was one of the hundreds of victims of sexual predator Larry Nassar and notes that while Nassar has been jailed his many enablers have never been prosecuted and the IOC never acted against any of the organizations (US Gymnastics, USADA) that looked the other way. Also at the Guardian, Sean Ingle calls the incident clear evidence of abuse of a minor. At Open Democracy, Aiden McQuade calls Valieva's treatment "child trafficking" and an indictment of the entire Olympic movement.

Given that minors should not be put in the position Valieva was, there's just one answer: bring in the age restrictions that Ryan advocated in 1995 and that gymnastics and tennis brought in 25 years ago - tennis, after watching a series of high-profile teenaged stars succumb to injuries and burnout. This is a different definition of "harm".

The sports world has long insisted that it should be self-regulating, independent of all governments. The evidence continues to suggest the conflicts of interest run too deep.

Illustrations: Russian women's figure skating coach Eteri Tutberidze, at the 2018 award ceremony with Vladimir Putin (via Wikimedia).


December 3, 2021

Trust and antitrust

Four years ago, 2021's new Federal Trade Commission chair, Lina Khan, made her name by writing an antitrust analysis of Amazon that made three main points: 1) Amazon is far more dangerously dominant than people realize; 2) antitrust law, which for the last 30 years has used consumer prices as its main criterion, needs reform; and 3) two inventors in a garage can no longer upend dominant companies because they'll either be bought or crushed. She also accused Amazon of leveraging the Marketplace sellers data it collects to develop and promote competing products.

For context, that was the year Amazon bought Whole Foods.

What made Khan's work so startling is that throughout its existence Amazon has been easy to love: unlike Microsoft (system crashes and privacy), Google (search spam and privacy), or Facebook (so many issues), Amazon sends us things we want when we want them. Amazon is the second-most trusted institution in America after the military, according to a 2018 study by Georgetown University and NYU. Rounding out the top five: Google, local police, and colleges and universities. The survey may need some updating.

And yet: recent stories suggest our trust is out of date.

This week, a study by the Institute for Local Self-Reliance claims that Amazon's 20-year-old Marketplace takes even higher commissions - 34% - than the 30% Apple and Google are being investigated for taking from their app stores. The study estimates that Amazon will earn $121 billion from these fees in 2021, double its 2019 takings, and that Amazon's 2020 operating profits from Marketplace reached $24 billion. The company responded to TechCrunch that some of those fees are optional add-ons, while report author Stacy Mitchell counters that "add-ons" such as better keyword search placement and using Amazon's shipping and warehousing have become essential because of the way the company disadvantages sellers who don't "opt" for them. In August, Amazon passed Walmart as the world's largest retailer outside of China. It is the only source of income for 22% of its sellers and the single biggest sales channel for many more; 56% of items sold on Amazon are from third-party sellers.

I started buying from Amazon so long ago that I have an insulated mug they sent every customer as a Christmas gift. Sometime in the last year, I started noticing the frequency of unfamiliar brand names in search results for things like cables, USB sticks, or socks. Smartwool I recognize, but Yuedge, KOOOGEAR, and coskefy? I suddenly note a small - new? - tickbox on the left: "our brands". And now I see: "our brands" this time are ouhos, srclo, SuMade, and Sunew. Is it me, or are these names just plain weird?

Of course I knew Amazon owned Zappos, IMDB, Goodreads, and Abe Books, but this is different. Amazon now has hundreds of house brands, according to a study The Markup published in October. The main finding: Amazon promotes its own brands at others' expense, and being an Amazon brand or Amazon-exclusive is more important to your product's prominence than its star ratings or reviews. Amazon denies doing this. It's a classic antitrust conflict of interest: shoppers rarely look beyond the first five listed products, and the platform owner has full control over the order. The Markup used public records to identify more than 150 Amazon brands and developed a browser add-on that highlights them for you. Personally, I'm more inclined to just shop elsewhere.

Also often overlooked is Amazon's growing advertising business. Insider Intelligence estimates its digital ad revenues in 2021 at $24.47 billion - 55.5% higher than 2020, and representing 11.6% (and rising) of the (US) digital advertising market. In July, noting its rise, CNBC surmised that Amazon's first-party relationship with its customers relieves it of common technology-company privacy issues. This claim - perhaps again based on the unreasonable trust so many of us place in the company - has to be wrong. Amazon collects vast arrays of personal data from search and purchase records, Alexa recordings, home camera videos, and health data from fitness trackers. We provide it voluntarily, but we don't sign blank checks for its use. Based on confidential documents, Reuters reports that Amazon's extensive lobbying operation has "killed or undermined" more than three dozen privacy bills in 25 US states. (The company denies the story and says it has merely opposed poorly crafted privacy bills.)

Privacy may be the thing that really comes back to bite the company. A couple of weeks ago, Will Evans reported at Reveal News, based on a lengthy study of leaked internal documents, that Amazon's retail operation has so much personal data that it has no idea what it has, where it's stored, or how many copies are scattered across its IT estate: "sprawling, fragmented, and promiscuously shared". The short version of a very long story: prioritizing speed of customer service has its downside, in that the company became extraordinarily vulnerable to insider threats such as abuse of access.

Organizations inevitably change over time, particularly when they're as ambitious as this one. The systems and culture that are temporary in startup mode become entrenched and patched, but never fixed. If trust is the land mass we're running on, what happens is we run off the edge of a cliff like Wile E. Coyote without noticing that the ground we trust isn't there any more. Don't look down.

Illustrations: Wile E. Coyote runs off a cliff, while the roadrunner watches.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 25, 2021

Lawful interception

For at least five years the stories have been coming out about the Israeli company NSO Group. For most people, NSO is not a direct threat. For human rights activists, dissidents, lawyers, politicians, journalists, and others targeted by hostile authoritarian states, however, its elite hackers are dangerous. NSO itself says it supplies lawful interception, and only to governments to help catch terrorists.

Now, finally, someone is taking action. Not, as you might reasonably expect, a democratic government defending human rights, but Apple, which is suing the company on the basis that NSO's exploits cost it resources and technical support. Apple has also alerted targets in Thailand, El Salvador, and Uganda.

On Twitter, intelligence analyst Eric Garland picks over the complaint. Among his more scathing quotes: "Defendants are notorious hackers - amoral 21st century mercenaries who have created highly sophisticated cyber-surveillance machinery that invites routine and flagrant abuse", "[its] practices threaten the rules-based international order", and "NSO's products...permit attacks, including from sovereign governments that pay hundreds of millions of dollars to target and attack a tiny fraction of users with information of particular interest to NSO's customers".

The hidden hero in this story is the Canadian research group Citizen Lab, which calls NSO's work "despotism as a service".

Citizen Lab began highlighting NSO's "lawful intercept" software in 2016, when analysis it conducted with Lookout Security showed that a suspicious SMS message forwarded by UAE-based Ahmed Mansoor contained links belonging to NSO Group's infrastructure. The links would have led Mansoor to a chain of zero-day exploits that would have turned his iPhone 6 into a comprehensive, remotely operated spying device. As Citizen Lab wrote, "Some governments cannot resist the temptation to use such tools against political opponents, journalists, and human rights defenders." It went on to note the absence of human rights policies and due diligence at spyware companies; the economic incentives all align the wrong way. An Android version was found shortly afterwards.

Among the targets Citizen Lab found in 2017: Mexican scientists working on obesity and soda consumption, and Amnesty International researchers. In 2018, Citizen Lab reported that Internet scans found 45 countries where Pegasus appeared to be in operation, at least ten of them working cross-border. The same year, it found Pegasus on the phone of Canadian resident Omar Abdulaziz, a Saudi dissident linked to murdered journalist Jamal Khashoggi. In September 2021, Citizen Lab discovered NSO was using a zero-click, zero-day vulnerability in the image rendering library used in Apple's iMessage to take over targets' iOS, watchOS, and macOS devices. Apple patched 1.65 billion products.

Both Privacy International and the Pegasus project, a joint investigation into the company by media outlets including the Guardian and coordinated by Forbidden Stories, have found dozens more examples.

In July 2021, a leaked database of 50,000 phone numbers believed to belong to people of interest to NSO clients since 2016 included human rights activists, business executives, religious figures, academics, journalists, lawyers, and union and government officials around the world. It was not clear if their devices had been hacked. Shortly afterwards, Rappler reported that NSO spyware can successfully infect even the latest, most secure iPhones.

Citizen Lab began tracking litigation and formal complaints against spyware companies in 2018. In a complaint filed in 2019, WhatsApp and Facebook argue that NSO and Q Cyber used their servers to distribute malware; on November 8 the US ninth circuit court of appeals rejected NSO's claim of sovereign immunity, opening the way to discovery. Privacy International promptly urged the British government to send a clear message, given that NSO's target was a UK-based lawyer challenging the company over human rights violations in Mexico and Saudi Arabia.

Some further background is to be found at Lawfare, where shortly *before* the suit was announced, security expert Stephanie Pell and law professor David Kaye discussed how to regulate spyware. In 2019, Kaye wrote a report calling for a moratorium on the sale and transfer of spyware and noting that its makers "are not subject to any effective global or national control". Kaye proposes adding human rights-based export rules to the Wassenaar Arrangement export controls for conventional arms and dual-use technologies. Using Wassenaar, on November 3 the US Commerce Department blacklisted NSO along with fellow Israeli company Candiru, Russian company Positive Technologies, and Singapore-based Computer Security Initiative Consultancy as national security threats. And there are still more, such as the surveillance system sold to Egypt by France-based Thales subsidiary Dassault and Nexa Technologies.

The story proves the point many have made throughout 30 years of fighting for the right to use strong encryption: while governments and their law enforcement agencies insist they need access to keep us safe, there is no magic hole that only "good guys" can use, and any system created to give special access will always end up being abused. We can't rely on the technology companies to defend human rights; that's not in their business model. Governments need to accept and act on the reality that exceptional access for anyone makes everyone everywhere less safe.

Illustrations: Citizen Lab's 2021 map of the distribution of suspected NSO infections (via Democracy Now).


August 6, 2021

Privacy-preserving mass surveillance

Every time it seems like digital rights activists need to stop quoting George Orwell so much, stuff like this happens.

In an abrupt turnaround, on Thursday Apple announced the next stage in the decades-long battle over strong cryptography: after years of resisting law enforcement demands, the company is U-turning to backdoor its cryptography to scan personal devices and cloud stores for child abuse images. EFF sums up the problem nicely: "even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor". Or, more simply, a hole is a hole. Most Orwellian moment: Nicholas Weaver framing it on Lawfare as "privacy-sensitive mass surveillance".

Smartphones, particularly Apple phones, have never really been *our* devices in the way that early personal computers were, because the supplying company has always been able to change the phone's software from afar without permission. Apple's move makes this reality explicit.

The bigger question is: why? Apple hasn't said. But the pressure has been mounting on all the technology companies in the last few years, as an increasing number of governments have been demanding the right of access to encrypted material. As Amie Stepanovich notes on Twitter, another factor may be the "online harms" agenda that began in the UK but has since spread to New Zealand, Canada, and others. The UK's Online Safety bill is already (controversially) in progress, as Ross Anderson predicted in 2018. Child exploitation is a terrible thing; this is still a dangerous policy.

Meanwhile, 2021 is seeing some of the AI hype of the last ten years crash into reality. Two examples: health and autonomous vehicles. At MIT Technology Review, Will Douglas Heaven notes the general failure of AI tools in the pandemic. Several research studies - the British Medical Journal, Nature, and the Turing Institute (PDF) - find that none of the hundreds of algorithms was of any clinical use and some were actively harmful. The biggest problem appears to have been poor-quality training datasets, leading the AI to identify the wrong thing, miss important features, or appear deceptively accurate. Finally, even IBM is admitting that Watson, its Jeopardy! champion, has not become a successful AI medical diagnostician. Medicine is art as well as science; who knew? (Doctors and nurses, obviously.)

As for autonomous vehicles, at Wired Andrew Kersley reports that Amazon is abandoning its drone delivery business. The last year has seen considerable consolidation among entrants in the market for self-driving cars, as the time and resources it will take to achieve them continue to expand. Google's Waymo is nonetheless arguing that the UK should not cap the number of self-driving cars on public roads and the UK-grown Oxbotica is proposing a code of practice for deployment. However, as Christian Wolmar predicted in 2018, the cars are not here. Even some Tesla insiders admit that.

The AI that has "succeeded" - in the narrow sense of being deployed, not in any broader sense - has been the (Orwellian) surveillance and control side of AI - the robots that screen job applications, the automated facial recognition, the AI-driven border controls. The EU, which invests in this stuff, is now proposing AI regulations; if drafted to respect human rights, they could be globally significant.

However, we will also have to ensure the rules aren't abused against us. Also this week, Facebook blocked the tool a group of New York University social scientists were using to study the company's ad targeting, along with the researchers' personal accounts. The "user privacy" excuse: Cambridge Analytica. The scandal around CA's 2015 scraping of a bunch of personal data via an app users voluntarily downloaded eventually cost Facebook $5 billion in its 2019 settlement with the US Federal Trade Commission, which also required it to ensure this sort of thing didn't happen again. The NYU researchers' Ad Observatory was collecting advertising data via a browser extension users opted to install. They were, Facebook says, scraping data. Potato, potahto!

People who aren't Facebook's lawyers see the two situations as entirely different. CA was building voter profiles to study how to manipulate them. The Ad Observatory was deliberately avoiding collecting personal data; instead, they were collecting displayed ads in order to study their political impact and identify who pays for them. Potato, *tomahto*.

One reason for the universal skepticism is that this move has companions - Facebook has also limited journalist access to CrowdTangle, a data tool that helped establish that far-right news content generates higher numbers of interactions than other types and suffers no penalty for being full of misinformation. In addition, at the Guardian, Chris McGreal reports that InfluenceMap finds fossil fuel companies are using Facebook ads to promote oil and gas use as part of remediating climate change (have some clean coal).

Facebook's response has been to claim it's committed to transparency and blame the FTC. The FTC was not amused: "Had you honored your commitment to contact us in advance, we would have pointed out that the consent decree does not bar Facebook from creating exceptions for good-faith research in the public interest." The FTC knows Orwellian fiction when it sees it.

Illustrations: Orwell's house on Portobello Road, complete with CCTV camera.


July 9, 2021

The border-industrial complex*

Most people do not realize how few rights they have at the border of any country.

I thought I did know: not much. EFF has campaigned for years against unwarranted US border searches of mobile phones, where "border" legally extends 100 miles into the country. If you think, well, it's a big country, it turns out that two-thirds of the US population lives within that 100 miles.

No one ever knows what the border of their own country is like for non-citizens. This is one reason it's easy for countries to make their borders hostile: non-citizens have no vote and the people who do have a vote assume hostile immigration guards only exist in the countries they visit. British people have no idea what it's like to grapple with the Home Office, just as most Americans have no experience of ICE. Datafication, however, seems likely to eventually make the surveillance aspect of modern border passage universal. At Papers, Please, Edward Hasbrouck charts the transformation of travel from right to privilege.

In the UK, the Open Rights Group and the3million have jointly taken the government to court over provisions in the post-Brexit GDPR-enacting Data Protection Act (2018) that exempted the Home Office from subject access rights. The Home Office invoked the exemption in more than 70% of the 19,305 data access requests made to its office in 2020, while losing 75% of the appeals against its rulings. In May, ORG and the3million won on appeal.

This week's announced Nationality and Borders Bill proposes to make it harder for refugees to enter the country and, according to analyses by the Refugee Council and Statewatch, make many of them - and anyone who assists them - into criminals.

Refugees have long had to verify their identity in the UK by providing biometrics. On top of that, the cash support they're given comes in the form of prepaid "Aspen" cards, which means the Home Office can closely monitor both their spending and their location, and cut off assistance at will, as Privacy International finds. Scotland-based Positive Action calls the results "bureaucratic slow violence".

That's the stuff I knew. I learned a lot more at this week's workshop run by Security Flows, which studies how datafication is transforming borders. The short version: refugees are extensively dataveilled by both the national authorities making life-changing decisions about them and the aid agencies supposed to be helping them, like the UN High Commissioner for Refugees (UNHCR). Recently, Human Rights Watch reported that UNHCR had broken its own policy guidelines by passing data to Myanmar that had been submitted by more than 830,000 ethnic Rohingya refugees who registered in Bangladeshi camps for the "smart" ID cards necessary to access aid and essential services.

In a 2020 study of the flow of iris scans submitted by Syrian refugees in Jordan, Aalborg associate professor Martin Lemberg-Pedersen found that private companies are increasingly involved in providing humanitarian agencies with expertise, funding, and new ideas - but that those partnerships risk turning their work into an experimental lab. He also finds that UN agencies' legal immunity coupled with the absence of common standards for data protection among NGOs and states in the global South leave gaps he dubs "loopholes of externalization" that allow the technology companies to evade accountability.

At the 2020 Computers, Privacy, and Data Protection conference, a small group huddled to brainstorm about researching the "creepy" AI-related technologies the EU was funding. Border security gives those technologies a rare proving ground, invisible to most people and justified by "national security". Home Secretary Priti Patel's proposal to penalize the use of illegal routes to the UK is an example, making desperate people into criminals. People like many of the parents I knew growing up in 1960s New York.

The EU's immigration agencies are particularly obscure. I had encountered Warsaw-based Frontex, the European Border and Coast Guard Agency, which manages operational control of the Schengen Area, but not EU-LISA, which since 2012 has managed the relevant large-scale IT systems: SIS II, VIS, EURODAC, and ETIAS (like the US's ESTA). Unappetizing alphabet soup whose errors few know how to challenge.

The behind-the-scenes process the workshop described sees the largest suppliers of ICT, biometrics, aerospace, and defense provide consultants who help define work plans and formulate the calls to which their companies then respond. Javier Sánchez-Monedero's 2018 paper for the Data Justice Lab begins to trace those vendors, a mix of well-known and unknown. A forthcoming follow-up focuses on the economics and lobbying behind all these databases.

In a recent paper on financing border wars, Mark Akkerman analyzes the economic interests behind border security expansion and observes that "Migration will be one of the defining human rights issues of the 21st century." We know it will increase, increasingly driven by climate change; the fires that engulfed the Canadian village of Lytton, BC on July 1 made 1,000 people homeless, and that's just the beginning.

It's easy to ignore the surveillance and control directed at refugees in the belief that they are not us. But take the UK's push to create a hostile environment by pushing border checks into schools, workplaces, and health services as your guide, and it's obvious: their surveillance will be your surveillance.

*Credit the phrase "border-industrial complex" to Luisa Izuzquiza.

Illustrations: Rohingya refugee camp in Bangladesh, 2020 (by Rocky Masum, via Wikimedia).


June 11, 2021

The fragility of strangers

This week, someone you've never met changed the configuration settings on their individual account with a company you've never heard of and knocked out 85% of that company's network. Dumb stuff like this probably happens all the time without attracting attention, but in this case the company, Fastly, is a cloud provider that also runs an intermediary content delivery network intended to speed up Internet connections. Result: people all over the world were unable to reach myriad major Internet sites such as Amazon, Twitter, Reddit, and the Guardian for about an hour.

The proximate cause of these outages, Fastly has now told the world, was a bug that was introduced (note lack of agency) into its software code in mid-May, which lay dormant until someone did something completely normal to trigger it.

In the early days, we all assumed that as more companies came onstream and admins built experience and expertise, this sort of thing would happen less and less. But as the mad complexity of our computer systems and networks continues to increase - Internet of Things! AI! - now it's more likely that stuff like this will also increase, will be harder to debug, and will cause far more ancillary damage - and that damage will not be limited to the virtual world. A single random human, accidentally or intentionally, is now capable of creating physical-world damage at scale.

Ransomware attacks earlier this month illustrate this. Attackers' use of a single leaked password linked to a disused VPN account in the systems that run the Colonial Pipeline compromised gasoline supplies down a large swathe of the US east coast. Near-simultaneously, a ransomware attack on the world's largest meatpacker, JBS, briefly halted production, threatening food security in North America and Australia. In December, an attack on network management software supplied by the previously little-known SolarWinds compromised more than 18,000 companies and government agencies. In all these cases, random strangers reached out across the world and affected millions of personal lives by leveraging a vulnerability inside a company that is not widely known but that provides crucial services to companies we do know and use every day.

An ordinary person just trying to live their life has no defense except to have backups of everything - not just data, but service providers and suppliers. Most people either can't afford that or don't have access to alternatives, which means that precarious lives are made even more so by hidden vulnerabilities they can't assess.

An earlier example: in 2012, journalist Matt Honan's data was entirely wiped out through an attack that leveraged quirks of two unrelated services - Apple and Amazon - against each other to seize control of his email address and delete all his data. Moral: data "in the cloud" is not a backup, even if the hosting company says they keep backups. Second moral: if there is a vulnerability, someone will find it, sometimes for motives you would never guess.

If memory serves, Akamai, founded in 1998, was the first CDN. The idea was that even though the Internet means the death of distance, physics matters. Michael Lewis captured this principle in detail in his book Flash Boys, in which a handful of Wall Street types pay extraordinary amounts to shave a few split-seconds off the time it takes to make a trade by using a ruler and map to send fiber optic cables along the shortest possible route between exchanges. Just so, CDNs cache frequently accessed content on mirror servers around the world. When you call up one of those pages, it, or frequently-used parts of it in the case of dynamically assembled pages, is served up from the nearest of those servers, rather than from the distant originator. By now, there are dozens of these networks and what they do has vastly increased in sophistication, just as the web itself has. A really major outlet like Amazon will have contracts with more than one, but apparently switching from one to the other isn't always easy, and because so many outages are very short it's often easier to wait them out. Not in this case!

At The Conversation, criminology professor David Wall also sees this outage as a sign of the future for the same reason I do: centralization and consolidation have concentrated, and continue to concentrate, the Internet onto a shrinking number of providers, each of them a single point of widespread failure. Yes, it's true that the Internet was built to withstand a bomb outage - but as we have been writing for 20 years now, this Internet is not that Internet. The path to today's Internet has led from the decentralized era of Usenet, IRC, and own-your-own mail server to web hosting farms to the walled gardens of Facebook, Google, and Apple, and the AI-dominating Big Nine. In 2013, Edward Snowden's revelations made plain how well that suits surveillance-hungry governments, and it's only gotten worse since, as companies seek to insert themselves into every aspect of our lives - intermediaries that bring us a raft of new insecurities that we have no time or ability to audit.

Increasing complexity, hidden intermediation, increasing numbers of interferers, and increasing scale all add up to a brittle and fragile Internet, onto which we continue to pile all our most critical services and activities. What could possibly go wrong?

Illustrations: Map of the Colonial Pipeline.


April 2, 2021

Medical apartheid

Ever since 1952, when Clarence Willcock took the British government to court to force the end of wartime identity cards, UK governments have repeatedly tried to bring them back, always claiming they would solve the most recent public crisis. The last effort ended in 2010 after a five-year battle. This backdrop is a key factor in the distrust that's greeting government proposals for "vaccination passports" (previously immunity passports). Yesterday, the Guardian reported that British prime minister Boris Johnson backs certificates that show whether you've been vaccinated, have had covid and recovered, or have had a test. An interim report will be published on Monday; trials later this month will see attendees at football matches required to produce proof of negative lateral flow tests 24 hours before the game and on entry.

Simultaneously, England chief medical officer Chris Whitty told the Royal Society of Medicine that most experts think covid will become like the flu, a seasonal disease that must be perennially managed.

Whitty's statement is crucial because it means we cannot assume that the forthcoming proposal will be temporary. A deeply flawed measure in a crisis is dangerous; one that persists indefinitely is even more so. Particularly when, as this morning, culture secretary Oliver Dowden tries to apply spin: "This is not about a vaccine passport, this is about looking at ways of proving that you are covid secure." Rebranding as "covid certificates" changes nothing.

Privacy advocates and human rights NGOs saw this coming. In December, Privacy International warned that a data grab in the guise of immunity passports will undermine trust and confidence while they're most needed. "Until everyone has access to an effective vaccine, any system requiring a passport for entry or service will be unfair." We are a long, long way from that universal access and likely to remain so; today's vaccines will have to be updated, perhaps as soon as September. There is substantial, but not enough, parliamentary opposition.

A grassroots Labour discussion Wednesday night showed this will become yet another highly polarized debate. Opponents and proponents combine issues of freedom, safety, medical efficacy, and public health in unpredictable ways. Many wanted safety - "You have no civil liberties if you are dead," one person said; others foresaw segregation, discrimination, and exclusion; still others cited British norms in opposing making compulsory either vaccinations or carrying any sort of "papers" (including phone apps).

Aside from some specific use cases - international travel, a narrow range of jobs - vaccination passports in daily life are a bad idea medically, logistically, economically, ethically, and functionally. Proponents' concerns can be met in better - and fairer - ways.

The Independent SAGE advisory group, especially Susan Michie, has warned repeatedly that vaccination passports are not a good solution for daily life. The added pressure to accept vaccination will increase distrust, she has said, particularly among victims of structural racism.

Instead of trying to identify which people are safe, she argues that the government should be guiding employers, businesses, schools, shops, and entertainment venues to make their premises safer - see for example the CDC's advice on ventilation and list of tools. Doing so would not only help prevent the spread of covid and keep *everyone* safe but also help prevent the spread of flu and other pathogens. Vaccination passports won't do any of that. "It again puts the burden on individuals instead of spaces," she said last night in the Labour discussion. More important, high-risk individuals and those who can't be vaccinated will be better protected by safer spaces than by documentation.

In the same discussion, Big Brother Watch's Silkie Carlo predicted that it won't make sense to have vaccination passports and then use them in only a few places. "It will be a huge infrastructure with checkpoints everywhere," she predicted, calling it "one of the civil liberties threats of all time" and "medical apartheid" and imagining two segregated lines of entry to every venue. While her vision is dramatic, parts of it don't go far enough: imagine when this all merges with systems already in place to bar access to "bad people". Carlo may sound unduly paranoid, but it's also true that for decades successive British governments at every decision point have chosen the surveillance path.

We have good reason to be suspicious of this government's motives. Throughout the last year, Johnson has been looking for a magic bullet that will fix everything. First it was contact tracing apps (failed through irrelevance), then test and trace (failing in the absence of "and isolate and support"), now vaccinations. Other than vaccinations, which have gone well because the rollout was given to the NHS, these failed high-tech approaches have handed vast sums of public money to private contractors. If by "vaccination certificates" the government means the cards the NHS gives fully-vaccinated individuals listing the shots they've had, the dates, and the manufacturer and lot number, well fine. Those are useful for those rare situations where proof is really needed and for our own information in case of future issues, it's simple, and not particularly expensive. If the government means a biometric database system that, as Michie says, individualizes the risk while relieving venues of responsibility, just no.

Illustrations: The Swiss Cheese Respiratory Virus Defence, created by virologist Ian McKay.


December 18, 2020

Ghost hackers

Screenshot from 2020-12-17 23-55-51.pngYears ago, a by-then-retired former teenaged hacker listened to my account of what mid-1990s hackers were saying, and sighed at how little things had changed since his 1980s heyday. "It's still that thing of doing it day after day," he said, or more or less, meaning tedious daily hours doggedly poking around sites, trying logins, keeping meticulous records, and matching a user name collected one year with a password spotted the next. Grueling, painstaking work for which you could get arrested. Which he eventually was, leading to said retirement.

Today's young hackers, brought up on video games, are used to such lengthy keyboard stints to grind out points. Does that make them better suited for the kind of work my former hacker, brought up on pinball machines, described? Not necessarily.

In a paper presented this week at the 2020 Workshop on the Economics of Information Security (WEIS), Cambridge postdoc Ben Collier and his co-authors Richard Clayton, Alice Hutchings, and Daniel R. Thomas lay out the working lives of the vast majority of today's hackers. Attracted by the idea of being part of "cool cybercrime", they find themselves doing low-level tech support, customer service, and 24/7 server maintenance for well-worn exploits, all under threat of disruption from intermediaries, law enforcement, and bugs left by incompetent coders, while impatient, distrustful customers fume at them. Worse, this kind of work doesn't attract the admiration of other hackers, and these workers don't get to make creative leaps. It's just routine, boring office work, nothing like the hacker ethic they embraced or the hacker culture's self-image, which hasn't changed in any real sense since the 1990s, when it was described to me with evangelical fervor as thrilling.

The disappointment "fundamentally changes the experience of being in this world," Collier said. Isn't that always the way when your hobby becomes your day job?

These guys are little different from the "ghost workers" Mary L. Gray and Siddharth Suri profile in their 2019 book, Ghost Work. Unlike the millions of invisible fixers and maintainers for companies like Uber, Amazon, and every other company that boasts of its special "AI" sauce, however, this group didn't expect these conditions. In the legitimate economy, these workers occupy the low-status bottom of the hierarchy and have little prospect of attaining the respect and perks of the engineers, research scientists, and top-level management who get all the visibility. The illegitimate economy is no different.

The authors got their idea from a leap of logic that seems obvious in retrospect: the gradual transition from the exploits of lone bedroom hackers to organized cybercrime-as-a-service. What was high-impact, low-volume crime is now high-volume crime, which requires a large, built infrastructure. "True scaling up needs lots of invisible supportive labor to enable true scale." Think of the electrical or water grid in a large city.

Based on their forays onto cybercrime forums and numerous interviews, the authors find that neither the public at large nor the hackers themselves have adapted their mental models. "The heart of the subculture is still based on this idea of the mythic, lone, high-skilled hacker," Collier said. "It looks nothing like this invisible maintenance work." Or, of course, like this week's discovery that nation-state hackers have penetrated numerous US federal agencies.

In other words, the work these hackers are doing is exactly the same as life as a sysadmin for a legitimate business - with the same "deep, deep boredom" but with the added difficulty of how to spend their earnings. One of their many interviewees was able to monetize his efforts unusually well. "He ran out of stuff to buy himself and his friends, and finally quit because he was piling up Amazon gift cards in shoeboxes under his bed and it stressed him out." At one point, he even cut up thousands of dollars' worth of the cards "just for something to do". Closed to him: using the money to buy a house or education and improve his life.

WEIS began in 2002 as a unique effort to apply familiar concepts of economics - incentives, externalities, asymmetric information, and moral hazard - to information security, understanding that despite the growing threats no organization has infinite resources. Over the years, economists have increasingly taken an interest. The result is a cross-the-streams event where a study like this one may be followed by a math-heavy analysis of the relationship between pricing and security-related business strategies, each offering possibilities for new approaches.

Collier concluded that arresting, charging, and convicting these guys is counter-productive because, "It's important not to block their escape routes. They often get in because the main routes in society are blocked." He added, "The systems of value and capital and social status that exist in the world are not working for loads of people, or they don't have access so they make their own alternatives." Cracking down and conducting mass arrests also blocks those routes back into mainstream society.

Would today's teens choose the hacking life if they really understood what the job was going to be like? As someone commented, at the next big arrest perhaps the press release should stress the number of hours the miscreants worked, the sub-McDonald's hourly pay they eventually earned, and the permanent anomie induced by their disappointment, disillusionment, and alienation.

Illustrations: Ben Collier, presenting "Cybercrime is (often) boring: maintaining the infrastructure of cybercrime economies" at WEIS 2020.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 6, 2020

Crypto in review

Caspar_Bowden-IMG_8994-2013-rama.jpgBy my count, this is net.wars number 990; the first one appeared on November 2, 2001. If you added in its predecessors - net.wars-the-book and its sequel, From Anarchy to Power, as well as the more direct precursors, the news analysis pieces I wrote for the Daily Telegraph between 1997 and early 2001 - you'd get a different number I don't know how to calculate. Therefore: this is net.wars #990, and the run-up to 1,000 seems a good moment to review some durable themes of the last 20 years via what we wrote at the time.

net.wars #1 has, sadly, barely aged; it could almost be published today unchanged. It was a ticked-off response to former Home Secretary Jack Straw, who weeks after the 9/11 attacks told Britain's radio audience that the people who had opposed key escrow were now realizing they'd been naive. We were not! The issue Straw was talking about was the use of strong cryptography, and "key escrow" was the rejected plan to require each individual to deposit a copy of their cryptographic key with a trusted third party. "Trusted", on its surface, meant someone *we* trusted to guard our privacy; in subtext it meant someone the government trusted to disclose the key when ordered to do so - the digital equivalent of being required to leave a copy of the key to your house with the local police in case they wanted to investigate you. The last half of the 1990s saw an extended public debate that concluded with key escrow being dropped from the final version of the Regulation of Investigatory Powers Act (2000) in favor of requiring individuals to produce cleartext when law enforcement requires it. A 2014 piece for IEEE Security & Privacy explains RIPA and its successors and the communications surveillance framework they've created.

With RIPA's passage, a lot of us thought the matter was settled. We were so, so wrong, though it did go quiet for a decade. Surveillance-related public controversy appeared to shift, first to data retention and then to ID cards, which were proposed soon after the 2005 attacks on London's tube and finally canned in 2010, when the incoming coalition government found a note from the outgoing Chief Secretary to the Treasury: "There's no money".

As the world discovered in 2013, when Edward Snowden dropped his revelations of government spying, the security services had taken the crypto debate into their own hands, undermining standards and making backroom access deals. The Internet community reacted quickly with first advice and then with technical remediation.

In a sense, though, the joke was on us. For many netheads, crypto was a cause in the 1990s; the standard advice was that we should all encrypt all our email so the important stuff wouldn't stand out. To make that a reality, however, crypto software had to be frictionless to use - and the developers of the day were never interested enough in usability to make it so. In 2011, after I was asked to write an instruction manual for installing PGP (or GPG), the lack of usability was maddening enough for me to write: "There are so many details you can get wrong to mess the whole thing up that if this stuff were a form of contraception desperate parents would be giving babies away on street corners."

The only really successful crypto at that point was in backend protocols - SSL and its successor TLS, which secure communications in transit and underpin HTTPS, the encrypted web connections that protect ecommerce transactions - and the encryption built into mobile phone standards. Much has changed since, most notably Facebook's and Apple's decisions to protect user messages and data, at a stroke turning crypto on for billions of users. The result, as Ross Anderson predicted in 2018, was to shift the focus of governments' demands for access toward hacking devices rather than cracking individual messages.

The arguments have not changed in all those years; they were helpfully collated by a group of senior security experts in 2015 in the report Keys Under Doormats (PDF). Encryption is mathematics; you cannot create a hole that only "good guys" can use. Everyone wants uncrackable encryption for themselves - but to be able to penetrate everyone else's. That scenario is no more possible than the suggestion some of Donald Trump's team are making that the same votes that are electing Republican senators and Congresspeople are not legally valid when applied to the presidency.

Nonetheless, we've heard repeated calls from law enforcement for breakable encryption: in 2015, 2017, and, most recently, six weeks ago. In between, while complaining that communications were going dark, in 2016 the FBI tried to force Apple to crack its own phones to enable an investigation. When the FBI found someone to crack it to order, Apple turned on end-to-end encryption.

I no longer believe that this dispute can be settled. Because it is built on logic proofs, mathematics will always be hard, non-negotiable, and unyielding, and because of their culture and responsibilities security services and law enforcement will always want more access. For individuals, before you adopt security precautions, think through your threat model and remember that most attacks will target the endpoints, where cleartext is inevitable. For nations, remember whatever holes you poke in others' security will be driven through in your own.

Illustrations: The late Caspar Bowden (1961-2015), who did so much to improve and explain surveillance policy in general and crypto policy in particular (via rama at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 28, 2020

Through the mousehole

Rodchenkov-Fogel-Icarus.pngIt's been obvious for a long time that if you want to study a thoroughly dysfunctional security system you could hardly do better than doping control in sports. Anti-doping has it all: perverse incentives, wrong assumptions, conflicts of interest, and highly motivated opponents. If you doubt this premise, consider: none of the highest-profile doping cases were caught by the anti-doping system. Lance Armstrong (2010) was outed by a combination of dogged journalistic reporting by David Walsh and admissions by his former teammate Floyd Landis; systemic Russian doping (2014) was uncovered by journalist Hajo Seppelt, who has also broadcast investigations of China, Kenya, Germany, and weightlifting; BALCO (2002) was exposed by a coach who sent samples to the UCLA anti-doping lab; and Willy Voet (1998), soigneur to the Festina cycling team, was busted by French Customs.

I bring this up - again - because two insider tales of the Russian scandal have just been published. The first, The Russian Affair, by David Walsh, tells the story of Vitaly and Yuliya Stepanov, who provided Seppelt with material for The Secrets of Doping: How Russia Makes Its Winners (2014); the second, The Rodchenkov Affair, is a first-person account of the Russian system by Grigory Rodchenkov, from 2006 to 2015 the director of Moscow's testing lab. Together or separately, these books explain the Russian context that helped foster its particular doping culture. They also show an anti-doping system that isn't fit for purpose.

The Russian Affair is as much the story of the Stepanovs' marriage as of contrasting and complementary views of the doping system. Vitaly was an idealistic young recruit at the Russian Anti-Doping Agency; Yuliya Rusanova was an aspiring athlete willing to do anything to escape the desperate unhappiness and poverty of her native Kursk. While she lectured him about not understanding "the real world", he continued hopefully writing letters to contacts at the World Anti-Doping Agency describing the violations he was seeing. Yuliya came to see the exploitation of a system that protects winners but lets others test positive to make the system look functional. Under Vitaly's guidance, she recorded the revealing conversations that Seppelt's documentary featured. Rodchenkov makes a cameo appearance; the Stepanovs believed he was paid to protect specific athletes from positive tests.

In the vastly more entertaining The Rodchenkov Affair, Rodchenkov denies receiving payment, calling Yuliya a "has-been" he'd never met. Instead, Rodchenkov describes developing new methods of detecting performance-enhancing substances, then finding methods to beat those same tests. If the nearest analogue to the Walsh-described Stepanovs' marriage is George and Kellyanne Conway, Rodchenkov's story is straight out of Philip K. Dick's A Scanner Darkly, in which an undercover narcotics agent is assigned to spy on himself.

Russia has advantages for dopers. For example, its enormous land mass allows athletes to sequester themselves in training camps so remote they are out of range for testers. More important may be the pervasive sense of resignation that Vitaly Stepanov describes as his boss slashes WADA's 80 English pages of anti-doping protocols to ten in Russian translation because various aspects are "not possible to do in Russia". Rodchenkov, meanwhile, plans the Sochi anti-doping lab that the McLaren report later made famous for swapping positive samples for pre-frozen clean ones through a specially built "mousehole" operated by the FSB.

If you view this whole thing as a security system, it's clear that WADA's threat model was too simple, something like "athletes dope". Even in 1988, when Ben Johnson tested positive at the Seoul Olympics, it was obvious that everyone's interests depended on not catching star athletes. International sports depend on their stars - as do their families, coaches, support staff, event promoters, governments, fans, and even other athletes, who know the star attractions make their own careers possible. Anti-doping agencies must thread their way through this thicket.

In Rodchenkov's description, WADA appears inept, even without its failure to recognize this ecosystem. In one passage, Rodchenkov writes about the double-blind samples the IOC planted from time to time to test the lab: "Those DBs were easily detectable because they contained ridiculous compounds...which were never seen in doping control routine analysis." In another, he says: "[WADA] also assumed that all accredited laboratories were similarly competent, which was not the case. Some WADA-accredited laboratories were just sloppy, and would reach out to other countries' laboratories when they had to process quality control samples to gain re-accreditation."

Flaws are always easy to find once you know they're there. But WADA was founded in 1999. Just six years earlier, the opening of the Stasi records exposed the comprehensive East German system. The possibility of state involvement should have been high on the threat list from the beginning, as should the role of coaches and doctors who guide successive athletes to success.

It's hard to believe this system can be successfully reformed. Incentives to dope will always be with us, just as it would be impossible to eliminate all incentives to break into computer systems. Rodchenkov, who frequently references Orwell's 1984, insists that athletes dope because otherwise their bodies cannot cope with the necessary training, which he contends is more physically damaging than doping. This much is clear: a system that insists on autonomy while failing to fulfill its most basic mission is wrong. Small wonder that Rodchenkov concludes that sport will never be clean.

Illustrations: Grigory Rodchenkov and Bryan Fogel in Fogel's documentary, Icarus.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 17, 2020

Flying blind

twitter-bird-flipped.jpgQuick update to last week: the European Court of Justice has ruled in favor of Max Schrems a second time and struck down Privacy Shield, the legal framework that allowed data transfers from the EU to the US (and other third countries); businesses can still use Standard Contractual Clauses, subject to some conditions. TL;DR: Sucks even more to be the UK, caught between EU and US demands regarding data flows. On to this week...

This week's Twitter hack is scary. Not, obviously, because it was a hack; by this time we ought to be too used to systems being penetrated by attackers to panic. We know technology is insecure. That's not news.

The big fear should be the unused potential.

Twitter's influence has always been disproportionate to its size. By Big Social Media standards, Twitter is small - a mere snip at 330 million users, barely bigger than Pinterest. TikTok has 800 million, Instagram has 1 billion, YouTube 2 billion, and Facebook 2.5 billion. But Twitter is addictively home to academics, politicians, and entertainers - and journalists, who monitor Twitter constantly for developments to report on. A lot of people feel unable to mention Twitter these days without stressing how much of a sinkhole they think it is (the equivalent of, in decades past, boasting how little TV you watched), but for public information in the West Twitter is a nerve center. We talk a lot about how Facebook got Trump elected, but it was Twitter that got him those acres of free TV and print coverage.

I missed most of the outage. According to Vice, on Wednesday similarly-worded tweets directing followers to send money in the form of bitcoin began appearing in the feeds of the high-profile, high-follower accounts belonging to Joe Biden, Elon Musk, Uber, Apple, Bill Gates, and others. Twitter had to shut down a fair bit of the service for a while and block verified users - high-profile public figures that Twitter deems important enough to make sure they're not fakes - from posting. The tweets have been removed; some people who tried to change their passwords - presumably following standard practice in a data breach - got locked out; and some people must have sent money, since Vice reported the Bitcoin wallet in question had collected $100,000. But overall not much harm was done.

This time.

Most people, when they think about their social media account or email being hacked, think first of the risk that their messages will be read. This is always a risk, and it's a reason not to post your most sensitive secrets to technology and services you don't control. But the even bigger problem many people overlook is exactly what the attackers did here: spoofed messages that fool friends and contacts - in this case, the wider public - into thinking they're genuine. This is not a new problem; hackers have sought to take advantage of trust relationships to mount attacks ever since Kevin Mitnick dubbed the practice "social engineering" circa 1990.

In his detailed preliminary study of the attack, Brian Krebs suggests the attack likely came from people who've "typically specialized in hijacking social media accounts via SIM swapping". Whoever did it and whatever route they took, it seems clear they gained access to Twitter's admin tools, which enabled them to change the email address associated with accounts and either turn off or capture the two-factor authentication that might alert the actual owners. (And if, like many people, you operate Twitter, email, and 2FA on your phone, you actually don't *have* two factors, you have one single point of failure - your phone. Do not do this if you can avoid it.)
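To see why the phone is such a tempting single point of failure, consider how the most common second factor actually works. A code-generator app is just the standard TOTP algorithm (RFC 6238): the six-digit code is computed deterministically from a shared secret stored on the device, so whoever holds that device - or that secret - holds the "second" factor too. A minimal sketch (the function name and parameters are mine; the algorithm is the RFC's):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at_time is None else at_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The code depends only on the stored secret and the clock: two devices
# holding the same secret always agree - which is the whole point, and
# the whole problem when the secret lives next to the accounts it protects.
```

Run against the RFC 6238 test secret (ASCII "12345678901234567890", base32-encoded), `totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at_time=59, digits=8)` reproduces the published vector "94287082".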

In the process of trying to manage the breach, Eric Geller reports at Politico, Twitter silenced accounts belonging to numerous politicians including US president Donald Trump and the US National Weather Service tornado alerts, among many others that routinely post public information, in some cases for more than 24 hours. You can argue that some of these aren't much of a loss, but the underlying problem is a critical one, in that organizations and individuals of all stripes use Twitter as an official outlet for public information. Forget money: deployed with greater subtlety at the right time, such an attack could change the outcome of elections by announcing false information about polling places (Geller's suggestion), or kill people simply by suppressing critical public safety warnings.

What governments and others don't appear to have realized is that in relying on Twitter as a conduit to the public they are effectively outsourcing their security to it without being in a position to audit or set standards beyond those that apply to any public company. Twitter, on the other hand, should have had more sense: if it created special security arrangements for Trump's account, as the New York Times says it did, why didn't it occur to the company to come up with a workable system for all its accounts? How could it not have noticed the need? The recurring election problems around the world weren't enough of a clue?

Compared to what the attackers *could* have wanted, stealing some money is trivial. Twitter, like others before it, will have to rethink its security to match its impact.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 1, 2020


china-alihealth.jpegAround 2010, when smartphones took off (Apple's iPhone user base grew from 8 million in 2009 to 100 million in early 2011), "There's an app for that" was a joke widely acknowledged as true. Faced with a pandemic, many countries are looking to develop apps that might offer shortcuts to reaching some variant of "old normal". The UK is no exception, and much of this week has been filled with debate about the nascent contact tracing app being developed by the National Health Service's digital arm, NHSx. The logic is simple: since John Snow investigated cholera in 1854, contact tracing has remained slow, labor-intensive, and dependent on infected individuals' ability to remember all their contacts. With a contagious virus that spreads promiscuously to strangers who happen to share your space for a time, individual memory isn't much help. Surely we can do better. We have technology!

In 2011, Jon Crowcroft and Eiko Yoneki had that same thought. Their Fluphone proved the concept, even helping identify asymptomatic superspreaders through the social graph of contacts developing the illness.

In March, China's Alipay Health got our attention. This all-seeing, all-knowing, data-mining, risk score-outputting app whose green, yellow, and red QR codes are inspected by police at Chinese metro stations, workplaces, and other public areas seeks to control the virus's movements by controlling people's access. The widespread Western reaction, to a first approximation: "Ugh!" We are increasingly likely to end up with something similar, but with very different enforcement and a layer of "democratic voluntary" - *sort* of China, but with plausible deniability.

Or we may not. This is a fluid situation!

This week has been filled with debate about why the UK's National Health Service's digital arm (NHSx) is rolling its own app when Google and Apple are collaborating on a native contact-tracing platform. Italy and Spain have decided to use it; Germany, which was planning to build its own app, pivoted abruptly, and Australia and Singapore (whose open source app, TraceTogether, was finding some international adoption) are switching. France balked, calling Apple "uncooperative".

France wants a centralized system, in which matching exposure notifications is performed on a government-owned central server. That means trusting the government to protect it adequately and not start saying, "Oooh, data, we could do stuff with that!" In a decentralized system, the contact matching is performed on the device itself, with the results released to health officials if the user decides to do so. Apple and Google are refusing to support centralized systems, largely because in many of the countries where iOS and Android phones are sold they pose significant dangers for the population. Essentially, centralized systems ask you to place a lot more trust in your government.
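The privacy difference is easy to see in miniature. In the decentralized design - sketched here loosely after the Apple/Google approach; the 16-byte tokens and the function names are illustrative assumptions, not the real protocol - phones broadcast short-lived random tokens, remember the tokens they hear, and later compare that local list against the published tokens of confirmed cases. The social graph never leaves the handset:

```python
import secrets

def new_token():
    """A short-lived random identifier a phone broadcasts over Bluetooth.
    (16 bytes is an assumption; real designs rotate keys on a schedule.)"""
    return secrets.token_bytes(16)

def exposed(tokens_heard, published_infected):
    """Decentralized matching: the comparison runs on the device itself,
    against a downloaded list of tokens uploaded by confirmed cases."""
    return not set(tokens_heard).isdisjoint(published_infected)

# Simulate two phones sharing a space: each logs the other's broadcasts.
alice_broadcast = [new_token() for _ in range(3)]
bob_heard = list(alice_broadcast)  # Bob's phone overheard Alice

# Alice tests positive and consents to publish her recent tokens.
infected_tokens = set(alice_broadcast)

print(exposed(bob_heard, infected_tokens))      # True: Bob is notified
print(exposed([new_token()], infected_tokens))  # False: a stranger is not
```

A centralized design would instead upload `tokens_heard` to a government server and run the same comparison there - identical arithmetic, very different trust model.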

All this fed into the work of Parliament's Human Rights Committee, which spent the week holding hearings on the human rights implications of contact tracing apps. (See Michael Veale's and Orla Lynskey's written evidence and oral testimony.) In its report, the committee concluded that the level of data being collected isn't justifiable without clear efficacy and benefits; that rights-protecting legislation is needed (helpfully, Lilian Edwards has spearheaded an effort to produce model safeguarding legislation); that an independent oversight body is needed, along with a Digital Contact Tracing Human Rights Commissioner; that the app's efficacy and data security and privacy should be reviewed every 21 days; and that the government and health authorities need to embrace transparency. Elsewhere, Marion Oswald writes that trust is essential, and that the proposals have yet to earn it.

The specific rights discussion has been accompanied by broader doubts about the extent to which any app can be effective at contact tracing and the other flaws that may arise. As Ross Anderson writes, there remain many questions about practical applications in the real world. In recent blog postings, Crowcroft mulls modern contact tracing apps based on what they learned from Fluphone.

The practical concerns are even greater when you look at Ashkan Soltani's Twitter feed, in which he's turning his honed hacker sensibilities on these apps, making it clear that there are many more ways for these apps to fail than we've yet recognized. The Australian app, for example, may interfere with Bluetooth-connected medical devices such as glucose monitors. Drug interactions matter; if apps are now medical devices, then their interactions must be studied, too. Soltani also raises the possibility of using these apps for voter suppression. The hundreds of millions of downloads necessary to make these apps work means even small flaws will affect large numbers of people.

All of these are reasons why Apple and Google are going to wind up in charge of the technology. Even the UK is now investigating switching. Fixing one platform is a lot easier than debugging hundreds, for example, and interoperability should aid widespread use, especially when international travel resumes, currently irrelevant but still on people's minds. In this case, Apple's and Google's technology, like the Internet itself originally, is a vector for spreading the privacy and human rights values embedded in its design, and countries are changing plans to accept it - one more extraordinary moment among so many.

Illustrations: Alipay Health Code in action (press photo).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 6, 2020

Transitive rage

cropped-Spies_and_secrets_banner_GCHQ_Bude_dishes.jpg"Something has changed," a privacy campaigner friend commented last fall, observing that it had become noticeably harder to get politicians to understand and accept the reasons why strong encryption is a necessary technology to protect privacy, security, and, more generally, freedom. This particular fight had been going on since the 1990s, but some political balance had shifted. Mathematical reality of course remains the same. Except in Australia.

At the end of January, Bloomberg published a leaked draft of the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act (EARN IT), backed by US Senators Lindsey Graham (R-SC) and Richard Blumenthal (D-CT). In its analysis, the Center for Democracy and Technology finds that the bill authorizes a new government commission, led by the US attorney general, to regulate online speech and, potentially, ban end-to-end encryption. At Lawfare, Stewart Baker, a veteran opponent of strong cryptography, dissents, seeing the bill as combating child exploitation by weakening the legal liability protection afforded by Section 230. Could the attorney general mandate that encryption never qualifies as "best practice"? Yes, even Baker admits, but he still thinks the concerns voiced by CDT and EFF are overblown.

In our real present, our actual attorney general, William Barr, believes "warrant-proof encryption" is dangerous. His office is actively campaigning in favor of exactly the outcome CDT and EFF fear.

Last fall, my friend connected the "change" to recent press coverage of the online spread of child abuse imagery. Several - such as Michael H. Keller and Gabriel J.X. Dance's November story - specifically connected encryption to child exploitation, complaining that Internet companies fail to use existing tools, and that Facebook's plans to encrypt Messenger, "the main source of the imagery", will "vastly limit detection".

What has definitely changed is *how* encryption will be weakened. The 1990s idea was key escrow, a scheme under which individuals using encryption software would deposit copies of their private keys with a trusted third party. After years of opposition, the rise of ecommerce and its concomitant need to secure in-transit financial details eventually led the UK government to drop key escrow before the passage of the Regulation of Investigatory Powers Act (2000), which closed that chapter of the crypto debates. RIPA and its current successor, the Investigatory Powers Act (2016), require individuals to decrypt information or disclose keys to government representatives. There have been three prosecutions.

In 2013, we learned from Edward Snowden's revelations that the security services had not accepted defeat but had gone dark, deliberately weakening standards. The result: the Internet engineering community began the work of hardening the Internet as much as they could.

In those intervening years, though, outside of a few very limited cases - SSL, used to secure web transactions - very few individuals actually used encryption. Email and messaging remained largely open. The hardening exercise Snowden set off eventually included companies like Facebook, which turned on end-to-end encryption for all of WhatsApp in 2016, overnight turning 1 billion people into crypto users and making real the long-ago dream of the crypto nerds of being lost in the noise. If 1 billion people use messaging and only a few hundred use encryption, the encryption itself is a flag that draws attention. If 1 billion people use encrypted messaging, those few hundred are indistinguishable.

In June 2018, at the 20th birthday of the Foundation for Information Policy Research, Ross Anderson predicted that the battle over encryption would move to device hacking. The reasoning is simple: if they can't read the data in transit because of end-to-end encryption, they will work to access it at the point of consumption, since it will be cleartext at that point. Anderson is likely still to be right - the IPA includes provisions allowing the security services to engage in "bulk equipment interference", which means, less politely, "hacking".

At the same time, however, it seems clear that those governments that are in a position to push back at the technology companies now figure that a backdoor in the few giant services almost everyone uses brings back the good old days when GCHQ could just put in a call to BT. Game the big services, and the weirdos who use Signal and other non-mainstream services will stick out again.

At Stanford's Center for Internet and Society, Riana Pfefferkorn believes the DoJ is opportunistically exploiting the techlash much the way the security services rushed through historically and politically unacceptable surveillance provisions in the first few shocked months after the 9/11 attacks. Pfefferkorn calls it "transitive rage": Congresspeople are already mad at the technology companies for spreading false news, exploiting personal data, and not paying taxes, so encryption is another thing to be mad about - and pass legislation to prevent. The IPA and Australia's Assistance and Access Act are suddenly models. Plus, as UN Special Rapporteur David Kaye writes in his book Speech Police: The Global Struggle to Govern the Internet, "Governments see that company power and are jealous of it, as they should be."

Pfefferkorn goes on to point out the inconsistency of allowing transitive rage to dictate banning secure encryption. It protects user privacy, sometimes against the same companies they're mad at. We'll let Alec Muffett have the last word, reminding us that tomorrow's children's freedom is also worth protecting.

Illustrations: GCHQ's Bude listening post, at dawn (by wizzlewick at Wikimedia, CC3.0).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


February 28, 2020

The virtuous patient

It's interesting to speculate about whether our collective approach to cybersecurity would be different if the dominant technologies hadn't been developed under the control of US companies. I'm thinking about the coronavirus, which I fear is about to expose every bit of the class, race, and economic inequality of the US in the most catastrophic way.

Here in Britain, the question I'm most commonly asked has become, "Why do Americans oppose universal health care?" This question is particularly relevant as the Democratic primaries bed down into daily headlines and pundits opining on whether "democratic socialist" Bernie Sanders and Elizabeth Warren, who both favor "Medicare for All", are electable. How, UK friends ask, could they not be electable when what they're proposing is so obviously a good thing? How is calling health care a human right "socialist" rather than just "sane"? By that standard, Europe is full of socialist countries that are functioning democracies.

I respond that framing health insurance as an aspirational benefit of a "good job" was a stroke of evil genius that invoked everyone's worst meritocratic instincts while putting employers firmly in the feudal lord's driving seat. I find it harder to explain how "socialist" became equated with "evil". "Socialized medicine" apparently began as a harmless description, but in the 1960s the American Medical Association exploited it to scare people off. I thought doctors were supposed to first, do no harm?

Of course, a virus doesn't care who's paying for health care - the real crux of the debates - but it also doesn't care if you're rich, poor, upper crust, working class, Republican, Democrat, or a narcissist who thinks expertise is vastly overrated and scientists are just egos with degrees. The consequence of treating health care as an aspirational benefit instead of a human right is that in 2018 27.5 million Americans had no health insurance. As others have noticed, uninsured people cluster in "red" states. Since Donald Trump took office, however, the number of uninsured has slowly begun growing again.

Some of the uninsured are undoubtedly people who are homeless, but most are from working families. They work in gas stations and convenience stores, as agency maids and security guards, as Uber drivers and food service workers. Skeleton staffing levels mean bosses penalize anyone trying to call in sick; low wage levels make sick days an unaffordable "luxury"; without available child care, kids must go to school, sick or well. Every misplaced incentive forces this group to soldier on and to avoid doctors as much as possible. The story of Osmel Martinez Azcue, who did the socially responsible thing and got himself to a hospital for testing only to be billed for $3,270 (of which his share is $1,400) when he tested negative for coronavirus, is a horror story deterrent. As Carl Gibson writes at the Guardian, "...when you combine a for-profit healthcare system - in which only those wealthy enough to get care actually receive it - with a global pandemic, the only outcome will be unmitigated disaster".

This is a country where 40% of the population can't come up with an emergency $400, for whom no vaccine or test is "affordable". CDC's sensible advice is out of reach for the nearly 10% of the population whose work requires their physical presence; a divide thoroughly exposed by 2012's Hurricane Sandy.

Sanity would dictate making testing, treatment, and vaccines completely free for the duration of the crisis in the interests of collective public health. But even that would require a profound shift in how Americans understand health care. It requires Americans to loosen their sense that health insurance is an individual merit badge and exercise a modest amount of trust in government - at a time when the man in charge is generally agreed to be entirely untrustworthy. As Laurie Garrett, the author of 1994's Pulitzer Prize-winning The Coming Plague, warned last month, two years ago Trump trashed the pandemic response teams Barack Obama put in place in 2014, after H1N1 and Ebola made the necessity for them clear.

If the US survives this intact, Trump will take the credit, but the reality will be that the country got lucky this time. Individuals won't, however; a pandemic in these conditions will soon be followed by a wave of bankruptcies, many directly or indirectly a consequence of medical bills - and a lot of them will have had health insurance. Plus, there will be the longer-term, hard-to-quantify damage of the spreading climate of fear, sowing distrust in a society that already has too much of it.

So back to cybersecurity and privacy. The same individualistic thinking underlies the view of computer and network designers that securing systems is the individual problem of each entity that uses them. Individual companies have certainly improved on usability in some cases, but even the discovery of widespread disinformation campaigns has not really led to a public health-style collective response, even though pervasive interconnection means the smallest user and device can be the vector for infecting a whole network. In security, as in health care, information asymmetry is such that the most "virtuous patient" struggles to make good choices. If a different country had dominated modern computing, would we, as Americans tend to think, have less, or no, innovation? Or would we have much more resilient systems?

Illustrations: The map of uninsured Americans in 2018, from the US Census Bureau.


February 6, 2020

Mission creep

"We can't find the needles unless we collect the whole haystack," a character explains in the new play The Haystack, written by Al Blyth and in production at the Hampstead Theatre through March 7. The character is Hannah (Sarah Woodward), and she is director of a surveillance effort being coded and built by Neil (Oliver Johnstone) and Zef (Enyi Okoronkwo), familiarly geeky types whose preferred day-off activities are the cinema and the pub, rather than catching up on sleep and showers, as Hannah pointedly suggests. Zef has a girlfriend (and a "spank bank" of downloaded images) and is excited to work in "counter-terrorism". Neil is less certain, less socially comfortable, and, we eventually learn, more technically brilliant; he must come to grips with all three characteristics in his quest to save Cora (Rona Morison). Cue Fleabag: "This is a love story."

The play is framed by an encrypted chat between Neil and Denise, Cora's editor at the Guardian (Lucy Black). We know immediately from the technological checklist they run down in making contact that there has been a catastrophe, which we soon realize surrounds Cora. Even though we're unsure what it is, it's clear Neil is carrying a load of guilt, which the play explains in flashbacks.

As the action begins, Neil and Zef are waiting to start work as a task force seconded to Hannah's department to identify the source of a series of Ministry of Defence leaks that have led to press stories. She is unimpressed with their youth, attire, and casual attitude - they type madly while she issues instructions they've already read - but changes abruptly when they find the primary leaker in seconds. Two stories remain; because both bear Cora's byline she becomes their new target. Both like the look of her, but Neil is particularly smitten, and when a crisis overtakes her, he breaks every rule in the agency's book by grabbing a train to London, where, calling himself "Tom Flowers", he befriends her in a bar.

Neil's surveillance-informed "god mode" choices of Cora's favorite music, drinks, and food when he meets her remind of the movie Groundhog Day, in which Phil (Bill Murray) slowly builds up, day by day, the perfect approach to the woman he hopes to seduce. In another cultural echo, the tense beginning is sufficiently reminiscent of the opening of Laura Poitras's film about Edward Snowden, CitizenFour, that I assumed Neil was calling from Moscow.

The requirement for the haystack, Hannah explains at the beginning of Act Two, is because the terrorist threat has changed from organized groups to home-grown "lone wolves", and threats can come from anywhere. Her department must know *everything* if it is to keep the nation safe. The lone-wolf theory is the one surveillance justification Blyth's characters don't chew over in the course of the play; for an evidence-based view, consult the VOX-Pol project. In a favorite moment, Neil and Hannah demonstrate the frustrating disconnect between technical reality and government targets. Neil correctly explains that terrorists are so rare that, given the UK's 66 million population, no matter how much you "improve" the system's detection rate it will still be swamped by false positives. Hannah, however, discovers he has nonetheless delivered. The false positive rate is 30% less! Her bosses are thrilled! Neil reacts like Alicia Florrick in The Good Wife after one of her morally uncomfortable wins.
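Neil's point is the classic base-rate problem: when real suspects are vanishingly rare, even an excellent detector drowns in false alarms. A quick back-of-envelope sketch, with numbers that are my own illustrative choices rather than figures from the play:

```python
# Base-rate arithmetic: tiny prevalence swamps even a good detector.
# All numbers here are illustrative assumptions, not figures from the play.
population = 66_000_000       # UK population
terrorists = 100              # hypothetical number of real suspects
sensitivity = 0.99            # 99% of real suspects are flagged
false_positive_rate = 0.001   # only 0.1% of innocent people wrongly flagged

true_alarms = terrorists * sensitivity
false_alarms = (population - terrorists) * false_positive_rate

print(f"true alarms:  {true_alarms:,.0f}")    # 99
print(f"false alarms: {false_alarms:,.0f}")   # 66,000
# Cutting the false positive rate by 30% still leaves ~46,000 innocents
# flagged for every ~99 real suspects: the system remains swamped.
```

The ratio barely budges however much you "improve" the detector, which is exactly the disconnect between Neil's technical reality and Hannah's delighted bosses.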

Related: it is one of the great pleasures of The Haystack that its three female characters (out of a total of five) are smart, tough, self-reliant, ambitious, and good at their jobs.

The Haystack is impressively directed by Roxana Silbert. It isn't easy to make typing look interesting, but this play manages it, partly by the well-designed use of projections to show both the internal and external worlds they're seeing, and partly by carefully-staged quick cuts. In one section, cinema-style cross-cutting creates a montage that fast-forwards the action through six months of two key relationships.

Technically, The Haystack is impressive; Zef and Neil speak fluent Python, algorithms, and Bash scripts, and laugh realistically over a journalist's use of Hotmail and Word with no encryption ("I swear my dad has better infosec"), while the projections of their screens are plausible pieces of code, video games, media snippets, and maps. The production designers and Blyth, who has a degree in econometrics and a background as a research economist, have done well. There were just a few tiny nitpicks: Neil can't trace Cora's shut-down devices "without the passwords" (huh?); and although Neil and Zef also use Tor, at one point they use Firefox (maybe) and Google (doubtful). My companion leaned in: "They wouldn't use that." More startling, for me, the actors who play Neil and Zef pronounce "cache" as "cachet"; but this is the plaint of a sound-sensitive person. And that's it, for the play's 1:50 length (trust me; it flies by).

The result is an extraordinary mix of a well-plotted comic thriller that shows the personal and professional costs of both being watched and being the watcher. What's really remarkable is how many of the touchstone digital rights and policy issues Blyth manages to pack in. If you can, go see it, partly because it's a fine introduction to the debates around surveillance, but mostly because it's great entertainment.

Illustrations: Rona Morison, as Cora, in The Haystack.


January 31, 2020

Dirty networks

We rarely talk about it this way, but sometimes what makes a computer secure is a matter of perspective. Two weeks ago, at the CPDP-adjacent Privacy Camp, a group of Russians explained seriously why they trust Gmail, WhatsApp, and Facebook.

"If you remove these tools, journalism in Crimea would not exist," said one. Google's transparency reports show that the company has never given information on demand to the Russian authorities.

That is, they trust Google not because they *trust* Google but because using it probably won't land them in prison, whereas their indigenous providers are stoolies in real time. Similarly, journalists operating in high-risk locations may prefer WhatsApp, despite its Facebookiness, because they can't risk losing their new source by demanding a shift to unfamiliar technology, and the list of shared friends helps establish the journalist's trustworthiness. The decision is based on a complex set of context and consequences, not on a narrow technological assessment.

So, now. Imagine you lead a moderately-sized island country that is about to abandon its old partnerships, and you must choose whether to allow your telcos to buy equipment from a large Chinese company, which may or may not be under government orders to build in surveillance-readiness. Do you trust the Chinese company? If not, who *do* you trust?

In the European Parliament, during Wednesday's pro forma debate on the UK's Withdrawal Agreement and emotional farewell, Guy Verhofstadt, the parliament's Brexit coordinator, asked: "What is in fact threatening Britain's sovereignty most - the rules of our single market or the fact that tomorrow they may be planting Chinese 5G masts in the British islands?"

He asked because back in London Boris Johnson was announcing he would allow Huawei to supply "non-core" equipment for up to 35% (measured how?) of the UK's upcoming 5G mobile network. The US, in the form of Newt Gingrich, seemed miffed. Yet last year Brian Fung noted at the Washington Post ($) the absence of US companies among the only available alternatives: ZTE (China), Nokia (Finland), and Ericsson (Sweden). The failure of companies like Motorola and Lucent to understand, circa 2000, the importance of common standards to wireless communications - a failure Europe did not share - cost them their early lead. Besides, Fung adds, people don't trust the US like they used to, given Snowden's 2013 revelations and the unpredictable behavior of the US's current president. So, the question may be less "Do you want spies with that?" and more, "Which spy would you prefer?"

A key factor is cost. Huawei is both cheaper *and* the technology leader, partly, Alex Hern writes at the Guardian, because its government grants it subsidies that are illegal elsewhere. Hern calls the whole discussion largely irrelevant, because *actually* Huawei equipment is already embedded. Telcos - or rather, we - would have to pay to rip it out. A day later, BT proved him right, forecasting that bringing the Huawei percentage down will cost £500 million.

All of this discussion has been geopolitical: Johnson's fellow Conservatives are unhappy; US secretary of state Mike Pompeo doesn't want American secrets traveling through Huawei equipment.

Technical expertise takes a different view. Bruce Schneier, for example, says: yes, Huawei is not trusted, and yes, the risks are real, but barring Huawei doesn't make the network secure. The US doesn't even want a secure network, if that means a network it can't spy into.

In a letter to The Times, Martyn Thomas, a fellow at the Royal Academy of Engineering, argues that no matter who supplies it, the network will be "too complex to be made fully secure against an expert cyberattack". 5G's software-defined networks will require vastly more cells and, crucially, vastly more heterogeneity and complexity. You have to presume a "dirty network", Sue Gordon, then (US) Principal Deputy Director of National Intelligence, warned in April 2019. Even if Huawei is barred from Britain, the EU, and the US, it will still have a huge presence in Africa, which it's been building for years, and probably Latin America.

There was a time when a computer was a wholly-owned system built by a single company that also wrote and maintained its software; if it was networked it used that company's proprietary protocols. Then came PCs, and third-party software, and the famously insecure Internet. 5G, however, goes deeper: a network in which we trust nothing and no one, not just software but chips, wires, supply chains, and antennas, which Thomas explains "will have to contain a lot of computer components and software to process the signals and interact with other parts of the network". It's impossible to control every piece of all that; trying would send us into frequent panics over this or that component or supplier (see for example Super Micro). The discussion Thomas would like us to have is, "How secure do we need the networks to be, and how do we intend to meet those needs, irrespective of who the suppliers are?"

In other words, the essential question is: how do you build trusted communications on an untrusted network? The Internet's last 25 years have taught us a key piece of the solution: encrypt, encrypt, encrypt. Johnson, perhaps unintentionally, has just made the case for spreading strong, uncrackable encryption as widely as possible. To which we can only say: it's about time.

Illustrations: The European Court of Justice, to mark the fact that on this day the UK exits the European Union.


December 27, 2019


For me, the scariest presentation of 2019 was a talk given by Cornell University professor Vitaly Shmatikov about computer models. It's partly a matter of reframing the familiar picture; for years, Bill Smart and Cindy Grimm have explained to attendees at We Robot that we don't necessarily really know what it is that neural nets are learning when they're deep learning.

In Smart's example, changing a few pixels in an image can change the machine learning algorithm's perception of it from "Abraham Lincoln" to "zebrafish". Misunderstanding what's important to an algorithm is the kind of thing research scientist Janelle Shane exploits when she pranks neural networks and asks them to generate new recipes or Christmas carols from a pile of known examples. In her book, You Look Like a Thing and I Love You, she presents the inner workings of many more examples.

All of this explains why researchers Kate Crawford and Trevor Paglen's ImageNet Roulette experiment tagged my Twitter avatar as "the Dalai Lama". I didn't dare rerun it, because how can you beat that? The experiment over, would-be visitors are now redirected to Crawford's and Paglen's thoughtful examination of the problems they found in the tagging and classification system that's being used in training these algorithms.

Crawford and Paglen write persuasively about the world view captured by the inclusion of categories such as "Bad Person" and "Jezebel" - real categories in the Person classification subsystem. This aspect has gone largely unnoticed until now because conference papers focused on the non-human images in ten-year-old ImageNet and its fellow training databases. Then there is the *other* problem, that the people's pictures used to train the algorithm were appropriated from search engines, photo-sharing sites such as Flickr, and video of students walking their university campuses. Even if you would have approved the use of your forgotten Flickr feed to train image recognition algorithms, I'm betting you wouldn't have agreed to be literally tagged "loser" so the algorithm can apply that tag later to a child wearing sunglasses. Why is "gal" even a Person subcategory, still less the most-populated one? Crawford and Paglen conclude that datasets are "a political intervention". I'll take "Dalai Lama", gladly.

Again, though, all of this fits with and builds upon an already known problem: we don't really know which patterns machine learning algorithms identify as significant. In his recent talk to a group of security researchers at UCL, however, Shmatikov, whose previous work includes training an algorithm to recognize faces despite obfuscation, outlined a deeper problem: these algorithms "overlearn". How do we stop them from "learning" (and then applying) unwanted lessons? He says we can't.

"Organically, the model learns to recognize all sorts of things about the original data that were not intended." In his example, in training an algorithm to recognize gender using a dataset of facial images, alongside it will learn to infer race, including races not represented in the training dataset, and even identities. In another example, you can train a text classifier to infer sentiment - and the model also learns to infer authorship.

Options for counteraction are limited. Censoring unwanted features doesn't work because a) you don't know what to censor; b) you can't censor something that isn't represented in the training data; and c) that type of censoring damages the algorithm's accuracy on the original task. "Either you're doing face analysis or you're not." Shmatikov and Congzheng Song explain their work more formally in their paper Overlearning Reveals Sensitive Attributes.
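Shmatikov and Song's experiments use real models and datasets; the toy sketch below is my own invention, meant only to show the shape of the phenomenon. It builds a synthetic "face" dataset, stands in a fixed random hidden layer for a learned representation, trains a least-squares readout for the gender task only, and then shows that a separate probe on the same hidden representation recovers an attribute the model was never asked about.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, n).astype(float)   # the task label
attr = rng.integers(0, 2, n).astype(float)     # the unintended attribute

# Synthetic "faces": one input direction carries gender, another carries
# the unrelated attribute, the rest is noise.
X = np.hstack([
    (gender * 3 + rng.normal(0, 1, n))[:, None],
    (attr * 3 + rng.normal(0, 1, n))[:, None],
    rng.normal(0, 1, (n, 4)),
])

# A toy one-hidden-layer model: random hidden features plus a readout
# trained (least squares) for the *gender* task only.
W = rng.normal(0, 1, (6, 16))
h = np.maximum(X @ W, 0)                        # hidden representation
w_task = np.linalg.lstsq(h, gender, rcond=None)[0]
task_acc = np.mean((h @ w_task > 0.5) == gender)

# A separate probe, trained on the same hidden representation, recovers
# the attribute the model was never trained on.
w_probe = np.linalg.lstsq(h, attr, rcond=None)[0]
probe_acc = np.mean((h @ w_probe > 0.5) == attr)

print(f"gender task accuracy: {task_acc:.2f}")
print(f"unintended-attribute probe accuracy: {probe_acc:.2f}")
```

Nothing in the gender objective asked the representation to keep the second attribute around; it survives simply because the representation preserves it, which is the heart of the overlearning worry.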

"We can't really constrain what the model is learning," Shmatikov said, "only how it is used. It is going to be very hard to prevent the model from learning things you don't want it to learn." This drives a huge hole through GDPR, which relies on a model of meaningful consent. How do you consent to something no one knows is going to happen?

What Shmatikov was saying, therefore, is that from a security and privacy point of view, the typical question we ask, "Did the model learn its task well?", is too limited. "Security and privacy people should also be asking: what else did the model learn?" Some possibilities: it could have memorized the training data; discovered orthogonal features; performed privacy-violating tasks; or incorporated a backdoor. None of these are captured in assessing the model's accuracy in performing the assigned task.

My first reaction was to wonder whether a data-mining company like Facebook could use Shmatikov's explanation as an excuse when it's accused of allowing its system to discriminate against people - for example, in digital redlining. Shmatikov thought not; at least, not more than their work helps people find out what their models are really doing.

"How to force the model to discover the simplest possible representation is a separate problem worth investigating," he concluded.

So: we can't easily predict what computer models learn when we set them a task involving complex representations, and we can't easily get rid of these unexpected lessons while retaining the usefulness of the models. I was not the only person who found this scary. We are turning these things loose on the world and incorporating them into decision making without the slightest idea of what they're doing. Seriously?

Illustrations: Vitaly Shmatikov (via Cornell).


November 1, 2019

Nobody expects the Spanish Inquisition

So can we stop now with the fantasy that data can be anonymized?

Two things sparked this train of thought. The first was seeing that researchers at the Mayo Clinic have shown that commercial facial recognition software accurately identified 70 of a sample set of 84 (that's 83%) MRI brain scans. For ten additional subjects, the software placed the correct identification in its top five choices. Yes, on reflection, it's obvious that you can't scan a brain without including its container, and that bone structure defines a face. It's still a fine example of data that is far more revealing than you expect.

The second was when Phil Booth, the executive director of medConfidential, on Twitter called out the National Health Service for weakening the legal definition of "anonymous" in its report on artificial intelligence (PDF).

In writing the MRI story for the Wall Street Journal (paywall), Melanie Evans notes that people have also been reidentified from activity patterns captured by wearables, a cautionary tale now that Google's owner, Alphabet, seeks to buy Fitbit. Cautionary, because the biggest contributor to reidentifying any particular dataset is other datasets to which it can be matched.

The earliest scientific research on reidentification I know of was Latanya Sweeney's 1997 success in identifying then-governor William Weld's medical record by matching the "anonymized" dataset of records of visits to Massachusetts hospitals against the voter database for Cambridge, which anyone could buy for $20. Sweeney has since found that 87% of Americans can be matched from just their gender, date of birth, and zip code. More recently, scientists at Louvain and Imperial College found that just 15 attributes can identify 99.8% of Americans. Scientists have reidentified individuals from anonymized shopping data, and by matching mobile phone logs against transit trips. Combining those two datasets identified 95% of the Singaporean population in 11 weeks; add GPS records and you can do it in under a week.
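The mechanics of Sweeney-style linkage attacks fit in a few lines: join an "anonymized" dataset to a public one on the shared quasi-identifiers. The records, names, and values below are invented stand-ins, not the actual Massachusetts datasets.

```python
# Minimal linkage-attack sketch: all data here is invented for illustration.
hospital_records = [   # "anonymized": direct identifiers removed
    {"zip": "02138", "dob": "1945-07-31", "sex": "M", "diagnosis": "hypertension"},
    {"zip": "02139", "dob": "1962-03-02", "sex": "F", "diagnosis": "asthma"},
]
voter_roll = [         # publicly purchasable, and it includes names
    {"name": "W. Weld", "zip": "02138", "dob": "1945-07-31", "sex": "M"},
    {"name": "J. Doe",  "zip": "02139", "dob": "1981-11-15", "sex": "F"},
]

def reidentify(records, roll):
    """Join the two datasets on the quasi-identifiers (zip, dob, sex)."""
    index = {(v["zip"], v["dob"], v["sex"]): v["name"] for v in roll}
    return [(index.get((r["zip"], r["dob"], r["sex"])), r["diagnosis"])
            for r in records]

print(reidentify(hospital_records, voter_roll))
# [('W. Weld', 'hypertension'), (None, 'asthma')]
```

The attack needs no cleverness at all, only a second dataset sharing the same few attributes, which is why the "gender, date of birth, and zip code" triple is so devastating.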

This sort of thing shouldn't be surprising any more.

The legal definition that Booth cited is Recital 26 of the General Data Protection Regulation, which specifies in much more detail how to assess the odds ("all the means likely to be used", "account should be taken of all objective factors") of successful reidentification.

Instead, here's the passage he highlighted from the NHS report as defining "anonymized" data (page 23 of the PDF, 44 of the report): "Data in a form that does not identify individuals and where identification through its combination with other data is not likely to take place."

I love the "not likely". It sounds like one of the excuses that's so standard that Matt Blaze put them on a bingo card. If you had asked someone in 2004 whether it was likely that their children's photos would be used to train AI facial recognition systems that in 2019 would be used to surveil Chinese Muslims and out pornography actors in Russia, they would have said no. And yet here we are. You can never reliably predict what data will be of what value or to whom.

At this point, until proven otherwise it is safer to assume that there really is no way to anonymize personal data and make it stick for any length of time. It's certainly true that in some cases the sensitivity of any individual piece of data - say your location on Friday at 11:48 - vanishes quickly, but the same is not true of those data points when aggregated over time. More important, patient data is not among those types and never will be. Health data and patient information are sensitive and personal not just for the life of the patient but for the lives of their close relatives on into the indefinite future. Many illnesses, both mental and physical, have genetic factors; many others may be traceable to conditions prevailing where you live or grew up. Either way, your medical record is highly revealing - particularly to insurance companies interested in minimizing their risk of payouts or an employer wishing to hire only robustly healthy people - about the rest of your family members.

Thirty years ago, when I was first encountering large databases and what happens when you match them together, I came up with a simple privacy-protecting rule: if you do not want the data to leak, do not put it in the database. This still seems to me definitive - but much of the time we have no choice.

I suggest the following principles and assumptions.

One: Databases that can be linked, will be. The product manager's comment Ellen Ullman reported in 1997 still pertains: "I've never seen anyone with two systems who didn't want us to hook them together."

Two: Data that can be matched, will be.

Three: Data that can be exploited for a purpose you never thought of, will be.

Four: Stop calling it "sharing" when the entities "sharing" your personal data are organizations, especially governments or commercial companies, not your personal friends. What they're doing is *disclosing* your information.

Five: Think collectively. The worst privacy damage may not be to *you*.

The bottom line: we have now seen so many examples of "anonymized" data that can be reidentified that the claim that any dataset is anonymized should be considered as extraordinary a claim as saying you've solved Brexit. Extraordinary claims require extraordinary proof, as the skeptics say.

Addendum: if you're wondering why net.wars skipped the 50th anniversary of the first ARPAnet connection: first of all, we noted it last week; second of all, whatever headline writers think, it's not the 50th anniversary of the Internet, whose beginnings, as we wrote in 2004, are multiple. If you feel inadequately served, I recommend this from 2013, in which some of the Internet's fathers talk about all the rules they broke to get the network started.

Illustrations: Monty Python performing the Spanish Inquisition sketch in 2014 (via Eduardo Unda-Sanzana at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 23, 2019


For many reasons, I've never wanted to use my mobile phone for banking. For one thing, I have a desktop machine with three 24-inch monitors and a full-size functioning keyboard; why do I want to poke at a small screen with one finger?

Even if I did, the corollary is that mobile phones suck for typing passwords. For banking, you typically want the longest and most random password you can generate. For mobile phone use, you want something short, easy to remember and type. There is no obvious way to resolve this conflict, particularly in UK banking, where you're typically asked to type in three characters chosen from your password. It is amazingly easy to make mistakes counting when you're asked to type in letter 18 of a 25-character random string. (Although: I do admire the literacy optimism one UK bank displays when it asks for the "antepenultimate" character in your password. It's hard to imagine an American bank using this term.)
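Counting to the eighteenth character of a random string is exactly the kind of job computers do better than people. A throwaway sketch of the counting problem (the password and positions here are invented for illustration):

```python
# Invented 25-character random password, for illustration only.
password = "q8Zr2LwXv9Tn4Kp7Jd1Hf6Gs3"

def char_at(pw: str, position: int) -> str:
    """Return the character at a 1-based position, the way a bank counts."""
    return pw[position - 1]  # Python indexes from 0; banks count from 1

# The three characters a UK bank might ask for:
print(char_at(password, 3), char_at(password, 11), char_at(password, 18))

# And the "antepenultimate" character: third from the end.
print(password[-3])
```

The off-by-one between human counting and zero-based indexing is, of course, the same miscounting trap the banks set for their customers.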

Beyond that, mobile phones scare me for sensitive applications in general; they seem far too vulnerable to hacking, built-in vulnerabilities, SIM swapping, and, in the course of wandering the streets of London, loss, breakage, or theft. So mine is not apped up for social media, ecommerce, or anything financial. I accept that two-factor authentication is a huge step forward in terms of security, but does it have to be on my phone? In this, I am, of course, vastly out of step with the bulk of the population, who are saying instead: "Can't it be on my phone?" What I want, however, is a 2FA device I can turn off and stash out of harm's way in a drawer at home. That approach would also mean not having to give my phone number to an entity that might, like Facebook has in the past, coopt it into their marketing plans.

So, it is with great unhappiness that I discover that the combination of the incoming Payment Services Directive 2 and the long-standing effort to get rid of cheques are combining to force me to install a mobile banking app.

PSD2 may well prove to be the antepenultimate gift from the EU28. At Wired, Laurie Clark explains the result of the directive's implementation, which is that ecommerce sites, as well as banks, must implement two-factor authentication (2FA) by September 14. Under this new regime, transactions above £30 (about $36.50, but shrinking by the Brexit-approaching day) will require customers to prove at least two of the traditional three security factors: something they have (a gadget such as a smart phone, a specific browser on a specific machine, or a secure key), something they know (passwords and the answers to secondary questions), and something they are (biometrics, facial recognition). As Clark says, retailers are not going to love this, because anything that adds friction costs them sales and customers.

My guess is that these new requirements will benefit larger retailers and centralized services at the expense of smaller ones. Paypal, Amazon, and eBay already have plenty of knowledge about their customers to exploit to be confident of the customer's identity. Requiring 2FA will similarly privilege existing relationships over new ones.

So far, retail sites don't seem to be discussing their plans. UK banking sites, however, began adopting 2FA some years ago, mostly in the form of secure keys that they issued and replaced as needed - credit card-sized electronic one-time pads. Those sites now are simply dropping the option of logging on with limited functionality without the key. These keys have their problems - especially non-inclusive design with small, fiddly keys and hard-to-read LCD screens - but I liked the option.

Ideally, this would be a market defined by standards, so people could choose among different options - such as the Yubikey. Where the banks all want to go, though, is to individual mobile phone apps that they can also use for marketing and upselling. Because of the broader context outlined above, I do not want this.

One bank I use is not interested in my broader context, only its own. It has ruled: must download app. My first thought was to load the app onto my backup, second-to-last phone, figuring that its unpatched vulnerabilities would be mitigated by its being turned off, stuck in a drawer, and used for nothing else. Not an option: its version of Android is two decimal places too old. No app for *you*!

At Bentham's Gaze, Steven Murdoch highlights a recent Which? study that found that those who can't afford, can't use, or don't want smartphones or who live with patchy network coverage will be shut out of financial services.

Murdoch, an expert on cryptography and banking security, argues that by relying on mobile apps banks are outsourcing their security to customers and telephone networks, which he predicts will fail to protect against criminals who infiltrate the phone companies and other threats. An additional crucial anti-consumer aspect is the refusal of phone manufacturers to support ongoing upgrades, forcing obsolescence on a captive audience, as we've complained before. This can only get worse as smartphones are less frequently replaced while being pressed into use for increasingly sensitive functions.

In the meantime, this move has had so little press that many people are being caught by surprise. There may be trouble ahead...



August 16, 2019

The law of the camera

As if cued by the end of last week's installment, this week the Financial Times (paywalled), followed by many others, broke the news that Argent LLP, the lead developer in regenerating the Kings Cross district of London in the mid-2000s, is using facial recognition to surveil the entire area. The 67-acre site includes two mainline railway stations, a major Underground interchange station, schools, retailers, the Eurostar terminal, a local college, ten public parks, 20 streets, 50 buildings, 1,900 homes...and, because it happens to be there, Google's UK headquarters. (OK, Google: how do you like it when you're on the receiving end instead of dishing it out?)

So, to be clear: this system has been installed - doubtless "for your safety" - even though over and over these automated facial recognition systems are being shown to be almost laughably inaccurate: in London, Big Brother Watch found a 95% inaccuracy rate (PDF); in California, the ACLU found that the software incorrectly matched one in five lawmakers to criminals' mugshots. US cities - San Francisco, Oakland, Somerville, Massachusetts - are legislating bans as a result. In London, however, Canary Wharf, a large development area in east London, told the BBC and the Financial Times that it is considering following Kings Cross's lead.

Inaccuracy is only part of the problem with the Kings Cross situation - and the deeper problem will persist even if and when the systems become accurate enough for prime time (which will open a whole new can of worms). The deeper problem is the effective privatization of public space: here, a private entity has installed a facial recognition system with no notice to any of the people being surveilled, with no public debate, and, according to the BBC, no notice to either local or central government.

To place this in context, it's worth revisiting the history of the growth of CCTV cameras in the UK, the world leader (if that's the word you want) in this area. As Simon Davies recounts in his recently-published memoir about his 30 years of privacy campaigning (and as I also remember), the UK began embracing CCTV in the mid-1990s (PDF), fueled in part by the emotive role it played in catching the murderers in the 1993 Jamie Bulger case. Central government began offering local councils funding to install cameras. Deployment accelerated after 9/11, but the trend had already been set.

By 2012, when the Protection of Freedoms Act was passed to create the surveillance camera commissioner's office, public resistance had largely vanished. At the first Surveillance Camera Conference, in 2013, representatives from several local councils said they frequently received letters from local residents requesting additional cameras. They were not universally happy about this; around that time the responsibility for paying for the cameras and the systems to run them was being shifted to the councils themselves, and many seemed to be reconsidering their value. There has never been much research assessing whether the cameras cut crime; what there is suggests CCTV diverts it rather than stops it. A 2013 briefing paper by the College of Policing (PDF) says CCTV provides a "small, but statistically significant, reduction in crime", though it notes that effectiveness depends on the type of crime and the setting. "It has no impact on levels of violent crime," the paper concludes. A 2014 summary of research to date notes the need to balance privacy concerns and assess cost-effectiveness. Adding on highly unreliable facial recognition won't change that - but it will entrench unnecessary harassment.

The issue we're more concerned about here is the role of private operators. At the 2013 conference, public operators complained that their private counterparts, operating at least ten times as many cameras, were not required to follow the same rules as public bodies (although many did). Reliable statistics are hard to find. A recent estimate claims London hosts 627,707 CCTV cameras, but it's fairer to say that not even the Surveillance Camera Commissioner really knows. It is clear, however, that the vast majority of cameras are privately owned and operated.

Twenty years ago, Davies correctly foresaw that networking the cameras would enable tracking people across the city. Neither he nor the rest of us saw that (deeply flawed) facial recognition would arrive this soon, if only because it's the result of millions of independent individual decisions to publicly post billions of facial photographs. This is what created the necessary mass of training data that, as Olivia Solon has documented, researchers have appropriated.

For an area the size and public importance of Kings Cross to be monitored via privately-owned facial recognition systems that have attracted enormous controversy in the public sector is profoundly disturbing. You can sort of see their logic: Kings Cross station is now a large shopping mall surrounding a major train station, so what's the difference between that and a shopping mall without one? But effectively, in setting the rules of engagement for part of our city that no one voted to privatize, Argent is making law, a job no one voted to give it. A London - or any other major city - carved up into corporately sanitized districts connected by lawless streets is not where any of us asked to live.

Illustrations: The new Kings Cross Western Concourse (via Colin on Wikimedia).


May 24, 2019

Name change

In 2014, six months after the Snowden revelations, engineers began discussing how to harden the Internet against passive pervasive surveillance. Among the results have been efforts like Let's Encrypt, EFF's Privacy Badger, and HTTPS Everywhere. Real inroads have been made into closing some of the Internet's affordances for surveillance and improving security for everyone.

Arguably the biggest remaining serious hole is the domain name system, which was created in 1983. The DNS's historical importance is widely underrated; it was essential in making email and the web usable enough for mass adoption before search engines. Then it stagnated. Today, this crucial piece of Internet infrastructure still behaves as if everyone on the Internet can trust each other. We know the Internet doesn't live there any more; in February the Internet Corporation for Assigned Names and Numbers, which manages the DNS, warned of large-scale spoofing and hijacking attacks. The NSA is known to have exploited it, too.

The problem is the unprotected channel between the computer into which we type humanly-readable domain names and the computers that translate those names into the numbered addresses the Internet's routers understand. The fact that routers all trust each other is routinely exploited for the captive portals we often see when we connect to public wi-fi systems. These are the pages that universities, cafes, and hotels set up to redirect Internet-bound traffic to their own page so they can force us to log in, pay for access, or accept terms and conditions. Most of us barely think about it, but old-timers and security people see it as a technical abuse of the system.

Several hijacking incidents raised awareness of DNS's vulnerability as long ago as 1998, when security researchers Matt Blaze and Steve Bellovin discussed it at length at Computers, Freedom, and Privacy. Twenty-one years on, there have been numerous proposals for securing the DNS, most notably DNSSEC, which offers an upwards chain of authentication. However, while DNSSEC solves validation, it still leaves the connection open to logging and passive surveillance, and the difficulty of implementing it has meant that since 2010, when ICANN signed the global DNS root, uptake has barely reached 14% worldwide.

In 2018, the IETF adopted DNS-over-HTTPS as a standard. Essentially, this sends DNS requests over the same secure channel browsers use to visit websites. Adoption is expected to proceed rapidly because it's being backed by Mozilla, Google, and Cloudflare, who jointly intend to turn it on by default in Chrome and Firefox. In a public discussion at this week's Internet Service Providers Association conference, a fellow panelist suggested that moving DNS queries to the application level opens up the possibility that two different apps on the same device might use different DNS resolvers - and get different responses to the same domain name.
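For concreteness, here is a minimal sketch of what a DoH GET request looks like under RFC 8484: an ordinary DNS wire-format query, base64url-encoded without padding, passed as a dns= parameter to the resolver's HTTPS endpoint. (This builds the message offline and sends nothing; the Cloudflare resolver URL is just one example endpoint.)

```python
import base64
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query in RFC 1035 wire format (qtype 1 = A record)."""
    # Header: ID 0 (RFC 8484 suggests 0 for GET so responses can be cached),
    # flags 0x0100 (recursion desired), one question, no other records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii") for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

def doh_get_url(resolver: str, query: bytes) -> str:
    """Encode a wire-format query as an RFC 8484 GET URL (unpadded base64url)."""
    encoded = base64.urlsafe_b64encode(query).rstrip(b"=").decode("ascii")
    return f"{resolver}?dns={encoded}"

query = build_dns_query("example.com")
print(doh_get_url("https://cloudflare-dns.com/dns-query", query))
```

Because the request travels as ordinary HTTPS traffic, anyone watching the wire sees only an encrypted connection to the resolver - which is precisely what defeats DNS-based logging and filtering.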

Britain's first public notice of DoH came a couple of weeks ago in the Sunday Times, which billed it as "Warning over Google Chrome's new threat to children". This is a wild overstatement, but it's not entirely false: DoH will allow users to bypass the parts of Britain's filtering system that depend on hijacking DNS requests to divert visitors to blank pages or warnings. An engineer would probably argue that if Britain's many-faceted filtering system is affected it's because the system relies on workarounds that shouldn't have existed in the first place. In addition, because DoH sends DNS requests over web connections, the traffic can't be logged or distinguished from the mass of web traffic, so it will also render moot some of the UK's (and EU's) data retention rules.

For similar reasons, DoH will break captive portals in unfriendly ways. A browser with DoH turned on by default will ignore the hotel/cafe/university settings and instead direct DNS queries via an encrypted channel to whatever resolver it's been set to use. If the network requires authentication via a portal, the connection will fail - a usability problem that will have to be solved.

There are other legitimate concerns. Bypassing the DNS resolvers run by local ISPs in favor of those belonging to, say, Google, Cloudflare, and Cisco, which bought OpenDNS in 2015, will weaken local ISPs' control over the connections they supply. This is both good and bad: ISPs will be unable to insert their own ads - but they also can't use DNS data to identify and block malware as many do now. The move to DoH risks further centralizing the Internet's core infrastructure and strengthening the power of companies most of us already feel have too much control.

The general consensus, however, is that like it or not, this thing is coming. Everyone is still scrambling to work out exactly what to think about it and what needs to be done to mitigate accompanying risks, as well as find solutions to the resulting problems. It was clear from the ISPA conference panel that everyone has mixed feelings, though the exact mix of those feelings - and which aspects are identified as problems - differs among ISPs, rights activists, and security practitioners. But it comes down to this: whether you like this particular proposal or not, the DNS cannot be allowed to remain in its present insecure state. If you don't want DoH, come up with a better proposal.

Illustrations: DNS diagram (via Б.Өлзий at Wikimedia).


February 22, 2019


"As a citizen, how will I know I live in a smarter city, and how will life be different?" This question was probably the smartest question asked at yesterday's Westminster Forum seminar on smart cities (PDF); it was asked by Tony Sceales, acting as moderator.

"If I feel safe and there's less disruption," said Peter van Manen. "You won't necessarily know. Things will happen as they should. You won't wake up and say, 'I'm in the city of the future'," said Sam Ibbott. "Services become more personalized but less visible," said Theo Blackwell, the Chief Digital Officer for London.

"Frictionless," said Jacqui Taylor, offering it as the one common factor she sees in the wildly different smart city projects she has encountered. I am dubious that this can ever be achieved: one person's frictionless is another's desperate frustration; streets cannot be frictionless for *both* cars and cyclists, just as a city that is predicted to add 2 million people over the next ten years can't simultaneously eliminate congestion. "Working as intended" was also heard. Isn't that what we all wish computers would do?

Blackwell had earlier mentioned the "legacy" of contactless payments for public transport. To Londoners smushed into stuffed Victoria Line carriages in rush hour, the city seems no smarter than it ever was. No amount of technological intelligence can change the fact that millions of people all want to go home at the same time or the housing prices that force them to travel away from the center to do so. We do get through the ticket barriers faster.

"It's just another set of tools," said Jennifer Schooling. "It should feel no different."

The notion of not knowing as the city you live in smartens up should sound alarm bells. The fair reason for that hiddenness is the reality that, as Sara Degli Esposti pointed out at this year's Computers, Privacy, and Data Protection, this whole area is a business-to-business market. "People forget that, especially at the European level. Users are not part of the picture, and that's why we don't see citizens engaged in smart city projects. Citizens are not the market. This isn't social media."

She was speaking at CPDP's panel on smart cities and governance, convened by the University of Stirling's William Webster, who has been leading a research project, CRISP, to study these technologies. CRISP asked a helpfully different question: how can we use smart city technologies to foster citizen engagement, coproduction of services, development of urban infrastructure, and governance structures?

The interesting connection is this: it's no surprise when CPDP's activists, regulators, and academics talk about citizen engagement and participation, or deplore a model in which smart cities are a business-led excuse for corporate and government surveillance. The surprise comes when two weeks later the same themes arise among Westminster Forum's more private and public sector speakers and audience. These are the people who are going to build these new programs and services, and they, too, are saying they're less interested in technology and more interested in solving the problems that keep citizens awake at night: health, especially.

There appears to be a paradigm shift beginning to happen as municipalities begin to seriously consider where and on what to spend their funds.

However, the shift may be solely European. At CPDP, Canadian surveillance studies researcher David Murakami Wood told the story of Toronto, where (Google owner) Alphabet subsidiary Sidewalk Labs swooped in circa 2017 with proposals to redevelop the Quayside area of Toronto in partnership with Waterfront Toronto. The project has been hugely controversial - there were hearings this week in Ottawa, the national capital.

As Murakami Wood tells it, for Sidewalk Labs the area is a real-world experiment using real people's lives as input to create products the company can later sell elsewhere. The company has made clear it intends to keep all the data the infrastructure generates on its servers in the US as well as all the intellectual property rights. This, Murakami Wood argued, is the real cost of the "free" infrastructure. It is also, as we're beginning to see elsewhere, the extension of online tracking or, as Murakami Wood put it, surveillance capitalism into the physical world: cultural appropriation at municipal scale from a company that has no track record in building buildings, or even publishing detailed development plans. Small wonder that Murakami Wood laughed when he heard Sidewalk Labs CEO Dan Doctoroff impress a group of enthusiastic young Canadian bankers with the news that the company had been studying cities for *two years*.

Putting these things together, we have, as Andrew Adams suggested, three paradigms, which we might call US corporate, Chinese authoritarian, and, emerging, European participatory and cooperative. Is this the choice?

Yes and no. Companies obviously want to develop systems once, sell them everywhere. Yet the biggest markets are one-off outliers. "Croydon," said Blackwell, "is the size of New Orleans." In addition, approaches vary widely. Some places - Webster mentioned Glasgow - are centralized command and control; others - Brazil - are more bottom-up. Rick Robinson finds that these do not meet in the middle.

The clear takeaway overall is that local context is crucial in shaping smart city projects, and despite some common factors each one is different. We should build on that.

Illustrations: Fritz Lang's Metropolis (1927).


January 10, 2019

Secret funhouse mirror room

"Here," I said, handing them an old pocket watch. "This is your great-grandfather's watch." They seemed a little stunned.

As you would. A few weeks earlier, one of them had gotten a phone call from a state trooper. A cousin they'd never heard of had died, and they might be the next of kin.

"In this day and age," one of them told me apologetically, "I thought it must be a scam."

It wasn't. Through the combined offices of a 1940 divorce and a lifetime habit of taciturnity on personal subjects, a friend I'd known for 45 years managed to die without ever realizing his father had an extensive tree of living relatives. They would have liked each other, I think.

So they came to the funeral and met their cousin through our memories and the family memorabilia we found in his house. And then they went home bearing the watch, understandably leaving us to work out the rest.

Whenever someone dies, someone else inherits a full-time job. In our time, that full-time job is located at the intersection of security, privacy - and secrecy, the latter a complication rarely discussed. In the eight years since I was last close to the process of closing out someone's life, very much more of the official world has moved online. This is both help and hindrance. I was impressed with the credit card company whose death department looked online for obits to verify what I was saying instead of demanding an original death certificate (New York state charges $15 per copy). I was also impressed with - although a little creeped out by - the credit card company that said, "Oh, yes, we already know." (It had been three weeks, two of them Christmas and New Year's.)

But those, like the watch, were easy, accounts with physical embodiments - that is, paper statements. It's the web that's hard. All those privacy and security settings that we advocate for live someones fall apart when they die without disclosing their passwords. We found eight laptops, the most recent an actively hostile mid-2015 MacBook Pro. Sure, reset the password, but doing so won't grant access to any other stored passwords. If FileVault is turned on, a beneficent fairy - or a frustrated friend trying to honor your stated wishes that you never had witnessed or notarized - is screwed. I'd suggest an "owner deceased" mode, but how do you protect *that* for a human rights worker or a journalist in a war zone holding details of at-risk contacts? Or when criminals arrive knowing how to unlock it? Privacy and security are essential, but when someone dies they turn into secrecy that - I seem to recall predicting in 1997 - means your intended beneficiaries *don't* inherit because they can't unlock your accounts.

It's a genuinely hard problem, not least because most people don't want to plan for their own death. Personal computers operate in binary mode: protect everything, or nothing, and protect it all the same way even though exposing a secret not-so-bad shame is a different threat model from securing a bank account. But most people do not think, "After I'm dead, what do I care?" Instead, they think, "I want people to remember me the way I want and this thing I'm ashamed of they must never, ever know, or they'll think less of me." It takes a long time in life to arrive at, "People think of me the way they think of me, and I can't control that. They're still here in my life, and that must count for something." And some people never realize that they might feel more secure in their relationships if they hid less.

So, the human right to privacy bequeaths a problem: how do you find your friend's long-lost step-sibling, who is now their next of kin, when you only know their first name and your friend's address book is encrypted on a hard drive and not written, however crabbily, in a nice, easily viewed paper notebook?

If there's going to be an answer, I imagine it lies in moving away from binary mode. It's imaginable that a computer operating system could have a "personal rescue mode" that would unlock some aspects of the computer and not others, an extension of the existing facilities for multiple accounts and permissions, though these are geared to share resources, not personal files. The owner of such a system would have to take some care which information went in which bucket, but with a system like that they could give a prospective executor a password that would open the more important parts.
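A toy sketch of that bucket idea (all names and passwords here are invented, and password hashing stands in for the real per-bucket encryption an operating system would need):

```python
import hashlib
import hmac

# Each bucket pairs some files with the hash of its own unlock password.
# An executor's "rescue" password opens the essential bucket only.
BUCKETS = {
    "essential": {
        "files": ["contacts.txt", "accounts.txt", "executor-notes.txt"],
        "hash": hashlib.sha256(b"rescue-pass").hexdigest(),
    },
    "private": {
        "files": ["diary.txt", "letters/"],
        "hash": hashlib.sha256(b"owner-only-pass").hexdigest(),
    },
}

def unlock(bucket: str, password: str) -> list:
    """Return the bucket's file list if the password matches, else nothing."""
    entry = BUCKETS[bucket]
    supplied = hashlib.sha256(password.encode()).hexdigest()
    if hmac.compare_digest(supplied, entry["hash"]):
        return entry["files"]
    return []

print(unlock("essential", "rescue-pass"))  # the executor gets the essentials
print(unlock("private", "rescue-pass"))    # but not the private bucket
```

The hard part, as the paragraph above notes, is not the mechanism but persuading owners to sort their lives into buckets and hand out the rescue password in advance.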

No such thing exists, of course, and some people wouldn't use it even if it did. Instead, the key turned out to be the modest-sized-town people network, which was and is amazing. It was through human connections that we finally understood the invoices we found for a storage unit. Without ever mentioning it, my friend had, for years, at considerable expense, been storing a mirror room from an amusement park funhouse. His love of amusement parks was no surprise. But if we'd known, the mirror room would now be someone's beloved possession instead of broken up in a scrapyard because a few months before he died my friend had stopped paying his bills - also without telling anyone.

Illustrations: The Lost City Fun House (via Wikimedia).


December 21, 2018

Behind you!

For one reason or another - increasing surveillance powers, increasing awareness of the extent to which online activities are tracked by myriad data hogs, Edward Snowden - crypto parties have come somewhat back into vogue over the last few years after a 20-plus-year hiatus. The idea behind crypto parties is that you get a bunch of people together and they all sign each other's keys. Fun! For some value of fun.

This is all part of the web of trust that is supposed to accrue when you use public key cryptography software like PGP or GPG: each new signature on a person's public key strengthens the trust you can have that the key truly belongs to that person. In practice, the web of trust, also known as "public key infrastructure", does not scale well, and the early 1990s excitement about at least the PGP version of the idea died relatively quickly.

A few weeks ago, ORG Norwich held such a meeting and I went along to help run a workshop on when and how you want to use crypto. Like any security mechanism, encrypting email has its limits. Accordingly, before installing PGP and saying, "Secure now!", a little threat modeling is a fine thing. As bad as it can be to operate insecurely, it is much, much worse to operate under the false belief that you are more secure than you are because the measures you've taken don't fit the risks you face.

For one thing, PGP does nothing to obscure metadata - that is, the record of who sent email to whom. Newer versions offer the option to encrypt the subject line, but then the question arises: how do you get busy people to read the message?

For another thing, even if you meticulously encrypt your email, check that the recipient's public key is correctly signed, and make no other mistakes, you are still dependent on your correspondent to take appropriate care of their archive of messages and not copy your message into a new email and send it out in plain text. The same is true of any other encrypted messaging program such as Signal; you depend on your correspondents to keep their database encrypted and either password-protect their phone and other devices or keep them inaccessible. And then, too, even the most meticulous correspondent can be persuaded to disclose their password.

For that reason, in some situations it may in fact be safer not to use encryption and remain conscious that anything you send may be copied and read. I've never believed that teenagers are innately better at using technology than their elders, but in this particular case they may provide role models: research has found that they are quite adept at using codes only they understand. To their grown-ups, it just looks like idle Facebook chatter.

Those who want to improve their own and others' protection against privacy invasion therefore need to think through what exactly they're trying to achieve.

Some obvious questions, partly derived from Steve Bellovin's book Thinking Security, are:

- Who might want to attack you?
- What do they want?
- Are you a random target, the specific target, or a stepping stone to mount attacks on others?
- What do you want to protect?
- From whom do you want to protect it?
- What opportunities do they have?
- When are you most vulnerable?
- What are their resources?
- What are *your* resources?
- Who else's security do you have to depend on whose decisions are out of your control?

At first glance, the simple answer to the first of those is "anyone and everyone". This helpful threat pyramid shows the tradeoff between the complexity of the attack and the number of people who can execute it. If you are the target of a well-funded nation-state that wants to get you, just you, and nobody else but you, you're probably hosed. Unless you're a crack Andromedan hacker unit (Bellovin's favorite arch-attacker), the imbalance of available resources will probably be insurmountable. If that's your situation, you want expert help - for example, from Citizen Lab.

Most of us are not in that situation. Most of us are random targets; beyond a raw bigger-is-better principle, few criminals care whose bank account they raid or which database they copy credit card details from. Today's highly interconnected world means that even a small random target may bring down other, much larger entities when an attacker leverages a foothold on our insignificant network to access the much larger ones that trust us. Recognizing who else you put at risk is an important part of thinking this through.

Conversely, the point about risks that are out of your control is important. Forcing everyone to use strong, well-designed passwords will not matter if the site they're used for stores them with inadequate protections.

The key point that most people forget: think about the individuals involved. Security is about practice, not just technology; as Bruce Schneier likes to say, it's a process, not a product. If the policy you implement makes life hard for other people, they will eventually adopt workarounds that make their lives more manageable. They won't tell you what they've done, and no one will be there to warn you where the risk is lurking.

Illustrations: Aladdin pantomime at Nottingham Playhouse, 2008 (via Wikimedia).


December 14, 2018

Entirely preventable

This week, the US House of Representatives Committee on Oversight and Government Reform used this phrase to describe the massive 2017 Equifax data breach: "Entirely preventable." It's not clear that the ensuing recommendations, while all sensible and valuable stuff - improve consumers' ability to check their records, reduce the use of Social Security numbers as unique identifiers, improve oversight of credit reporting agencies, increase transparency and accountability, hold federal contractors liable, and modernize IT security - will really prevent another similar breach from taking place. A key element was a bit of unpatched software that left open a vulnerability used by the attackers to gain a foothold - in part, the report says, because the legacy IT systems made patching difficult. Making it easier to do the right thing is part of the point of the recommendation to modernize the IT estate.

How closely is it feasible to micromanage companies the size and complexity of Equifax? What protection against fraud will we have otherwise?

The massive frustration is that none of this is new information or radical advice. On the consumer rights side, the committee is merely recommending practices that have been mandated in the EU for more than 20 years in data protection law. Privacy advocates have been saying for more than *30* years that the SSN is the canonical example of how a unique identifier should *not* be used. Patching software is so basic that you can pick any random top ten security tips and find it in the top three. We sort of make excuses for small businesses because their limited resources mean they don't have dedicated security personnel, but what excuse can there possibly be for a company the size of Equifax that holds the financial frailty of hundreds of millions of people in its grasp?

The company can correctly say this: we are not its customers. It is not its job to care about us. Its actual customers - banks, financial services, employers, governments - are all well served. What's our problem? Zeynep Tufekci summed it up correctly on Twitter when she commented that we are not Equifax's customers but its victims. Until there are proportionate consequences for neglect and underinvestment in security, she said later, the companies and their departing-with-bonuses CEOs will continue scrimping on security even though the smallest consumer infraction means they struggle for years to reclaim their credit rating.

If Facebook and Google should be regulated as public utilities, the same is even more true for the three largest credit agencies, Equifax, Experian, and TransUnion, who all hold much more power over us, and who are much less accountable. We have no opt-out to exercise.

But even the punish-the-bastards approach merely smooths over and repaints the outside of a very ugly tangle of amyloid plaques. Real change would mean, as Mydex CEO David Alexander is fond of arguing, adopting a completely different approach that puts each of us in charge of our own data and avoids creating these giant attacker-magnet databases in the first place. See also data brokers, which are invisible to most people.

Meanwhile, in contrast to the committee, other parts of the Five Eyes governments seem set on undermining whatever improvements to our privacy and security we can muster. Last week the Australian parliament voted to require companies to back-door their encryption when presented with a warrant. While the bill stops short of requiring technology companies to build in such backdoors as a permanent fixture - it says the government cannot require companies to introduce a "systemic weakness" or "systemic vulnerability" - the reality is that being able to break encryption on demand *is* a systemic weakness. Math is like that: either you can prove a theorem or you can't. New information can overturn existing knowledge in other sciences, but math is built on proven bedrock. The potential for a hole is still a hole, with no way to ensure that only "good guys" can use it - even if you can agree who the good guys are.

In the UK, GCHQ has notified the intelligence and security committee that it will expand its use of "bulk equipment interference". In other words, having been granted the power to hack the world's computers - everything from phones and desktops to routers, cars, toys, and thermostats - when the 2016 Investigatory Powers Act was being debated, GCHQ now intends to break its promise to use that power sparingly.

As I wrote in a submission to the consultation, bulk hacking is truly dangerous. The best hackers make mistakes, and it's all too easy to imagine a hacking error becoming the cause of a 100-car pile-up. As smart meters roll out, albeit delayed, and the smart grid takes shape, these, too, will be "computers" GCHQ has the power to hack. You, too, can torture someone in their own home just by controlling their thermostat. Fun! And important for national security. So let's do more of it.

In a time when attacks on IT infrastructure are growing in sophistication, scale, and complexity, the most knowledgeable people in government, whose job it is to protect us, are deliberately advocating weakening it. The consequences that are doubtless going to follow the inevitable abuse of these powers - because humans are humans and the mindset inside law enforcement is to assume the worst of all of us - will be entirely preventable.

Illustrations: GCHQ listening post at dawn (via Wikimedia).


May 18, 2018

Fool me once

Most of the "us" who might read this rarely stop to marvel at the wonder that is our daily trust in the society that surrounds us. One of the worst aspects of London Underground's incessant loud reminders to report anything suspicious - aside from the slogan, which is dumber than a bag of dead mice - is that it interrupts the flow of trust. It adds social friction. I hear it, because I don't habitually block out the world with headphones.

Friction is, of course, the thing that so many technologies are intended to eliminate. And they might, if only we could trust them.

Then you read things like this news, that Philip Morris wants to harvest data from its iQOS e-cigarette. If regulators allow, Philip Morris will turn on functions in the device's internal chips that capture data on its user's smoking habits, not unlike ebook readers' fine-grained data collection. One can imagine the data will be useful for testing strategies for getting people to e-smoke longer.

This example did not arrive in time for this week's Nuances of Trust event, hosted by the Alliance for Internet of Things Innovation (AIOTI) and aimed at producing intelligent recommendations for how to introduce trust into the Internet of Things. But, so often, it's the company behind the devices you can't trust. For another example: Volkswagen.

Partway through the problem-solving session, we realized we had regenerated Lawrence Lessig's four modalities of constraining behavior: technology/architecture, law, market, social norms. The first changes device design to bar shipping loads of data about us to parts unknown; law pushes manufacturers into that sort of design, even if it costs more; market would mean people refusing to buy privacy-invasive devices; and social norms used to be known as "peer pressure". Right now, technology is changing faster than we can create new norms. If a friend has an Amazon Echo at home, does entering their house constitute signing Amazon's privacy policy? Should they show me the privacy policy before I enter? Is it reasonable to ask them to turn it off while I'm there? We could have asked questions like "Are you surreptitiously recording me?" at any time since portable tape recorders were invented, but absent a red, blinking light we felt safe in assuming no. Now, suddenly, trusting my friend requires also trusting a servant belonging to a remote third party. If I don't, it's a social cost - to me, and maybe to my friend, but not to Amagoople.

On Tuesday, Big Brother Watch provided a far more alarming example when director Silkie Carlo launched BBW's report on automated facial recognition (PDF). Now, I know the technically minded will point out grumpily that all facial recognition is "automated" because it's a machine what does it, but what BBW means is a system in which CCTV and other cameras automatically feed everything they gather into a facial recognition system that sprinkles AI fairy dust and pops out Persons of Interest (I blame TV). Various UK police have deployed these AFR systems at concerts and football and rugby games; at the 2016 and 2017 Notting Hill Carnivals; on Remembrance Sunday 2017 to restrict "fixated individuals"; and at peaceful demonstrations. On average, fewer than 9% of matches were accurate; but that's little consolation when police pick you out of the hordes arriving by train for an event and insist on escorting you under watch. The system London's Met Police used had a false positive rate of over 98%! How does a system like that even get out of the lab?
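
That 98% figure is roughly what base-rate arithmetic predicts: when genuine persons of interest are a tiny fraction of a crowd, even a matcher that is accurate on any individual face produces mostly false alarms. A rough sketch with illustrative numbers (not BBW's actual data):

```python
# Hypothetical crowd-scanning scenario: the matcher is right 99% of
# the time on any single face, but real targets are 1 in 10,000.
crowd = 100_000
base_rate = 1 / 10_000          # fraction of the crowd on the watchlist
true_positive_rate = 0.99       # chance a real target is flagged
false_positive_rate = 0.01      # chance an innocent face is flagged

targets = crowd * base_rate                             # 10 real targets
hits = targets * true_positive_rate                     # ~9.9 correct flags
false_alarms = (crowd - targets) * false_positive_rate  # ~1,000 wrong flags

precision = hits / (hits + false_alarms)
print(f"Share of flags that are real targets: {precision:.1%}")  # ~1%
```

With these assumptions, about 99% of the people the system flags are innocent - in the same ballpark as the Met's reported performance - even though the matcher itself is "99% accurate". The lab accuracy figure and the street-level false positive rate are answering different questions.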

Neither the police nor the Home Office seem to think that bringing in this technology requires any public discussion; when asked they play the Yes, Minister game of pass the policy. Within the culture of the police, it may in fact be a social norm that invasive technologies whose vendors promise magical preventative results should be installed as quickly as possible before anyone can stop them. Within the wider culture...not so much.

This is the larger problem with what AIOTI is trying to do. It's not just that the devices themselves are insecure, their risks capricious, and the motives of their makers suspect. It's that long after you've installed and stopped thinking about a system incorporating these devices someone else can come along to subvert the whole thing. How do you ensure that the promise you make today cannot be broken by yourself or others in future? The problem is near-identical to the one we face with databases: each may be harmless on its own, but mash them together and you have a GDPR fine-to-the-max dataset of reidentification.

Somewhere in the middle of this an AIOTI participant suggested that the IoT rests on four pillars: people, processes, things, data. Trust has pillars, too, that take a long time to build but that can be destroyed in an instant: choice, control, transparency, and, the one we talk about least, but perhaps the most important, familiarity. The more something looks familiar, the more we trust it, even when we shouldn't. Both the devices AIOTI is fretting about and the police systems BBW deplores have this in common: they center on familiar things whose underpinnings have changed without our knowledge - yet their owners want us to trust them. We wish we could.

Illustrations: Orwell's house at 22 Portobello Road, London.


April 26, 2018

Game of thrones

"If a sport could have been invented with the possibility of corruption in mind, that sport would be tennis," wrote Richard Ings, the former Administrator of Rules for the Association of Tennis Professionals, in 2005 in a report not published until now.

This is a tennis story - but it's also a story about what happens when new technology meets a porous, populous, easily socially engineered, global system with great economic inequality that can be hacked to produce large sums of money. In other words, it's arguably a prototype for any number of cybersecurity stories.

A newly published independent panel report (PDF) finds a "tsunami" of corruption at the lower levels of tennis in the form of match-fixing and gambling, exactly as Ings predicted. This should surprise no one who's been paying attention. The extreme disparity between the money at the highly visible upper levels and the desperate scratching for the equivalent of worms for everyone else clearly sets up motives. Less clear until this publication were the incentives contributed by the game's structure and the tours' decision to expand live scoring to hundreds of tiny events.

Here's how tennis really works. The players - even those who never pass the first round - at the US Open or Wimbledon are the cream of the cream of the cream, generally ranked in the top 150 or so of the millions globally who play the game. Four times a year at the majors - the Australian Open, Roland Garros, the US Open, and Wimbledon - these pros have pretty good paydays. The rest of the year, they rack up frequent flyer miles, hotel bills, coaches' and trainers' salaries, and the many other expenses that go into maintaining an itinerant business style.

As Michael Mewshaw reported as long ago as 1983 in his book Short Circuit, "tanking" - deliberately losing - is a tour staple despite rules requiring "best efforts". People tank for many reasons: bad mood, fatigue, frustration, weather, niggling injuries, better money on offer elsewhere. But also, as Ings wrote, some matches have no significance, in part because, as Daily Tennis editor Robert Waltz has often pointed out, the ranking system does not penalize losses and pushes players to overplay, likely contributing to the escalating injury rate.

Between the Association of Tennis Professionals (the men's tour) and the Women's Tennis Association, there are more than 3,000 players with at least one ranking point. The report counted 336 men and 253 women who actually break even. Beyond them, in 2013 the International Tennis Federation counted 8,874 male and 4,862 female professional tennis players, of whom 3,896 men and 2,212 women earned no prize money.

So, do the math: you're ranked in the low 800s, your shoulder hurts, your year-to-date prize money is $555, and you're playing three rounds of qualifying to earn entry to a 32-player main draw event whose total prize money is $25,000. Tournaments at that level are not required to provide housing or hospitality (food) and you're charged a $40 entry fee. You're a young player gaining match practice and points hoping to move up with all possible speed, or a player rebuilding your ranking after injury, or an aging player on the verge of having to find other employment. And someone who has previously befriended you as a fan offers you $50,000 to throw the match. Not greed, the report writes: desperation.

In some cases, losing in the final round of qualifying doesn't matter because there's already an open slot for a lucky loser. No one but you has to know. No one really *can* know if the crucial shot you missed was deliberate or not, and you're in the main draw anyway and will get that prize money and ranking points. Accommodating the request may not hurt you, but, the report argues, each of those individual decisions is a cancer eating away at the integrity of the game.

Ings foresaw all this in 2005, when he wrote his report (PDF), which appears as Appendix 11 of the new offering. On Twitter, Ings called it his proudest achievement in his time in sports.

Obviously, smartphones and the internet - especially mobile banking - play a crucial role. Between 2001 and 2014 Australian sports betting quintupled. Live scores and online betting companies facilitate remote in-play betting on matches most people don't know exist. And, the report finds, the restrictions on gambling in many countries have not helped; when people can't gamble legally they do so illegally, facilitated by online options. Provided with these technologies, corrupt bettors can bet on any sub-unit of a match: a point, a game, a set. A player who won't throw a match might still throw a point or a set. Bettors can also cheat by leveraging the delay between the second they see a point end on court and the time the umpire pushes the button to send the score to live services across the world. In some cases, the Guardian reported in 2016, corrupt umpires help out by extending that delay.

The fixes needed for all this are like the ones suggested for any other cybersecurity problem. Disrupt the enabling technology; educate the users; and root out the bad guys. The harder - but more necessary - things are fixing the incentives, because doing so requires today's winners to restructure a game that's currently highly profitable for them. Thirteen years on, will they do it?

Illustrations: Tennis ball (via Wikimedia).


March 16, 2018

Homeland insecurity

"To the young people," a security practitioner said at a recent meeting, speaking of a group he'd been working with, "it's life lived on their phone."

He was referring to the tendency for adults to talk to kids about fake news, or sexting, or sexual abuse and recruitment, and so on as "online" dangers the adults want to protect them from. But, as this practitioner was trying to explain (and we have said here before), "online" isn't separate to them. Instead, all these issues are part of the context of pressures, relationships, economics, and competition that makes up their lives. This will become increasingly true as widely deployed sensors and hybrid cyber-physical systems and tracking become the norm.

This is a real generation gap. Older adults have taken on board each of these phenomena as we've added it into our existing understanding of the world. Watching each arrive singly over time allows the luxury of consideration and the mental space in which to plot a strategy. If you're 12, all of these things are arriving at once as pieces that are coalescing into your picture of the world. Even if you only just finally got your parents to let you have your own phone you've been watching videos on YouTube, FaceTiming your friends, and playing online games all your life.

An important part of "life lived on the phone" is in the UK's data protection bill implementation of the General Data Protection Regulation, now going through Parliament. The bill carves out some very broad exemptions. Most notably, opposed by the Open Rights Group and the3million, the bill would remove a person's rights as a data subject in the interests of "effective immigration control". In other words, under this exemption the Home Office could make decisions about where and whether you were allowed to live but never have to tell you the basis for its decisions. Having just had *another* long argument with a different company about whether or not I've ever lived in Iowa, I understand the problem of being unable to authenticate yourself because of poor-quality data.

It's easy for people to overlook laws that "only" affect immigrants, but as Gracie Mae Bradley, an advocacy and policy officer, made clear at this week's The State of Data 2018 event, hosted by Jen Persson, one of the consequences is to move the border from Britain's ports into its hospitals, schools, and banks, which are now supposed to check once a quarter that their 70 million account holders are legitimate. NHS Digital is turning over confidential patient information to help the Home Office locate and deport undocumented individuals. Britain's schools are being pushed to collect nationality. And, as Persson noted, remarkably few parents even know the National Pupil Database exists, and yet it catalogues highly detailed records of every schoolchild.

"It's obviously not limited to immigrants," Bradley said of the GDPR exemption. "There is no limit on the processes that might apply this exemption". It used to be clear when you were approaching a national border; under these circumstances the border is effectively gummed to your shoe.

The data protection bill also has the usual broad exemptions for law enforcement and national security.

Both this discussion (implicitly) and the security conversation we began with (explicitly) converged on security as a felt, emotional state. Even a British citizen living in their native country in conditions of relative safety - a rich country with good health care, stable governance, relatively little violence, mostly reasonable weather - may feel insecure if they're constantly being required to prove the legitimacy of their existence. Conversely, people may live in objectively more dangerous conditions and yet feel more secure because they know the local government is not eying them suspiciously with a view to telling them to repatriate post-haste.

Put all these things together with other trends, and you have the potential for a very high level of social insecurity that extends far outwards from the enemy class du jour, "illegal immigrants". This in itself is a damaging outcome.

And the potential for social control is enormous. Transport for London is progressively eliminating both cash and its Oyster payment cards in favor of direct payment via credit or debit card. What happens to people who fail the bank's inspection one quarter? How do they pay the bus or tube fare to get to work?

Like gender, immigration status is not the straightforward state many people think. My mother, brought to the US when she was four, often talked about the horror of discovering in her 20s that she was stateless: marrying my American father hadn't, as she imagined, automatically made her an American, and Switzerland had revoked her citizenship because she had married a foreigner. In the 1930s, she was naturalized without question. Now...?

Trying to balance conflicting securities is not new. The data protection bill-in-progress offers the opportunity to redress a serious imbalance, which Persson called, rightly, a "disconnect between policy, legislation, technological change, and people". It is, as she and others said, crucial that the balance of power that data protection represents not be determined by a relatively small, relatively homogeneous group.

Illustrations: 2008 map of nationalities of UK residents (via Wikipedia).


January 5, 2018

Dangerous corner

It's a sign of the times that the most immediately useful explanation of the Intel CPU security flaws announced this week is probably the one at Lawfare. There, Nicholas Weaver explains the problems that Meltdown and Spectre create, and gives a simple, practical immediate workaround for unnerved users: install an ad blocker to protect yourself from JavaScript exploits embedded in the vulnerable advertising supply chain.

There is, of course, plenty of other coverage. On Twitter, New York Times cybersecurity reporter Nicole Perlroth has an explanatory stream. The Guardian says it's the worst CPU bug in history, Ars Technica has a reasonably understandable technical explanation of the basics, and Bruce Schneier is collecting further links. At his blog, Steve Bellovin reminds us that security is a *systems* property - that is, that a system is not secure unless every one of its components is, even the ones we've never heard of.

The gist: the Meltdown flaw affects almost all Intel processors back to 1995. The Spectre bug affects billions of processors: all CPUs from Intel, AMD, and ARM. The workaround - not really solution - is operating system patches that will replace the compromised hardware functions and therefore slow performance somewhat. The longer-term solution to the latter is to redesign processors, though since Perlroth's posting CERT appears to have changed its recommended solution from replacing the processors to patching the operating system. Because the problem is in the underlying hardware, no operating system escapes the consequences.

More entertaining is Thomas Claburn's SorryWatch-style analysis of Intel's press release. If you have a few minutes, you may like to use Claburn's translation to play Matt Blaze's Security Problem Excuse Bingo. It's also worth citing Blaze's comment on Meltdown and Spectre: "Meltdown and Spectre are serious problems. I look forward to seeing the innovative ways in which their impact will be both wildly exaggerated and foolishly dismissed over the coming weeks."

Doubtless there's a lot more to learn about the flaws and their consequences. Desktop operating systems and iPhones/iPads will clearly have fixes. What's less clear is what will happen with operating systems with less active update teams, such as older versions of Android, which are rarely, if ever, updated. Other older software matters, too: as we saw last year with Wannacry, there are a noticeable number of people still running Windows XP. For some of those, upgrading isn't really possible because the software that runs on those machines is itself irreplaceable. Those machines should not be connected to the internet, but as we wrote in 2014 when Microsoft discontinued all support for XP, software is forever. We must assume that there are systems lurking in all sorts of places that will never be updated, though they may migrate if and when their underlying hardware eventually fails. How long does it take to replace 20 years of processors?

The upshot is that we must take the view that we should have taken all along: nothing is completely secure. The Purdue professor Gene Spafford summed this up years ago: "The only truly secure system is one that is powered off, cast in a block of concrete, and sealed in a lead-lined room with armed guards - and even then I have my doubts."

Since a computer secured in line with Spafford's specifications is unusable, clearly we must make compromises.

One of the problems that WannaCry made clear is the terrible advice that emerges in the wake of a breach. Everyone's first go-to piece of advice is usually to change your password. But this would have made no difference in the WannaCry case, where the problem was aging systems exploited by targeted malware. After the system's been replaced or remediated (Microsoft did release, exceptionally, a security patch for XP on that occasion), *then*, sure, change your password. It would make no difference in this case, either, since to date in-the-wild exploitation of these bugs is not known, and Spectre in particular requires a high level of expertise and resources to exploit.

The most effective course would be to replace our processors. This is obviously not so easy, since at least one of the flaws affects all Intel processors back to 1995 (so...Windows 95, anyone?) and many ARM, AMD, and even some Qualcomm processors as well. The problem appears to have been, as Thomas Claburn explains in an update, that processor designers made incorrect assumptions about the safety of the "speculative execution" aspect of the design. They thought it was adequately fenced off; Google's researchers beg to differ. Significantly, they appear not to have revisited their original decision since 1995, despite the clearly escalating threats and attacks since then.

The result of the whole exercise is to expose two truths that have been the subject of much denial. The first is to provide yet more evidence that security is a market failure. Everything down to the basics needs to be reviewed and rethought for an era in which we must design everything on the assumption that an adversary is waiting to attack it. The second is that we, the users, need to be careful about the assumptions we make about the systems we use. The most dangerous situation is to believe you are secure, because it's then that you unknowingly take the biggest risks.

Illustrations: Satan inside (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 20, 2017

Risk profile

So here is this week's killer question: "Are you aware of any large-scale systems employing this protection?"

It's a killer question because this was the answer: "No."

Rewind. For as long as I can remember - and I first wrote about biometrics in 1999 - biometrics vendors have claimed that these systems are designed to be privacy-protecting. The reason, as I was told for a Guardian article on fingerprinting in schools in 2006, is that these systems don't store complete biometric images. Instead, when your biometric is captured - whether that's a fingerprint to pay for a school lunch or an iris scan for some other purpose - the system samples points in the resulting image and deploys some fancy mathematics to turn them into a "template", a numerical value that is what the system stores. The key claim: there is no way to reverse-engineer the template to derive the original image because the template doesn't contain enough information.

The claim sounds plausible to anyone used to one-way cryptographic hashes, or who is used to thinking about compressed photographs and music files, where no amount of effort can restore Humpty-Dumpty's missing data. And yet.
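The one-way intuition can be seen in a toy sketch: any lossy derivation maps many distinct inputs onto the same stored value, so the stored value alone cannot single out the original. Everything below - the one-byte digest, the sample inputs - is invented for illustration; no real biometric system works this crudely.

```python
import hashlib
from collections import defaultdict

# Toy illustration only: derive a deliberately lossy one-byte "template"
# from each input and watch distinct inputs collide. The digest size and
# inputs are invented assumptions, not any vendor's actual scheme.

def template(data: bytes, size: int = 1) -> bytes:
    """Keep only `size` bytes of a 32-byte hash - most information is discarded."""
    return hashlib.sha256(data).digest()[:size]

seen = defaultdict(list)
for i in range(257):  # 257 inputs, but only 256 possible one-byte templates
    seen[template(b"sample-%d" % i)].append(i)

# By the pigeonhole principle, at least two inputs must share a template,
# so the template alone cannot identify the original input.
collided = any(len(inputs) > 1 for inputs in seen.values())
print(collided)  # True
```

This is the sense in which the vendors' claim sounded plausible: the template genuinely doesn't contain enough information to reconstruct the image. As the rest of this story shows, that turns out not to be the right threat model.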

Even at the time, some of the activists I interviewed were dubious about the claim. Even if it was true in 1999, or 2003, or 2006, they argued, it might not be true in the future. Plus, in the meantime these systems were teaching kids that it was OK to use these irreplaceable iris scans, fingerprints, and so on for essentially trivial purposes. What would the consequences be someday in the future when biometrics might become a crucial element of secure identification?

Well, here we are in 2017, and biometrics are more widely used, even though not as widely deployed as vendors might have hoped in 1999. (There are good reasons for this, as James L. Wayman explained in a 2003 interview for New Scientist: deploying these systems is much harder than anyone ever thinks. The line that has always stuck in my mind: "No one ever has what you think they're going to have where you think they're going to have it." His example was the early fingerprint system he designed that was flummoxed on the first day by the completely unforeseen circumstance of a guy who had three thumbs.)

So-called "presentation attacks" - for example, using high-resolution photographs to devise a spoof dummy finger - have been widely discussed already. For this reason, such applications have a "liveness" test. But it turns out there are other attacks to be worried about.

This week, at a European Association for Biometrics symposium on privacy, surveillance, and biometrics, I discovered that Andrew Clymer, who said in 2003 that, "Anybody who says it is secure and can't be compromised is silly", was precisely right. As Marta Gomez-Barrero explained, in 2013 she published a successful attack on these templates that she calls "hill climbing". Essentially, this is an iterative attack. Say you have a database of stored templates for an identification system; a newly presented image is compared with the database looking for a match. In a hill-climbing attack, you generate synthetic templates, run them through the comparator, and apply a modification scheme to the synthetic templates until you get a match. The reconstructions Gomez-Barrero showed aren't always perfect - the human eye may see distortions - but to the biometrics system it's the same face. You can fix the human problem by adding some noise to the image. The same is true of iris scans (PDF), hand shapes, and so on.
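The shape of the attack can be sketched in a few lines. Everything here is an invented toy - templates as vectors of small integers, a comparator that leaks a distance score (as many real systems do), arbitrary thresholds; Gomez-Barrero's published attacks are far more refined.

```python
import random

# Toy model of a hill-climbing attack with invented parameters.
TEMPLATE_LEN = 16
THRESHOLD = 4  # distance below which the system declares a match

def comparator(stored, probe):
    """All the attacker sees: a similarity score from the matching system."""
    return sum(abs(x - y) for x, y in zip(stored, probe))

def hill_climb(stored, max_iters=100_000):
    rng = random.Random(42)
    probe = [rng.randint(0, 9) for _ in range(TEMPLATE_LEN)]
    best = comparator(stored, probe)
    for _ in range(max_iters):
        if best <= THRESHOLD:
            break
        candidate = probe[:]
        candidate[rng.randrange(TEMPLATE_LEN)] = rng.randint(0, 9)  # small random tweak
        score = comparator(stored, candidate)
        if score < best:  # keep only tweaks that climb toward a match
            probe, best = candidate, score
    return probe, best

rng = random.Random(7)
secret = [rng.randint(0, 9) for _ in range(TEMPLATE_LEN)]  # the stored template
synthetic, score = hill_climb(secret)
print(score <= THRESHOLD)  # the synthetic template now passes as a match
```

The point of the sketch: the attacker never reverses the template and never needs the original image; a leaked comparison score plus iteration is enough to manufacture an input the system accepts.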

Granted, someone wishing to conduct this attack has to have access to that database, but given the near-daily headlines about breaches, this is not a comforting thought.

Slightly better is the news that template protection techniques do exist; in fact, they've been known for 10 to 15 years and are the subject of ISO standard 24745. Simply encrypting the data doesn't help as much as you might think, because every attempted match requires the template to be decrypted. Unprotected templates have two further weaknesses. First, just like reused passwords, biometric templates are vulnerable to cross-matching, which allows an attacker to extract more information. Second, if the data is available on the internet - this is especially applicable to face-based systems - an attacker can test for template matches.

It was at this point that someone asked the question we began with: are these protection schemes being used in large-scale systems? And...Gomez-Barrero said: no. Assuming she's right, this is - again - one of those situations where no matter how carefully we behave we are at the mercy of decisions outside our control that very few of us even know are out there waiting to cause trouble. It is market failure in its purest form, right up there with Equifax, which none of us chooses to use but which still inflicted intimate exposure on hundreds of millions of people; and the 7547 bug, which showed you can do everything right in buying network equipment and still get hammered.

It makes you wonder: when will people learn that you can't avoid problems by denying there's any risk? Biometric systems are typically intended to handle the data of millions of people in sensitive applications such as financial transactions and smartphone authentication. Wouldn't you think security would be on the list of necessary features?

Illustrations: A 1930s FBI examiner at work (via FBI); James Wayman; Marta Gomez-Barrero.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 6, 2017

Send lawyers, guns, and money

There are many reasons why, Bryan Schatz finds at Mother Jones, people around Las Vegas disagree with President Donald Trump's claim that now is not the time to talk about gun control. The National Rifle Association probably agrees; in the past, it's been criticized for saving its public statements for proposed legislation and staying out of the post-shooting - you should excuse the expression - crossfire.

Gun control doesn't usually fit into net.wars' run of computers, freedom, and privacy subjects. There are two reasons for making an exception now. First, the discovery of the Firearm Owners Protection Act, which prohibits the creation of *any* searchable registry of firearms in the US. Second, the rhetoric surrounding gun control debates.

To take the second first: in a civil conversation on the subject, it was striking that the arguments we typically use to protest knee-jerk demands for ramped-up surveillance legislation after atrocious incidents are the same ones used to oppose gun control legislation. Namely: don't pass, out of fear, bad laws that do not make us safer; tackle underlying causes such as mental illness and inequality; put more resources into law enforcement/intelligence. In the 1990s crypto wars, John Perry Barlow deliberately and consciously adapted the NRA's slogan to create "You can have my encryption algorithm...when you pry my cold, dead fingers from my private key".

Using the same rhetoric doesn't mean both are right or both are wrong: we must decide on evidence. Public debates over surveillance do typically feature evidence about the mathematical underpinnings of how encryption works, day-to-day realities of intelligence work, and so on. The problem with gun control debates in the US is that evidence from other countries is automatically written off as irrelevant, and, much as with copyright reform, lobbying money hugely distorts the debate.

The first issue touches directly on privacy. Soon after the news of the Las Vegas shooting broke, a friend posted a link to the 2016 GQ article Inside the Federal Bureau of Way Too Many Guns. In it, the writer Jeanne Marie Laskas pays a comprehensive visit to Martinsburg, West Virginia, where she finds a "low, flat, boring building" with a load of shipping containers kept out in the parking lot so the building's floors don't collapse under the weight of the millions of gun purchase records they contain. These are copies of federal form 4473, which is filled out at the time of gun purchases and retained by the retailer. If a retailer goes out of business, the forms it holds are shipped to the tracing center. When a law enforcement officer anywhere in the US finds a gun at a crime scene, this is where they call to trace it. The kicker: all those records are eventually photographed and stored on microfilm. Miles and miles of microfilm. Charlie Houser, the tracing center's head, has put enormous effort into making his human-paper-microfilm system as effective and efficient as possible; it's an amazing story of what humans can do.

Why microfilm? Gun control began in 1968, five years after the shooting of President John F. Kennedy. Even at that moment of national grief and outrage, the only way President Lyndon B. Johnson could get the Gun Control Act passed was to agree not to include a clause he wanted that would have set up a national gun registry to enable speedy tracing. In 1986, the NRA successfully lobbied for the Firearm Owners Protection Act, which prohibits the creation of *any* registry of firearms. What you register can be found and confiscated, the reasoning apparently goes. So, while all the rest of us engaged in every other activity - getting health care, buying homes, opening bank accounts, seeking employment - were being captured, collected, profiled, and targeted, the one group whose activities are made as difficult to trace as possible is...gun owners?

It is to boggle.

That said, the reasons why the American gun problem will likely never be solved include the already noted effect of lobbying money and, as E.J. Dionne Jr., Norman J. Ornstein and Thomas E. Mann discuss in the Washington Post, the non-majoritarian democracy the US has become. Even though majorities in both major parties favor universal background checks and most Americans want greater gun control, Congress "vastly overrepresents the interests of rural areas and small states". In the Senate that's by design to ensure nationwide balance: the smallest and most thinly populated states have the same number of senators - two - as the biggest, most populous states. In the House, the story is more about gerrymandering and redistricting. Our institutions, they conclude, are not adapting to rising urbanization: 63% in 1960, 84% in 2010.

Besides those reasons, the identification of guns with personal safety endures, chiefly in states where at one time it was true.

A month and a half ago, one of my many conversations around Nashville went like this, after an opening exchange of mundane pleasantries:

"I live in London."

"Oh, I wouldn't want to live there."

"Too much terrorism." (When you recount this in London, people laugh.)

"If you live there, it actually feels like a very safe city." Then, deliberately provocative, "For one thing, there are practically no guns."

"Oh, that would make me feel *un*safe."

Illustrations: Las Vegas strip, featuring the Mandalay Bay; an ATF inspector checks up on a gun retailer.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 22, 2017


"Fake news is not some unfortunate consequence," the writer and policy consultant Maria Farrell commented at the UK Internet Governance Forum last week. "It is the system working as it should in the attention economy."

The occasion was a panel featuring Simon Milner, Facebook's UK policy director; Carl Miller, from the Demos think tank; James Cook, Business Insider UK's technology editor; the MP and shadow minister for industrial strategy Chi Onwurah (Labour - Newcastle upon Tyne Central); and, as moderator, Nominet chair Mark Wood.

They all agreed to disagree on the definition of "fake news". Cook largely saw it as a journalism problem: fact checkers and sub-editors are vanishing. Milner said Facebook has a four-pronged strategy: collaborate with others to find industry solutions, as in the Facebook Journalism Project; disrupt the economic flow - that is, target clickbait designed to take people *off* Facebook to sites full of ads (irony alert); take down fake accounts (30,000 before the French election); try to build new products that improve information diversity and educate users. Miller wants digital literacy added to the national curriculum: "We have to change the skills we teach people. Journalists used to make those decisions on our behalf, but they don't any more." Onwurah, a chartered electrical engineer who has worked for Ofcom, focused on consequences: she felt the technology giants could do more to combat the problem, and expressed intelligent concern about algorithmic "black boxes" that determine what we see.

Boil this down. Onwurah is talking technology and oversight. Milner also wants technology: solutions should be content-neutral but identify and eliminate bad behavior at the scale of 2 billion users, who don't want to read terms and conditions or be repeatedly asked for ratings. Miller - "It undermines our democracy" - wants governments to take greater responsibility: "it's a race between politics and technology". Cook wants better journalism, but, "It's terrifying, as someone in technology, to think of government seeing inside the Facebook algorithm." Because other governments will want their privilege, too; Apple is censoring its app store in order to continue selling iPhones in China.

It was Farrell's comment, though, that sparked the realization that fake news cannot be solved by thinking of it as a problem in only one of the fields of journalism, international relations, economic inequality, market forces, or technology. It is all those things and more, and we will not make any progress until we take an approach that combines all those disciplines.

Fake news is the democratization of institutional practices that have become structural over many decades. Much of today's fake news uses tactics originally developed by publishers to sell papers. Even journalists often fail to ask the right questions, sometimes because of editorial agendas, sometimes because the threat of lost access to top people inhibits what they ask.

Everyone needs the traditional journalist's mindset of asking, "What's the source?" and "What's their agenda?" before deciding on a story's truth. But there's no future in blaming the people who share these stories (with or without believing them) or calling them stupid. Today we're talking about absurdist junk designed to make people share it; tomorrow's equivalent may be crafted for greater credibility and hence be far more dangerous. Miller's concern for the future of democracy is right. It's not just that these stories are used to poison the information supply and sow division just before an election; the incessant stream of everyday crap causes people to disengage because they trust nothing.

In 1987 I founded The Skeptic to counter what the late, great Simon Hoggart called paranormal beliefs' "background noise, interfering with the truth". Of course it matters that a lie on the internet can nearly cause a shoot-out at a pizza restaurant. But we can't solve the problem by throwing technology, fact-checking, or government fiat at it. Today's generation is growing up in a world where everybody cheats and then lies about it: sports stars, for a start.

What we're really talking about here is where to draw the line between acceptable fakery ("spin") and unacceptable fakery. Astrology columns get a pass. Apparently so do professional PR people, as in the 1995 book Toxic Sludge Is Good for You: Lies, Damn Lies, and the Public Relations Industry, by John Stauber and Sheldon Rampton (made into a TV documentary in 2002). In mainstream discussions we don't hear that Big Tobacco's decades-long denial about its own research or Exxon Mobil's approach to climate change undermine democracy. If these are acceptable, it seems harder to condemn the Macedonian teen seeking ad revenue.

This is the same imbalance as prosecuting lone, young, often neuro-atypical computer hackers while the really pressing issues are attacks by criminals and organized gangs.

That analogy is the point: fake news and cybersecurity are sibling problems. Both are tennis, not figure skating; that is, at all times there is an adversary actively trying to frustrate you. "Fixing the users" through training is only one piece of either puzzle.

Treating cybersecurity as a purely technical problem failed. Today the field crosses many disciplines: computer science, philosophy, psychology, law, international relations, economics. So does the VOX-Pol project to study online extremism. This is what we need for fake news.

Illustrations: "The fin de siecle newspaper proprietor", by Frederick Burr Opper, 1894 (from the Library of Congress via Wikipedia); Chi Onwurah; Maria Farrell.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 15, 2017


The Equifax announcement this week is peculiarly terrible. It's not just that 143 million Americans and uncertain numbers of Canadians and Britons are made vulnerable to decades of identity fraud (social security numbers can't - yet - be replaced with new ones). Nor is it the unusually poor apology issued by the company or its ham-fisted technical follow-up (see also Argentina). No, the capper is that no one who is in Equifax's database has had any option about being in it in the first place. "We are its victims, not its customers," a number of people observed on Twitter this week.

Long before Google, Amazon, Facebook, and Apple became GAFA, Equifax and its fellow credit bureaus viewed consumers as the product. Citizens have no choice about this; our reward is access to financial services, which we *pay* for. Americans' credit reports are routinely checked on every application for credit, bank accounts, or even employment. The impact was already visibly profound enough in 1970, when Congress passed the Fair Credit Reporting Act. In granting Americans the right to inspect their credit reports and request corrections, it is the only US legislation offering rights similar to those granted to Europeans by data protection laws. The only people who can avoid the tentacled reach of Equifax are those who buy their homes and cars with cash, operate no bank accounts or credit cards, pay cash for medical care and carry no insurance, and have no need for formal employment or government benefits.

Based on this breach and prior examples, investigative security journalist Brian Krebs calls the credit bureaus "terrible stewards of very sensitive data".

It was with this in the background that I attended a symposium on reforming Britain's Computer Misuse Act run by the Criminal Law Reform Now Network. In most hacking cases you don't want to blame the victim, but one might make an exception for Equifax. Since the discussion allowed for such flights of fancy, I queried whether a reformed act should include something like "contributory negligence" to capture such situations. "That's data protection laws," someone said (the between-presentation discussions were under the Chatham House Rule). True. Later, however, merging that thought with other comments about the fact that the public interest in secure devices is being met neither by legislators nor by the market inspired Duncan Campbell to suggest that perhaps what we need is a "computer security act" that embraces everyone - individuals and companies alike - who needs protection. Companies like Equifax, with whom we have no direct relationship but whose data management deeply affects our lives, he suggested, should arguably be subject to a duty of care. Another approach several of those at the meeting favored was introducing a public interest defense for computer misuse, much as the Defamation Act has for libel. Such a defense could reasonably include things like security research, journalism, and whistleblowing.

The law we have is of course nothing like this.

As of 2013, according to the answer to a Parliamentary question, there had been 339 prosecutions and 262 convictions under the CMA. A disproportionate number of those arrested under the act are young - average age, 17. There is ongoing work on identifying ways to turn young computer whizzes toward security work and societal benefit rather than cracking and computer crime. In the case of "WannaCry hero" Marcus Hutchins, arrested by the FBI after Defcon, Krebs did some digging and found it likely that Hutchins was connected to writing malware at one time but had tried to move toward more socially useful work. Putting smart young people with no prior criminal record in prison with criminals and ruining their employment prospects isn't a good deal for either them or us.

Yet it's not really surprising that this is who the CMA is capturing, since in 1990 that was the threat: young, obsessive, (predominantly) guys exploring the Net and cracking into things. Hardly any of them sought to profit financially from their exploits beyond getting free airtime so they could stay online longer - not even Kevin Mitnick, the New York Times's pick for "archetypal dark side hacker", now a security consultant and book author. In the US, the Operation Sundevil crackdown on this type of hacker spurred the formation of the Electronic Frontier Foundation. "I've begun to wonder if we wouldn't also regard spelunkers as desperate criminals if AT&T owned all the caves," John Perry Barlow wrote at the time.

Robert Schifreen and Stephen Gold, who were busted for hacking into Prince Philip's Prestel mailbox, established the need for a new law. The resulting CMA was not written for a world in which everyone is connected, street lights have their own network nodes, and Crime as a Service relies on a global marketplace of highly specialized subcontractors. Lawmakers try to encode principles, not specifics, but anticipating such profound change is hard. Plus, as a practical matter, it is feasible to capture a teenaged kid traceable to (predominantly) his parents' basement, but not the kingpin of a worldwide network who could be anywhere. And so CLRNN's question: what should a new law look like? To be continued...

Illustrations: Equifax CEO Rick Smith; Robert Schifreen.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 4, 2017

Imaginary creatures

I learned something new this week: I may not be a real person.

"Real people often prefer ease of use and a multitude of features to perfect, unbreakable security."

So spake the UK's Home Secretary Amber Rudd on August 1, and of course what she was really saying was, We need a back door in all encryption so we can read anything we deem necessary, and anyone who opposes this perfectly sensible idea is part of a highly vocal geek minority who can safely be ignored.

The way I know I'm not a real person is that around the time she was saying that I was emailing my accountant a strongly-worded request that they adopt some form of secured communications for emailing tax returns and accounts back and forth. To my astonishment, their IT people said they could do PGP. Oh, frabjous day. Is PGP-encrypted email more of a pain in the ass than ordinary email? You betcha. Conclusion: I am an imaginary number.

According to Cory Doctorow's potted history at BoingBoing of this sort of pronouncement, Rudd is at a typical first stage. At some point in the future, Doctorow predicts, she will admit that people want encryption but say they shouldn't have it nonetheless.

I've been trying to think of analogies that make clear how absurd her claim is. Try food safety: "Real people often prefer ease of use and a multitude of features to perfect, healthy food." Well, that's actually true. People grab fast food, they buy pre-prepared meals, and we all know why: a lot of people lack the time, expertise, kitchen facilities, sometimes even basic access to good-quality ingredients to do their own cooking, which overall would save them money and probably keep them in better health (if they do it right). But they can choose this convenience in part because they know - or hope - that food safety regulations and inspections mean the convenient, feature-rich food they choose is safe to eat. A government could take the view that part of its role is to ensure that when companies promise their encryption is robust it actually is.

But the real issue is that it's an utterly false tradeoff. Why shouldn't "real people" want both? Why shouldn't we *have* both? Why should anyone have to justify why they want end-to-end encryption? "I'm sorry, officer. I had to lock my car because I was afraid someone might steal it." Does anyone query that logic on the basis that the policeman might want to search the car?

The second-phase argument (the first being in the 1990s) about planting back doors has been recurring for so long now that it's become like a chronic illness with erupting cycles. In response, so much good stuff has been written to point out the technical problems with that proposal that there isn't really much more to say about it. Go forth and read that link.

There is a much more interesting question we should be thinking about. The 1990s public debate about back doors in the form of key escrow ended with the passage in the UK of the Regulation of Investigatory Powers Act (2000) and in the US with the gradual loosening of export controls. We all thought that common sense and ecommerce had prevailed. Instead, we now know, the security services ignored these public results and went their own way: they secretly spent a decade working to undermine security standards, installed vulnerabilities, and generally borked public trust in the infrastructure.

So: it seems reasonable to assume that the present we-must-have-back-doors noise is merely Plan A. What's Plan B? What other approaches would you be planning if you ran the NSA or GCHQ? I'm not enough of a technical expert to guess at what clever solutions they might find, but historically a lot of access has been gained by leveraging relationships with appropriate companies such as BT (in the UK) and AT&T (in the US). Today's global tech companies have so far seemed to be more resistant to this approach than a prior generation's national companies were.

This week's news that Apple began removing censorship-bypassing VPNs from its app store in China probably doesn't contradict this. The company says it complies with national laws; in the FBI case it fought an order in court. However, Britain's national laws unfortunately include the Investigatory Powers Act (2016), which makes it legal for security services to hack everyone's computers ("bulk equipment interference" by any other name...) and has many other powers that have barely been invoked publicly yet. A government that's rational on this sort of topic might point this out, and say, let's give these new powers a chance to bed down for a year or two and *then* see what additional access we might need.

Instead, we seem doomed to keep having this same conversation on an endless loop. Those of us wanting to argue for the importance of securing national infrastructure, particularly as many more billions of points of vulnerability are added to it, can't afford to exit the argument. But, like decoding a magician's trick, we should remember to look in all those other directions. That may be where the main action is, for those of us who aren't real enough to count.

Illustrations: The Virgin Mary punching the devil in the face (book of hours ('The De Brailes Hours'), Oxford ca. 1240 (BL, Add 49999, fol. 40v), via Discarding Images); Amber Rudd; Tim Cook (Valery Marchive).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 14, 2012

Defending Facebook

The talks at the monthly Defcon London are often too abstruse for the "geek adjacent". Not so this week, when Chris Palow, Facebook's engineering manager, site integrity London, outlined the site's efforts to defend itself against attackers.

This is no small thing: the law of truly large numbers means that a tiny percentage of a billion users is still a lot of abusers. And Palow has had to scale up very quickly: when he joined five years ago, the company had 30 million users. Today, that's just a little more than a third of the site's *fake* accounts, based on the 83 million the company claimed in its last quarterly SEC filing.

As became rapidly apparent, there are fakes and there are fakes. Most of those 83 million are relatively benign: accounts for people's dogs, public/private variants, duplicate accounts created when a password is lost, and so on. The rest, about 1.5 percent - which is still 14 million - are the troublemakers, spreading spam and malicious links such as the Koobface worm. Eliminating these is important; there is little more damaging to a social network than rampant malware that leverages the social graph to put users in danger in a space they use because they believe it is safe.

This is not an entirely new problem, but none of the prior solutions are really available to Facebook. Prehistoric commercial social environments like CompuServe and AOL, because people paid to use them, could check credit cards. (Yes, the irony was that in the window between sign-up and credit card verification lay a golden opportunity for abusers to harass the rest of the Net from throwaway email accounts.) Usenet and other free services were defenseless against malicious posters, and despite volunteer community efforts most of the audience fled as a result. As a free service whose business model requires scale, Facebook can't require a credit card or heavyweight authentication, and its ad-supported business model means it can't afford to lose any of its audience, so it's damned in all directions. It's also safe to say that the online criminal underground is hugely more developed and expert now.

Fake accounts are the entry points for all sorts of attacks; besides the usual issues of phishing attacks and botnet recruitment, the more fun exploit is using those links to vacuum up people's passwords in order to exploit them on all the other sites across the Web where those same people have used those same passwords.

So a lot of Palow's efforts are directed at making sure those accounts don't get opened in the first place. Detection is a key element; among other techniques is a lightweight captcha-style request to identify a picture.

"It's still easy for one user to have three or four accounts," he said, "but we can catch anyone registering 1 million fakes. Most attacks need scale."

For the small-scale 16-year-old in the bedroom, he joked that the most effective remedy is found in the site's social graph: their moms are on Facebook. In a more complicated case from the Philippines, in which cheap human labor was used to open 500 accounts a day in order to spam links selling counterfeit athletic shoes, the miscreants talked about their efforts *on* Facebook.

Another key is preventing, or finding and fixing, bugs in the code that runs the site. Among the strategies Palow listed for this, which included general improvements to coding practice such as better testing, regular reviews, and static and dynamic analysis, is befriending the community of people who find and report bugs.

Once accounts have been created, spotting the spammers involves looking for patterns that sound very much like the ones that characterize Usenet spam: are the same URLs being posted across a range of accounts, do those accounts show other signs of malware infection, are they posted excessively on a single channel, and so on.
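Palow didn't describe the implementation, but the cross-account pattern he sketches can be illustrated with a toy heuristic (the function name, data shape, and threshold here are illustrative assumptions, not Facebook's actual code):

```python
from collections import defaultdict

def flag_spam_urls(posts, account_threshold=3):
    """Flag URLs posted by suspiciously many distinct accounts.

    posts: iterable of (account_id, url) pairs.
    Returns the set of URLs shared by at least `account_threshold`
    distinct accounts -- a crude stand-in for the cross-account
    checks described above.
    """
    accounts_per_url = defaultdict(set)
    for account_id, url in posts:
        accounts_per_url[url].add(account_id)
    return {url for url, accounts in accounts_per_url.items()
            if len(accounts) >= account_threshold}
```

A real system would combine many such signals (posting rate, infection markers, channel concentration) rather than relying on any single one, since spammers adapt to whatever threshold is set.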

Other more complex historical attacks include the Tunisian government's effort to steal passwords. Palow also didn't have much nice to say about ad-replacement schemes such as the now-defunct Phorm.

The current hot issue is what Palow calls "toolbars" and I would call browser extensions. Many of these perform valuable functions from the user's point of view, but the price, which most users don't see until it's too late, is that they operate across all open windows, from your insecure reading of the tennis headlines to your banking session. This particular issue is beginning to be locked down by browser vendors, who are implementing content security policies, essentially the equivalent of the Android and iOS curated app stores. As this work is progressing at different rates, in some cases Facebook can leverage the browsers' varying blocking patterns to identify malware.
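For context, a content security policy is delivered as an HTTP response header in which a site whitelists the origins its pages may load scripts and other resources from. A minimal illustrative policy (the hostname is a placeholder, not Facebook's actual policy) looks like this:

```
Content-Security-Policy: default-src 'self'; script-src 'self' https://static.example.com; object-src 'none'
```

A browser enforcing this header refuses to load script from any origin not listed, which helps block injected third-party code; browsers that enforce the policy at different levels will block different things, producing the varying patterns mentioned above.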

More complex responses involve partnerships with software and anti-virus vendors. There will be more of this: the latest trend is stealing tokens on Facebook (such as the iPhone Facebook app's token) to enable spamming off-site.

A fellow audience member commented that sometimes it's more effective long-term to let the miscreants ride for a month while you formulate a really heavy response and then drop the anvil. Perhaps: but this is the law of truly large numbers again. When you have a billion users the problem is that during that month a really shocking number of people can be damaged. Palow's life, therefore, is likely to continue to be patch, patch, patch.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series.

October 26, 2012

Lie to me

I thought her head was going to explode.

The discussion that kicked off this week's Parliament and Internet conference revolved around cybersecurity and trust online, harmlessly at first. Then Helen Goodman (Labour - Bishop Auckland), the shadow minister for Culture, Media, and Sport, raised a question: what was Nominet doing to get rid of anonymity online? Simon McCalla, Nominet's CTO, had some answers: primarily, they're constantly trying to improve the accuracy and reliability of the Whois database, but it's only a very small criminal element that engages in false domain name registration. Like that.

A few minutes later, Andy Smith, PSTSA Security Manager, Cabinet Office, in answer to a question about why the government was joining the Open Identity Exchange (as part of the Identity Assurance Programme) advised those assembled to protect themselves online by lying. Don't give your real name, date of birth, and other information that can be used to perpetrate identity theft.

Like I say, bang! Goodman was horrified. I was sitting near enough to feel the splat.

It's the way of now that the comment was immediately tweeted, picked up by the BBC reporter in the room, published as a story, retweeted, Slashdotted, tweeted some more, and finally boomeranged back to be recontextualized from the podium. Given a reporter with a cellphone and multiple daily newspaper editions, George Osborne's contretemps in first class would still have reached the public eye the same day 15 years ago. This bit of flashback couldn't have happened even five years ago.

For the record, I think it's clear that Smith gave good security advice, and that the headline - the greater source of concern - ought to be that Goodman, an MP apparently frequently contacted by constituents complaining about anonymous cyberbullying, doesn't quite grasp that this is a nuanced issue with multiple trade-offs. (Or, possibly, how often the cyberbully is actually someone you know.) Dates of birth, mother's maiden names, the names of first pets...these are all things that real-life friends and old schoolmates may well know, and lying about the answers is a perfectly sensible precaution given that there is often no choice about giving the real answers for more sensitive purposes, like interacting with government, medical, and financial services. It is not illegal to fake or refuse to disclose these things, and while Facebook has a real names policy it's enforced with so little rigor that it has a roster of fake accounts the size of Egypt.

Although: the Earl of Erroll might be a bit busy today changing the fake birth date - April 1, 1900 - he cheerfully told us and Radio 4 he uses throughout; one can only hope that he doesn't use his real mother's maiden name, since that, as Tom Scott pointed out later, is in Erroll's Wikipedia entry. Since my real birth date is also in *my* Wikipedia entry and who knows what I've said where, I routinely give false answers to standardized security questions. What's the alternative? Giving potentially thousands of people the answers that will unlock your bank account? On social networking sites it's not enough for you to be taciturn; your birth date may be easily outed by well-meaning friends writing on your wall. None of this is - or should be - illegal.

It turns out that it's still pretty difficult to explain to some people how the Internet works. Nominet can work as hard as it likes on verifying its own Whois database, but it is powerless over the many UK citizens and businesses that choose to register under .com, .net, and other gTLDs and country codes. Making a law to enjoin British residents and companies from registering domains outside of .uk...well, how on earth would you enforce that? And then there's the whole problem of trying to check, say, registrations in Chinese characters. Computers can't read Chinese? Well, no, not really, no matter what Google Translate might lead you to believe.

Anonymity on the Net has been under fire for a long, long time. Twenty years ago, the main source of complaints was AOL, whose million-CD marketing program made it easy for anyone to get a throwaway email address for 24 hours or so until the system locked you out for providing an invalid credit card number. Then came Hotmail, and you didn't even need that. Then, as now, there are good and bad reasons for being anonymous. For every nasty troll who uses the cloak to hide there are many whistleblowers and people in private pain who need its protection.

Smith's advice only sounds outrageous if, like Goodman, you think there's a valid comparison between Nominet's registration activity and the function of the Driver and Vehicle Licensing Agency (and if you think the domain name system is the answer to ensuring a traceable online identity). And therein lies the theme of the day: the 200-odd Parliamentarians, consultants, analysts, government, and company representatives assembled repeatedly wanted incompatible things in conflicting ways. The morning speakers wanted better security, stronger online identities, and the resources to fight cybercrime; the afternoon folks were all into education and getting kids to hack and explore so they learn to build things and understand things and maybe have jobs someday, to their own benefit and that of the rest of the country. Paul Bernal has a good summary.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 12, 2012

My identity, my self

Last week, the media were full of the story that the UK government was going to start accepting Facebook logons for authentication. This week, in several presentations at the RSA Conference, representatives of the Government Digital Service begged to differ: the list of companies that have applied to become identity providers (IDPs) will be published at the end of this month and until then they are not confirming the presence or absence of any particular company. According to several of the spokesfolks manning the stall and giving presentations, the press just assumed that when they saw social media companies among the categories of organization that might potentially want to offer identity authentication, that meant Facebook. We won't actually know for another few weeks who has actually applied.

So I can mercifully skip the rant that hooking a Facebook account to the authentication system you use for government services is a horrible idea in both directions. What they're actually saying is, what if you could choose among identification services offered by the Post Office, your bank, your mobile network operator (especially for the younger generation), your ISP, and personal data store services like Mydex or small, local businesses whose owners are known to you personally? All of these sounded possible based on this week's presentations.

The key, of course, is what standards the government chooses to create for IDPs and which organizations decide they can meet those criteria and offer a service. Those are the details the devil is in: during the 1990s battles about deploying strong cryptography, the government wanted copies of everyone's cryptography keys to be held in escrow by a Trusted Third Party. At the time, the frontrunners were banks: the government certainly trusted those, and imagined that we did, too. The strength of the disquiet over that proposal took them by surprise. Then came 2008. Those discussions are still relevant, however; someone with a long memory raised the specter of Part I of the Electronic Communications Act 2000, modified in 2005, as relevant here.

It was this historical memory that made some of us so dubious in 2010, when the US came out with proposals rather similar to the UK's present ones, the National Strategy for Trusted Identities in Cyberspace (NSTIC). Ross Anderson saw it as a sort of horror-movie sequel. On Wednesday, however, Jeremy Grant, the senior executive advisor for identity management at the US National Institute for Standards and Technology (NIST), the agency charged with overseeing the development of NSTIC, sounded a lot more reassuring.

Between then and now came both US and UK attempts to establish some form of national ID card. In the US, "Real ID" focused on the state authorities that issue driver's licenses. In the UK, it was the national ID card and accompanying database. In both countries the proposals got howled down. In the UK especially, the combination of an escalating budget, a poor record with large government IT projects, a change of government, and a desperate need to save money killed it in 2010.

Hence the new approach in both countries. From what the GDS representatives - David Rennie (head of proposition at the Cabinet Office), Steven Dunn (lead architect of the Identity Assurance Programme; Twitter: @cuica), Mike Pegman (security architect at the Department for Work and Pensions, expected to be the first user service; Twitter: @mikepegman), and others manning the GDS stall - said, the plan is much more like the structure that privacy advocates and cryptographers have been pushing for 20 years: systems that give users choice about who they trust to authenticate them for a given role and that share no more data than necessary. The notion that this might actually happen is shocking - but welcome.

None of which means we shouldn't be asking questions. We need to understand clearly the various envisioned levels of authentication. In practice, will those asking for identity assurance ask for the minimum they need or always go for the maximum they could get? For example, a bar only needs relatively low-level assurance that you are old enough to drink; but will bars prefer to ask for full identification? What will be the costs; who pays them and under what circumstances?

Especially, we need to know the detail of the standards organizations must meet to be accepted as IDPs - in particular, what kinds of organization those standards exclude. The GDS as presently constituted - composed, as William Heath commented last year, of all the smart, digitally experienced people you *would* hire to reinvent government services for the digital world if you had the choice - seems to have its heart in the right place. Their proposals as outlined - conforming, as Pegman explained happily, to Kim Cameron's seven laws of identity - pay considerable homage to the idea that no one party should have all the details of any given transaction. But the surveillance-happy type of government that legislates for data retention and CCDP might also at some point think, hey, shouldn't we be requiring IDPs to retain all data (requests for authentication, and so on) so we can inspect it should we deem it necessary? We certainly want to be very careful not to build a system that could support such intimate secret surveillance - the fundamental objection all along to key escrow.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series.

September 28, 2012

Don't take ballots from smiling strangers

Friends, I thought it was spam, and when I explain I think you'll see why.

Some background. Overseas Americans typically vote in the district of their last US residence. In my case, that's a county in the fine state of New York, which for much of my adult life, like clockwork, has sent me paper ballots by postal mail. Since overseas residents do not live in any state, however, they are eligible to vote only in federal elections (the US House of Representatives, the US Senate, and the presidency). I have voted in every election I have ever been eligible for back to 1972.

So last weekend three emails arrived, all beginning, "Dear voter".

The first one, from, subject line "Electronic Ballot Access for Military/Overseas Voters":

An electronic ballot has been made available to you for the GE 11/6/12 (Federal) by your local County Board of Elections. Please access to download your ballot.

Due to recent upgrades, all voters will need to go through the "First Time Access" process on the site in order to gain access to the electronic ballot delivery system.

The second, from "NYS Board of Elections",, subject "Your Ballot is Now Available":

An electronic ballot has been made available to you for the November 6, 2012 General Election. Please access to download your ballot.

Due to recent upgrades, all voters will need to go through the "First Time Access" process on the site in order to gain access to the electronic ballot delivery system.

If you have any questions or experience any problems, please email or visit the NYS Board of Elections' website at for additional information.

The third, from, subject, "Ballot Available Notification":

An electronic ballot has been made available to you for the GE 11/6/12 (Federal) by your local County Board of Elections. Please access to download your ballot.

Due to recent upgrades, all voters will need to go through the "First Time Access" process on the site in order to gain access to the electronic ballot delivery system.

In all my years as a voter, I've never had anything to do with the NY Board of Elections. I had not received any notification from the county board of elections telling me to expect an email, confirming the source, or giving the Web site address I would eventually use. But the county board of elections Web site had no information indicating they were providing electronic ballots for overseas voters. So I ask you: what would you think?

What I thought was that the most likely possibilities were both evil. One was that it was just ordinary, garden-variety spam intended to facilitate a more than usually complete phishing job. That possibility made me very reluctant to check out the URL in the message, even by typing it in. The security expert Rebecca Mercuri, whose PhD dissertation in 2000 was the first to really study the technical difficulties of electronic voting, was more intrepid. She examined the site and noted errors, such as the request for the registrant's Alabama driver's license number on this supposedly New York state registration page. Plus, the amount of information requested for verification is unnerving; I don't know these people, even though checks out as belonging to the Spanish company Scytl, which provides election software to a variety of places, including New York state.

The second possibility was that these messages were the latest outbreak of longstanding deceptive election practices which include disseminating misinformation with the goal of disenfranchising particular groups of voters. All I know about this comes from the 2008 Computers, Freedom, and Privacy conference, a panel organized by EPIC's Lillie Coney. And it's nasty stuff: leaflets, phone calls, mailings, saying stuff like Republicans vote on Tuesday (the real US election day), Democrats on Wednesday. Or that you can't vote if you've ever been found guilty of anything. Or if you voted in an earlier election this year. Or the polling location has changed. Or you'll be deported if you try to vote and you're an illegal immigrant. Typically, these efforts have been targeted at minorities and the poor. But the panel fully expected them to move increasingly online and to target a wider variety of groups, particularly through spam email. So that was my second thought. Is this it? Someone wants me not to vote?

This election year, of course, the efforts to disenfranchise groups of voters are far more sophisticated. Why send out leaflets when you can push for voter identification laws on the basis that voter fraud is a serious problem? This issue is being discussed at length by the New York Times, the Atlantic, and elsewhere. Deceptive email seems amateurish by comparison.

I packed up the first two emails and forwarded them to an email address at my county's board of elections from which I had previously received a mailing. On Monday, there came a prompt response. No, the messages are genuine. Some time that I don't remember I ticked a box saying "OR email", and hence I was being notified that an electronic ballot was available. I wrote back, horrified: paper ballot, by postal mail, please. And get a security expert to review how you've done this. Because seriously: the whole setup is just dangerously wrong. Voting security matters. Think like a bank.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series.

August 10, 2012

Wiped out

There are so many awful things in the story of what happened this week to technology journalist Matt Honan that it's hard to know where to start. The fundamental part - that through not particularly clever social engineering an outsider was able in about 20 minutes to take over and delete his Google account, take over and defame his Twitter account, and then wipe all the data on his iPhone, iPad, and MacBook - would make a fine nightmare, or maybe a movie with some of the surrealistic quality of Martin Scorsese's After Hours (1985). And all, as Honan eventually learned, because the hacker fancied an outing with his three-digit Twitter ID, a threat so unexpected there's no way you'd have included it in your threat model.

Honan's first problem was the thing Suw Charman-Anderson put her finger on for an Infosecurity Magazine piece I did earlier this year: gaining access to a single email address to which every other part of your digital life - ecommerce accounts, financial accounts, social media accounts, password resets all over the Web - is locked puts you in for "a world of hurt". If you only have one email account you use for everything, given access to it, an attacker can simply request password resets all over the place - and then he has access to your accounts and you don't. There are separate problems around the fact that the information required for resets is both the kind of stuff people disclose without thinking on social networks and commonly reused. None of this requires a fancy technology fix, just smarter, broader thinking.

There are simple solutions to the email problem: don't use one email account for everything and, in the case of Gmail, use two-factor authentication. If you don't operate your own server (and maybe even if you do) it may be too complicated to create a separate address for every site you use, but it's easy enough to have a public address you use for correspondence, a private one you use for most of your site accounts, and then maybe a separate, even less well-known one for a few selected sites that you want to protect as much as you can.

Honan's second problem, however, is not so simple to fix unless an incident like this commands the attention of the companies concerned: the interaction of two companies' security practices that on their own probably seemed quite reasonable. The hacker needed just two small bits of information: Honan's address (sourced from the Whois record for his Internet domain name), and the last four digits of a credit card number. The hack to get the latter involved adding a credit card to Honan's account over the phone and then using that card number, in a second phone call, to add a new email address to the account. Finally, you do a password reset to the new email address, access the account, and find the last four digits of the cards on file - which Apple then accepted, along with the billing address, as sufficient evidence of identity to issue a temporary password into Honan's iCloud account.

This is where your eyes widen. Who knew Amazon or Apple did any of those things over the phone? I can see the point of being able to add an email address; what if you're permanently locked out of the old one? But I can't see why adding a credit card was ever useful; it's not as if Amazon did telephone ordering. And really, the two successive calls should have raised a flag.

The worst part is that even if you did know you'd likely have no way to require any additional security to block off that route to impersonators; telephone, cable, and financial companies have been securing telephone accounts with passwords for years, but ecommerce sites do not (or did not) think of themselves as possible vectors for hacks into other services. Since the news broke, both Amazon and Apple have blocked off this phone access. But given the extraordinary number of sites we all depend on, the takeaway from this incident is that we ultimately have no clue how well any of them protect us against impersonation. How many other sites can be gamed in this way?

Ultimately, the most important thing, as Jack Schofield writes in his Guardian advice column, is not to rely on one service for everything. Honan's devastation was as complete as it was because all his devices were synched through iCloud and could be remotely wiped. Yet this is the service model that Apple has and that Microsoft and Google are driving towards. The cloud is seductive in its promises: your data is always available, on all your devices, anywhere in the world. And it's managed by professionals, who will do all the stuff you never get around to, like make backups.

But that's the point: as Honan discovered to his cost, the cloud is not a backup. If all your devices are hooked to it, it is your primary data pool, and, as Apple co-founder Steve Wozniak pointed out this week, it is out of your control. Keep your own backups, kids. Develop multiple personalities. Be careful out there.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 14, 2012

The ninth circle of HOPE

Why do technologies fail? And what do we mean by failure?

These questions arise in the first couple of hours of HOPE 9, this year's edition of the hacker conference run every two years by 2600, the hacker quarterly.

Technology failure has a particular meaning in the UK, where large government projects have traditionally wasted large amounts of public money and time. Many failures are more subtle. To take a very simple example: this morning, the elevators failed. It was not a design flaw or loss of functionality: the technology worked perfectly as intended. It was not a usability flaw: what could be simpler than pushing a button? It was not even an accessibility or availability flaw: there were plenty of elevators. What it was, in fact, was a social - or perhaps a contextual - flaw. This group of people who break down complex systems to their finest components to understand them and make them jump through hoops simply failed to notice or read the sign that gave the hours of operation even though it was written in big letters and placed at eye level, just above the call button. This was, after all, well-understood technology that needed no study. And so they stood around in groups, waiting until someone came, pointed out the sign, and chased them away. RTFM, indeed.

But this is what humans do: we make assumptions based on our existing knowledge. To the person with a hammer, everything looks like a nail. To the person with a cup and nowhere to put it, the unfamiliar CD drive looks like a cup holder. To the kids discovering the Hole in the Wall project, a 2000 experiment with installing a connected computer in an Indian slum, the familiar wait-and-wait-some-more hourglass was a drum. Though that last is only a failure if you think it's important that the kids know it's an hourglass; they understood perfectly well the thing that mattered, which is that it was a sign the thing in the wall was doing something and they had to wait.

We also pursue our own interests, sometimes at the expense of what actually matters in a situation. Far Kron, speaking on the last four years of community fabrication, noted that the Global Village Construction project, which is intended to include a full set of the machines necessary to build a civilization, includes nothing to aid more mundane things like fetching fresh water and washing clothes, which are overall a bigger drain on human time. I am tempted to suggest that perhaps the project needs to recruit some more women (who around the world tend to do most of the water fetching and clothes washing), but it may simply be that small, daily chores are things you worry about after you have your village. (Though this is the inverse of how human settlements have historically worked.)

A more intriguing example, cited by Chris Anderson, a former organizer with New York's IndyMedia, in the early panel on Technology to Change Society that inspired this piece, is Twitter. How is one of the most important social networks and messaging platforms in the world a failure?

"If you define success in technical terms you might only *be* successful in technical terms," he said. Twitter, he explained, grew out of a number of prior open-source projects the founders were working on. "Indymedia saw technology as being in service to goals, but lacks the social goals those projects started with."

Gus Andrews, producer of The Media Show, a YouTube series on digital media literacy, focused on the hidden assumptions creators make. Some, for example, believed that open source software was vital to One Laptop Per Child - that being able to fix the software was a crucial benefit for the recipients.

In 1999, Lawrence Lessig argued that "code is law", and that technological design controls how it can be used. Andrews took a different view: "To believe that things are ineluctably coded into technology is to deny free will." Pointing at Everett Rogers' 1995 book, The Diffusion of Innovations, she said, "There are things we know about how technology enacts social change and one of the things we know is that it's not the technology."

Not the technology? You might think that if anyone were going to be technology obsessed it would be the folks at a hacker conference. And certainly the public areas are filled with people fidgeting with radio frequencies, teaching others to solder, and showing off their latest 3D printers and their creations (this year's vogue: printing in brightly colored Lego plastic). But the roots of the hacker movement in general and of 2600 in particular are as much social and educational as they are technological.

Eric Corley, who has styled himself "Emmanuel Goldstein", edits the magazine, and does a weekly radio show for WBAI-FM in New York. At a London hacker conference in 1995, he summed up this ethos for me (and The Independent) by talking about hacking as a form of consumer advocacy. His ideas about keeping the Internet open and free, and about ferreting out information corporations would rather keep hidden were niche - and to many people scary - then, but mainstream now.

HOPE continues through Sunday.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 15, 2012

A license to print money

"It's only a draft," Julian Huppert, the Liberal Democrat MP for Cambridge, said repeatedly yesterday. He was talking about the Draft Communications Data Bill (PDF), which was published on Wednesday. Yesterday, in a room in a Parliamentary turret, Huppert convened a meeting to discuss the draft; in attendance were a variety of Parliamentarians plus experts from civil society groups such as Privacy International, the Open Rights Group, Liberty, and Big Brother Watch. Do we want to be a nation of suspects?

The Home Office characterizes the provisions in the draft bill as vital powers to help catch criminals, save lives, and protect children. Everyone else - the Guardian, ZDNet UK, and dozens more - is calling them the "Snooper's charter".

Huppert's point is important. Like the Defamation Bill before it, publishing a draft means there will be a select committee with 12 members, discussion, comments, evidence taken, a report (by November 30, 2012), and then a rewritten bill. This draft will not be voted on in Parliament. We don't have to convince 650 MPs that the bill is wrong; it's a lot easier to talk to 12 people. This bill, as is, would never pass either House in any case, he suggested.

This is the optimistic view. The cynic might suggest that since it's been clear for something like ten years that the British security services (or perhaps their civil servants) have a recurring wet dream in which their mountain of data is the envy of other governments, they're just trying to see what they can get away with. The comprehensive provisions in the first draft set the bar, softening us up to give away far more than we would have in future versions. Psychologists call this anchoring, and while probably few outside the security services would regard the wholesale surveillance and monitoring of innocent people as normal, the crucial bit is where you set the initial bar for comparison for future drafts of the legislation. However invasive the next proposals are, it will be easy for us to lose the bearings we came in with and feel that we've successfully beaten back at least some of the intrusiveness.

But Huppert is keeping his eye on the ball: maybe we can not only get the worst stuff out of this bill but make things actually better than they are now; it will amend RIPA. The Independent argues that private companies hold much more data on us overall but that article misses that this bill intends to grant government access to all of it, at any time, without notice.

The big disappointment in all this, as William Heath said yesterday, is that it marks a return to the old, bad, government IT ways of the past. We were just getting away from giant, failed public IT projects like the late unlamented NHS National Programme for IT and the even more unlamented ID card towards agile, cheap public projects run by smart guys who know what they're doing. And now we're going to spend £1.8 billion of public money over ten years (draft bill, p92) building something no one much wants and that probably won't work? The draft bill claims - on what authority is unclear - that the expenditure will bring in £5 to £6 billion in revenues. From what? Are they planning to sell the data?

Or are they imagining the economic growth implied by the activity that will be necessary to build, install, maintain, and update the black boxes that will be needed by every ISP in order to comply with the law? The security consultant Alec Muffett has laid out the parameters for this SpookBox 5000: certified, tested, tamperproof, made by, say, three trusted British companies. Hundreds of them, legally required, with ongoing maintenance contracts. "A license to print money," he calls them. Nice work if you can get it, of course.

So we're talking - again - about spending huge sums of government money on a project that only a handful of people want and whose objectives could be better achieved by less intrusive means. Give police better training in computer forensics, for example, so they can retrieve the evidence they need from the devices they find when executing a search warrant.

Ultimately, the real enemy is the lack of detail in the draft bill. Using the excuse that the communications environment is changing rapidly and continuously, the notes argue that flexibility is absolutely necessary for Clause 1, the one that grants the government all the actual surveillance power, and so it's been drafted to include pretty much everything, like those contracts that claim copyright in perpetuity in all forms of media that exist now or may hereinafter be invented throughout the universe. This is dangerous because in recent years the use of statutory instruments to bypass Parliamentary debate has skyrocketed. No. Make the defenders of this bill prove every contention; make them show the evidence that makes every extra bit of intrusion necessary.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 8, 2012

Insecure at any speed

"I have always depended on the kindness of strangers," Blanche says graciously to the doctor hauling her off to the nuthouse at the end of Tennessee Williams' play A Streetcar Named Desire. And while she's quite, quite mad in her genteel Old Southern delusional way she is still nailing her present and future situation, which is that she's going to be living in a place where the only people who care about her are being paid to do so (and given her personality, that may not be enough).

Of course it's obvious to anyone who's lying in a hospital bed connected to a heart monitor that they are at the mercy of the competence of the indigenous personnel. But every discussion of computer passwords tends to go as though the problem is us. Humans choose bad passwords: short, guessable, obvious, crackable. Or we use the same ones everywhere, or we keep cycling the same two or three when we're told to change them frequently. We are the weakest link.

And then you read this week's stories that major sites for whom our trust is of business-critical importance - LinkedIn, eHarmony, and others - have been storing these passwords in such a way that they were vulnerable not only to hacking attacks but also to decoding once they had been copied. My (now old) password, I see by typing it into LeakedIn for checking, was leaked but not cracked (or not until I typed it in, who knows?).

This is not new stuff. Salting passwords before storing them - the practice of adding random characters to make the passwords much harder to crack - has been with us for more than 30 years. If every site does these things a little differently, the differences help mitigate the risk we users bring upon ourselves by using the same passwords all over the place. It boggles the mind that these companies could be so stupid as to ignore what has been best practice for a very long time.
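Part of what makes the omission so baffling is how little code best practice requires. As a minimal sketch, assuming Python and the standard library's PBKDF2 (the 16-byte salt and 100,000-iteration count here are illustrative choices, not a prescription):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # A fresh random salt per user means identical passwords produce
    # different stored hashes, defeating precomputed cracking tables.
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    # Recompute with the stored salt and compare in constant time.
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)
```

The site stores only the salt and the digest; two users who both choose "password" end up with entirely different database entries.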

The leak of these passwords is probably not immediately critical. For one thing, although millions of passwords leaked out, they weren't attached to user names. As long as the sites limit the number of times you can guess your password before they start asking you more questions or lock you out, the odds that someone can match one of those 6.5 million passwords to your particular account are...well, they're not 6.5 million to one if you've used a password like "password" or "123456", but they're small. Although: better than your chances of winning the top lottery prize.

Longer term may be the bigger issue. As Ars Technica notes, the decoded passwords from these leaks and their cryptographically hashed forms will get added to the rainbow tables used in cracking these things. That will shrink the space of good, hard-to-crack passwords.
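The mechanics are simple enough to demonstrate. A toy lookup-table attack - a drastically simplified stand-in for a real rainbow table, using a four-word list where attackers use billions - shows why an unsalted hash is effectively reversible once it appears in a leak:

```python
import hashlib

# The attacker precomputes hashes for a candidate wordlist once...
wordlist = ["password", "123456", "letmein", "linkedin"]
table = {hashlib.sha1(w.encode()).hexdigest(): w for w in wordlist}

# ...after which cracking any leaked, unsalted SHA-1 hash that
# matches a precomputed entry is a single dictionary lookup.
leaked = hashlib.sha1(b"letmein").hexdigest()
recovered = table.get(leaked)
print(recovered)
```

Every password decoded from this week's leaks joins those wordlists permanently, which is why the damage outlasts the incident.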

Most of the solutions to "the password problem" aim to fix the user in one way or another. Our memories have limits - so things like Password Safe will remember them for us. Or those impossible strings of letters and numbers are turned into a visual pattern by something like GridSure, which folded a couple of years ago but whose software and patents have been picked up by CryptoCard.

An interesting approach I came across late last year is sCrib, a USB stick that you plug into your computer and that generates a batch of complex passwords it will type in for you. You can pincode-protect the device and it can also generate one-time passwords and plug into a keyboard to protect against keyloggers. All very nice and a good idea except that the device itself is so *complicated* to use: four tiny buttons storing 12 possible passwords it generates for you.

There's also the small point that Web sites often set rules such that any effort to standardize on some pattern of tough password is thwarted. I've had sites reject passwords for being too long, or for including a space or a "special character". (Seriously? What's so special about a hyphen?) Human factors simply escape the people who set these policies, as XKCD long ago pointed out.

But the key issue is that we have no way of making an informed choice when we sign up for anything. We have simply no idea what precautions a site like Facebook or Gmail takes to protect the passwords that guard our personal data - and if we called to ask we'd run into someone in a call center whose job very likely was to get us to go away. That's the price, you might say, of a free service.

In every other aspect of our lives, we handle this sort of thing by having third-party auditors who certify quality and/or safety. Doctors have to pass licensing exams and answer to medical associations. Electricians have their work inspected to ensure it's up to code. Sites don't want to have to explain their security practices to every Sheldon and Leonard? Fine. But shouldn't they have to show *someone* that they're doing the right things?


June 1, 2012

The pet rock manifesto

I understand why government doesn't listen to security experts on topics where their advice conflicts with the policies it likes. For example: the Communications Capabilities Development Programme, where experts like Susan Landau, Bruce Schneier, and Ross Anderson have all argued persuasively that a hole is a hole and creating a vulnerability to enable law enforcement surveillance is creating a vulnerability that can be exploited by...well, anyone who can come up with a way to use it.

All of that is of a piece with recent UK and US governments' approach to scientific advice in general, as laid out in The Geek Manifesto, the distillation of Mark Henderson's years of frustration serving as science correspondent at The Times (he's now head of communications for the Wellcome Trust). Policy-based evidence instead of evidence-based policy, science cherry-picked to support whatever case a minister has decided to make, the role of well-financed industry lobbyists - it's all there in that book, along with case studies of the consequences.

What I don't understand is why government rejects experts' advice when there's no loss of face involved, and where the only effect on policy would be to make it better, more relevant, and more accurately targeted at the problem it's trying to solve. Especially *this* government, which in other areas has come such a long way.

Yet this is my impression from Wednesday's Westminster eForum on the UK's Cybersecurity strategy (PDF). Much was said - for example, by James Quinault, the director of the Office of Cybersecurity and Information Assurance - about information and intelligence sharing and about working collaboratively to mitigate the undeniably large cybersecurity threat (even if it's not quite as large as BAE Systems Detica's seemingly-pulled-out-of-the-air £27 billion would suggest; Detica's technical director, Henry Harrison, didn't exactly defend that number, but said no one's come up with a better estimate for the £17 billion that report attributed to cyberespionage).

It was John Colley, the managing director EMEA for (ISC)2, who said it: in a meeting he attended late last year with, among others, the MP James Brokenshire, Minister for Crime and Security at the Home Office, shortly before the publication of the UK's four-year cybersecurity strategy (PDF), he asked who the document's formulators had talked to among practitioners, "the professionals involved at the coal face". The answer: well, none. GCHQ wrote a lot of it (no surprise, given the frequent, admittedly valid, references to its expertise and capabilities), and some of the major vendors were consulted. But the actual coal face guys? No influence. "It's worrying and distressing," Colley concluded.

Well, it is. As was Quinault's response when I caught him to ask whether he saw any conflict between the government's policies on CCDP and surveillance back doors built into communications equipment versus the government's goal of making Britain "one of the most secure places in the world to do business". That response was, more or less precisely: No.

I'm not saying the objectives are bad; but besides the issues raised when the document was published, others were highlighted Wednesday. Colley, for example, noted that for information sharing to work it needs two characteristics: it has to go both ways, and it has to take place inside a network of trust; GCHQ doesn't usually share much. In addition, it's more effective, according to both Colley and Stephen Wolthusen, a reader in mathematics at Royal Holloway's Information Security Group, to share successes rather than problems - which means that you need to be able to phone the person who's had your problem to get details. And really, still so much is down to human factors and very basic things, like changing the default passwords on Internet-facing devices. This is the stuff the coalface guys see every day.

Recently, I interviewed nearly a dozen experts of varying backgrounds about the future of infosecurity; the piece is due to run in Infosecurity Magazine sometime around now. What seemed clear from that exercise is that in the long run we would all be a lot more secure a lot more cheaply if we planned ahead based on what we have learned over the past 50 years. For example: before rolling out wireless smart meters all over the UK, don't implement remote disconnection. Don't link to the Internet legacy systems such as SCADA that were never designed with remote access in mind and whose security until now has depended on securing physical access. Don't plant medical devices in people's chests without studying the security risks. Stop, in other words, making the same mistakes over and over again.

The big, upcoming issue, Steve Bellovin writes in Privacy and Cybersecurity: the Next 100 Years (PDF), a multi-expert document drafted for the IEEE, is burgeoning complexity. Soon, we will be surrounded by sensors, self-driving cars, and the 2012 version of pet rocks. Bellovin's summation, "In 20 years, *everything* will be connected...The security implications of this are frightening." And, "There are two predictions we can be quite certain about: there will still be dishonest people, and our software will still have some bugs." Sounds like a place to start, to me.


March 30, 2012

The ghost of cash

"It's not enough to speak well of digital money," Geronimo Emili said on Wednesday. "You must also speak negatively of cash." Emili has a pretty legitimate gripe. In his home country, Italy, 30 percent of the economy is black and the gap between the amount of tax the government collects and the amount it's actually owed is €180 billion. Ouch.

This sets off a bit of inverted nationalist competition between him and the Greek lawyer Maria Giannakaki, there to explain a draft Greek law mandating direct payment of VAT from merchants' tills to eliminate fraud: which country is worse? Emili is sure it's Italy.

"We invented banks," he said. "But we love cash." Italy's cash habit costs the country €10 billion a year - and 40 percent of Europe's bank robberies.

This exchange took place at this year's Digital Money Forum, an annual event that pulls together people interested in everything from the latest mobile technology to the history of Anglo-Saxon coinage. Their common interest: what makes money work? If you, like most of this group, want to see physical cash eliminated, this is the key question.

Why Anglo-Saxon coinage? Rory Naismith explains that the 8th century began the shift from valuing coins merely for their metal content to assigning them a premium for their official status. It was the beginning of the abstraction of money: coins, paper, the elimination of the gold standard, numbers in cyberspace. Now, people like Emili and this event's convenor, David Birch, argue it's time to accept money's fully abstract nature and admit the truth: it's a collective hallucination, a "promise of a promise".

These are not just the ravings of hungry technology vendors: Birch, Emili, and others argue that the costs of cash fall disproportionately on the world's poor, and that cash is the key vector for crime and tax evasion. Our impressions of the costs are distorted because the costs of electronic payments, credit cards, and mobile wallets are transparent, while cash is free at the point of use.

When I say to Birch that eliminating cash also means eliminating the ability to transact anonymously, he says, "That's a different conversation." But it isn't, if eliminating crime and tax evasion are your drivers. In the event's two days, the only system discussed that offers anonymity is Bitcoin, and it's doomed to its niche market, for whatever reason. (I think it's too complicated; Dutch financial historian Simon Lelieveldt says it will fail because it has no central bank.)

I pause to be annoyed by the claim that cash is filthy and spreads disease. This is Microsoft-level FUD, and not worthy of smart people claiming to want to benefit the poor and eliminate crime. In fact, I got riled enough to offer to lick any currency (or coins; I'm not proud) presented. I performed as promised on a fiver and a Danish note. And you know, they *kept* that money?

In 1680, says Birch, "Pre-industrial money was failing to serve an industrial revolution." Now, he is convinced, "We are in the early part of the post-industrial revolution, and we're shoehorning industrial money in to fit it. It can't last." This is pretty much what John Perry Barlow said about copyright in 1993, and he was certainly right.

But is Birch right? What kind of medium is cash? Is it a medium of exchange, like newspapers, trading stored value instead of information, or is it a format, like video tape? If it's the former, why shouldn't cash survive, even if only as a niche market? Media rarely die altogether - but formats come and go with such speed that even the more extreme predictions at this event - such as Sandra Alzetta, who said that her company expects half its transactions to be mobile by 2020 - seem quite modest. Her company is Visa International, by the way.

I'd say cash is a medium of exchange, and today's coins and notes are its format. Past formats have included shells, feathers, gold coins, and goats; what about a format for tomorrow that is printed or minted on demand, at ATMs? I ask the owner of the grocery shop around the corner if his life would be better if cash were eliminated, and he shrugs no. "I'd still have to go out and get the stuff."

What's needed is low-cost alternatives that fit in cultural contexts. Lydia Howland, whose organization IDEO works to create human-centered solutions to poverty, finds the same needs in parts of Britain that exist in countries like Kenya, where M-Pesa is succeeding in bringing access to banking and remote payments to people who have never had access to financial services before.

"Poor people are concerned about privacy," she said on Wednesday. "But they have so much anonymity in their lives that they pay a premium for every financial service." Also, because they do so much offline, there is little understanding of how they work or live. "We need to create a society where a much bigger base has a voice."

During a break, I try to sketch the characteristics of a perfect payment mechanism: convenient; transparent to the user; universally accepted; universally accessible and usable; resistant to tracking, theft, counterfeiting, and malware; and hard to steal on a large scale. We aren't there yet.


January 6, 2012

Only the paranoid

Yesterday's news that the Ramnit worm has harvested the login credentials of 45,000 British and French Facebook users seems to me a watershed moment for Facebook. If I were an investor, I'd wish I had already cashed out. Indications are, however, that founding CEO Mark Zuckerberg is in it for the long haul, in which case he's going to have to find a solution to a particularly intractable problem: how to protect a very large mass of users from identity fraud when his entire business is based on getting them to disclose as much information about themselves as possible.

I have long complained about Facebook's repeatedly changing privacy controls. This week, while working on a piece on identity fraud for Infosecurity, I've concluded that the fundamental problem with Facebook's privacy controls is not that they're complicated, confusing, and time-consuming to configure. The problem with Facebook's privacy controls is that they exist.

In May 2010, Zuckerberg enraged a lot of people, including me, by opining that privacy is no longer a social norm. As Judith Rauhofer has observed, the world's social norms don't change just because some rich geeks in California say so. But the 800 million people on Facebook would arguably be much safer if the service didn't promise privacy - like Twitter. Because then people wouldn't post all those intimate details about themselves: their kids' pictures, their drunken sex exploits, their incitements to protest, their porn star names, their birth dates... Or if they did, they'd know they were public.

Facebook's core privacy problem is a new twist on the problem Microsoft has: legacy users. Apple was willing to make earlier generations of its software non-functional in the shift to OS X. Microsoft's attention to supporting legacy users allows me to continue to run, on Windows 7, software that was last updated in 1997. Similarly, Facebook is trying to accommodate a wide variety of privacy expectations, from those of people who joined back when membership was limited to a few relatively constrained categories to those of people joining today, when the system is open to all.

Facebook can't reinvent itself wholesale: it is wholly and completely wrong to betray users who post information about themselves into what they are told is a semi-private space by making that space irredeemably public. The storm every time Facebook makes a privacy-related change makes that clear. What the company has done exceptionally well is to foster the illusion of a private space despite the fact that, as the Australian privacy advocate Roger Clarke observed in 2003, collecting and abusing user data is social networks' only business model.

Ramnit takes this game to a whole new level. Malware these days isn't aimed at doing cute, little things like making hard drive failure noises or sending all the letters on your screen tumbling into a heap at the bottom. No, it's aimed at draining your bank account and hijacking your identity for other types of financial exploitation.

To do this, it needs to find a way inside the circle of trust. On a computer network, that means looking for an unpatched hole in software to leverage. On the individual level, it means the malware equivalent of viral marketing: get one innocent bystander to mistakenly tell all their friends. We've watched this particular type of action move through a string of vectors as humans move to get away from spam: from email to instant messaging to, now, social networks. The bigger Facebook gets, the bigger a target it becomes. The more information people post on Facebook - and the more their friends and friends of friends friend promiscuously - the greater the risk to each individual.

The whole situation is exacerbated by endemic poor security practices. Asking people to provide the same few bits of information for back-up questions in case they need a password reset. Imposing password rules that practically guarantee people will use and reuse the same few choices on all their sites. Putting all the eggs in services that are free at point of use and that you pay for in unobtainable customer service (not to mention behavioral targeting and marketing) when something goes wrong. If everything is locked to one email account on a server you do not control, if your security questions could be answered by a quick glance at your Facebook Timeline and a Google search, if you bank online and use the same passwords, you have a potential catastrophe in waiting.

I realize not everyone can run their own mail server. But you can use multiple, distinct email addresses and passwords, you can create unique answers on the reset forms, and you can limit your exposure by presuming that everything you post *is* public, whether the service admits it or not. Your goal should be to ensure that when - it's no longer safe to say "if" - some part of your online life is hacked the damage can be contained to that one, hopefully small, piece. Relying on the privacy consciousness of friends means you can't eliminate the risk; but you can limit the consequences.
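The "unique answers" advice is easy to automate. As a sketch (the site names and questions below are purely illustrative, and you'd store the results in an encrypted password safe, not a plain file), a few lines of Python can generate a random, meaningless answer for each site's reset questions - one that no Facebook Timeline or Google search can reveal:

```python
import secrets

def random_reset_answer(nbytes=9):
    # A random token is just as acceptable an "answer" as the truth,
    # and cannot be mined from your public profile.
    return secrets.token_urlsafe(nbytes)

def build_answers(sites, questions):
    # One distinct answer per site and per question, so a breach at
    # one service reveals nothing usable at any other.
    return {site: {q: random_reset_answer() for q in questions}
            for site in sites}

answers = build_answers(
    ["example-bank", "example-webmail"],
    ["mother's maiden name", "first pet"],
)
```

The point isn't the tooling; it's that "mother's maiden name" doesn't have to be your mother's maiden name.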

Facebook is facing an entirely different risk: that people, alarmed at the thought of being mugged, will flee elsewhere. It's happened before.


November 25, 2011

Paul Revere's printing press

There is nothing more frustrating than watching smart, experienced people reinvent known principles. Yesterday's Westminster Forum on cybersecurity was one such occasion. I don't blame them, or not exactly: it's just maddening that we have made so little progress, while the threats keep escalating. And it is from gatherings like this one that government policy is made.

Rephrasing Bill Clinton's campaign slogan, "It's the people, stupid," said Philip Virgo, chairman of the security panel of the IT Livery Company, to kick off the day, a sentiment echoed repeatedly by nearly every other speaker. Yes, it's the people - who trust when they shouldn't, who attach personal devices to corporate networks, who disclose passwords when they shouldn't, who are targeted by today's Facebook-friending social engineers. So how many people experts were on the program? None. Psychologists? No. Nor any usability experts or people whose jobs revolve around communication, either. (Or women, but I'm prepared to regard that as a separate issue.)

Smart, experienced guys, sure, who did a great job of outlining problems and a few possible solutions. Somewhere toward the end of the proceedings, someone allowed in passing that yes, it's not a good idea to require people to use passwords that are too complex to remember easily. This is the state of their art? It's 12 years since Angela Sasse and Anne Adams covered this territory in Users Are Not the Enemy. Sasse has gone on to help found the field of security economics, which seeks to quantify the cost of poorly designed security - not just in data breaches and DoS attacks but in the lost productivity of frustrated, overburdened users. Sasse argues that the problem isn't so much the people as user-hostile systems and technology.

"As user-friendly as a cornered rat," Virgo says he wrote of security software back in 1983. Anyone who's looked at configuring a firewall lately knows things haven't changed that much. In a world of increasingly mass-market software and devices, security software has remained resolutely elitist: confusing error messages, difficult configuration, obscure technology. How many users know what to do when their browser says a Web site certificate is invalid? Or how to answer anti-virus software that asks whether you want to authorise HIPS/RegMod-007?

"The current approach is not working," said William Beer, director of information security and cybersecurity for PriceWaterhouseCoopers. "There is too much focus on technology, and not enough focus from business and government leaders." How about academics and consumers, too?

There is no doubt, though, that the threats are escalating. Twenty years ago, the biggest worry was that a teenaged kid would write a virus that spread fast and furious in the hope of getting on the evening news. Today, an organized criminal underground uses personal information to target a small group of users inside RSA, leveraging that into a threat to major systems worldwide. (Trend Micro CTO Andy Dancer said the attack began in the real world with a single user befriended at their church. I can't find verification, however.)

The big issue, said Martin Smith, CEO of The Security Company, is that "There's no money in getting the culture right." What's to sell if there's no technical fix? Like when your plane is held to ransom by the pilot, or when all it takes to publish 250,000 US diplomatic cables is one alienated, low-ranked person with a DVD burner and a picture of Lady Gaga? There's a parallel here to pharmaceuticals: one reason we have few weapons to combat rampaging drug resistance is that for decades developing new antibiotics was not seen as a profitable path.

Granted, you don't, as Dancer said afterwards, want to frame security as an issue of "fixing the people" (but we already know better than that). Nor is it fair to ban company employees from social media lest some attacker pick it up and use it to create a false sense of trust. Banning the latest new medium, said former GCHQ head John Bassett, is just the instinctive reaction in a disturbance; in 1775 Boston the "problem" was Paul Revere's printing press stirring up trouble.

Nor do I, personally, want to live in a trust-free world. I'm happy to assume the server next to me is compromised, but "Trust no one" is a lousy way to live.

Since perfect security is not possible, Dancer advised, organizations should plan for the worst. Good advice. When did I first hear it? Twenty years ago and most months since, by Peter Neumann in his RISKS Forum. It is depressing and frustrating that we are still having this conversation as if it were new - and that we will have it all over again over the next decade as smart meters roll out to 26 million British households by 2020, opening up the electrical grid to attacks that are already being predicted and studied.

Neumann - and Dancer - is right. There is no perfect security because it's in no one's interest to create it. Plan for the worst.

As Gene Spafford said in 1989: "The only truly secure system is one that is powered off, cast in a block of concrete, and sealed in a lead-lined room protected by armed guards - and even then I have my doubts."

For everything else, there's a stolen Mastercard.


November 4, 2011

The identity layer

This week, the UK government announced a scheme - Midata - under which consumers will be able to reclaim their personal information. The same day, the Centre for the Study of Financial Innovation assembled a group of experts to ask what the business model for online identification should be. And: whatever that model is, what the government's role should be. (For background, here's the previous such discussion.)

My eventual thought was that the government's role should be to set standards; it might or might not also be an identity services provider. The government's inclination now is to push this job to the private sector. That leaves the question of how to serve those who are not commercially interesting; at the CSFI meeting the Post Office seemed the obvious contender for both pragmatic and historical reasons.

As Mike Bracken writes in the Government Digital Service blog posting linked above, the notion of private identity providers is not new. But what he seems to assume is that what's needed is federated identity - that is, in Wikipedia's definition, a means for linking a person's electronic identity and attributes across multiple distinct systems. What I meant was a system in which one may have many limited identities that are sufficiently interoperable that you can make a choice which to use at the point of entry to a given system. We already have something like this on many blogs, where commenters may be offered a choice of logging in via Google, OpenID, or simply posting a name and URL.

The government gateway, circa 2000, offered a choice: getting an identity certificate required payment of £50 to, if I remember correctly, Experian or Equifax, or other companies whose interest in preserving personal privacy is hard to credit. The CSFI meeting also mentioned tScheme - an industry consortium to provide trust services. Outside of relatively small niches it's made little impact. Similarly, fifteen years ago, the government intended, as part of implementing key escrow for strong cryptography, to create a network of trusted third parties that it would license and, by implication, control. The intention was that the TTPs should be folks that everyone trusts - like banks. Hilarious, we said *then*. Moving on.

In between then and now, the government also mooted a completely centralized identity scheme - that is, the late, unlamented ID card. Meanwhile, we've seen the growth of a set of competing American/global businesses who all would like to be *the* consumer identity gateway and who managed to steal first-mover advantage from existing financial institutions. Facebook, Google, and PayPal are the three most obvious. Microsoft had hopes, perhaps too early, when in 1999 it created Passport (now Windows Live ID). More recently, it was the home for Kim Cameron's efforts to reshape online identity via the company's now-cancelled CardSpace, and Brendon Lynch's adoption of U-Prove, based on Stefan Brands' technology. U-Prove is now being piloted in various EU-wide projects. There are probably lots of other organizations that would like to get in on such a scheme, if only because of the data and linkages a federated system would grant them. Credit card companies, for example. Some combination of mobile phone manufacturers, mobile network operators, and telcos. Various medical outfits, perhaps.

An identity layer that gives fair and reasonable access to a variety of players who jointly provide competition and consumer choice seems like a reasonable goal. But it's not clear that this is what either the UK's distastefully spelled "Midata" or the US's NSTIC (which attracted similar concerns when first announced) has in mind. What "federated identity" sounds like is the convenience of "single sign-on", which is great if you're working in a company and need to use dozens of legacy systems. When you're talking about identity verification for every type of transaction you do in your entire life, however, a single gateway is a single point of failure and, as Stephan Engberg, founder of the Danish company Priway, has often said, a single point of control. It's the Facebook cross-all-the-streams approach, embedded everywhere. Engberg points to a discussion paper inspired by two workshops he facilitated for the Danish National IT and Telecom Agency (NITA) in late 2010 that covers many of these issues.

Engberg, who describes himself as a "purist" when it comes to individual sovereignty, says the only valid privacy-protecting approach is to ensure that each time you go online on each device you start a new session that is completely isolated from all previous sessions and then have the choice of sharing whatever information you want in the transaction at hand. The EU's LinkSmart project, which Engberg was part of, created middleware to do precisely that. As sensors and RFID chips spread along with IPv6, which can give each of them its own IP address, linkages across all parts of our lives will become easier and easier, he argues.

We've seen often enough that people will choose convenience over complexity. What we don't know is what kind of technology will emerge to help us in this case. The devil, as so often, will be in the details.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 28, 2011

Crypto: the revenge

I recently had occasion to try out Gnu Privacy Guard, the Free Software Foundation's version of PGP, Phil Zimmermann's legendary Pretty Good Privacy software. It was the first time I'd encrypted an email message since about 1995, and I was both pleasantly surprised and dismayed.

First, the good. Public key cryptography is now implemented exactly the way it should have been all along: once you've installed it and generated a keypair, encrypting a message is ticking a box or picking a menu item inside your email software. Even key management is handled by a comprehensible, well-designed graphical interface. Several generations of hard work have created this and also ensured that the various versions of PGP, OpenPGP, and GPG are interoperable, so you don't have to worry about who's using what. Installation was straightforward and the documentation is good.

Now, the bad. That's where the usability stops. There are so many details you can get wrong to mess the whole thing up that if this stuff were a form of contraception desperate parents would be giving babies away on street corners.

Item: the subject line doesn't get encrypted. There is nothing you can do about this except put a lot of thought into devising a subject line that will compel people to read the message but that simultaneously does not reveal anything of value to anyone monitoring your email. That's a neat trick.

Item: watch out for attachments, which are easily accidentally sent in the clear; you need to encrypt them separately before bundling them into the message.

Item: while there is a nifty GPG plug-in for Thunderbird - Enigmail - Outlook, being commercial software, is less easily supported. GPG's GpgOL module works only with Outlook 2003 (SP2 and above) and 2007, and not on 64-bit Windows. The problem is that it's hard enough to get people to change *one* habit, let alone several.

Item: lacking appropriate browser plug-ins, you also have to tell them to stop using Webmail if the service they're used to won't support IMAP or POP3, because they won't be able to send encrypted mail or read what others send them over the Web.

Let's say you're running a field station in a hostile area. You can likely get users to persevere despite these points by telling them that this is their work system, for use in the field. Most people will put up with some inconvenience if they're being paid to do so and/or it's temporary and/or you scare them sufficiently. But that strategy violates one of the basic principles of crypto-culture, which is that everyone should be encrypting everything so that sensitive traffic doesn't stand out. The crypto advocates are of course completely right, just as they were in 1993, when the big political battles over crypto were being fought.

Item: when you connect to a public keyserver to check or download someone's key, that connection is in the clear, so anyone surveilling you can see who you intend to communicate with.

Item: you're still at risk with regard to traffic data. This is what RIPA and data retention are all about. What's more significant? Being able to read a message that says, "Can you buy milk?" or the information that the sender and receiver of that message correspond 20 times a day? Traffic data reveals the pattern of personal relationships; that's why law enforcement agencies want it. PGP/GPG won't hide that for you; instead, you'll need to set up a proxy or use Tor to mix up your traffic and also protect your Web browsing, instant messaging, and other online activities. As Tor's own people admit, it slows performance, although they're working on it (PDF).
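The point about traffic data is easy to make concrete. A minimal sketch, with entirely invented metadata: even when every message body is encrypted, a log of who talked to whom is enough to rank relationships by intensity.

```python
from collections import Counter

# Hypothetical metadata records: (sender, receiver, hour of day).
# Message bodies are assumed encrypted; only the envelope is logged.
records = [("alice", "bob", h) for h in range(20)] + [
    ("alice", "carol", 9),
    ("alice", "dave", 14),
]

# Count traffic per (unordered) pair; content is never inspected.
pairs = Counter(frozenset((s, r)) for s, r, _ in records)
strongest = pairs.most_common(1)[0]
print(sorted(strongest[0]), "-", strongest[1], "messages")
```

Twenty messages a day between two parties says more about the relationship than any single "Can you buy milk?" ever could, which is exactly why PGP/GPG alone doesn't solve this.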

All this says we're still a long way from a system that the mass market will use. And that's a damn shame, because we genuinely need secure communications. Like a lot of people in the mid-1990s, I'd have thought that by now encrypted communications would be the norm. And yet not only is SSL, which protects personal details in transit to ecommerce and financial services sites, the only really mass-market use, but it's in trouble. Partly, this is because of the technical issues raised in the linked article - too many certification authorities, too many points of failure - but it's also partly because hardly anyone understands how to check that a certificate is valid or knows what to do when warnings pop up that it's expired or issued for a different name. The underlying problem is that many of the people who like crypto see it as both a cool technology and a cause. For most of us, it's just more fussy software. The big advance since the mid 1990s is that at least now the *developers* will use it.

Maybe mobile phones will be the thing that makes crypto work the way it should. See, for example, Dave Birch's current thinking on the future of identity. We've been arguing about how to build an identity infrastructure for 20 years now. Crypto is clearly the mechanism. But we still haven't solved the how.

October 21, 2011

Printers on fire

It used to be that if you thought things were spying on you, you were mentally disturbed. But you're not paranoid if they're really out to get you, and new research at Columbia University, with funding from DARPA's Crash program, exposes how vulnerable today's devices are. Routers, printers, scanners - anything with an embedded system and an IP address.

Usually what's dangerous is monoculture: Windows is a huge target. So, argue Columbia computer science professor Sal Stolfo and PhD student Ang Cui, device manufacturers rely on security by diversity: every device has its own specific firmware. Cui estimates, for example, that there are 300,000 different firmware images for Cisco routers, varying by feature set, model, operating system version, hardware, and so on. Sure, any one of them could be attacked - but what's the payback? Especially compared to that nice, juicy Windows server over there?

"In every LAN there are enormous numbers of embedded systems in every machine that can be penetrated for various purposes," says Cui.

The payback is access to that nice, juicy server and, indeed, the whole network. Few update - or even check - firmware. So once inside, an attacker can lurk unnoticed until the device is replaced.

Cui started by asking: "Are embedded systems difficult to hack? Or are they just not low-hanging fruit?" There isn't, notes Stolfo, an industry providing protection for routers, printers, the smart electrical meters rolling out across the UK, or the control interfaces that manage conference rooms.

If there is, after seeing their demonstrations, I want it.

Their work is two-pronged: first demonstrate the need, then propose a solution.

Cui began by developing a rootkit for Cisco routers. Despite the diversity of firmware and each image's memory layout, routers are a monoculture in that they all perform the same functions. Cui used this insight to find the invariant elements and fingerprint them, making them identifiable in the memory space. From that, he can determine which image is in place and deduce its layout.

"It takes a millisecond."

Once in, Cui sets up a control channel over ping packets (ICMP) to load microcode, reroute traffic, and modify the router's behaviour. "And there's no host-based defense, so you can't tell it's been compromised." The amount of data sent over the control channel is too small to notice - perhaps a packet per second.

"You can stay stealthy if you want to."

You could even kill the router entirely by modifying the EEPROM on the motherboard. How much fun to be the army or a major ISP and physically connect to 10,000 dead routers to restore their firmware from backup?

They presented this at WOOT (Quicktime), and then felt they needed something more dramatic: printers.

"We turned off the motor and turned up the fuser to maximum." Result: browned paper and...smoke.

How? By embedding a firmware update in an apparently innocuous print job. This approach is familiar: embedding programs where they're not expected is a vector for viruses in Word and PDFs.

"We can actually modify the firmware of the printer as part of a legitimate document. It renders correctly, and at the end of the job there's a firmware update." It hasn't been done before now, Cui thinks, because there isn't a direct financial pay-off and it requires reverse-engineering proprietary firmware. But think of the possibilities.

"In a super-secure environment where there's a firewall and no access - the government, Wall Street - you could send a resume to print out." There's no password. The injected firmware connects to a listening outbound IP address, which responds by asking for the printer's IP address to punch a hole inside the firewall.

"Everyone always whitelists printers," Cui says - so the attacker can access any computer. From there, monitor the network, watch traffic, check for regular expressions like names, bank account numbers, and social security numbers, sending them back out as part of ping messages.

"The purpose is not to compromise the printer but to gain a foothold in the network, and it can stay for years - and then go after PCs and servers behind the firewall." Or propagate the first printer worm.

Stolfo and Cui call their answer a "symbiote", after biological symbiosis, in which two organisms attach to each other to mutual benefit.

The goal is code that works on an arbitrarily chosen executable about which you have very little knowledge. Emulating a biological symbiote, which finds places to attach to the host and extract resources, Cui's symbiote first calculates a secure checksum across all the static regions of the code, then finds random places where its code can be injected.

"We choose a large number of these interception points - and each time we choose different ones, so it's not vulnerable to a signature attack and it's very diverse." At each device access, the symbiote steals a little bit of the CPU cycle (like an RFID chip being read) and automatically verifies the checksum.

"We're not exploiting a vulnerability in the code," says Cui, "but a logical fallacy in the way a printer works." Adds Stolfo, "Every application inherently has malware. You just have to know how to use it."

Never mind all that. I'm still back at that printer smoking. I'll give up my bank account number and SSN if you just won't burn my house down.

September 30, 2011

Trust exercise

When do we need our identity to be authenticated? Who should provide the service? Whom do we trust? And, to make it sustainable, what is the business model?

These questions have been debated ever since the early 1990s, when the Internet and the technology needed to enable the widespread use of strong cryptography arrived more or less simultaneously. Answering them is a genuinely hard problem (or it wouldn't be taking so long).

A key principle that emerged from the crypto-dominated discussions of the mid-1990s is that authentication mechanisms should be role-based and limited by "need to know"; information would be selectively unlocked and in the user's control. The policeman stopping my car at night needs to check my blood alcohol level and the validity of my driver's license, car registration, and insurance - but does not need to know where I live unless I'm in violation of one of those rules. Cryptography, properly deployed, can be used to protect my information, authenticate the policeman, and then authenticate the violation result that unlocks more data.
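That selective-unlocking principle can be sketched in a few lines (a toy model of my own, not any real credential scheme; all field names are invented):

```python
# Full credential held by the driver; the verifier never sees it all.
credential = {
    "name": "W. Driver", "address": "14 Elm St", "dob": "1970-01-01",
    "licence_valid": True, "insurance_valid": True, "registration_valid": True,
}

# The roadside check is entitled only to yes/no validity answers...
ROADSIDE_POLICY = {"licence_valid", "insurance_valid", "registration_valid"}
# ...and unlocks the address only if a violation is recorded.
ESCALATION = {"address"}

def disclose(credential, policy, violation=False):
    """Release only the attributes the role and outcome entitle the verifier to."""
    allowed = policy | (ESCALATION if violation else set())
    return {k: v for k, v in credential.items() if k in allowed}

clean_stop = disclose(credential, ROADSIDE_POLICY)
print(clean_stop)                 # validity flags only; no name, no address
```

In a real deployment the cryptography would also have to authenticate the policeman and bind the answers to the credential; this only shows the "need to know" data flow.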

Today's stored-value cards - London's Oyster travel card, or Starbucks' payment/wifi cards - when used anonymously do capture some of what the crypto folks had in mind. But the crypto folks also imagined that anonymous digital cash or identification systems could be supported by selling standalone products people installed. This turned out to be wholly wrong: many tried, all failed. Which leads to today, where banks, telcos, and technology companies are all trying to figure out who can win the pool by becoming the gatekeeper - our proxy. We want convenience, security, and privacy, probably in that order; they want security and market acceptance, also probably in that order.

The assumption is we'll need that proxy because large institutions - banks, governments, companies - are still hung up on identity. So although the question should be whom do we - consumers and citizens - trust, the question that ultimately matters is whom do *they* trust? We know they don't trust *us*. So will it be mobile phones, those handy devices in everyone's pockets that are online all the time? Banks? Technology companies? Google has launched Google Wallet, and Facebook has grand aspirations for its single sign-on.

This was exactly the question Barclaycard's Tom Gregory asked at this week's Centre for the Study of Financial Innovation round-table discussion (PDF). It was, of course, a trick, but he got the answer he wanted: out of banks, technology companies, and mobile network operators, most people picked banks. Immediate flashback.

The government representatives who attended Privacy International's 1997 Scrambling for Safety meeting assumed that people trusted banks and that therefore they should be the Trusted Third Parties providing key escrow. Brilliant! It was instantly clear that the people who attended those meetings didn't trust their banks as much as all that.

One key issue is that, as Simon Deane-Johns writes in his blog posting about the same event, "identity" is not a single, static thing; it is dynamic and shifts constantly as we add to the collection of behaviors and data representing it.

As long as we equate "identity" with "a person's name" we're in the same kind of trouble the travel security agencies are when they try to predict who will become a terrorist on a particular flight. Like the browser fingerprint, we are more uniquely identifiable by the collection of our behaviors than we are by our names, as detectives who search for missing persons know. The target changes his name, his jobs, his home, and his wife - but if his obsession is chasing after trout he's still got a fishing license. Even if a link between a Starbucks card and its holder's real-world name is never formed, the more data the card's use enters into the system the more clearly recognizable as an individual he will be. The exact tag really doesn't matter in terms of understanding his established identity.

What I like about Deane-Johns' idea -

"the solution has to involve the capability to generate a unique and momentary proof of identity by reference to a broad array of data generated by our own activity, on the fly, which is then useless and can be safely discarded"

is two things. First, it has potential as a way to make impersonation and identity fraud much harder. Second is that implicit in it is the possibility of two-way authentication, something we've clearly needed for years. Every large organization still behaves as though its identity is beyond question whereas we - consumers, citizens, employees - need to be thoroughly checked. Any identity infrastructure that is going to be robust in the future must be built on the understanding that with today's technology anyone and anything can be impersonated.

As an aside, it was remarkable how many people at this week's meeting were more concerned about having their Gmail accounts hacked than their bank accounts. My reasoning is that the stakes are higher: I'd rather lose my email reputation than my house. Their reasoning is that the banking industry is more responsive to customer problems than technology companies. That truly represents a shift from 1997, when technology companies were smaller and more responsive.

More to come on these discussions...

September 16, 2011

The world at ten

Net.wars-the-column is to some extent a child of 9/11 (the column was preceded by the book, four years of near-weekly news analysis pieces for the Daily Telegraph, and a sequel book, From Anarchy to Power: The Net Comes of Age). On November 2, 2011 the column will be ten years old, its creation sparked by a burst of frustrated anger when then foreign minister Jack Straw wagged a post-9/11 finger at those who had opposed his plans to restrict the use of strong encryption and implement key escrow in the mid-1990s, when he was at the Home Office, and blamed us.

Ten years on, we can revisit his claim. We now know, for example, that when Osama bin Laden wanted to hide, he didn't use cryptography to cloak his whereabouts. Instead, the reason his safe house stood out from those around it was that it was a technological black spot: no phones, no broadband. In other words, bin Laden feared the power of technology as much as Straw and his cohorts: both feared it would empower their enemies. That paranoia was justified - but backfired spectacularly.

In our own case, it's clear that "the terrorists" have scored a substantial victory. We - the US, the UK, Europe - would have had some kind of recession anyway, given the rapacious and unregulated behavior of banks and brokers leading up to 2008 - but we would have been much better placed to cope with it if we - the US - hadn't been simultaneously throwing $1.29 trillion at invading Iraq and Afghanistan. If you include medical and disability care for current and future veterans, according to the Eisenhower Research Project at Brown University that number rises to as much as $4 trillion.

But more than that, as Ryan Singel writes US-specifically at Wired, the West has built up a gigantic and expensive inward-turned surveillance infrastructure that is unlikely to be dismantled when or if the threat it was built to control goes away. In the last ten years, countless hundreds of millions of dollars and countless millions of hours of lost productivity have been spent on airport security when, as Bruce Schneier frequently writes, the only two changes that have made a significant difference to air travel safety have been reinforcing the cockpit doors and teaching passengers to fight back. The Department of Homeland Security's budget for its 2011 financial year is $56.3 billion (PDF) - which includes $214.7 million for airport scanners and another $218.9 million for people to staff them (so much for automation).

The UK in particular has spent much of the last ten years building the database state, creating dozens of large databases aimed at tracking various portions of society through various parts of their lives. Some of this has been dismantled by the coalition, but not all. The most visible part of the ID card is gone - but the key element was always the database of the nation's residents, and as data-sharing between government departments becomes ever easier, the equivalent may be built in practice rather than by explicit plan. In every Western country CCTV cameras are proliferating, as are surveillance-by-design policies such as data retention, built-in wiretapping, and widespread filtering. Every time a new system is built - the London congestion charge, for example, or the mooted smart road pricing systems - there are choices that would allow privacy to be built in. And so far, each time, those choices have not been taken.

But if the policies aimed at ourselves are misguided, as net.wars has frequently argued, the same is true of the policies we have directed at others. As part of the British Science Festival, Paul Rogers, a researcher with the Oxford Research Group, presented A War Gone Badly Wrong - The War on Terror Ten Years On, looking back at the aftermath of the attacks rather than the attacks themselves; the Brown research shows that in the various post-9/11 military actions 80 people have died for every 9/11 victim. Like millions of others who were ignored, the Oxford Research Group opposed the war at the time.

"The whole approach was a mistake." he told the press last Friday, arguing that the US should instead have called it an act of international criminality and sworn to work with everyone to bring the criminals to justice. "The US would have had worldwide support for that kind of action that it did not have for Afghanistan - or, especially, Iraq." He added, "If they had treated al-Qaeda as a common, bitter, vicious criminal movement, not a brave, religious movement worthy of fighting, that degrades it."

What he hopes his research will lead to now is "a really serious understanding of what went wrong, and the risks of early recourse to military responses." And, he added, "sustainable security" that focuses on conflict prevention. "Why it's important to look at the experience of the war on terror is to discern and learn those lessons."

They say that a conservative is a liberal who's been mugged. By analogy, it seems that a surveillance state is a democracy that's been attacked.

September 2, 2011

White rabbits

I feel like I'm watching magicians holding black top hats. They wave a hand over a mess of hexadecimal output on the projection screen and comprehensible words appear...and people laugh. And then some command-line screens flash in and out before your eyes and something absurd and out-of-place appears, like the Windows calculator, and everyone applauds. I am at 44con, a less-crazed London offshoot of the Defcon-style mix of security and hacking. Although, this being Britain, they're pushing the sponsored beer.

In this way we move through exploits: iOS, Windows Phone 7, and SAP, whose protocols are pulled apart by Sensepost's Ian de Villiers. And after that Trusteer Rapport, which seems to be favored by banks and other financial services, and disliked by everyone else. All these talks leave a slightly bruised feeling, not so much like you'd do better to eschew all electronics and move to a hut on a deserted beach without a phone as that even if you did that you'd be vulnerable to other people's decisions. While exploring the inner workings of USB flash drives (PDF), for example, Phil Polstra noted in passing that the Windows Registry logs every single time you insert one. I knew my computer tracked me, but I didn't quite realize the full extent.

The bit of magic that most clearly makes this point is Maltego. This demonstration displays neither hexadecimal code nor the Windows calculator, but rolls everything privacy advocates have warned about for years into one juicy tool that all the journalists present immediately start begging for. (This is not a phone hacking joke; this stuff could save acres of investigative time.) It's a form of search that turns a person or event into a colorful display of whirling dots (hits) that resolve into clusters. Its keeper, Roelof Temmingh, uses a mix of domain names, IP addresses, and geolocation to discover the Web sites White House users like to visit and tweets from the NSA parking lot. Version 4 - the first version of the software dates to 2007 - moves into real-time data mining.

Later, I ask a lawyer with a full, licensed copy to show me an ego search. We lack the time to finish, but our slower pace and diminished slickness make it plain that this software takes time and study to learn to drive. This is partly comforting: it means that the only people who can use it to do the full spy caper are professionals, rather than amateurs. Of course, those are the people who will also have - or be able to command - access to private databases that are closed to the rest of us, such as the utility companies' electronic customer records, which, when plugged in, can link cyberworld and real-world identities. "A one-click stalking machine," Temmingh calls it.

As if your mobile phone - camera, microphone, geolocation, email, and Web browsing history - weren't enough. One attendee tells me seriously that he would indeed go to jail for two years rather than give up his phone's password, even if compelled under the Regulation of Investigatory Powers Act. Even if your parents are sick and need you to take care of them? I ask. He seems to feel I'm unfairly moving the bar.

Earlier, the current mantra that every Web site should offer secure HTTP came under fire. IOActive's Vincent Berg showed how to figure out which grid tile of Google Maps and which Wikipedia pages someone has been looking at despite the connection's being carried over SSL. The basis of this is our old friend traffic analysis. It's not a great investigative tool because, as Berg himself points out, there would be many false positives, but side-channel leaks in Web pages are still a coming challenge (PDF). SSL has its well-documented problems, but "at some point the industry will get it right." We can but hope.
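A rough sketch of the idea behind Berg's demonstration (page names and byte counts invented): SSL hides what a response says but not how big it is, so the sequence of encrypted response sizes a page produces can serve as its fingerprint.

```python
# Per-page "fingerprints": ordered encrypted response sizes in bytes,
# observable on the wire even though the payloads are unreadable.
profiles = {
    "wiki/Cryptography": [1420, 880, 7312, 204],
    "wiki/Tennis":       [1420, 640, 5120, 96],
    "maps/tile_51_0":    [1420, 12288],
}

def identify(observed, profiles, tolerance=32):
    """Return pages whose size sequence matches within a per-response tolerance."""
    return [
        page for page, sizes in profiles.items()
        if len(sizes) == len(observed)
        and all(abs(a - b) <= tolerance for a, b in zip(sizes, observed))
    ]

# An eavesdropper sees only lengths; padding jitters them slightly.
matches = identify([1424, 892, 7300, 210], profiles)
print(matches)  # false positives are possible, as Berg notes
```

The tolerance parameter is where the false positives come from: loosen it and unrelated pages collide, tighten it and padding defeats the match.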

It was left to Alex Conran, whose TV program The Real Hustle starts its tenth season on BBC Three on Monday, to wind things up by reminding us that the most enduring hacks are the human ones. Conran says that after perpetrating more than 500 scams on an unsuspecting public (and debunking them afterwards), he has concluded that just as Western music relies on endless permutations of the same seven notes, scams rely on variations on the same five elements. They will sound familiar to anyone who's read The Skeptic over the last 24 years.

The five: misdirection, social compliance, the love of a special deal, time pressure, and social proof (or reinforcement). "Con men are the hackers of human nature," Conran said, but noted that part of the point of his show is that if you educate people about the risks they will take the necessary steps to protect themselves. He then dispensed this piece of advice: if you want to control the world, buy a hi-vis jacket. They're cheap, and when you're wearing one, apparently anyone you meet will do anything you tell them without question. No magic necessary.

August 5, 2011

Cheaters in paradise

It seems that humans in general are not particularly good at analyzing incentives. How else can you explain the number of decisions we make with adverse, unintended consequences? Three examples.

One: this week US newspapers - such as the LA Times, the New York Times, and Education Week - report that myriad states have discovered a high number of erasures on standardized tests or suspiciously sudden improvement in test scores. (At one Pennsylvania school, for example, eighth graders' reading proficiency jumped from 28.9 percent to 63.8 percent between 2008 and 2009.)
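Flagging that kind of jump is mechanical. A sketch in which only the Pennsylvania figures are real and everything else, including the threshold, is invented for illustration:

```python
# Proficiency rates (%) in 2008 and 2009; the 28.9 -> 63.8 jump is the
# reported Pennsylvania case, the other figures are made up.
schools = {
    "School A": (41.0, 43.5), "School B": (55.2, 54.0),
    "School C": (33.1, 36.0), "School D": (60.4, 62.1),
    "School E": (28.9, 63.8),
}

THRESHOLD = 10.0   # percentage-point gain worth a second look; my choice
gains = {name: after - before for name, (before, after) in schools.items()}
flagged = sorted(name for name, g in gains.items() if g > THRESHOLD)
print(flagged)     # only the implausible jump survives
```

The states' actual analyses also looked at erasure patterns on the answer sheets themselves, but the statistical screen starts this simply.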

The culprits: teachers and principals. When tests determined only the future of the students taking them, the only cheaters were students. Now that tests determine school rankings and therefore the economic future of teachers, principals, and schools, many more people are motivated to ensure that students score highly.

Don't imagine the kids don't grasp this. In 2002, when I wrote about plagiarism for the Independent, all the kids I interviewed noted that despite their teachers' warnings of dire consequences schools would not punish plagiarists and risk hurting their rankings in the league tables.

A kid in an American school this week might legitimately ask why he should be punished for cheating or plagiarism when his teachers are doing the same thing on a much grander scale for greater and far more immediate profit. A similar situation applies to our second example, this week's decision by the International Tennis Federation to suspend 31-year-old player Robert Kendrick for 12 months after testing positive for the banned stimulant methylhexaneamine.

At his age, a 12-month ban is an end-of-career notice. Everyone grants that he did not intend to cheat and that the amount of the drug was not performance-enhancing. Like a lot of people who travel through many time zones on the way to work, he took a jetlag pill whose ingredients he believed to be innocuous. He admits he screwed up; he and his lawyers have simply asked for a fairer sentence. Fairer because in January 2010, when fellow player Wayne Odesnik was caught by Australian Customs with eight vials of human growth hormone, he was suspended for two years - double the sentence but far more than double the offense. And Odesnik didn't even stay out that long; his sentence was commuted to time served after seven months.

At the time, the ITF said that he had bought his way out of purgatory by cooperating with its anti-doping program, presumably under the rule that allows such a reversal when the player has turned informant. No follow-up has disclosed who Odesnik might have implicated, and although it's possible that it all forms part of a lengthy, ongoing investigation, the fact remains: his offense was a lot worse than Kendrick's but has cost him a lot less.

It says a lot that the other players are scathing about Odesnik, sympathetic to Kendrick. This is a watershed moment, where the athletes are openly querying the system's fairness despite any suspicions that might be raised by their doing so.

The anti-doping system as it is presently constructed has never made sense to me: it is invasive, unwieldy, and a poor fit for some sports (like tennis, where players are constantly on the move). The lesson sent by these morality plays is: don't get caught. And there is enough money in professional sports to ensure that there are many actors invested in ensuring exactly that: coaches, agents, managers, corporate sponsors, and the tours themselves. Of course testing and punishing athletes is going to fail to contain the threat.

Kamakshi Tandon's ideas on this are very close to mine: do traditional policing. Instead of relying on test samples, which can be mishandled, misread, or unreliable, use other types of evidence when they're available. Why, for example, did the anti-doping authorities refuse Martina Hingis's request to do a hair strand test when a urine sample tested positive for cocaine at Wimbledon in 2007? Why are the A and B samples tested at the same lab instead of different labs? (What lab wants to say it misread the first sample?) My personal guess is that it's because the anti-doping authorities believe that anyone playing professional sports is probably guilty anyway, so why bother assembling the quality of evidence that would be required for a court case? That might even be true - but in that case anti-doping efforts to date have been a total failure.

Our third example: last week's decision by Fox to allow only verified paying cable customers to watch TV shows on Hulu in the first week after their initial broadcast. (Yet more evidence that Murdoch does not get the Internet.) We are in the 12th year of the wars on file-sharing, and still rights holders make decisions like this that increase the incentives to use unauthorized sources.

In the long scheme of things, as Becky Hogge used to say while she was the executive director of the Open Rights Group, the result of poorly considered incentives that make bad law is that they teach people not to respect the law. That will have many worse consequences down the line.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series

July 29, 2011

Name check

How do you clean a database? The traditional way - which I still experience from time to time from journalist directories - is that some poor schnook sits in an office and calls everyone on the list, checking each detail. It's an immensely tedious job, I'm sure, but it's a living.

The new, much cheaper method is to motivate the people in the database to do it themselves. A government can pass a law and pay benefits. Amazon expects the desire to receive the goods people have paid for to be sufficient. For a social network it's a little harder, yet Facebook has managed to get 750 million users to upload varying amounts of information. Google hopes people will do the same with Google+.

The emotional connections people make on social networks obscure their basic nature as databases. When you think of them in that light, and you remember that Google's chief source of income is advertising, suddenly Google's culturally dysfunctional decision to require real names on Google+ makes some sense. For an advertising company, a fuller, cleaner database is more valuable and functional. Google's engineers most likely do not think in terms of improving the company's ability to serve tightly targeted ads - but I'd bet the company's accountants and strategists do. The justification - that online anonymity fosters bad behavior - is likely a relatively minor consideration.

Yet it's the one getting the attention, despite the fact that many people seem confused about the difference between pseudonymity, anonymity, and throwaway identity. In the reputation-based economy the Net thrives on, this difference matters.

The best-known form of pseudonymity is the stage name, essentially a form of branding for actors, musicians, writers, and artists, who may have any of a number of motives for keeping their professional lives separate from their personal lives: privacy for themselves, their work mates, or their families, or greater marketability. More subtly, if you have a part-time artistic career and a full-time day job you may not want the two to mix: will people take you seriously as an academic psychologist if they know you're also a folksinger? All of those reasons for choosing a pseudonym apply on the Net, where everything is a somewhat public performance. Given the harassment some female bloggers report, is it any wonder they might feel safer using a pseudonym?

The important characteristic of pseudonyms, which they share with "real names", is persistence. When you first encounter someone like GrrlScientist, you have no idea whether to trust her knowledge and expertise. But after more than ten years of blogging, that name is a known quantity. As GrrlScientist writes about Google's shutting down her account, it is her "real-enough" name by any reasonable standard. What's missing is the link to a portion of her identity - the name on her tax return, or the one her mother calls her. So what?

Anonymity has long been contentious on the Net; the EU has often considered whether and how to ban it. At the moment, the driving justification seems to be accountability, in the hope that we can stop people from behaving like malicious morons, the phenomenon I like to call the Benidorm syndrome.

There is no question that people write horrible things in blog and news site comments pages, conduct flame wars, and engage in cyber bullying and harassment. But that behaviour is not limited to venues where they communicate solely with strangers; every mailing list, even among workmates, has flame wars. Studies have shown that the cyber versions of bullying and harassment, like their offline counterparts, are most often perpetrated by people you know.

The more important downside of anonymity is that it enables people to hide, not their identity but their interests. Behind the shield, a company can trash its competitors and those whose work has been criticized can make their defense look more robust by pretending to be disinterested third parties.

Against that is the upside. Anonymity protects whistleblowers acting in the public interest, and protesters defying an authoritarian regime.

We have little data to balance these competing interests. One bit we do have comes from an experiment with anonymity conducted years ago on the WELL, which otherwise has insisted on verifying every subscriber throughout its history. The lesson they learned, its conferencing manager, Gail Williams, told me once, was that many people wanted anonymity for themselves - but opposed it for others. I suspect this principle has very wide applicability, and it's why the US might, say, oppose anonymity for Bradley Manning but welcome it for Egyptian protesters.

Google is already modifying the terms of what is after all still a trial service. But the underlying concern will not go away. Google has long had a way to link Gmail addresses to behavioral data collected from those using its search engine, docs, and other services. It has always had some ability to perform traffic analysis on Gmail users' communications; now it can see explicit links between those pools of data and, increasingly, tie them to offline identities. This is potentially far more powerful than anything Facebook can currently offer. And unlike government databases, it's nice and clean, and cheap to maintain.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 22, 2011

Face to face

When, six weeks or so back, Facebook implemented facial recognition without asking anyone much in advance, Tim O'Reilly expressed the opinion that it is impossible to turn back the clock and pretend that facial recognition doesn't exist or can be stopped. We need, he said, to stop trying to control the existence of these technologies and instead concentrate on controlling the uses to which collected data might be put.

Unless we're prepared to ban face recognition technology outright, having it available in consumer-facing services is a good way to get society to face up to the way we live now. Then the real work begins, to ask what new social norms we need to establish for the world as it is, rather than as it used to be.

This reminds me of the argument that we should be teaching creationism in schools in order to teach kids critical thinking: it's not the only, or even best, way to achieve the object. If the goal is public debate about technology and privacy, Facebook isn't a good choice to conduct it.

The problem with facial recognition, unlike a lot of other technologies, is that it's retroactive, like a compromised private cryptography key. Once the key is known you haven't just unlocked the few messages you're interested in but everything ever encrypted with that key. Suddenly deployed accurate facial recognition means the passers-by in holiday photographs, CCTV images, and old TV footage of demonstrations are all much more easily matched to today's tagged, identified social media sources. It's a step change, and it's happening very quickly after a long period of doesn't-work-as-hyped. So what was a low-to-moderate privacy risk five years ago is suddenly much higher risk - and one that can't be withdrawn with any confidence by deleting your account.

There's a second analogy here between what's happening with personal data and what's happening to small businesses with respect to hacking and financial crime. "That's where the money is," the bank robber Willie Sutton explained when asked why he robbed banks. But banks are well defended by large security departments. Much simpler to target weaker links, the small businesses whose money is actually being stolen. These folks do not have security departments and have not yet assimilated Benjamin Woolley's 1990s observation that cyberspace is where your money is. The democratization of financial crime has a more direct personal impact because the targets are closer to home: municipalities, local shops, churches, all more geared to protecting cash registers and collection plates than to securing computers, routers, and point-of-sale systems.

The analogy to personal data is that until relatively recently most discussions of privacy invasion similarly focused on celebrities. Today, most people can be studied as easily as famous, well-documented people if something happens to make them interesting: the democratization of celebrity. And there are real consequences. Canada, for example, is doing much more digging at the border, banning entry based on long-ago misdemeanors. We can warn today's teens that raiding a nearby school may someday limit their freedom to travel; but today's 40-somethings can't make an informed choice retroactively.

Changing this would require the US to decide at a national level to delete such data; we would have to trust them to do it; and other nations would have to agree to do the same. But the motivation is not there. Judith Rauhofer, at the online behavioral advertising workshop she organised a couple of weeks ago, addressed exactly this point when she noted that increasingly the mantra of governments bent on surveillance is, "This data exists. It would be silly not to use it."

The corollary, and the reason O'Reilly is not entirely wrong, is that governments will also say, "This *technology* exists. It would be silly not to use it." We can ban social networks from deploying new technologies, but we will still be stuck with them when it comes to governments and law enforcement. In this, government and business interests align perfectly.

So what, then? Do we stop posting anything online on the basis of the old spy motto "Never volunteer information", thereby ending our social participation? Do we ban the technology (which does nothing to stop the collection of the data)? Do we ban collecting the data (which does nothing to stop the technology)? Do we ban both and hope that all the actors are honest brokers rather than shifty folks trading our data behind our backs? What happens if thieves figure out how to use online photographs to break into systems protected by facial recognition?

One common suggestion is that social norms should change in the direction of greater tolerance. That may happen in some aspects, although Anders Sandberg has an interesting argument that transparency may in fact make people more judgmental. But if the problem of making people perfect were so easily solved we wouldn't have spent thousands of years on it with very little progress.

I don't like the answer "It's here, deal with it." I'm sure we can do better than that. But these are genuinely tough questions. The start, I think, has to be building as much user control into technology design (and its defaults) as we can. That's going to require a lot of education, especially in Silicon Valley.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

February 25, 2011

Wartime economy

Everyone loves a good headline, and £27 billion always makes a *great* one. In this case, that was the sum that a report written by the security consultancy firm Detica, now part of BAE Systems and issued by the Office of Cyber Security and Information Assurance (PDF) estimates that cybercrime is costing the UK economy annually. The claim was almost immediately questioned by ZDNet's Tom Espiner, who promptly checked it out with security experts. Who complained that the report was full of "fake precision" (LSE professor Peter Sommer), "questionable calculations" (Harvard's Tyler Moore), and "nonsense" (Cambridge's Richard Clayton).

First, some comparisons.

Twenty-seven billion pounds (approximately $40 billion) is slightly larger than a year's worth of the International Federation of the Phonographic Industry's estimate of the cumulative retail revenue lost to piracy by the European creative industries from 2008 to 2015 (PDF) (total €240 billion, about £203 billion, eight years, £25.4 billion a year). It is roughly the estimated cost of the BP oil spill, the amount some think Facebook will be worth at an IPO, and noticeably less than Apple's $51 billion cash hoard. But: lots smaller than the "£40 billion underworld" The Times attributed to British gangs in 2008.

Several things baffle about this report. The first is that so little information is given about the study's methodology. Who did the researchers talk to? What assumptions did they make and what statistical probabilities did they assign in creating the numbers and charts? How are they defining categories like "online scams" or "IP theft" (they're clear about one thing: they're not including file-sharing in that figure)? What is the "causal model" they developed?

We know one person they didn't talk to: Computer Weekly notes the omission of Detective Superintendent Charlie McMurdie, head of the Metropolitan Police's Central e-Crime Unit, who you'd think would be one of the first ports of call for understanding the on-the-ground experience.

One issue the report seems to gloss over is how very difficult it is to define and categorize cybercrime. Last year, the Oxford Internet Institute conducted a one-day forum on the subject, out of which came the report Mapping and Measuring Cybercrime (PDF) , published in June 2010. Much of this report is given over to the difficulty of such definitions; Sommer, who participated in the forum, argued that we shouldn't worry about the means of commission - a crime is a crime. More recently - perhaps a month ago - Sommer teamed up with the OII's Ian Brown to publish a report for an OECD project on future global shocks, Reducing Systemic Cybersecurity Risk (PDF). The authors' conclusion: "very few single cyber-related events have the capacity to cause a global shock". This report also includes considerable discussion of cybercrime in assessing whether "cyberwarfare" is a genuine global threat. But the larger point about both these reports is that they disclose their methodology in detail.

And as a result, they make much more modest and measured claims, which is one reason that critics have looked at the source of the OCSIA/Detica report - BAE - and argued that the numbers are inflated and the focus largely limited to things that fit BAE's business interests (that is, IP theft and espionage; the usual demon, abuse of children, is left untouched).

The big risk here is that this report will be used in determining how policing resources are allocated.

"One of the most important things we can do is educate the public," says Sommer. "Not only about how to protect themselves but to ensure they don't leave their computers open to be formed into botnets. I am concerned that the effect of all these hugely military organizations lobbying for funding is that in the process things like Get Safe Online will suffer."

There's a broader point that begins with a personal nitpick. On page four, the report says this: "...the seeds of criminality planted by the first computer hackers 20 years ago." Leaving aside the even smaller nitpick that the *real*, original computer hackers, who built things and spent their enormous cleverness getting things to work, date to 40 and 50 years ago, it is utterly unfair to compare today's cybercrime to the (mostly) teenaged hackers of 1990, who spent their Saturday nights in their bedrooms war-dialling sites and trying out passwords. They were the computer equivalent of joy-riders, caused little harm, and were so disproportionately the targets of freaked-out, uncomprehending law enforcement that the Electronic Frontier Foundation was founded to bring some sanity to the situation. Today's cybercrime underground is composed of professional criminals who operate in an organized and methodical way. There is no more valid comparison between the two than there is between Duke Nukem and al-Qaeda.

One is not a gateway to the other - but the idea that criminals would learn computer techniques and organized crime would become active online was repeatedly used as justification for anti-society legislation from cryptographic key escrow to data retention and other surveillance. The biggest risk of a report like this is that it will be used as justification for those wrong-headed policies rather than as it might more rightfully be, as evidence of the failure of no less than five British governments to plan ahead on our behalf.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

January 21, 2011


The Reform Club, I read on its Web site, was founded as a counterweight to the Carlton Club, where conservatives liked to meet and plot away from public scrutiny. To most of us, it's the club where Phileas Fogg made and won his bet that he could travel around the world in 80 days, no small feat in 1872.

On Wednesday, the club played host to a load of people who don't usually talk to each other much because they come at issues of privacy from such different angles. Cityforum, the event's organizer, pulled together representatives from many parts of civil society, government security, and corporate and government researchers.

The key question: what trade-offs are people willing to make between security and privacy? Or between security and civil liberties? Or is "trade-off" the right paradigm? It was good to hear multiple people saying that the "zero-sum" attitude is losing ground to "proportionate". That is, the debate is moving on from viewing privacy and civil liberties as things we must trade away if we want to be secure to weighing the size of the threat against the size of the intrusion. It's clear to all, for example, that one thing that's disproportionate is local councils' usage of the anti-terrorism aspects of the Regulation of Investigatory Powers Act to check whether householders are putting out their garbage for collection on the wrong day.

It was when the topic of the social value of privacy was raised that it occurred to me that probably the closest model to what people really want lay in the magnificent building all around us. The gentleman's club offered a social network restricted to "the right kind of people" - that is, people enough like you that they would welcome your fellow membership and treat you as you would wish to be treated. Within the confines of the club, a member like Fogg, who spent all day every day there, would have had, I imagine, little privacy from the other members or, especially, from the club staff, whose job it was to know what his favorite drink was and where and when he liked it served. But the club afforded members considerable protection from the outside world. Pause to imagine what Facebook would be like if the interface required each would-be addition to your friends list to be proposed and seconded and incomers could be black-balled by the people already on your list.

This sort of web of trust is the structure the cryptography software PGP relies on for authentication: when you generate your public key, you are supposed to have it signed by as many people as you can. Whenever someone wants to verify the key, they can look at the list of who has signed it for someone they themselves know and trust. The big question with such a structure is how you make managing it scale to a large population. Things are a lot easier when it's just a small, relatively homogeneous group you have to deal with. And, I suppose, when you have staff to support the entire enterprise.
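The verification step described above can be sketched in a few lines. This is a toy illustration, not the real OpenPGP trust model or its data formats; the names and the one-signer-suffices rule are my own simplifying assumptions:

```python
# Toy sketch of PGP-style web-of-trust checking: a key is accepted if at
# least one of the people who signed it is already in my trusted set.
# (Real PGP distinguishes degrees of trust and validity; this does not.)

def is_trusted(key_signers, my_trusted_keys):
    """Return True if any signer of the key is someone I already trust."""
    return any(signer in my_trusted_keys for signer in key_signers)

# Keys I have verified personally.
my_trusted = {"alice", "bob"}

# Carol's public key carries signatures from Bob and Dave;
# Mallory's key is signed only by strangers.
carol_signers = ["bob", "dave"]
mallory_signers = ["eve", "trent"]

print(is_trusted(carol_signers, my_trusted))    # True: Bob vouches for Carol
print(is_trusted(mallory_signers, my_trusted))  # False: no path to Mallory
```

The scaling problem the paragraph raises shows up immediately: each new member is only as reachable as the overlap between their signers and everyone else's trusted sets.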

We talk a lot about the risks of posting too much information to things like Facebook, but that may not be its biggest issue. Just as traffic data can be more revealing than the content of messages, complex social linkages make it impossible to anonymize databases: who your friends are may be more revealing than your interactions with them. As governments and corporations talk more and more about making "anonymized" data available for research use, this will be an increasingly large issue. An example: a little-known incident in 2005, when the database of a month's worth of UK telephone calls was exported to the US with individuals' phone numbers hashed to "anonymize" them. An interesting technological fix comes from Microsoft in the notion of differential privacy, a system for protecting databases both against current re-identification and attacks with external data in the future. The catch, if it is one, is that you must assign to your database a sort of query budget in advance - and when it's used up you must burn the database because it can no longer be protected.
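The query-budget idea can be illustrated with a toy version of the standard Laplace mechanism. This is a sketch under my own simplifying assumptions, not Microsoft's actual system: each counting query spends some of a fixed privacy budget (epsilon), and once the budget is gone the database must refuse to answer:

```python
# Toy differential-privacy sketch: noisy counting queries against a fixed
# privacy budget. Counting queries have sensitivity 1, so adding Laplace
# noise with scale 1/epsilon gives epsilon-differential privacy per query.

import random

class PrivateCounter:
    def __init__(self, data, total_budget=1.0):
        self.data = data
        self.budget = total_budget

    def noisy_count(self, predicate, epsilon=0.25):
        if epsilon > self.budget:
            # Budget exhausted: further answers would leak too much.
            raise RuntimeError("privacy budget exhausted - retire the database")
        self.budget -= epsilon
        true_count = sum(1 for row in self.data if predicate(row))
        # The difference of two exponentials with rate epsilon is a
        # Laplace sample with scale 1/epsilon.
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        return true_count + noise

db = PrivateCounter(["a", "b", "a", "c"], total_budget=0.5)
print(db.noisy_count(lambda r: r == "a"))  # roughly 2, plus noise
```

After two queries at epsilon 0.25 this particular database has spent its whole budget, and a third query raises the error - the "burn the database" moment the paragraph describes.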

We do know one helpful thing: what price club members are willing to pay for the services their club provides. Public opinion polls are a crude tool for measuring what privacy intrusions people will actually put up with in their daily lives. A study by Rand Europe released late last year attempted to examine such things by framing them in economic terms. The good news is they found that you'd have to pay people £19 to get them to agree to provide a DNA sample to include in their passport. The weird news is that people would pay £7 to include their fingerprints. You have to ask: what pitch could Rand possibly have made that would make this seem worth even one penny to anyone?

Hm. Fingerprints in my passport or a walk across a beautiful, mosaic floor to a fine meal in a room with Corinthian columns, 25-foot walls of books, and a staff member who politely fails to notice that I have not quite conformed to the dress code? I know which is worth paying for if you can afford it.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

January 7, 2011

Scanning the TSA

There are, Bruce Schneier said yesterday at the Electronic Privacy Information Center mini-conference on the TSA (video should be up soon), four reasons why airport security deserves special attention, even though it directly affects a minority of the population. First: planes are a favorite terrorist target. Second: they have unique failure characteristics - that is, the plane crashes and everybody dies. Third: airlines are national symbols. Fourth: planes fly to countries where terrorists are.

There's a fifth he didn't mention but that Georgetown lawyer Pablo Molina and We Won't Fly founder James Babb did: TSAism is spreading. Random bag searches on the DC Metro and the New York subways. The TSA talking about expanding its reach to shopping malls and hotels. And something I found truly offensive, giant LED signs posted along the Maryland highways announcing that if you see anything suspicious you should call the (toll-free) number below. Do I feel safer now? No, and not just because at least one of the incendiary devices sent to Maryland state offices yesterday apparently contained a note complaining about those very signs.

Without the sign, if you saw someone heaving stones at the cars you'd call the police. With it, you peer nervously at the truck in front of you. Does that driver look trustworthy? This is, Schneier said, counter-productive because what people report under that sort of instruction is "different, not suspicious".

But the bigger flaw is cover-your-ass backward thinking. If someone tries to bomb a plane with explosives in a printer cartridge, missing a later attempt using the exact same method will get you roasted for your stupidity. And so we have a ban on flying with printer cartridges over 500g and, during December, restrictions on postal mail, something probably few people in the US even knew about.

Jim Harper, a policy scholar with the Cato Institute and a member of the Department of Homeland Security's Data Privacy and Integrity Advisory Committee, outlined even more TSA expansion. There are efforts to create mobile lie detectors that measure physiological factors like eye movements and blood pressure.

Technology, Lillie Coney observed, has become "like butter - few things are not improved if you add it."

If you're someone charged with blocking terrorist attacks you can see the appeal: no one wants to be the failure who lets a bomb onto a plane. Far, far better if it's the technology that fails. And so expensive scanners roll through the nation's airports despite the expert assessment - on this occasion, from Schneier and Ed Luttwak, a senior associate with the Center for Strategic and International Studies - that the scanners are ineffective, invasive, and dangerous. As Luttwak said, the machines pull people's attention, eyes, and brains away from the most essential part of security: watching and understanding the passengers' behavior.

"[The machine] occupies center stage, inevitably," he said, "and becomes the focus of an activity - not aviation security, but the operation of a scanner."

Equally offensive in a democracy, many speakers argued, is the TSA's secrecy and lack of accountability. Even Meera Shankar, the Indian ambassador, could not get much of a response to her complaint from the TSA, Luttwak said. "God even answered Job." The agency sent no representative to this meeting, which included Congressmen, security experts, policy scholars, lawyers, and activists.

"It's the violation of the entire basis of human rights," said the Stanford and Oxford lawyer Chip Pitts around the time that the 112th Congress was opening up with a bipartisan reading of the US Constitution. "If you are treated like cattle, you lose the ability to be an autonomous agent."

As Libertarian National Committee executive director Wes Benedict said, "When libertarians and Ralph Nader agree that a program is bad, it's time for our government to listen up."

So then, what are the alternatives to spending - so far, in the history of the Department of Homeland Security, since 2001 - $360 billion, not including the lost productivity and opportunity costs to the US's 100 million flyers?

Well, first of all, stop being weenies. The number of speakers who reminded us that the US was founded by risk-takers was remarkable. More people, Schneier noted, are killed in cars every month than died on 9/11. Nothing, Ralph Nader said, is spent on the 58,000 Americans who die in workplace accidents every year or the many thousands more who are killed by pollution or medical malpractice.

"We need a comprehensive valuation of how to deploy resources in a rational manner that will be effective, minimally invasive, efficient, and obey the Constitution and federal law," Nader said.

So: dogs are better at detecting explosives than scanners. Intelligent profiling can whittle down the mass of suspects to a more manageable group than "everyone" in a giant game of airport werewolf. Instead, at the moment we have magical thinking, always protecting ourselves from the last attack.

"We're constantly preparing for the rematch," said Lillie Coney. "There is no rematch, only tomorrow and the next day." She was talking as much about Katrina and New Orleans as 9/11: there will always, she said, be some disaster, and the best help in those situations is going to come from individuals and the people around them. Be prepared: life is risky.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

December 24, 2010

Random acts of security

When I was in my 20s in the 1970s, I spent a lot of time criss-crossing the US by car. One of the great things about it, as I said to a friend last week, was the feeling of ownership that gave me wherever I was: waking up under the giant blue sky in Albuquerque, following the Red River from Fargo to Grand Forks, or heading down the last, dull hour of New York State Thruway to my actual home, Ithaca, NY, it was all part of my personal backyard. This, I thought many times, is my country!

This year's movie (and last year's novel) Up in the Air highlighted the fact that the world's most frequent flyers feel the same way about airports. When you've traversed the same airports so many times that you've developed a routine it's hard not to feel as smug as George Clooney's character when some disorganized person forgets to take off her watch before going through the metal detector. You, practiced and expert, slide through smoothly without missing a beat. The check-in desk staff and airline club personnel ask how you've been. You sit in your familiar seat on the plane. You even know the exact moment in the staff routine to wander back to the galley and ask for a mid-flight cup of tea.

Your enemy in this comfortable world is airport security, which introduces each flight by putting you back in your place as an interloper.

Our equivalent back then was the Canadian border, which we crossed in quite isolated places sometimes. The border highlighted a basic fact of human life: people get bored. At the border crossing between Grand Forks, ND and Winnipeg, Manitoba, for example, the guards would keep you talking until the next car hove into view. Sometimes that was one minute, sometimes 15.

We - other professional travelers and I - had a few other observations. If you give people a shiny, new toy they will use it, just for the novelty. One day when I drove through Lewiston-Queenston they had drug-sniffing dogs on hand to run through and around the cars stopped for secondary screening. Fun! I was coming back from a folk festival in a pickup truck with a camper on the back, so of course I was pulled over. Duh: what professional traveler who crosses the border 12 times a year risks having drugs in their car?

Cut to about a week ago, at Memphis airport. It was 10am on a Saturday, and the traffic approaching the security checkpoint was very thin. The whole-body image scanners - expensive, new, the latest in cover-your-ass-ness - are in theory only for secondary screening: you go through them if you alarm the metal detectors or are randomly selected.

How does that work? When there's little traffic everyone goes through the scanner. For the record, I opted out and was given an absolutely professional and courteous pat-down, in contrast to the groping reports in the media for the last month. Yes: felt around under my waistband and hairline. No: groping. You've got to love the Net's many charming inhabitants: when I posted this report to a frequent flyer forum a poster hazarded that I was probably old and ugly.

My own theory is simply that it was early in the day, and everyone was rested and fresh and hadn't been sworn at a whole lot yet. So no one was feeling stressed out or put-upon by a load of uppity, obnoxious passengers.

It seems clear, however, that if you wanted to navigate security successfully carrying items that are typically unwanted on a flight, your strategy for reducing the odds of attracting extra scrutiny would be fairly simple, although the exact opposite of what experienced (professional) travelers are in the habit of doing:

- Choose a time when it's extremely crowded. Scanners are slower than metal detectors, so the more people there are the smaller the percentage going through them. (Or study the latest in scanner-defeating explosives fashions.)

- Be average and nondescript, someone people don't notice particularly or feel disposed to harass when they're in a bad mood. Don't be a cute, hot young woman; don't be a big, fat, hulking guy; don't wear clothes that draw the eye: expensive designer fashions, underwear, Speedos, a nun's habit (who knows what that could hide? and anyway isn't prurient curiosity about what could be under there a thing?).

- Don't look rich, powerful, special, or attitudinous. The TSA is like a giant replication of Stanley Milgram's experiment. Who's the most fun to roll over? The business mogul or the guy just like you who works in a call center? The guy with the video crew spoiling for a fight, or the guy who treats you like a servant? The sexy young woman who spurned you in high school or the crabby older woman like your mean second-grade teacher? Or the wheelchair-bound or medically challenged who just plain make you uncomfortable?

- When you get in line, make sure you're behind one or more of the above eye-catching passengers.

Note to TSA: you think the terrorists can't figure this stuff out, too? The terrorist will be the last guy your agents will pick for closer scrutiny.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 26, 2010

Like, unlike

Some years back, the essayist and former software engineer Ellen Ullman wrote about the tendency of computer systems to infect their owners. The particular infection she covered in Close to the Machine: Technophilia and Its Discontents was databases. Time after time, she saw good, well-meaning people commission a database to help staff or clients, and then begin to use it to monitor those they originally intended to help. Why? Well, because they *can*.

I thought - and think - that Ullman was onto something important there, but that this facet of human nature is not limited to computers and databases. Stanley Milgram's 1961 experiments showed that humans under the influence of apparent authority will obey instructions to administer treatment that outside of such a framework they would consider abhorrent. This seems to me sufficient answer to Roger Ebert's comment that no TSA agent has yet refused to perform the "enhanced pat-down", even on a child.

It would almost be better if the people running the NHS Choices Web site had been infected with the surveillance bug because they would be simply wrong. Instead, the NHS is more complicatedly wrong: it has taken the weird decision that what we all want is to share with our Facebook friends the news that we have just looked at the page on gonorrhea. Or, given the well-documented privacy issues with Facebook's rapid colonization of the Web via the "Like" button, allow Facebook to track our every move whether we're logged in or not.
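The mechanism behind that tracking is banal. The iframe version of the button - sketched here from memory and simplified, with an illustrative URL rather than the NHS site's actual markup - makes every visitor's browser fetch a page from facebook.com, and that request carries whatever Facebook cookies the browser already holds, click or no click:

```html
<!-- Roughly how the 2010-era "Like" button was embedded (markup from
     memory; the exact parameters may differ). Merely rendering the page
     makes the browser request facebook.com, sending along any Facebook
     cookies it holds - so Facebook learns of the visit whether or not
     the user ever clicks the button. -->
<iframe
  src="https://www.facebook.com/plugins/like.php?href=https%3A%2F%2Fwww.nhs.uk%2Fconditions%2Fgonorrhoea"
  width="90" height="20" frameborder="0" scrolling="no"></iframe>
```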

I can only think of two possibilities for the reasoning behind this. One is that NHS managers have little concept of the difference between their site, intended to provide patient information and guidance, and that of a media organization needing advertising to stay afloat. It's one of the truisms of new technologies that they infiltrate the workplace through the medium of people who already use them: email, instant messaging, latterly social networks. So maybe they think that because they love Facebook the rest of us must, too. My other thought is that NHS managers think this is what we want because their grandkids have insisted they get onto Facebook, where they now occupy their off-hours hitting the "like" button and poking each other and think this means they're modern.

There's the issue Tim Berners-Lee has raised, that Facebook and other walled gardens are dividing the Net up into incompatible silos. The much worse problem, at least for public services and we who must use them, is the insidiously spreading assumption that if a new technology is popular it must be used no matter what the context. The effect is about as compelling as a TSA agent offering you a lollipop after your pat-down.

Most likely, the decision to deploy the "Like" button started with the simple, human desire for feedback. At some point everyone who runs a Web site wonders what parts of the site get read the most...and then by whom...and then what else they read. It's obviously the right approach if you're a media organization trying to serve your readers better. It's a ludicrously mismatched approach if you're the NHS because your raison d'être is not to be popular but to provide the public with the services they need at the most vulnerable times in their lives. Your page on rare lymphomas is not less valuable or important just because it's accessed by fewer people than the pages on STDs, nor are you actually going to derive particularly useful medical research data from finding that people who read about lymphoma also often read pages on osteoporosis. But it's easy, quick, and free to install Google Analytics or Facebook Like, and so people do it without thought.

Both of these incidents have also exposed once and for all the limited value of privacy policies. For one thing, a patient in distress is not going to take time out from bleeding to read the fine print ("when you visit pages on our site that display a Facebook Like button, Facebook will collect information about your visit") or check for open, logged-in browser windows. The NHS wants its sites to be trusted; but that means more than simply being medically accurate; it requires implementing confidentiality as well. The NHS's privacy policy is meaningless if you need to be a technical expert to exercise any choice. Similarly, who cares what the TSA's privacy policy says if the simple desire to spend Christmas with your family requires you to submit to whatever level of intimate inspection the agent on the ground that day feels like dishing out? What privacy policy makes up for being left covered in urine spilled from your roughly handled urostomy bag? Milgram moments, both.

It's at this point that we need our politicians to act in our interests, because the thinking has to change at the top level.

Meantime, if you're traveling in the US this Christmas, the ACLU and Edward Hasbrouck have handy guides to your rights. But pragmatically, if you do get patted down and really want to make your flight, it seems like your best policy is to lie back and think of the country of your choice.

November 12, 2010

Just between ourselves

It is, I'm sure, pure coincidence that a New York revival of Vaclav Havel's wonderfully funny and sad 1965 play The Memorandum was launched while the judge was considering the Paul Chambers "Twitter joke trial" case. "Bureaucracy gone mad," they're billing the play, and they're right, but what that slogan omits is that the bureaucracy in question has gone mad because most of its members don't care and the one who does has been shut out of understanding what's going on. A new language, Ptydepe, has been secretly invented and introduced as a power grab by an underling claiming it will improve the efficiency of intra-office communications. The hero only discovers the shift when he receives a memorandum written in the new language and can't get it translated due to carefully designed circular rules. When these are abruptly changed the translated memorandum restores him to his original position.

It is one of the salient characteristics of Ptydepe that it has a different word for every nuance of the characters' natural language - Czech in the original, but of course English in the translation I read. Ptydepe didn't work for the organization in the play because it was too complicated for anyone to learn, but perhaps something like it that removes all doubt about nuance and context would assist older judges in making sense of modern social interactions over services such as Twitter. Clearly any understanding of how people talk and make casual jokes was completely lacking yesterday when Judge Jacqueline Davies upheld the conviction of Paul Chambers in a Doncaster court.

Chambers' crime, if you blinked and missed those 140 characters, was to post a frustrated message about snowbound Doncaster airport: "Crap! Robin Hood airport is closed. You've got a week and a bit to get your shit together otherwise I'm blowing the airport sky high!" Everyone along the chain of accountability up to the Crown Prosecution Service - the airport duty manager, the airport's security personnel, the Doncaster police - seems to have understood he was venting harmlessly. And yet prosecution proceeded and led, in May, to a conviction that was widely criticized both for its lack of understanding of new media and for its failure to take Chambers' lack of malicious intent into account.

By now, everyone has been thoroughly schooled in the notion that it is unwise to make jokes about bombs, plane crashes, knives, terrorists, or security theater - when you're in an airport hoping to get on a plane. No one thinks any such wartime restraint need apply in a pub or its modern equivalent, the Twitter/Facebook/online forum circle of friends. I particularly like Heresy Corner's complaint that the judgement makes it illegal to be English.

Anyone familiar with online writing style immediately and correctly reads Chambers' Tweet for what it was: a perhaps ill-conceived expression of frustration among friends that happens to also be readable (and searchable) by the rest of the world. By all accounts, the judge seems to have read it as if it were a deliberately written personal telegram sent to the head of airport security. The kind of expert explanation on offer in this open letter apparently failed to reach her.

The whole thing is a perfect example of the growing danger of our data-mining era: that casual remarks are indelibly stored and can be taken out of context to give an utterly false picture. One of the consequences of the Internet's fundamental characteristic of allowing the like-minded and like-behaved to find each other is that tiny subcultures form all over the place, each with its own set of social norms and community standards. Of course, niche subcultures have always existed - probably every local pub had its own set of tropes that were well-known to and well-understood by the regulars. But here's the thing they weren't: permanently visible to outsiders. A regular who, for example, chose to routinely indicate his departure for the Gents with the statement, "I'm going out to piss on the church next door" could be well-known in context never to do any such thing. But if all outsiders saw was a ten-second clip of that statement and the others' relaxed reaction that had been posted to YouTube they might legitimately assume that pub was a shocking hotbed of anti-religious slobs. Context is everything.

The good news is that the people on the ground whose job it was to protect the airport read the message, understood it correctly, and did not overreact. The bad news is that when the CPS and courts did not follow their lead it opened up a number of possibilities for the future, all bad. One, as so many have said, is that anyone who now posts anything online while drunk, angry, stupid, or sloppy-fingered is at risk of prosecution - with the consequence of wasting huge amounts of police and judicial time that would be better spent spotting and stopping actual terrorists. The other is that everyone up the chain felt required to cover their ass in case they were wrong.

Chambers still may appeal to the High Court; Stephen Fry is offering to pay his fine (the Yorkshire Post puts his legal bill at £3,000), and there's a fund accepting donations.

October 23, 2010

An affair to remember

Politicians change; policies remain the same. Or if they don't, they return like the monsters in horror movies that end with the epigraph, "It's still out there..."

Cut to 1994, my first outing to the Computers, Freedom, and Privacy conference, where I saw passionate discussions about the right to strong cryptography. The counterargument from government, law enforcement, and security service types was that yes, strong cryptography was a fine and excellent thing for protecting communications from prying eyes, and for that very reason we needed key escrow to ensure that bad people couldn't say evil things to each other in perfect secrecy. The listing of organized crime, terrorists, drug dealers, and pedophiles as the reasons why it was vital to ensure access to cleartext became so routine that physicist Timothy May dubbed them "The Four Horsemen of the Infocalypse". Cypherpunks opposed restrictions on the use and distribution of strong crypto; government types wanted at the very least a requirement that copies of secret cryptographic keys be provided and held in escrow against the need to decrypt in case of an investigation. The US government went so far as to propose a technology of its own, complete with back door, called the Clipper chip.

Eventually, Matt Blaze found a serious flaw in the Clipper chip's escrow mechanism, the needs of electronic commerce won out over the paranoia of the military, and restrictions on the use and export of strong crypto were removed.

Cut to 2000 and the run-up to the passage of the UK's Regulation of Investigatory Powers Act. Same Four Horsemen, same arguments. Eventually RIPA passed with the requirement that individuals disclose their cryptographic keys - but without key escrow. Note that it's just in the last couple of months that someone - a teenager - has gone to jail in the UK for the first time for refusing to disclose their key.

It is not just hype by security services seeking to evade government budget cuts to say that we now have organized cybercrime. Stuxnet rightly has scared a lot of people into recognizing the vulnerabilities of our infrastructure. And clearly we've had terrorist attacks. What we haven't had is a clear demonstration by law enforcement that encrypted communications have impeded the investigation.

A second and related strand of argument holds that communications data - that is, traffic data such as email headers and Web addresses - must be retained and stored for some lengthy period of time, again to assist law enforcement in case an investigation is needed. As the Foundation for Information Policy Research and Privacy International have consistently argued for more than ten years, such traffic data is extremely revealing. Yes, that's why law enforcement wants it; but it's also why the American Library Association has consistently opposed handing over library records. Traffic data doesn't just reveal who we talk to and care about; it also reveals what we think about. And because such information is of necessity stored without context, it can also be misleading. If you already think I'm a suspicious person, the fact that I've been reading proof-of-concept papers about future malware attacks sounds like I might be a danger to cybersociety. If you know I'm a journalist specializing in technology matters, that doesn't sound like so much of a threat.
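Even a toy example shows how much the headers alone give away. This sketch - a wholly invented message; every name and address in it is hypothetical - uses Python's standard email parser to separate the traffic data such a retention scheme would keep from the content it would not:

```python
from email import message_from_string

# A hypothetical email. Only the last line is "content"; everything
# above the blank line is the traffic data a retention scheme keeps.
raw = """\
From: patient@example.org
To: oncology-support@example-hospital.org
Subject: next meeting
Date: Fri, 22 Oct 2010 09:15:00 +0100

See you Thursday.
"""

msg = message_from_string(raw)

# Retaining "just" the headers still discards nothing that matters here:
# the To: address alone suggests a medical condition.
traffic_data = {name: value for name, value in msg.items()}
print(traffic_data)
```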

And so to this week. The former head of the Department of Homeland Security, Michael Chertoff, at the RSA Security Conference compared today's threat of cyberattack to nuclear proliferation. The US's Secure Flight program is coming into effect, requiring airline passengers to provide personal data for the US to check 72 hours in advance (where possible). Both the US and UK security services are proposing the installation of deep packet inspection equipment at ISPs. And language in the UK government's Strategic Defence and Security Review (PDF) has led many to believe that what's planned is the revival of the we-thought-it-was-dead Interception Modernisation Programme.

Over at Light Blue Touchpaper, Ross Anderson links many of these trends and asks if we will see a resumption of the crypto wars of the mid-1990s. I hope not; I've listened to enough quivering passion over mathematics to last an Internet lifetime.

But as he says it's hard to see one without the other. On the face of it, because the data "they" want to retain is traffic data and not content, encryption might seem irrelevant. But a number of trends are pushing people toward greater use of encryption. First and foremost is the risk of interception; many people prefer (rightly) to use secured https, SSH, or VPN connections when they're working over public wi-fi networks. Others secure their connections precisely to keep their ISP from being able to analyze their traffic. If data retention and deep packet inspection become commonplace, so will encrypted connections.

And at that point, as Anderson points out, the focus will return to long-defeated ideas like key escrow and restrictions on the use of encryption. The thought of such a revival is depressing; implementing any of them would be such a regressive step. If we're going to spend billions of pounds on the Internet infrastructure - in the UK, in the US, anywhere else - it should be spent on enhancing robustness, reliability, security, and speed, not building the technological infrastructure to enable secret, warrantless wiretapping.

October 8, 2010

The zero effect

Some fifteen years ago I reviewed a book about the scary future of our dependence on computers. The concluding note: that criminals could take advantage of a zero-day exploit to disseminate a virus around the world that at a given moment would shut down all the world's computers.

I'm fairly sure I thought this was absurd and said so. Since when have we been able to get every computer to do anything on command? But there was always the scarier and less unlikely prospect that building computers into more and more everyday things would add more and more chances for code to increase the physical world's vulnerability to attack.

By any measure, the Stuxnet worm that has been dominating this week's technology news is an impressive bit of work (it may even have had a beta test). So impressive, in fact, that you imagine its marketing brochure said, like the one for the spaceship Heart of Gold in The Hitchhiker's Guide to the Galaxy, "Be the envy of other major governments."

The least speculative accounts, like those of Bruce Schneier, Business Standard, and Symantec, and that of long-time Columbia University researcher Steve Bellovin agree on a number of important points.

First, whoever coded this worm had an extremely well-resourced organization behind them. Symantec estimates the effort would have required five to ten people and six months - and, given that the worm is nearly bug-free, teams of managers and quality assurance folks. (Nearly bug-free: how often can you say that of any software?) In a paper he gave at Black Hat several years ago, Peter Gutmann documented the highly organized nature of the malware industry (PDF). Other security researchers have agreed: there is a flourishing ecosystem around malware that includes myriad types of specialist groups who provide all the features of other commercial sectors, up to and including customer service.

In addition, the writers were willing to use up three zero-day exploits (many reports say four, but Schneier notes that one has been identified as a previously reported vulnerability). This is expensive, in the sense that these vulnerabilities are hard to find and can only be used once (because once used, they'll be patched). You don't waste them on small stuff.

Plus, the coders were able to draw on rather specialised knowledge of the inner workings of Siemens programmable logic controller systems and gain access to the certificates needed to sign drivers. And the worm was both able to infect widely and target specifically. Interesting.

The big remaining question is what the goal was: to send a message? To strike one specific, as yet unidentified, target? A simple proof of concept? Whatever the purpose was, it's safe to say that this will not be the last piece of weapons-grade malware (as Bellovin calls it) to be unleashed on the world. If existing malware is any guide, future Stuxnets will be less visible, harder to find and stop, and written to more specific goals. Yesterday's teenaged bedroom hacker defacing Web pages has been replaced by financial criminals whose malware cleans other viruses off the systems it infects and steals very specifically useful data. Today's Stuxnet programmers will most likely be followed by more complex organizations with much clearer and more frightening agendas. They won't stop all the world's computers (because they'll need their own to keep running); but does that matter if they can disrupt the electrical grid and the water supply, or reroute trains and disrupt air traffic control?

Schneier notes that press reports incorrectly identified the Siemens systems Stuxnet attacked as SCADA (for Supervisory Control and Data Acquisition) rather than PLC. But that doesn't mean that SCADA systems are invulnerable: Tom Fuller, who ran the now-defunct Blindside project in 2006-2007 for the government consultancy Kable under a UK government contract, spotted the potential threats to SCADA systems as long ago as that. Post-Stuxnet, others are beginning to audit these systems and agree. An Australian audit of Victoria's water systems concluded that these are vulnerable to attack, and it seems likely many more such reports will follow.

But the point Bellovin makes that is most likely to be overlooked is this one: that building a separate "secure" network will not provide a strong defense. To be sure, we keep adding new vulnerabilities to many pieces of infrastructure. Many security experts agree that the deployment of wireless electrical meters and the "smart grid" has gone ahead without an adequate understanding of the privacy and security issues it will raise.

The temptation to overlook Bellovin's point is going to be very strong. But the real-world equivalent is to imagine that because your home computer is on a desert island surrounded by a moat filled with alligators it can't be stolen. Whereas the reality is that a family member or invited guest can still copy your data and make off with it, or some joker can drop in by helicopter.

Computers are porous. Infrastructure security must assume that and limit the consequences.

This blog eats all non-spam comments; I still don't know why.

September 24, 2010

Lost in a Haystack

In the late 1990s you could always tell when a newspaper had just gotten online because it would run a story about the Good Times virus.

Pause for historical detail: the Good Times virus (and its many variants) was an email hoax. An email message with the subject heading "Good Times" or, later, "Join the Crew", or "Penpal Greetings", warned recipients that opening email messages with that header would damage their computers or delete the contents of their hard drives. Some versions cited Microsoft, the FCC, or some other authority. The messages also advised recipients to forward the message to all their friends. The mass forwarding and subsequent complaints were the payload.

The point, in any case, is that the Good Times virus was the first example of mass social engineering that spread by exploiting not particularly clever psychology and a specific kind of technical ignorance. The newspaper staffers of the day were very much ordinary new users in this regard, and they would run the story thinking they were serving their readers. To their own embarrassment, of course. You'd usually see a retraction a week or two later.

Austin Heap, the progenitor of Haystack, software he claimed was devised to protect the online civil liberties of Iranian dissidents, seems to have been not so much conducting an elaborate hoax as merely failing to understand what he was doing. Either way, Haystack represents a significant leap upward in successfully taking mainstream, highly respected publications for a technical ride. Evgeny Morozov's detailed media critique underestimates the impact of the recession and staff cuts on an already endangered industry. We will likely see many more mess-equals-technology-plus-journalism stories because so few technology specialists remain in the post-recession mainstream media.

I first heard Danny O'Brien's doubts about Haystack in June, and his chief concern was simple and easily understood: no one was able to get a copy of the software to test it for flaws. For anyone who knows anything about cryptography or security, that ought to have been damning right out of the gate. The lack of such detail is why experienced technology journalists, including Bruce Schneier, generally avoided commenting on it. There is a simple principle at work here: the *only* reason to trust technology that claims to protect its users' privacy and/or security is that it has been thoroughly peer-reviewed - banged on relentlessly by the brightest and best and they have failed to find holes.

As a counter-example, let's take Phil Zimmermann's PGP, email encryption software that really has protected the lives and identities of far-flung dissidents. In 1991, when PGP first escaped onto the Net, interest in cryptography was still limited to a relatively small, though very passionate, group of people. The very first thing Zimmermann wrote in the documentation was this: why should you trust this product? Just in case readers didn't understand the importance of that question, Zimmermann elaborated, explaining how fiendishly difficult it is to write encryption software that can withstand prolonged and deliberate attacks. He was very careful not to claim that his software offered perfect security, saying only that he had chosen the best algorithms he could from the open literature. He also distributed the source code freely for review by all and sundry (who have to this day failed to find substantive weaknesses). He concludes: "Anyone who thinks they have devised an unbreakable encryption scheme either is an incredibly rare genius or is naive and inexperienced." Even the software's name played down its capabilities: Pretty Good Privacy.

When I wrote about PGP in 1993, PGP was already changing the world by up-ending international cryptography regulations, blocking mooted US legislation that would have banned the domestic use of strong cryptography, and defying patent claims. But no one, not even the most passionate cypherpunks, claimed the two-year-old software was the perfect, the only, or even the best answer to the problem of protecting privacy in the digital world. Instead, PGP was part of a wider argument taking shape in many countries over the risks and rewards of allowing civilians to have secure communications.

Now to the claims made for Haystack in its FAQ:

However, even if our methods were compromised, our users' communications would be secure. We use state-of-the-art elliptic curve cryptography to ensure that these communications cannot be read. This cryptography is strong enough that the NSA trusts it to secure top-secret data, and we consider our users' privacy to be just as important. Cryptographers refer to this property as perfect forward secrecy.

Without proper and open testing of the entire system - peer review - they could not possibly know this. The strongest cryptographic algorithm is only as good as its implementation. And even then, as Clive Robertson writes in Financial Cryptography, technology is unlikely to be a complete solution.
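The FAQ's invocation of "perfect forward secrecy" is itself muddled: forward secrecy is a property of how session keys are negotiated - fresh, throwaway keys per session - not of an algorithm's strength. A minimal sketch of the idea, using textbook finite-field Diffie-Hellman rather than the elliptic curves the FAQ mentions (a toy group, nothing here is drawn from Haystack's actual, unpublished design):

```python
import secrets

# Textbook Diffie-Hellman over a toy group (the Mersenne prime 2**127 - 1).
# Real systems use vetted libraries, far larger groups, or elliptic curves;
# this only illustrates the *property* called forward secrecy.
P = 2**127 - 1
G = 3

def ephemeral_keypair():
    """A fresh, throwaway secret per session - the crux of forward secrecy."""
    secret = secrets.randbelow(P - 2) + 2
    public = pow(G, secret, P)
    return secret, public

def shared_key(my_secret, their_public):
    return pow(their_public, my_secret, P)

# Session 1: both parties generate ephemeral keys, agree on a key, then
# discard the secrets. Compromising either party later reveals nothing
# about this session, because no long-lived key determined it.
a_sec, a_pub = ephemeral_keypair()
b_sec, b_pub = ephemeral_keypair()
session1 = shared_key(a_sec, b_pub)
assert session1 == shared_key(b_sec, a_pub)  # both sides derive the same key

# Session 2 uses entirely fresh secrets, so its key is independent.
a_sec2, a_pub2 = ephemeral_keypair()
b_sec2, b_pub2 = ephemeral_keypair()
session2 = shared_key(a_sec2, b_pub2)
assert session1 != session2
```

Whether Haystack did anything of the kind could only ever have been established by the peer review it never received.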

What a difference a sexy news hook makes. In 1993, the Clinton Administration's response to PGP was an FBI investigation that dogged Zimmermann for two years; in 2010, Hillary Clinton's State Department fast-tracked Haystack through the licensing requirements. Why such a happy embrace of Haystack rather than existing privacy technologies such as Freenet, Tor, or other anonymous remailers and proxies remains a question for the reader.

August 27, 2010

Trust the data, not the database

"We're advising people to opt out," said the GP, speaking of the Summary Care Records that are beginning to be uploaded to what is eventually supposed to become a nationwide database used by the NHS. Her reasoning goes this way. If you don't upload your data now you can always upload it later. If you do upload it now - or sit passively by while the National Health Service gets going on your particular area - and live to regret it you won't be able to get the data back out again.

You can find the form here, along with a veiled hint that you'll be missing out on something if you do opt out - like all those great offers of products and services companies always tell you you'll get if you sign up for their advertising. The Big Opt-Out Web site has other ideas.

The newish UK government's abrupt dismissal of the darling databases of last year has not dented the NHS's slightly confusing plans to put summary care records on a national system that will move control over patient data from your GP, who you probably trust to some degree, to...well, there's the big question.

In briefings for Parliamentarians conducted by the Open Rights Group in 2009, Emma Byrne, a researcher at University College, London who has studied various aspects of healthcare technology policy, commented that the SCR was not designed with any particular use case in mind. Basic questions that an ordinary person asks before every technology purchase - who needs it? for what? under what circumstances? to solve what problem? - do not have clear answers.

"Any clinician understands the benefits of being able to search a database rather than piles of paper records, but we have to do it in the right way," Fleur Fisher, the former head of ethics, science, and information for the British Medical Association said at those same briefings. Columbia University researcher Steve Bellovin, among others, has been trying to figure out what that right way might look like.

As comforting as it sounds to say that the emergency care team looking after you will be able to look up your SCR and find out that, for example, you are allergic to penicillin and peanuts, in practice that's not how stuff happens - and isn't even how stuff *should* happen. Emergency care staff look at the patient. If you're in a coma, you want the staff to run the complete set of tests, not look up in a database, see you're a diabetic and assume it's a blood sugar problem. In an emergency, you want people to do what the data tells them, not what the database tells them.

Databases have errors, we know this. (Just last week, a database helpfully moved the town I live in from Surrey to Middlesex, for reasons best known to itself. To fix it, I must write them a letter and provide documentation.) Typing and cross-matching blood drawn from the patient in front of you is much more likely to have you transfusing the right type of blood into the right patient.

But if the SCR isn't likely to be so much used by the emergency staff we're all told would? might? find it helpful, it still opens up much broader possibilities of abuse. It's this part of the system that the GP above was complaining about: you cannot tell who will have access or under what circumstances.

GPs do, in a sense, have a horse in this race, in that if patient data moves out of their control they have lost an important element of their function as gatekeepers. But given everything we know about how and why large government IT projects fail, surely the best approach is small, local projects that can be scaled up once they're shown to be functional and valuable. And GPs are the people at the front lines who will be the first to feel the effects of a loss of patient trust.

A similar concern has kept me from joining a study whose goals I support, intended to determine if there is a link between mobile phone use and brain cancer. The study is conducted by an ultra-respectable London university; they got my name and address from my mobile network operator. But their letter notes that participation means giving them unlimited access to my medical records for the next 25 years. I'm 56, about the age of the earliest databases, and I don't know who I'll be in 25 years. Technology is changing faster than I am. What does this decision mean?

There's no telling. Had they said I was giving them permission for five years and then would be asked to renew, I'd feel differently about it. Similarly, I'd be more likely to agree had they said that under certain conditions (being diagnosed with cancer, dying, developing brain disease) my GP would seek permission to release my records to them. But I don't like writing people blank checks, especially with so many unknowns over such a long period of time. The SCR is a blank check.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

August 6, 2010

Bride of Clipper

"It's the Clipper chip," said Ross Anderson, or more or less, "risen out of its grave trailing clanking chains and covered in slime." Anderson was talking about the National Strategy for Trusted Identities in Cyberspace, a plan hatched in the US and announced by cybersecurity czar Howard Schmidt in June.

The Clipper chip was the net.war in progress when I went to my first Computers, Freedom, and Privacy conference, the 1994 edition, held in Chicago. The idea behind Clipper was kind of cute: the government, in the form of the NSA, had devised a cryptographic chip that could be installed in any telecommunications device to encrypt and decrypt any communications it transmitted or received. The catch: the government would retain a master key to allow it to decrypt anything it wanted whenever it felt the need. Privacy advocates and civil libertarians and security experts joined to fight a battle royal against its adoption as a government standard. We'll never know how that would have come out because while passions were still rising a funny thing happened: cryptographer Matt Blaze discovered he could bypass the government's back door (PDF) and use the thing to send really encrypted communications. End of Clipper chip.

At least, as such.

The most important element of the Clipper chip, however - key escrow - stayed with us a while longer. It means what it sounds like: depositing a copy of your cryptographic key, which is supposed to be kept secret, with an authority. During the 1990s run of fights over key escrow (the US and UK governments wanted it; technical experts, civil libertarians, and privacy advocates all thought it was a terrible idea) such authorities were referred to as "trusted third parties" (TTPs). At one event Privacy International organised to discuss the subject, government representatives made it clear their idea of TTPs were banks. They seemed astonished to discover that in fact people don't trust their banks that much. By the time the UK's Regulation of Investigatory Powers Act was passed in 2000, key escrow had been eliminated.

But it is this very element - TTPs and key escrow - that is clanking along to drip slime on the NSTIC. The proposals are, of course, still quite vague, as the Electronic Frontier Foundation has pointed out. But the proposals do talk of "trusted digital identities" and "identity providers" who may be from the public or private sectors. They talk less, as the Center for Democracy and Technology has pointed out, about the kind of careful user-centric, role-specific, transactional authentication that experts like Jan Camenisch and Stefan Brands have long advocated. (Since I did that 2007 interview with him, Brands' company, Credentica, has been bought by Microsoft and transformed into its new authentication technology, U-Prove.) Have an identity ecosystem, by all means, but the key to winning public trust - the most essential element of any such system - will be ensuring that identity providers are not devised as Medium-sized Brothers-by-proxy.

Blaze said at the time that the Feds were pretty grown-up about the whole thing. Still, I suppose it was predictable that it would reappear. Shortly after the 9/11 attacks Jack Straw, then foreign minister, called those of us who opposed key escrow in the 1990s "very naïve". The rage over that kicked off the first net.wars column.

The fact remains that if you're a government and you want access to people's communications and those people encrypt those communications there are only two approaches available to you. One: ban the technology. Two: install systems that let you intercept and decode the communications at will. Both approaches are suddenly vigorously on display with respect to BlackBerry devices, which offer the most secure mobile email communications we have (which is why businesses and governments love them so much for their own use).

India wants to take the second approach, but will settle for the first if Research in Motion doesn't install a server in India, where it can be "safely" monitored. The UAE, as everyone heard this week, wants to ban it starting on October 11. (I was on Newsnight Tuesday to talk about this with Kirsty Wark and Alan West.)

No one, not CDT, PI, or EFF, not even me, disputes that there are cases where intercepting and reading communications - wiretapping - is necessary in the interest of protecting innocent lives. But what key escrow and its latter-day variants enable, as Susan Landau, a security researcher and co-author of Privacy on the Line: The Politics of Wiretapping and Encryption, has noted, is covert wiretapping. Or, choose your own favorite adjective: covert, warrantless, secret, unauthorized, illegal... It would be wonderful to be able to think that all law enforcement heroes are noble, honorable, and incapable of abusing the power we give them. But history says otherwise: where there is no oversight, abuse follows. Judicial oversight of wiretapping requests is our bulwark against mass surveillance.

CDT, EFF, and others are collecting ideas for improving NSTIC, starting with extending the period for public comments, which was distressingly short (are we seeing a pattern develop here?). Go throw some words at the problem.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 18, 2010

Things I learned at this year's CFP

- There is a bill in front of Congress to outlaw the sale of anonymous prepaid SIMs. The goal seems to be some kind of fraud and crime prevention. But, as Ed Hasbrouck points out, the principal people who are likely to be affected are foreign tourists and the Web sites who sell prepaid SIMS to them.

- Robots are getting near enough in researchers' minds for them to be spending significant amounts of time considering the legal and ethical consequences in real life - not in Asimov's fictional world where you could program in three safety laws and your job was done. Ryan Calo points us at the work of Stanford student Victoria Groom on human-robot interaction. Her dissertation research, not yet on the site, discovered that humans allocate responsibility for success and failure proportionately according to how anthropomorphic the robot is.

- More than 24 percent of tweets - and rising sharply - are sent by automated accounts, according to Miranda Mowbray at HP labs. Her survey found all sorts of strange bots: things that constantly update the time, send stock quotes, tell jokes, the tea bot that retweets every mention of tea...

- Google's Kent Walker, the 1997 CFP chair, believes that censorship is as big a threat to democracy as terrorism, and says that open architectures and free expression are good for democracy - and coincidentally also good for Google's business.

- Microsoft's chief privacy strategist, Peter Cullen, says companies must lead in privacy to lead in cloud computing. Not coincidentally, others at the conference note that US companies are losing business to Europeans in cloud computing because EU law prohibits the export of personal data to the US, where data protection is insufficient.

- It is in fact possible to provide wireless that works at a technical conference. And good food!

- The Facebook Effect is changing other companies' attitudes toward user privacy. Lauren Gelman, who helps new companies with privacy issues, noted that because start-ups all see Facebook's success and want to be the next 400 million-user environment, there used to be a strong temptation to emulate Facebook's behavior. Now, with angry cries mounting from consumers, she has to spend less effort convincing start-ups of the pushback they will get if they change their policies and defy users' expectations. Even so, it's important to ensure that start-ups include privacy in their budgets rather than treat it as an afterthought. In this respect, she makes me realize, privacy in 2010 is at the stage that usability was at in the early 1990s.

- All new program launches come through the office of the director of Yahoo!'s business and human rights program, Ebele Okabi-Harris. "It's very easy for the press to focus on China and particular countries - for example, Australia last year, with national filtering," she said, "but for us as a company it's important to have a structure around this because it's not specific to any one region." It is, she added later, a "global problem".

- We should continue to be very worried about the database state because the ID cards repeal act continues the trend toward data sharing among government departments and agencies, according to Christina Zaba from No2ID.

- Information brokers and aggregators, operating behind the scenes, are amassing incredible amounts of detail about Americans, and it can require a great deal of work to remove one's information from these systems. The main customers of these systems are private investigators, debt collectors, media, law firms, and law enforcement. The Privacy Rights Clearinghouse sees many disturbing cases, as Beth Givens outlined, as does Pam Dixon's World Privacy Forum.

- I always knew - or thought I knew - that the word "robot" was coined not by Asimov but by Karel Capek for his play R.U.R. ("Rossum's Universal Robots"; coincidentally, I also know that playing a robot in that play was Michael Caine's first acting job). But Twitterers tell me that this isn't quite right. The word derives from the Czech "robota", "compulsory work for a feudal landlord" - and it was actually coined by Capek's older brother, Josef.

- There will be new privacy threats emerging from automated vehicles, other robots, and voicemail transcription services, sooner rather than later.

- Studying the inner workings of an organization like the International Civil Aviation Organization is truly difficult because the time scales - ten years to get from technical proposals to mandated standard, which is when the public becomes aware of them - are a profound mismatch for the attention span of media and those who fund NGOs. Anyone who feels like funding an observer to represent civil society at ICAO should get in touch with Edward Hasbrouck.

- A lot of our cybersecurity problems could be solved by better technology.

- Lillie Coney has a great description of deceptive voting practices designed to disenfranchise the opposition: "It's game theory run amok!"

- We should not confuse insecure networks (as in vulnerable computers and flawed software) with unsecured networks (as in open wi-fi).

- Next year's conference chairs are EPIC's Lillie Coney and Jules Polonetsky. It will be in Washington, DC, probably the second or third week in June. Be there!

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 30, 2010

Child's play

In the TV show The West Wing (Season 6, Episode 17, "A Good Day") young teens tackle the president: why shouldn't they have the right to vote? There's probably no chance, but they made their point: as a society we trust kids very little and often fail to take them or their interests seriously.

That's why it was so refreshing to read in 2008's Byron Review the recommendation that we should consult and listen to children in devising programs to ensure their safety online. Byron made several thoughtful, intelligent analogies: we supervise as kids learn to cross streets, we post warning signs at swimming pools but also teach them to swim.

She also, more controversially, recommended that all computers sold for home use in the UK should have Kitemarked parental control software "which takes parents through clear prompts and explanations to help set it up and that ISPs offer and advertise this prominently when users set up their connection."

The general market has not adopted this recommendation; but it has been implemented with respect to the free laptops issued to low-income families under Becta's £300 million Home Access Laptop scheme, announced last year as part of efforts to bridge the digital divide. The recipients - 70,000 to 80,000 so far - have a choice of supplier, of ISP, and of hardware make and model. However, the laptops must meet a set of functional technical specifications, one of which is compliance with PAS 74:2008, the British Internet safety standard. That means anti-virus, access control, and filtering software: NetIntelligence.

Naturally, there are complaints; these fall precisely in line with the general problems with filtering software, which have changed little since 1996, when the passage of the Communications Decency Act inspired 17-year-old Bennett Haselton to start Peacefire to educate kids about the inner workings of blocking software - and how to bypass it. Briefly:

1. Kids are often better at figuring out ways around the filters than their parents are, giving parents a false sense of security.

2. Filtering software can't block everything parents expect it to, adding to that false sense of security.

3. Filtering software is typically overbroad, becoming a vehicle for censorship.

4. There is little or no accountability about what is blocked or the criteria for inclusion.
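Point 3, overbreadth, is usually a side effect of crude matching rather than deliberate policy. A toy sketch of why naive keyword filtering over-blocks (the word list and matching rule here are hypothetical illustrations, not how NetIntelligence or any real product actually works):

```python
# Toy illustration of the overbreadth problem: substring matching
# flags innocent pages (the classic "Scunthorpe problem"), while
# anything not on the list sails through. Hypothetical blocklist.
BLOCKED_WORDS = ["sex", "breast"]

def is_blocked(text: str) -> bool:
    """Return True if any blocked word appears anywhere in the text."""
    lowered = text.lower()
    return any(word in lowered for word in BLOCKED_WORDS)

print(is_blocked("Visit scenic Sussex and Essex"))       # True: false positive
print(is_blocked("Breast cancer screening advice"))      # True: health info blocked
print(is_blocked("A perfectly innocent gardening page")) # False: unknown content passes
```

The two failure modes reinforce each other: the false positives give filtering its censorship reputation, while the false negatives give parents the false sense of security described above.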

This case looks similar - at first. Various reports claim that as delivered NetIntelligence blocks social networking sites and even Google and Wikipedia, as well as Google's Chrome browser because the way Chrome installs allows the user to bypass the filters.

NetIntelligence says the Chrome issue is only temporary; the company expects a fix within three weeks. Marc Kelly, the company's channel manager, also notes that the laptops that were blocking sites like Google and Wikipedia were misconfigured by the supplier. "It was a manufacturer and delivery problem," he says; once the software has been reinstalled correctly, "The product does not block anything you do not want it to." Other technical support issues - trouble finding the password, for example - are arguably typical of new users struggling with unfamiliar software and inadequate technical support from their retailer.

Both Becta and NetIntelligence stress that parents can reconfigure or uninstall the software, even if some are confused about how to do it. First, they must activate the software by typing in the code the vendor provides; that gets them password access to change the blocking list or uninstall the software.

The list of blocked sites, Kelly says, comes from several sources: the Internet Watch Foundation's list and similar lists from other countries; a manual assessment team also reviews sites. Sites that feel they are wrongly blocked should email NetIntelligence support. The company has, he adds, tried to make it easier for parents to implement the policies they want; originally social networks were not broken out into their own category. Now, they are easily unblocked by clicking one button.

The simple reaction is to denounce filtering software and all who sail in her - censorship! - but the Internet is arguably now more complicated than that. Research Becta conducted on the pilot group found that 70 percent of the parents surveyed felt that the built-in safety features were very important. Even the most technically advanced of parents struggle to balance their legitimate concerns in protecting their children with the complex reality of their children's lives.

For example: will what today's children post to social networks damage their chances of entry into a good university or a job? What will they find? Not just pornography and hate speech; some parents object to creationist sites, some to scary science fiction, others to Fox News. Yesterday's harmless flame wars are today's more serious cyber-bullying and online harassment. We must teach kids to be more resilient, Byron said; but even then kids vary widely in their grasp of social cues, common sense, emotional make-up, and technical aptitude. Even experts struggle with these issues.

"We are progressively adding more information for parents to help them," says Kelly. "We want the people to keep the product at the end. We don't want them to just uninstall it - we want them to understand it and set the policies up the way they want them." Like all of us, Kelly thinks the ideal is for parents to engage with their children on these issues, "But those are the rules that have come along, and we're doing the best we can."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 23, 2010

Death, where is thy password?

When last seen, our new widow was wrestling with her late husband's password, unable to get into the Microsoft Money files he used to manage their finances or indeed his desktop computer in general. Hours of effort from the best geekish minds (we are grateful to Drew and Peter) led nowhere. Eventually, we paid £199 to Elcomsoft (the company whose employee Dmitry Sklyarov was arrested in 2001 at Defcon for cracking Adobe eBook files) for its Advanced Office Password Recovery software and it found the password after about 18 hours of constrained brute-force attempts. That password, doctored in line with the security hint my friend had left behind, unlocked his desktop.

My widow had only one digit wrong in that password, by the way. Computers have no concept of "close enough".
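That near-miss is exactly why a "constrained" brute-force attack like Elcomsoft's can finish in hours rather than centuries: a hint plus partial knowledge collapses the search space. A rough sketch of the idea (hint-based candidate generation I've invented for illustration, not Elcomsoft's actual algorithm):

```python
import string

def near_variants(base):
    """Yield every candidate differing from `base` by one character
    (the widow's real password turned out to be one digit off)."""
    alphabet = string.ascii_letters + string.digits
    for i in range(len(base)):
        for ch in alphabet:
            if ch != base[i]:
                yield base[:i] + ch + base[i + 1:]

def crack(guess, check):
    """Try the remembered guess, then every one-character variant."""
    if check(guess):
        return guess
    for candidate in near_variants(guess):
        if check(candidate):
            return candidate
    return None

# Toy target: the remembered hint is off by one digit.
actual = "tr0ub4dor3"
found = crack("tr0ub4dor8", lambda p: p == actual)
print(found)  # tr0ub4dor3
```

For a 10-character password over 62 symbols, the constrained search tries about 610 candidates; an unconstrained brute force would face 62^10, roughly 8 x 10^17. That ratio is the difference between 18 hours and never.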

But the fun was only beginning. It is a rarely discussed phenomenon of modern life that when someone close to you dies, alongside the memories and any property real and personal they bequeath you a full-time job. The best-arranged, most orderly financial affairs do not transfer themselves gently after the dying of the light.

For one thing, it takes endless phone calls to close, open, or change the names on accounts. Say an average middle-class American: maybe five credit card accounts, two bank accounts, a brokerage account, a couple of IRA accounts, and a 401(K) plan per job held? Plus mortgage, utilities (gas, electric, broadband, cellphone, TV cable), government agencies (motor vehicles, Social Security, federal and state tax), plus magazine/product/service subscriptions. Shall we guess 40 to 50 accounts?

All these organizations are, of course, aware that people die, and they have Procedures. What varies massively (from eavesdropping on some of those phone calls) is the behavior of the customer service people you have to talk to. In a way, this makes sense: customer service representatives are people, too (sometimes), and if you've ever had to tell someone that your <insert close relative here> just died unexpectedly you'll know that the reactions run the gamut from embarrassed to unexpectedly kind to abrupt to uncomfortably inquisitive to (occasionally) angry. That customer service rep isn't going to be any different. Unfortunately. Because you, the customer, are making your 11th call of the day, and it isn't getting any easier or more fun.

A desire to automate this sort of thing was often the reason given for the UK to bring in an ID card. Report once, update everywhere. It sounds wonderful (assuming they've got the right dead person). Although my suspicion is that what organizations do with the information will be as different then as it is now: some automatically close accounts and send a barcoded letter with a number to call if you want to take the account over; some just want you to spell the new name; a few actually try to help you while doing the job they have to do.

What hasn't been set up with death in mind, though, is online account access. I'm told that in the UK, where direct debits and standing orders have a long history, all automated payments are immediately cancelled when the account holder dies and must be actively reinstated if they are to continue. In the US, where automated payments basically arrived with the Internet, things are different: some (such as mortgage payments) may be arranged with your bank, but others may be run through a third-party electronic payment service. In the case of one such account, we discovered that although both my friend and his wife had individual logins she could not change his settings while logged in using her ID and password. In other words, she could not cancel the payments he'd set up.

Cue another password battle. Our widow had already supplied death certificate and confirmation that she was executor. The company accordingly reset his password for her. But using her computer instead of his to access the site and enter the changed password triggered the site's suspicions, and it demanded an answer to the ancillary security question: "What city was your mother born in?"

There turned out to be some uncertainty about that. And then how the right town was spelled. By which time the site had thrown a hissy fit and locked her out for answering incorrectly too many times. And this time customer service couldn't unlock it without an in-person office visit.

Who thinks to check when they're setting up an automated payment how the site will handle matters when you're dead or incapacitated? We all should - and the services should help us by laying this stuff out up front in the FAQs.

The bottom line: these services are largely new, and they're being designed primarily by younger people who are dismissive about the technical aptitude of older people. At every technical conference the archetypal uncomprehending non-technical user geeks refer to is "your grandmother" or "my mother". Yet it does not seem to occur to them that these are the people who, at the worst moment of their lives, are likely to have to take over and operate these accounts on someone else's behalf and they are going to need help.

Death's a bitch - and then you die.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Unfortunately, this blog eats non-spam comments and I don't know why.

February 19, 2010

Death doth make hackers of us all

"I didn't like to ask him what his passwords were just as he was going in for surgery," said my abruptly widowed friend.

Now, of course, she wishes she had.

Death exposes one of the most significant mismatches between security experts' ideas of how things should be done and the reality for home users. Every piece of advice they give is exactly the opposite of what you'd tell someone trying to create a disaster recovery plan to cover themselves in the event of the death of the family computer expert, finance manager, and media archivist. If this were a business, we'd be talking about losing the CTO, CIO, CSO, and COO in the same plane crash.

Fortunately, while he was alive, and unfortunately, now, my friend was a systems programmer of many decades of expertise. He was acutely aware of the importance of good security. And so he gave his Windows desktop, financial files, and email software fine passwords. Too fine: the desktop one is completely resistant to educated guesses based on our detailed knowledge of his entire life and partial knowledge of some of his other PINs and passwords.

All is not locked away. We think we have the password to the financial files, so getting access to those is a mere matter of putting the hard drive in another machine, finding the files, copying them, installing the financial software on a different machine, and loading them up. But it would be nice to have direct as-him access to his archive of back (and new) email, the iTunes library he painstakingly built and digitized, his Web site accounts, and so on. Because he did so much himself, and because his illness was an 11-day chase to the finish, our knowledge of how he did things is incomplete. Everyone thought there was time.

With backups secured and the financial files copied, we set to the task of trying to gain desktop access.

Attempt 1: ophcrack. This is a fine piece of software that's easy to use as long as you don't look at any of the detail. Put it on a CD, boot from said CD, run it on automatic, and you're fine. The manual instructions I'm sure are fine, too, for anyone who has studied Windows SAM files.

Ophcrack took a happy 4 minutes and 39 seconds to disclose that the computer has three accounts: administrator, my friend's user account, and guest. Administrator and guest have empty passwords; my friend's is "not found". But that's OK, said the security expert I consulted, because you can log in as administrator using the empty password and change the user account. Here is a helpful command. Sure. No problem.

Except, of course, that this is Vista, and Vista hides the administrator account to make sure that no brainless idiot accidentally got into the administrator account and ran around the system creating havoc and corrupted files. By "brainless idiot" I mean: the user-owner of the computer. Naturally, my friend had left it hidden.

In order to unhide the administrator account so you can run the commands to reset my friend's password, you have to run the command prompt in administrator mode. Which we can't do because, of course, there are only two administrator accounts and one is hidden and the other is the one we want the password for. Next.
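For the record, the commands themselves are standard Windows ones; the whole chicken-and-egg is about getting an elevated prompt to run them from. A sketch, assuming you can somehow reach an elevated cmd.exe (the account name below is a placeholder):

```
:: From an elevated command prompt - the part we couldn't get to.
:: First unhide the built-in administrator account, then reset
:: the locked user's password. "LockedUser" is a placeholder.
net user administrator /active:yes
net user LockedUser NewPassword123
```

Which is to say: the lock was never cryptographic at this layer, just a matter of privilege. Vista's decision to hide the one account that could grant that privilege is what turned a two-line fix into a dead end.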

Attempt 2: Password Changer. Now, this is a really nifty thing: you download the software, use it to create a bootable CD, and boot the computer. Which would be fine, except that the computer doesn't like it because apparently is missing...

We will draw a veil over the rest. But my point is that no one would advise a business to operate in this way - and now that computers are in (almost) every home, homes are businesses, too. No one likes to think they're going to die, still less without notice. But if you run your family on your computer you need a disaster recovery plan - fire, flood, earthquake, theft, computer failure, stroke, and yes, unexpected death.

- Have each family member write down their passwords. Privately, if you want, in sealed envelopes to be stored in a safe deposit box at the bank. Include: Windows desktop password, administrator password, automated bill-paying and financial record passwords, and the list of key Web sites you use and their passwords. Also the passwords you may have used to secure phone records and other accounts. Credit and debit card PINs. Etc.

- Document your directory structure so people know where the important data - family photos, financial records, Web accounts, email address books - is stored. Yes, they can figure it out, but you can make it a lot easier for them.

- Set up your printer so it works from other computers on the home network even if yours is turned off. (We can't print anything, either.)

- Provide an emergency access route. Unhide the administrator account.

- Consider your threat model.

Meanwhile, I think my friend knew all this. I think this is his way of taking revenge on me for never letting him touch *my* computer.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

January 1, 2010

Privacy victims

Frightened people often don't make very good decisions. If I were in charge of aviation security, I'd have been pretty freaked out by the Christmas Day crotch bomber - failure or no failure. Even so, like all of us Boxing Day quarterbacks, I'd like to believe I'd have had more sense than to demand that airline passengers stay seated and unmoving for an hour, laps empty.

But the locking-the-barn elements of the TSA's post-crotch rules are too significant to ignore: the hastily implemented rules were very specifically drafted to block exactly the attack that had just been attempted. Which, I suppose, makes sense if your threat model is a series of planned identical, coordinated attacks and copycats. But as a method of improving airport security it's so ineffective and irrelevant that even the normally rather staid Economist accused the TSA of going insane and Bruce Schneier called the new rules magical thinking.

Consider what actually happened on Christmas Day:

- Intelligence failed. Umar Farouk Abdulmutallab was on the watch list (though not, apparently, the no-fly list), and his own father had warned the US embassy.

- Airport screening failed. He got through with his chunk of explosive attached to his underpants and the stuff he needed to set it off. (As the flyer boards have noted, anyone flying this week should be damned grateful he didn't stuff it in a condom and stick it up his ass.)

- And yet, the plan failed. He did not blow up the plane; there were practically no injuries, and no fatalities.

That, of course, was because a heroic passenger was paying attention instead of snoozing and leaped over seats to block the attempt.

The logical response, therefore, ought to be to ask passengers to be vigilant and to encourage them to disrupt dangerous activities, not to make us sit like naughty schoolchildren being disciplined. We didn't do anything wrong. Why are we the ones who are being punished?

I have no doubt that being on the plane while the incident was taking place was terrifying. But the answer isn't to embark upon an arms race with the terrorists. Just as there are well-funded research labs churning out new computer viruses and probing new software for vulnerabilities, there are doubtless research facilities where terrorist organizations test what scanners can detect and in what quantity.

Matt Blaze has a nice analysis of why this approach won't work to deter terrorists: success (plane blown up) and failure (terrorist caught) are, he argues, equally good outcomes for the terrorist, whose goal is to sow terror and disruption. All unpredictable screening does is drive passengers nuts and, in some cases, put their health at risk. Passengers work to the rules. If there are no blankets, we wear warmer clothes; if there is no bathroom access, we drink less; if there is no in-flight entertainment, we rearrange the hours we sleep.

As Blaze says, what's needed is a correct understanding of the threat model - and as Schneier has often said, the most effective changes since 9/11 have been reinforcing the cockpit doors and the fact that passengers now know to resist hijackers.

Since the incident, much of the talk has been about whole-body scanners - "nudie scanners", as Dutch privacy advocates have dubbed them - as if these will secure airplanes once and for all. If people think that whole-body scanners are the answer, they have misunderstood the problem.

Or problems, because there is more than one. First: how can we make air travel secure from terrorists? Second: how can we make air travelers feel secure? Third: how can we accomplish those things while still allowing travelers to be comfortable, a specification which includes respecting their right to privacy and civil liberties? If your reaction to that last is to say that you don't care whose rights are violated, all that matters is perfect security, I'm going to guess that: 1) you fly very infrequently; 2) you would be happy to do so chained to your seat naked with a light coating of Saran wrap; and 3) your image of the people who are threats is almost completely unlike your own.

It is particularly infuriating to read that we are privacy victims: that the opposition of privacy advocates to invasive practices such as whole-body scanners is the reason this clown got as close as he did. Such comments are as wrong-headed as Jack Straw claiming after 9/11 that opponents of key escrow were naïve.

The most rational response, it seems to me, is for TSA and airlines alike to solicit volunteers among their most loyal and committed passengers. Elite flyers know the rhythms of flights; they know when something is amiss. Train us to help in emergencies and to spot and deter mishaps.

Because the thing we should have learned from this incident is that we are never going to have perfect security: terrorists are a moving target. We need fallbacks, for when our best efforts fail.

The more airport security becomes intrusive, annoying, and visibly stupid, the more motive passengers will have to find workarounds and the less respect they will have for these authorities. That process is already visible. Do you feel safer now?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series. Readers are welcome to post here, at net.wars home, follow on Twitter, or send email to

December 4, 2009

Which lie did I tell?

"And what's your mother's maiden name?"

A lot of attention has been paid over the years to the quality of passwords: how many letters, whether there's a sufficient mix of numbers and "special characters", whether they're obviously and easily guessable by anyone who knows you (pet's name, spouse's name, birthday, etc.), whether you've reset them sufficiently recently. But, as someone noted this week on UKCrypto, hardly anyone pays attention to the quality of the answers to the "password hint" questions sites ask so they can identify you when you eventually forget your password. By analogy, it's as though we spent all our time beefing up the weight, impenetrability, and lock quality on our front doors while leaving the back of the house accessible via two or three poorly fitted screen doors.

On most sites it probably doesn't matter much. But the question came up after the BBC broadcast an interview with the journalist Angela Epstein, the loopily eager first registrant for the ID card, in which she apparently mentioned having been asked to provide the answers to five rather ordinary security questions "like what is your favorite food". Epstein's column gives more detail: "name of first pet, favourite song and best subject at school". Even Epstein calls this list "slightly bonkers". This, the UKCrypto poster asked, is going to protect us from terrorists?

Dave Birch had some logic to contribute: "Why are we spending billions on a biometric database and taking fingerprints if they're going to use the questions instead? It doesn't make any sense." It doesn't: she gave a photograph and two fingerprints.

But let's pretend it does. The UKCrypto discussion headed into technicalities: has anyone studied challenge questions?

It turns out someone has: Mike Just, described to me as "the world expert on challenge questions". Just, who's delivered two papers on the subject this year, at the Trust (PDF) and SOUPS (PDF) conferences, has studied both the usability and the security of challenge questions. There are problems from both sides.

First of all, people are more complicated and less standardized than those setting these questions seem to think. Some never had pets; some have never owned cars; some can't remember whether they wrote "NYC", "New York", "New York City", or "Manhattan". And people and their tastes change. This year's favorite food might be sushi; last year's, chocolate chip cookies. Are you sure you remember accurately what you answered? With all the right capitalization and everything? Government services are supposedly thinking long-term. You can always start another account; but ten years from now, when you've lost your ID card, will these answers still be valid?

This sort of thing is reminiscent of what biometrics expert James Wayman has often said about designing biometric systems to cope with the infinite variety of human life: "People never have what you expect them to have where you expect them to have it." (Note that Epstein nearly failed the ID card registration because of a burn on her finger.)

Plus, people forget. Even stuff you'd think they'd remember, and even people who, like the students he tested, are young.

From the security standpoint, there are even more concerns. Many details about even the most obscure person's life are now public knowledge. What if you went to the same school for 14 years? And what if that fact is thoroughly documented online because you joined its Facebook group?

A lot depends on your threat model: your parents, hackers with scripted dictionary attacks, friends and family, marketers, snooping government officials? Just accordingly came up with three types of security attacks for the answers to such questions: blind guess, focused guess, and observation guess. Apply these to the often-used "mother's maiden name": the surname might be two letters long; it is likely one of only about 150,000 unique surnames appearing more than 100 times in the US census; it may be eminently guessable by anyone who knows you - or about you. In the Facebook era, even without a Wikipedia entry or a history of Usenet postings many people's personal details are scattered all over the online landscape. And, as Just also points out, the answers to challenge questions are themselves a source of new data for the questioning companies to mine.
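Just's "blind guess" attack can be made concrete with a few lines of arithmetic. The surname shares below are invented for illustration (they are not census figures), but the mechanics hold: an attacker who simply tries the most common maiden names in order succeeds against a meaningful fraction of accounts.

```python
# Toy illustration of why "mother's maiden name" is weak against a
# blind guesser. The frequencies here are hypothetical, not real data.
toy_freq = {
    "Smith": 0.010, "Johnson": 0.008, "Williams": 0.007,
    "Brown": 0.006, "Jones": 0.006, "Garcia": 0.005,
    "Miller": 0.005, "Davis": 0.005, "Rodriguez": 0.004,
    "Martinez": 0.004,
}

def blind_guess_success(freqs, guesses):
    """Chance a blind guesser succeeds in `guesses` tries,
    always trying the most common names first."""
    top = sorted(freqs.values(), reverse=True)[:guesses]
    return sum(top)

print(blind_guess_success(toy_freq, 10))  # with these made-up shares, ~6%
```

Ten guesses against ten common names already covers several percent of the population; a focused guesser who knows anything about the target does far better still.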

My experience from The Skeptic suggests that over the long term trying to protect your personal details by not disclosing them isn't going to work very well. People do not remember what they tell psychics over the course of 15 minutes or an hour. They have even less idea what they've told their friends or, via the Internet, millions of strangers over a period of decades or how their disparate nuggets of information might match together. It requires effort to lie - even by omission - and even more to sustain a lie over time. It's logically easier to construct a relatively small number of lies. Therefore, it seems to me that it's a simpler job to construct lies for the few occasions when you need the security and protect that small group of lies. The trouble then is documentation.

Even so, says Birch, "In any circumstance, those questions are not really security. You should probably be prosecuted for calling them 'security'."


August 1, 2009


Let's face it: Las Vegas ought not to exist. A city in the middle of the desert that shows off extravagant water fountains. (No matter how efficient these are, they must lose plenty of water to the 110F dry desert air.) Where in a time of energy crisis few can live without cars or air-conditioning and many shops and hotels air-condition to a climate approximating that of Britain in winter. A city that specializes in gigantic, all-night light displays. And, a city with so little respect for its own history that it tears itself down and rebuilds every five years, on average. (It even tore down the hotel that Elvis made famous and replaced it.)

In fact, of course, the Strip is all façade. Go a block east or west and look at the backs of the hotels and what you see is the rear side of a movie set.

There is of course a real Las Vegas away from the Strip that's cooler and much prettier, but much of the above still applies: it is a perfect advertisement for unsustainability. Which is why it seemed particularly apt this year as the location for the annual Black Hat and Defcon security/hacker conferences. Just as Las Vegas itself is an exemplar of the worst abuse of a fragile ecosystem, so, increasingly, are the technologies we use and trust daily.

If you're not familiar with these twin conferences, they're held on successive days in Las Vegas in late July. At Black Hat during the week, a load of (mostly) guys in (mostly) suits present their latest research into new security problems and their suggested fixes. On Thursday night, (mostly) the same crowd trade in their (mostly) respectable clothes for (mostly) cargo shorts and T-shirts and head for Defcon for the weekend to present (many of) the same talks over again to a (mostly) younger, wilder crowd. Black Hat has executive stationery for sale and sit-down catered lunches; Defcon has programmable badges, pizza in the hotel's food court, and is much, much cheaper.

It's noticeable that, after years when people have been arrested for or sued to prevent their disclosures, a remarkable number of this year's speakers took pains to emphasize the responsible efforts they'd made to contact manufacturers and industry associations and warn them about what they'd found. Some of the presentations even ended with, "And they've fixed it in the latest release." What fun is that?

The other noticeable trend this year was away from ordinary computer issues and into other devices. This was only to be (eventually) expected: as computers infiltrate all parts of our lives they're bringing insecurity along with them into areas where it pretty much didn't exist before. Electric meters: once mechanical devices that went round and round; now smart gizmos that could be remotely reprogrammed. Flaws in the implementation of SMS mean that phishing messages and other scams most likely lie in the future of our mobile phones.

Even such apparently stolid mechanisms as parking meters can be gamed. Know what's inside those things? Z80 chips! Yes, the heart of those primitive 1980s computers lives on in that parking meter that just clicked over to VIOLATION.

Las Vegas seems to go on as if the planet were not in danger. Similarly, we know - because we write and read it daily - that the Internet was built as a sort of experiment on underpinnings that are ludicrously, laughably wrongly designed for the weight we're putting on them. And yet every day we go on buying things with credit cards, banking, watching our governments shift everything online, all I suppose with the shared hope that it will all come right somehow.

You do wonder, though, after two days of presentations that find the same fundamental errors we've known about for decades: passwords submitted in plain text, confusion between making things easy for users and depriving them of valuable information to help them spot frauds. The failure, as someone said in the blur of the last few days, to teach beginning programmers about the techniques of secure coding. Plus, of course, the business urgency of let's get this thing working and worry about security later.

On the other hand, it was just as alarming to hear Robert Lentz, deputy assistant secretary of Defense, say it was urgent to "get the anonymity out of the network" and ensure that real-world and cyber identities converge with multifactor biometric identification in both logical and physical worlds. My laptop computer was perfectly secure against all the inquisitors at Black Hat because it never left my immediate possession and I couldn't connect to the wireless; but that's not how I want to live.

The hardest thing about security seems to be understanding when we really need it and how. But the thing about Vegas - as IBM's Jeff Jonas so eloquently explained at etech in 2008 - is that behind the Strip (which I always like to say is what you'd get if you gave a band of giant children an unlimited supply of melted plastic and bad taste) and its city block-sized casinos lies a security system so sophisticated that it doesn't interfere with customers' having a good time. Vegas, so fake in other ways, is the opposite of security theater. Whereas, so much of our security - which is often intrusive enough to feel real - might as well be the giant plastic Sphinx in front of the Luxor.


July 24, 2009

Security for the rest of us

Many governments, faced with the question of how to improve national security, would do the obvious thing: round up the usual suspects. These would be, of course, the experts - that is, the security services and law enforcement. This exercise would be a lot like asking the record companies and film studios to advise on how to improve copyright: what you'd get is more of the same.

This is why it was so interesting to discover that the US National Academies of Science was convening a workshop to consult on what research topics to consider funding, and began by appointing a committee that included privacy advocates and usability experts, folks like Microsoft researcher Butler Lampson, Susan Landau, co-author of books on privacy and wiretapping, and Donald Norman, author of the classic book The Design of Everyday Things. Choosing these people suggests that we might be approaching a watershed like that of the late 1990s, when the UK and the US governments were both forced to understand that encryption was not just for the military any more. The peace-time uses of cryptography to secure Internet transactions and protect mobile phone calls from casual eavesdropping are much broader than crypto's war-time use to secure military communications.

Similarly, security is now everyone's problem, both individually and collectively. The vulnerability of each individual computer is a negative network externality, as NYU economist Nicholas Economides pointed out. But, as many asked, how do you get people to understand remote risks? How do you make the case for added inconvenience? Each company we deal with makes the assumption that we can afford the time to "just click to unsubscribe" or remember one password, without really understanding the growing aggregate burden on us. Norman commented that door locks are a trade-off, too: we accept a little bit of inconvenience in return for improved security. But locks don't scale; they're acceptable as long as we only have to manage a small number of them.

In his 2006 book, Revolutionary Wealth, Alvin Toffler comments that most of us, without realizing it, have a hidden third, increasingly onerous job, "prosumer". Companies, he explained, are increasingly saving money by having us do their work for them. We retrieve and print out our own bills, burn our own CDs, provide unpaid technical support for ourselves and our families. One of Lorrie Cranor's students did the math to calculate the cost in lost time and opportunities if everyone in the US read, once a year, the privacy policy of each Web site they visit at least once a month. Most of these policies require college-level reading skills; figure 244 hours per year per person, $3,544 each...$781 billion nationally. Weren't computers supposed to free us of that kind of drudgery? As everything moves online, aren't we looking at a full-time job just managing our personal security?
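The arithmetic behind that estimate is worth making explicit. A minimal back-of-envelope sketch; the hourly value of time and the online population below are my assumptions chosen to match the quoted figures, not numbers taken from the study:

```python
# Rough reconstruction of the quoted privacy-policy-reading estimate.
# The hourly value and population are assumptions that make the
# arithmetic line up with the figures quoted in the column.
hours_per_person = 244                 # hours/year spent reading policies
hourly_value = 3544 / 244              # implied value of time, ~$14.5/hour
online_population = 220_000_000        # assumed US online adults

cost_per_person = hours_per_person * hourly_value
national_cost = cost_per_person * online_population

print(f"${cost_per_person:,.0f} per person per year")
print(f"${national_cost / 1e9:.0f} billion nationally")
```

At roughly 220 million people and $3,544 each, the total comes to about $780 billion, consistent with the $781 billion quoted above.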

That, in fact, is one characteristic that many implementations of security share with welfare offices - and that is becoming pervasive: an utter lack of respect for the least renewable resource, people's time. There's a simple reason for that: the users of most security systems are deemed to be the people who impose it, not the people - us - who have to run the gamut.

There might be a useful comparison to information overload, a topic we used to see a lot about ten years back. When I wrote about that for ComputerActive in 1999, I discovered that everyone I knew had a particular strategy for coping with "technostress" (the editor's term). One dealt with it by never seeking out information and never phoning anyone. His sister refused to have an answering machine. One simply went to bed every day at 9pm to escape. Some refused to use mobile phones, others to have computers at home.

But back then, you could make that choice. How much longer will we be able to draw boundaries around ourselves by, for example, refusing to use online banking, file tax returns online, or participate in social networks? How much security will we be able to opt out of in future? How much do security issues add to technostress?

We've been wandering in this particular wilderness a long time. Angela Sasse, whose 1999 paper Users Are Not the Enemy talked about the problems with passwords at British Telecom, said frankly, "I'm very frustrated, because I feel nothing has changed. Users still feel security is just an obstacle there to annoy them."

In practice, the workshop was like the TV game Jeopardy: the point was to generate research questions that will go into a report, which will be reviewed and redrafted before its eventual release. Hopefully, eventually, it will all lead to a series of requests for proposals and some really good research. It is a glimmer of hope.

Unless, that is, the gloominess of the beginning presentations wins out. If you listened to Lampson, Cranor, and Economides, you got the distinct impression that the best thing that could happen for security is that we rip out the Internet (built to be open, not secure), trash all the computers (all of whose operating systems were designed in the pre-Internet era), and start over from scratch. Or, like the old joke about the driver who's lost and asking for directions: "Well, I wouldn't start from here".

So, here's my question: how can we make security scale so that the burden stays manageable?


July 17, 2009

Human factors

For the last several weeks I've been mulling over the phrase security fatigue. It started with a paper (PDF) co-authored by Angela Sasse, in which she examined the burden that complying with security policies imposes upon corporate employees. Her suggestion: that companies think in terms of a "compliance budget" that, like any other budget (money, space on a newspaper page), has to be managed and used carefully. And, she said, security burdens weigh differently on different people and at different times, and a compliance budget needs to comprehend that, too.

Some examples (mine, not hers). Logging onto six different machines with six different user IDs and passwords (each of which has to be changed once a month) is annoying but probably tolerable if you do it once every morning when you get to work and once in the afternoon when you get back from lunch. But if the machines all log you out every time you take your hands off the keyboard for two minutes, by the end of the day they will be lucky to survive your baseball bat. Similarly, while airport security is never fun, the burden of it is a lot less to a passenger traveling solo after a good night's sleep who reaches the checkpoints when they're empty than it is to the single parent with three bored and overtired kids under ten who arrives at the checkpoint after an overnight flight and has to wait in line for an hour. Context also matters: a couple of weeks ago I turned down a ticket to Court 1 at Wimbledon on men's semi-finals day because I couldn't face the effort it would take to comply with their security rules and screening. I grudgingly accept airport security as the trade-off for getting somewhere, but to go through the same thing for a supposedly fun day out?

It's relatively easy to see how the compliance budget concept could be worked out in practice in a controlled environment like a company. It's very difficult to see how it can be worked out for the public at large, not least because none of the many companies each of us deals with sees it as beneficial to cooperate with the others. You can't, for example, say to your online broker that you just can't cope with making another support phone call, can't they find some other way to unlock your account? Or tell Facebook that 61 privacy settings is too many because you're a member of six other social networks and Life is Too Short to spend a whole day configuring them all.

Bruce Schneier recently highlighted that last-referenced paper, from Joseph Bonneau and Soeren Preibusch at Cambridge's computer lab, alongside another by Leslie John, Alessandro Acquisti, and George Loewenstein from Carnegie-Mellon, to note a counterintuitive discovery: the more explicit you make privacy concerns the less people will tell you. "Privacy salience" (as Schneier calls it) makes people more cautious.

In a way, this is a good thing and goes to show what privacy advocates have been saying all along: people do care about privacy if you give them the chance. But if you own Facebook, a frequent flyer program, or Google, it means that it is not in your business interest to spell out too clearly to users what they should be concerned about. All of these businesses rely on collecting more and more data about more and more people. Fortunately for them, as we know from research conducted by Lorrie Cranor (also at Carnegie-Mellon), people hate reading privacy policies. I don't think this is because people aren't interested in their privacy. I think this goes back to what Sasse was saying: it's security fatigue. For most people, security and privacy concerns are just barriers blocking the thing they came to do.

But choice is a good thing, right? Doesn't everyone want control? Not always. Go back a few years and you may remember some widely publicized research that pointed out that too many choices stall decision-making and make people feel...tired. A multiplicity of choices adds weight and complexity to the decision you're making: shouldn't you investigate all the choices, particularly if you're talking about which of 56 mutual funds to add to your 401(k)?

It seems obvious, therefore, that the more complex the privacy controls offered by social networks and other services the less likely people are to use them: too many choices, too little time, too much security fatigue. In minor cases in real life, we handle this by making a decision once and sticking to it as a kind of rule until we're forced to change: which brand of toothpaste, what time to leave for work, never buy any piece of clothing that doesn't have pockets. In areas where rules don't work, the best strategy is usually to constrain the choices until what you have left is a reasonable number to investigate and work with. Ecommerce sites notoriously get this backwards: they force you to explore group by group instead of allowing you to exclude choices you'll never use.

How do we implement security and privacy so that they're usable? This is one of the great unsolved, under-researched questions in security. I'm hoping to know more next week.


May 8, 2009

Automated systems all the way down

Are users getting better or worse?

At what? you might ask. Naturally: at being thorns in the side of IT security people. Users see security as damage, and route around it.

You didn't need to look any further than this week's security workshop, where this question was asked, to see this principle in action. The hotel-supplied wireless was heavily filtered: Web and email access only, no VPNs, "undesirable" sites blocked. Over lunch, the conversation: how to set up VPNs using port 443 to get around this kind of thing. The perfect balanced sample: everyone's a BOFH *and* a hostile user. Kind of like Jacqui Smith, who has announced plans to largely circumvent the European Court of Human Rights' ruling that Britain has to remove the DNA of innocent people from the database. Apparently, this government perceives European law as damage.

But the question about users was asked seriously. The workshop gathered security folks from all over to brainstorm and compare notes: what are the emerging security threats? What should we be worrying about? And, most important, what should people be researching?

Three working groups - smart environments, malware and fraud, and critical systems - came up with three different lists, mostly populated with familiar stuff - but the familiar stuff keeps going and getting worse. According to Symantec's latest annual report, spam, for example, was up 162 percent in 2008 over 2007, with a total of 349.6 billion messages sent - simply a staggering waste of resources. What has changed is targeting; new attacks are short-lived, small-distribution affairs - much harder to shut down.

Less familiar to me was the "patch window" problem, which basically goes like this: it takes 24 hours for 80 percent of Windows users to get a new patch from Windows Update. An attacker who downloads the patch as soon as it's available can quickly - within minutes - reverse-engineer it to find out what bug(s) it's fixing. Then the attacker has most of a day in which to exploit the bug. Last year, Carnegie-Mellon's David Brumley and others found a way to automate this process (PDF). An ironic corollary: the more bug-free the program, the easier a patch window attack becomes. Various solutions were discussed for this, none of them entirely satisfactory; the most likely was to roll out the patch locked, and distribute a key only after the download cycle is complete.
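The "locked patch" idea mentioned above - distribute the patch encrypted to everyone first, then broadcast a small key once the rollout window closes, so nobody gets an early copy to reverse-engineer - can be sketched in a few lines. This is a toy illustration using a one-time pad as a stand-in for real encryption; a production scheme would use an authenticated cipher and signed metadata.

```python
# Sketch of locked-patch distribution: phase 1 ships the unreadable
# `locked` blob to all users; phase 2 broadcasts the key only after
# the download window closes, so the whole fleet unlocks at once.
import secrets

def lock(patch: bytes) -> tuple[bytes, bytes]:
    key = secrets.token_bytes(len(patch))            # one-time pad
    locked = bytes(p ^ k for p, k in zip(patch, key))
    return locked, key

def unlock(locked: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(locked, key))

patch = b"fix: bounds check in parser"
locked, key = lock(patch)              # phase 1: distribute `locked` widely
assert unlock(locked, key) == patch    # phase 2: publish `key`; everyone applies
```

The point of the design is timing, not secrecy per se: the attacker's head start shrinks from most of a day to however long it takes users to apply an already-downloaded patch once the key appears.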

But back to the trouble with users: systems are getting more and more complex. A core router now has 5,000 lines of code; an edge router 11,000. Someone has to read and understand all those lines. And that's just one piece. "Today's networks are now so complex we don't understand them any more," said Cisco's Michael Behringer. Critical infrastructures need to be more like the iPhone, a complex system that nonetheless just about anyone can operate.

As opposed, I guess, to being like what most people have now: systems that are a mish-mash of strategies for getting around things that don't work. But I do see his point. Once you could debug even a large network by reading the entire configuration. Pause to remember the early days of Demon Internet, when the technical support staff would debug your connection by directly editing the code of the dial-up software we were all using, KA9Q. If you'd taken *those* humans out of the system, no one could have gotten online.

It's my considered view that while you can blame users for some things - the one in 12.5 million spam recipients Christian Kreibich said actually buys the pharma products so advertised springs to mind - blaming them in general is a lot like the old saw about how "only a poor workman blames his tools". It's more than 20 years since Donald Norman pointed out in The Design of Everyday Things that user error is often a result of poor system design. Yet a depressing percentage of security folks complaining about system complexity don't even know his name, and the failure to understand human factors is security's single biggest failure.

Joseph Bonneau made this point in a roundabout way by considering Facebook which, he said, really is reinventing the Web - not just in the rounded-corners sense, but in the sense of inventing its own protocols for things for which standards already exist. Plus - and more important for the user question - it's training users to do things that security people would rather they didn't, like click on emailed links without checking the URLs. "Social networks," he said, "are repeating all the Web's security problems - phishing, spam, 419 scams, identity theft, malware, cross-site scripting, click fraud, stalking...privacy is the elephant in the room." Worse, "They really don't yet have a business model, which makes dealing with security difficult."

It's a typical scenario in computing, where each new generation reinvents every wheel. And that's the trouble with automation with everything, too. Have these people never used voice menus?

Get rid of the humans and replace them with automated systems that operate perfectly, great. But won't humans have to write the automated systems? No, automated systems will do that. And who will program those? Computers. And who...

Never mind.


October 10, 2008

Data mining snake oil

The basic complaints we've been making for years about law enforcement's and government's desire to collect masses of data have primarily focused on the obvious set of civil liberties issues: the chilling effect of surveillance, the right of individuals to private lives, the risk of abuse of power by those in charge of all that data. On top of that we've worried about the security risks inherent in creating such large targets from which data will, inevitably, leak sometimes.

This week, along came the National Research Council to offer a new trouble with dataveillance: it doesn't actually work to prevent terrorism. Even if it did work, the tradeoff of the loss of personal liberties against the security allegedly offered by policies that involve tracking everything everyone does from cradle to grave would be hard to justify. But if it doesn't work - if all surveillance all the time won't make us actually safer - then the discussion really ought to be over.

The NAS report, Protecting Individual Privacy in the Struggle Against Terrorists: A Framework for Assessment, makes its conclusions clear: "Modern data collection and analysis techniques have had remarkable success in solving information-related problems in the commercial sector... But such highly automated tools and techniques cannot be easily applied to the much more difficult problem of detecting and preempting a terrorist attack, and success in doing so may not be possible at all."

Actually, many of us who have had our cards stopped for no better reason than that the issuing bank didn't like the color of the Web site we were buying from might question how successful these tools have been in the commercial sector. At the very least, it has become obvious to everyone how much trouble is being caused by false positives. If a similar approach is taken to all parts of everyone's lives instead of just their financial transactions, think how much more difficult it's going to be to get through life without being arrested several times a year.

The report again: "Even in well-managed programs such tools are likely to return significant rates of false positives, especially if the tools are highly automated." Given the masses of data we're talking about - the UK wants to store all of the nation's communications data for years in a giant shed, and a similar effort in the US would have to be many times as big - the tools will have to be highly automated. And - the report yet again - the difficulty of detecting terrorist activity "through their communications, transactions, and behaviors is hugely complicated by the ubiquity and enormity of electronic databases maintained by both government agencies and private-sector corporations." The bigger the haystack, the harder it is to find the needle.
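
The arithmetic behind that haystack problem is worth spelling out. Here is a back-of-the-envelope sketch; all the numbers are hypothetical, chosen only to illustrate the base-rate effect the report describes:

```python
# Base-rate arithmetic with hypothetical numbers: even a highly
# accurate screen, run over a whole population, buries its genuine
# hits under false positives when the target behavior is rare.

population = 60_000_000        # order of the UK population
terrorists = 100               # assumed, purely for illustration
sensitivity = 0.99             # the screen catches 99% of real plotters
false_positive_rate = 0.001    # and wrongly flags 0.1% of everyone else

true_alarms = terrorists * sensitivity
false_alarms = (population - terrorists) * false_positive_rate

# What fraction of the people flagged are actually terrorists?
precision = true_alarms / (true_alarms + false_alarms)

print(f"people flagged:   {true_alarms + false_alarms:,.0f}")
print(f"genuinely guilty: {precision:.2%} of them")
```

Tens of thousands of innocents get flagged for every genuine hit, which is why shrinking the dataset before analysis - attacking the big `population` multiplier directly - matters so much.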

In a recent interview, David Porter, CEO of Detica, who has spent his entire career thinking about fraud prevention, said much the same thing. Porter's proposed solution - the basis of the systems Detica sells - is to vastly shrink the amount of data to be analyzed by throwing out everything we know is not fraud (or, as his colleague, Tom Black, said at the Homeland and Border Security conference in July, terrorist activity). To catch your hare, first shrink your haystack.

This report, as the title suggests, focuses particularly on balancing personal privacy against the needs of anti-terrorist efforts. (Although, any terrorist watching the financial markets the last couple of weeks would be justified in feeling his life's work had been wasted, since we can do all the damage that's needed without his help.) The threat from terrorists is real, the authors say - but so is the threat to privacy. Personal information in databases cannot be fully anonymized; the loss of privacy is real damage; and data varies substantially in quality. "Data derived by linking high-quality data with data of lesser quality will tend to be low-quality data." If you throw a load of silly string into your haystack, you wind up with a big mess that's pretty much useless to everyone and will be a pain in the neck to clean up.

As a result, the report recommends requiring systematic and periodic evaluation of every information-based government program against core values and proposes a framework for carrying that out. There should be "robust, independent oversight". Research and development of such programs should be carried out with synthetic data, not real data "anonymized"; real data should only be used once a program meets the proposed criteria for deployment and even then only phased in at a small number of sites and tested thoroughly. Congress should review privacy laws and consider how best to protect privacy in the context of such programs.

These things seem so obvious; but to get to this point it's taken three years of rigorous documentation and study by a 21-person committee of unimpeachable senior scientists, and review by members of a host of top universities, telephone companies, and top technology companies. We have to think the report's sponsors, who include the National Science Foundation and the Department of Homeland Security, will take the results seriously. Writing for Cnet, Declan McCullagh notes that the similar 1996 NRC CRISIS report on encryption was followed by decontrol of the export and use of strong cryptography two years later. We can but hope.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to (but please turn off HTML).

September 12, 2008

Slow news

It took a confluence of several different factors for a six-year-old news story to knock 75 percent off the price of United Airlines shares in under an hour earlier this week. The story said that United Airlines was filing for bankruptcy, and of course it was true - in 2002. Several media owners are still squabbling about whose fault it was. Trading was halted after that first hour by the systems put in place after the 1987 crash, but even so the company's shares closed 10 percent down on the day. Long-term it shouldn't matter in this case, but given a little more organization and professionalism that sort of drop provides plenty of opportunities for securities fraud.

The factor the companies involved can't sue: human psychology. Any time you encounter a story online you make a quick assessment of its credibility by considering: 1) the source; 2) its likelihood; 3) how many other outlets are saying the same thing. The paranormal investigator and magician James Randi likes to sum this up by saying that if you claimed you had a horse in your back yard he might want a neighbor's confirmation for proof, but if you said you had a unicorn in your back yard he'd also want video footage, samples of the horn, close-up photographs, and so on. The more extraordinary the claim, the more extraordinary the necessary proof. The converse is also true: the less extraordinary the claim and the better the source, the more likely we are to take the story on faith and not bother to check.

Like a lot of other people, I saw the United story on Google News on Monday. There's nothing particularly shocking these days about an airline filing for bankruptcy protection, so the reaction was limited to "What? Again? I thought they were doing better now" and a glance underneath the headline to check the source. Bloomberg. Must be true. Back to reading about the final in prospect between Andy Murray and Roger Federer at the US Open.

That was a perfectly fine approach in the days when all content was screened by humans and media were slow to publish. Even then there were mistakes, like the famous 1993 incident when a shift worker at Sky News saw an internal rehearsal for the Queen Mother's death on a monitor and mentioned it on the phone to his mother in Australia, who in turn passed it on to the media, which took it up and ran with it.

But now, in the time that thought process takes, daytraders have clicked in and out of positions and automated media systems have begun republishing the story. It was the interaction of several independently owned automated systems that made what ought to have been a small mistake into one that hit a real company's real financial standing - with that effect, too, compounded by automated systems. Logically, we should expect to see many more such incidents, because all over the Web 2.0 we are building systems that talk to each other without human intervention or oversight.

A lot of the Net's display choices are based on automated popularity contests: on-the-fly generated lists of the current top ten most viewed stories, Amazon book rankings, Google's page rank algorithm that bumps to the top sites with the most inbound links for a given set of search terms. That's no different from other media: Jacqueline Kennedy and Princess Diana were beloved of magazine covers for the most obvious sale-boosting reasons. What's different is that on the Net these measurements are made and acted upon instantaneously, and sometimes from very small samples, which is why in a very slow news hour on a small site a single click on a 2002 story seems to have bumped it up to the top, where Google spotted it and automatically inserted it into its feed.

The big issue, really - leaving aside the squabble between the Tribune and Google over whether Google should have been crawling its site at all - is the lack of reliable dates. It's always a wonder to me how many Web sites fail to anchor their information in time: the date a story is posted or a page is last updated should always be present. (I long, in fact, for a browser feature that would display at the top of a page the last date a page's main content was modified.)

Because there's another phenomenon that's insufficiently remarked upon: on the Internet, nothing ever fully dies. Every hour someone discovers an old piece of information for the first time and thinks it's new. Most of the time, it doesn't matter: Dave Barry's exploding whale is hilariously entertaining no matter how many times you've read it or seen the TV clip. But Web 2.0 will make endless recycling part of our infrastructure rather than a rare occurrence.

In 1998 I wrote that crude hacker defacement of Web sites was nothing to worry about compared to the prospect of the subtle poisoning of the world's information supply that might become possible as hackers became more sophisticated. This danger is still with us, and the only remedy is to do what journalists used to be paid to do: check your facts. Twice. How do we automate that?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to (but please turn off HTML).

August 1, 2008

All paid up

"His checks keep bouncing because his signature varies," says a CIA operative (Sam Waterston) admiringly of the movie's retired spy hero Miles Kendig (Walter Matthau) in the 1980 movie Hopscotch. "He's a class act."

These days, Kendig would be using credit cards. And he'd be having the same problem: the part of his signature would be played by his usage patterns as seen by the credit card company's computers.

This would be doubly true if he used Amazon's Marketplace sellers. It seems - or so Barclaycard tells me every time they block my card - that putting several purchases through Amazon Marketplace and then, a few days later, buying something larger like a plane ticket or a digital recorder exactly fits one of the fraud patterns their computers are programmed to look for.

Buy a dozen items in a day on eBay (go on, I dare you), and your statement will show a dozen transactions - but they'll all be from Paypal. Buy a dozen items in a single shopping basket on Amazon, and you'll get a dozen transactions all from different unknown sellers. To the computer what seems to you to be a single Amazon purchase looks exactly like someone testing the card with a dozen small transactions to see if it's a) live and b) possessed of available credit. Then, y'see, when the card has passed the test, the fraudster goes for the big one - that airplane ticket or digital recorder.

It's not clear to me why Barclaycard's computer doesn't recognize this pattern as typical after the first outing or two. (I fly one route, but my Barclaycard will not buy me a plane ticket.) Nor is it clear to me why it doesn't occur to the Barclaycard computer that as frauds go buying a digital recorder or a plane ticket for delivery to the cardholder's name and address ranks as fairly incompetent. Why doesn't it check that point before causing trouble?

You might ask a similar question of one of my US cards, which trips the fraud meter any time it's used outside the US. Even though they know I live in...London.

This week Amazon announced that it's offering its payment system, including One-Click, to third party sellers as one of its Web services offerings.

Much of the early press coverage of Amazon's decision seems to be characterizing Amazon Checkout, along with Google Checkout, as a competitor to Paypal. In fact, things are more complicated than that. Paypal, before it was bought by eBay, was one of the oldest businesses on the Net. Its roots, which still show every time you go through the intricate procedure of opting to use a credit card instead of a bank transfer, are in making it possible for anyone to send cash to anyone with an email address. Its first competitor was Western Union; its long tail business opportunity was online sellers who couldn't get credit card authorizations because they were too small. For eBay, buying Paypal meant being able to integrate payments into its ecology with some additional control over fraud while making extra money off each transaction.

Paypal is being adopted as an alternative payment method by all sorts of third parties, and as much of a pain as Paypal is (it can't cope with multinational people and you cannot opt out of giving it a bank account to verify) this is useful for consumers. Its security is generally well regarded by both banks and credit card companies and surely it's better to store financial details with one known company than with dozens of less familiar ones you may only trade with once. Given the choice, I'd far rather that single account were with the much-pleasanter-to-use Amazon. It's clear, though, that if you're offering a platform for others to build businesses on, as Amazon is, payment services are an obvious tool you want to include. Most likely, just as many stores now display multiple credit and debit card logos, many Web sellers will offer users a choice among multiple payment aggregators. Who wants to call the whole thing off because you say Google and I say Paypal?

Unfortunately, none of this solves my actual problem, those damn fraud-detecting algorithms. If Amazon actually aggregated payments into a single transaction - which is what you imagine it's doing the first time you buy from Marketplace - and spit the money back out to the intended destinations, there'd be no problem. For you, that is: for Amazon, of course, it would raise a host of questions about whether it's a financial service, and how much responsibility it should assume for fraud. Those are, of course, very much the reasons why Paypal is so unpleasant - and yet also why it offers eBay buyers insurance.

What is clear is that this is yet another step that brings Amazon and eBay into closer competition with each other: they are increasingly alike. Amazon's recent quarterly statement notes that about 30 percent of its revenues come from Marketplace sellers - and that the profitability of a sale is roughly the same whether it's direct or indirect. On eBay 42 percent of items now are straightforward sales, not auctions, and the changes it's made that favor its biggest sellers are making it more Wal-Mart than flea market.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to (but please turn off HTML).

July 25, 2008


A certain amount of government policy and practice is being made these days based on the idea that you can take large amounts of data and anonymize it so researchers and others can analyze it without invading anyone's privacy. Of particular sensitivity is the idea of giving medical researchers access to such anonymized data in the interests of helping along the search for cures and better treatments. It's hard to argue with that as a goal - just as it's hard to argue with the goal of controlling an epidemic - but both those public health interests collide with the principle of medical confidentiality.

The work of Latanya Sweeney was, I think, the first hint that anonymizing data might not be so straightforward; I've written before about her work. This week, at the Privacy Enhancing Technologies Symposium in Leuven, Belgium (which I regrettably missed), researchers Arvind Narayanan and Vitaly Shmatikov from the University of Texas at Austin won an award sponsored by Microsoft for taking the reidentification of supposedly anonymized data a step further.

The pair took a database released by the online DVD rental company Netflix last year as part of the $1 million Netflix Prize, a project to improve upon the accuracy of the system's predictions. You know the kind of thing, since it's built into everything from Amazon to Tivos - you give the system an idea of your likes and dislikes by rating the movies you've rented and the system makes recommendations for movies you'll like based on those expressed preferences. To enable researchers to work on the problem of improving these recommendations, Netflix released a dataset containing more than 100 million movie ratings contributed by nearly 500,000 subscribers between December 1999 and December 2005 with, as the service stated in its FAQ, all customer identifying information removed.

Maybe in a world where researchers only had one source of information that would be a valid claim. But just as Sweeney showed in 1997 that it takes very little in the way of public records to re-identify a load of medical data supplied to researchers in the state of Massachusetts, Narayanan and Shmatikov's work reminds us that we don't live in a world like that. For one thing, people tend disproportionately to rate their unusual, quirky favorites. Rating movies takes time; why spend it on giving The Lord of the Rings another bump when what people really need is to know about the wonders of King of Hearts, All That Jazz, and The Tall Blond Man with One Black Shoe? The consequence is that the Netflix dataset is what they call "sparse" - that is, few subscribers have very similar records.

So: how much does someone need to know about you to identify a particular user from the database? It turns out, not much. The key is the public ratings at the Internet Movie Database, which include dates and real names. Narayanan and Shmatikov concluded that 99 percent of records could be uniquely identified from only eight matching ratings (of which two could be wrong); for 68 percent of the records you only need two (and reidentifying the rest becomes easier). And of course, if you know a little bit about the particular person whose record you want to identify, things get a lot easier - the three movies I've just listed would probably identify me and a few of my friends.
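
To make the mechanism concrete, here's a toy sketch of the idea. This is not Narayanan and Shmatikov's actual algorithm - theirs scores fuzzy matches and tolerates errors in ratings and dates - and the usernames and ratings below are invented:

```python
# A toy version of re-identification in a sparse dataset: an attacker
# who knows a few of your ratings (say, from a public IMDb profile)
# checks which "anonymous" records are consistent with them.

anonymized = {  # invented data standing in for the Netflix release
    "user_481": {"King of Hearts": 5, "All That Jazz": 4, "Titanic": 5},
    "user_922": {"Titanic": 5, "Shrek": 4, "Spider-Man": 3},
    "user_137": {"Titanic": 4, "Shrek": 5, "The Matrix": 2},
}

def candidates(known, dataset):
    """Return the records that agree with every rating the attacker knows."""
    return [uid for uid, ratings in dataset.items()
            if all(ratings.get(title) == score
                   for title, score in known.items())]

# Two quirky ratings pin down a single record...
print(candidates({"King of Hearts": 5, "All That Jazz": 4}, anonymized))
# ...while a rating of a blockbuster narrows almost nothing.
print(candidates({"Titanic": 5}, anonymized))
```

The sparser the dataset - the more people's records are dominated by quirky favorites rather than blockbusters - the fewer known ratings it takes before only one candidate survives.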

Even if you don't care if your tastes in movies are private - and both US law and the American Library Association's take on library loan records would protect you more than you yourself would - there are a couple of notable things here. First of all, the compromise last week whereby Google agreed to hand Viacom anonymized data on YouTube users isn't as good a deal for users as they might think. A really dedicated searcher might well think it worth the effort to come up with a way to re-identify the data - and so far rightsholders have shown themselves to be very dedicated indeed.

Second of all, the Thomas-Walport review on data-sharing actually recommends requiring NHS patients to agree to sharing data with medical researchers. There is a blithe assumption running through all the government policies in this area that data can be anonymized, and that as long as they say our privacy is protected it will be. It's a perfect example of what someone this week called "policy-based evidence-making".

Third of all, most such policy in this area assumes it's the past that matters. What may be of greater significance, as Narayanan and Shmatikov point out, is the future: forward privacy. Once a virtual identity has been linked to a real-world identity, that linkage is permanent. Yes, you can create a new virtual identity, but any slip that links it to either your previous virtual or your real-world identity blows your cover.

The point is not that we should all rush to hide our movie ratings. The point is that we make optimistic assumptions every day that the information we post and create has little value and won't come back to bite us on the ass. We do not know what connections will be possible in the future.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to (but please turn off HTML).

July 4, 2008

The new normal

The (only) good thing about a war is you can tell when it's over.

The problem with the "War on Terror" is that terrorism is always with us, as Liberty's director, Shami Chakrabarti, said yesterday at the Homeland and Border Security 08 conference. "I do think the threat is very serious. But I don't think it can be addressed by a war." Because, "We, the people, will not be able to verify a discernible end."

The idea that "we are at war" has justified so much post-9/11 legislation, from the ID card (in the UK) and Real ID (US) to the continued expansion of police powers.

How long can you live in a state of emergency before emergency becomes the new normal? If there is no end, when do you withdraw the latitude wartime gives a government?

Several of yesterday's speakers talked about preserving "our way of life" while countering the threat with better security. But "our way of life" is a moving target.

For example, Baroness Pauline Neville-Jones, the shadow security minister, talked about the importance of controlling the UK's borders. "Perimeter security is absolutely basic." Her example: you can't go into a building without having your identity checked. But it's not so long ago - within the 18 years I've been living in London - that you could do exactly that, even sometimes in central London. In New York, of course, until 9/11, everything was wide open; these days midtown Manhattan makes you wait in front of barriers while you're photographed, checked, and treated with great suspicion if the person you're visiting doesn't answer the phone.

Only seven years ago, flying did not involve two hours of standing in line. And starting in January, tourists will have to register three days before flying to the US for pre-screening.

It's not clear how much would change with a Conservative government. "There is a very great deal by this government we would continue," said Neville-Jones. But, she said, besides tackling threats, whether motivated (terrorists) or not (floods, earthquakes), "we are also at any given moment in the game of deciding what kind of society we want to have and what values we want to preserve." She wants "sustainable security, predicated on protecting people's freedom and ensuring they have more, not less, control over their lives." And, she said, "While we need protective mechanisms, the surveillance society is not the route down which we should go. It is absolutely fundamental that security and freedom lie together as an objective."

To be sure, Neville-Jones took issue with some of the present government's plans - the Conservatives would not, she said, go ahead with the National Identity Register, and they favour "a more coherent and wide-ranging border security force". The latter would mean bringing together many currently disparate agencies to create a single border strategy. The Conservatives also favour establishing a small "homeland command for the armed forces" within the UK because, "The qualities of the military and the resources they can bring to complex situations are important and useful." At the moment, she said, "We have to make do with whoever happens to be in the country."

OK. So take the four core elements of the national security strategy according to Admiral Lord Alan West, a Parliamentary under-secretary of state at the Home Office: pursue, protect, prepare, and prevent. "Prevent" is the one that all this is about. If we are in wartime, and we know that any measure that's brought in is only temporary, our tolerance for measures that violate the normal principles of democracy is higher.

Are the Olympics wartime? Security is already in the planning stages, although, as Tarique Ghaffur pointed out, the Games are one of several big events in 2012. And some events like sailing and Olympic football will be outside London, as will 600 training camps. Add in the torch relay, and it's national security.

And in that case, we should be watching very closely what gets brought in for the Olympics, because alongside the physical infrastructure that the Games always leave behind - the stadia and transport - may be a security infrastructure that we wouldn't necessarily have chosen for daily life.

As if the proposals in front of us aren't bad enough. Take for example, the clause of the counterterrorism bill (due for its second reading in the Lords next week) that would allow the authorities to detain suspects for up to 42 days without charge. Chakrabarti lamented the debate over this, which has turned into big media politics.

"The big frustration," she said, "is that alternatives created by sensible, proportionate means of early intervention are being ignored." Instead, she suggested, make the data legally collected by surveillance and interception admissible in fair criminal trials. Charge people with precursor terror offenses so they are properly remanded in custody and continue the investigation for the more serious plot. "That is a way of complying with ancient principles that you should know what you are accused of before being banged up, but it gives the police the time and powers they need."

Not being at war gives us the time to think. We should take it.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to (but please turn off HTML).

June 27, 2008

Mistakes were made

This week we got the detail on what went wrong at Her Majesty's Revenue and Customs that led to the loss of those two CDs full of the personal details of 25 million British households last year with the release of the Poynter Review (PDF). We also got a hint of how and whether the future might be different with the publication yesterday of Data Handling: Procedures in Government (PDF), written by Sir Gus O'Donnell and commissioned by the Prime Minister after the HMRC loss. The most obvious message of both reports: government needs to secure data better.

The nicest thing the Poynter review said was that HMRC has already made changes in response to its criticisms. Otherwise, it was pretty much a surgical demonstration of "institutional deficiencies".

The chief points:

- Security was not HMRC's top priority.

- HMRC in fact had the technical ability to send only the selection of data that NAO actually needed, but the staff involved didn't know it.

- There was no designated single point of contact between HMRC and NAO.

- HMRC used insecure methods for data storage and transfer.

- The decision to send the CDs to the NAO was taken by junior staff without consulting senior managers - which under HMRC's own rules they should have done.

- The reason HMRC's junior staff did not consult managers was that they believed (wrongly) that NAO had absolute authority to access any and all information HMRC had.

- The HMRC staffer who dispatched the discs incorrectly believed the TNT Post service was secure and traceable, as required by HMRC policy. A different TNT service that met those requirements was in fact available.

- HMRC policies regarding information security and the release of data were not communicated sufficiently through the organization and were not sufficiently detailed.

- HMRC failed on accountability, governance, information management - you name it.

The real problem, though, isn't any single one of these things. If junior staff had consulted senior staff, it might not have mattered that they didn't know what the policies were. If HMRC had used proper information security and secure methods for data storage (that is, encryption rather than simple password protection), junior staff wouldn't have had the access needed to send the discs. If they'd understood TNT's services correctly, the discs wouldn't have gotten lost - or at least would have been traceable if they had.

The real problem was the interlocking effect of all these factors. That, as Nassim Nicholas Taleb might say, was the black swan.

For those who haven't read Taleb's The Black Swan: The Impact of the Highly Improbable, the black swan stands for the event that is completely unpredictable - because, like black swans until one was spotted in Australia, no such thing has ever been seen - until it happens. Of course, data loss is pretty much a white swan; we've seen lots of data breaches. The black swan, really, is the perfectly secure system that is still sufficiently open for the people who need to use it.

That challenge is what O'Donnell's report on data handling is about and, as he notes, it's going to get harder rather than easier. He recommends a complete rearrangement of how departments manage information as well as improving the systems within individual departments. He also recommends greater openness about how the government secures data.

"No organisation can guarantee it will never lose data," he writes, "and the Government is no exception." O'Donnell goes on to consider how data should be protected and managed, not whether it should be collected or shared in the first place. That job is being left for yet another report in progress, due soon.

It's good to read that some good is coming out of the HMRC data loss: all departments are, according to the O'Donnell report, reviewing their data practices and beginning the process of cultural change. That can only be a good thing.

But the underlying problem is outside the scope of these reports, and it's this government's fondness for creating giant databases: the National Identity Register, ContactPoint, the DNA database, and so on. If the government really accepted the principle that it is impossible to guarantee complete data security, what would they do? Logically, they ought to start by cancelling the data behemoths on the understanding that it's a bad idea to base public policy on the idea that you can will a black swan into existence.

It would make more sense to create a design for government use of data that assumes there will be data breaches and attempts to limit the adverse consequences for the individuals whose data is lost. If my privacy is compromised alongside 50 million other people's and I am the victim of identity theft, does it help me that the government department that lost the data knows which staff member to blame?

As Agatha Christie said long ago in one of her 80-plus books, "I know to err is human, but human error is nothing compared to what a computer can do if it tries." The man-machine combination is even worse. We should stop trying to breed black swans and instead devise systems that don't create so many white ones.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to (but please turn off HTML).

June 13, 2008

Naked in plain sight

I couldn't have been more embarrassed than if the tall guy carrying a laptop had just told me I was wearing a wet T-shirt.

There I was, sitting in the Queen's club international press room. And there was he, the only other possessor of a red laptop in the entire building, showing me a screen full of a hotel reservation from a couple of months back, in full detail. With my name and address on it.

"If I can see it," he said in that maddening you-must-be-an-idiot IT security guy way, "so can everyone else."


I took that laptop to Defcon!

(And nothing bad happened. That I know of. Yet.)

Despite the many Guardian readers who are convinced that I am technically incompetent because I've written pieces in which it seemed more entertaining to pretend to be so for dramatic effect, I am not an idiot. I'm not even technically incompetent, or not completely so. I am just, like most people, busy, and, like most people, the problem most of the time is to get my computers to work, not to stop them from working. And I fall into that shadowland of people who know just enough to want to run their computers their way but not enough to understand all the ramifications of what they're doing.

So, for example: file shares (not file-sharing, a different kettle of worms entirely). What you are meant to do, because you are an ignorant and brain-challenged consumer, is drop any files you need to share on the network into the Shared Documents folder. While it's no more secure than any other folder (and its name is eminently guessable by outside experts), the fact that you have to knowingly put files in it means that very little of your system is exposed.

I, of course, am far too grand (and perverse) to put up with Microsoft telling me how to organize my system, so of course I don't do things that way. Instead, I share specific directories using a structure I devised myself that is the same on all my machines. That's where I fouled up, of course. That laptop runs XP, and in XP, as I suppose I am the last to notice, the default settings have what's known as "simple file-sharing" turned on, so that if you share a directory it's basically open to all comers. XP warns you you're doing something risky; what it doesn't do is tell you in a simple way how to reduce the risk.

Yes, I tried to read the help files. They're impenetrable. Help files, like most of the rest of computing, separate into two types: either they're written for the completely naïve user, or they're written for the professional system administrator. Despite the fact that people like me are a growing class of users, we have to learn this stuff behind the bicycle shed from people randomly selected via Google.

This is what it should have said. Do one of the following two things: either set permissions so that only those users who have passwords on your system can access this directory or stick a $ sign at the end of the share name to make it hidden. If you do the latter, you will have to map the directory as a network drive on all the machines that want to use it. I note that they seem to have improved things in Vista, which I will no doubt start using sometime around 2012. I know Apple probably does this better and Linux is secured out the wazoo, but that's not the point: the point is that it's incredibly easy for moderately knowledgeable users to leave their systems with gaping, wide-open holes. What I would have liked them to do is offer me the option to view how my system looks to someone connecting from outside with no authentication. I feel sure this could be done.

The problem for Microsoft on this kind of thing is the same problem that afflicts everyone trying to do IT security: everything you do to make the system more secure makes it harder for users to make things work. In the case of the file shares, as long as your computer is at home sitting behind the kind of firewalled router the big ISPs supply, it's more important to grant access to other household members than it is to worry about outsiders. It's when you take that laptop out of the house...and the really awkward thing is that there isn't any really easy way to test for open shares within your own network if, like many people, you tend to use the same login ID and password on all your machines for simplicity's sake. Do friends let friends drive open shares?

The security guys (really, the wi-fi suppliers and tech support), who were only looking around the network for open shares because they were bored, had a good laugh, especially when I told them who I write for (latest addition to the list: Infosecurity magazine!). And they obligingly produced some statistics. Out of the 60 to 100 journalists in the building using the wireless, three had open shares. One, they said, was way more embarrassing than mine, though they declined to elaborate. I think they were just being nice.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to (but please turn off HTML).

May 23, 2008

The haystack conundrum

Early this week the news broke that the Home Office wants to create a giant database in which will be stored details of all communications sent in Britain. In other words, instead of data retention, in which ISPs, telephone companies, and other service providers would hang onto communications data for a year or seven in case the Home Office wanted it, everything would stream to a Home Office data center in real time. We'll call it data swallowing.

Those with long memories - who seem few and far between in the national media covering this sort of subject - will remember that in about 1999 or 2000 there was a similar rumor. In the resulting outraged media coverage it was more or less thoroughly denied, and nothing more was heard of it, though privacy advocates continued to suspect that somewhere in the back of a drawer the scheme lurked, dormant, like one of those just-add-water Martians you find in the old Bugs Bunny cartoons. And now here it is again in another leak that the suspicious veteran watcher of Yes, Minister might think was an attempt to test public opinion. The fact that it's been mooted before makes it seem so much more likely that they're actually serious.

This proposal is not only expensive, complicated, slow, and controversial/courageous (Yes, Minister's Fab Four deterrents), but risk-laden, badly conceived, disproportionate, and foolish. Such a database will not catch terrorists, because given the volume of data involved trying to use it to spot any one would-be evil-doer will be the rough equivalent of searching for an iron filing in a haystack the size of a planet. It will, however, make it possible for anyone trawling the database to make any given individual's life thoroughly miserable. That's so disproportionate it's a divide-by-zero error.

The risks ought to be obvious: this is a government that can't keep track of the personal details of 25 million households, which fit on a couple of CDs. Devise all the rules and processes you want, the bigger the database the harder it will be to secure. Besides personal information, the giant communications database would include businesses' communication information, much of it likely to be commercially sensitive. It's pretty good going to come up with a proposal that equally offends civil liberties activists and businesses.

In a short summary of the proposed legislation, we find this justification: "Unless the legislation is updated to reflect these changes, the ability of public authorities to carry out their crime prevention and public safety duties and to counter these threats will be undermined."

Sound familiar? It should. It's the exact same justification we heard in the late 1990s for requiring key escrow as part of the nascent Regulation of Investigatory Powers Act. The idea there was that if the use of strong cryptography to protect communications became widespread law enforcement and security services would be unable to read the content of the messages and phone calls they intercepted. This argument was fiercely rejected at the time, and key escrow was eventually dropped in favor of requiring the subjects of investigation to hand over their keys under specified circumstances.

There is much, much less logic to claiming that police can't do their jobs without real-time copies of all communications. Here we have real analogies: postal mail, which has been with us since 1660. Do we require copies of all letters that pass through the post office to be deposited with the security services? Do we require the Royal Mail's automated sorting equipment to log all address data?

Sanity has never intervened in this government's plans to create more and more tools for surveillance. Take CCTV. Recent studies show that despite the millions of pounds spent on deploying thousands of cameras all over the UK, they don't cut crime, and, more important, the images help solve crime in only 3 percent of cases. But you know the response to this news will not be to remove the cameras or stop adding to their number. No, the thinking will be like the scheme I once heard for selling harmless but ineffective alternative medical treatments, in which the answer to all outcomes is more treatment. (Patient gets better - treatment did it. Patient stays the same - treatment has halted the downward course of the disease. Patient gets worse - treatment came too late.)

This week at Computers, Freedom, and Privacy, I heard about the Electronic Privacy Information Center's work on fusion centers, relatively new US government efforts to mine many commercial and public sources of data. EPIC is trying to establish the role of federal agencies in funding and controlling these centers, but it's hard going.

What do these governments imagine they're going to be able to do with all this data? Is the fantasy that agents will be able to sit in a control room somewhere and survey it all on some kind of giant map on which criminals will pop up in red, ready to be caught? They had data before 9/11 and failed to collate and interpret it.

Iron filing; haystack; lack of a really good magnet.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to (but please turn off HTML).

May 9, 2008

Swings and roundabouts

There was a wonderful cartoon that cycled frequently around computer science departments in the pre-Internet 1970s - I still have my paper copy - that graphically illustrated the process by which IT systems get specified, designed, and built, and showed precisely why and how far they failed the user's inner image of what it was going to be. There is a scan here. The senior analyst wanted to make sure no one could possibly get hurt; the sponsor wanted a pretty design; the programmers, confused by contradictory input, wrote something that didn't work; and the installation was hideously broken.

Translate this into the UK's national ID card. Consumers, Sir James Crosby wrote in March (PDF), want identity assurance. That is, they - or rather, we - want to know that we're dealing with our real bank rather than a fraud. We want to know that the thief rooting through our garbage can't use any details he finds on discarded utility bills to impersonate us, change our address with our bank, clean out our accounts, and take out 23 new credit cards in our name before embarking on a wild spending spree leaving us to foot the bill. And we want to know that if all that ghastliness happens to us we will have an accessible and manageable way to fix it.

We want to swing lazily on the old tire and enjoy the view.

We are the users with the seemingly simple but in reality unobtainable fantasy.

The government, however - the project sponsor - wants the three-tiered design that barely works because of all the additional elements in the design but looks incredibly impressive. ("Be the envy of other major governments," I feel sure the project brochure says.) In the government's view, they are the users and we are the database objects.

Crosby nails this gap when he draws the distinction between ID assurance and ID management:

The expression 'ID management' suggests data sharing and database consolidation, concepts which principally serve the interests of the owner of the database, for example, the Government or the banks. Whereas we think of "ID assurance" as a consumer-led concept, a process that meets an important consumer need without necessarily providing any spin-off benefits to the owner of any database.

This distinction is fundamental. An ID system built primarily to deliver high levels of assurance for consumers and to command their trust has little in common with one inspired mainly by the ambitions of its owner. In the case of the former, consumers will extend use both across the population and in terms of applications such as travel and banking. While almost inevitably the opposite is true for systems principally designed to save costs and to transfer or share data.

As writer and software engineer Ellen Ullman wrote in her book Close to the Machine, databases infect their owners, who may start with good intentions but are ineluctably drawn to surveillance.

So far, the government pushing the ID card seems to believe that it can impose anything it likes and if it means the tree collapses with the user on the swing, well, that's something that can be ironed out later. Crosby, however, points out that for the scheme to achieve any of the government's national security goals it must get mass take-up. "Thus," he writes, "even the achievement of security objectives relies on consumers' active participation."

This week, a similarly damning assessment of the scheme was released by the Independent Scheme Assurance Panel (PDF) (you may find it easier to read this clean translation - scroll down to policywatcher's May 8 posting). The gist: the government is completely incompetent at handling data, and creating massive databases will, as a result, destroy public trust in it and all its systems.

Of course, the government is in a position to compel registration, as it's begun doing with groups who can't argue back, like foreigners, and proposes doing for employees in "sensitive roles or locations, such as airports". But one of the key indicators of how little its scheme has to do with the actual needs and desires of the public is the list of questions it's asking in the current consultation on ID cards, which focus almost entirely on how to get people to love, or at least apply for, the card. To be sure, the consultation document pays lip service to accepting comments on any ID card-related topic, but the consultation is specifically about the "delivery scheme".

This is the kind of consultation where we're really damned if we do and damned if we don't. Submit comments on, for example, how best to "encourage" young people to sign up ("Views are invited particularly from young people on the best way of rolling out identity cards to them") without saying how little you like the government asking how best to market its unloved policy to vulnerable groups and when the responses are eventually released the government can say there are now no objectors to the scheme. Submit comments to the effect that the whole National Identity scheme is poorly conceived and inappropriate, and anything else you say is likely to be ignored on the grounds that they've heard all that and it's irrelevant to the present consultation. Comments are due by June 30.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to (but please turn off HTML).

March 28, 2008

Leaving Las Vegas

Las Vegas shouldn't exist. Who drops a sprawling display of electric lights with huge fountains and luxury hotels into the best desert scenery on the planet during an energy crisis? Indoors, it's Britain in mid-winter; outdoors you're standing in a giant exhaust fan. The out-of-proportion scale means that everything is four times as far away as you think, including the jackpot you're not going to win at one of its casinos. It's a great place to visit if you enjoy wallowing in self-righteous disapproval.

This all makes it the stuff of song, story, and legend and explains why Jeff Jonas's presentation at etech was packed.

The way Jonas tells it in his blog and at his presentation, he got into the gaming industry by driving through Las Vegas in 1989 idly wondering what was going on behind the scenes at the casinos. A year later he got the tiny beginnings of an answer when he picked up a used couch he'd found in the newspaper classified ads (boy, that dates it, doesn't it?) and found that its former owner played blackjack "for a living". Jonas began consulting to the gaming industry in 1991, helping to open Treasure Island, Bellagio, and Wynn.

"Possibly half the casinos in the world use technology we created," he said at etech.

Gaming revenues are now less than half of total revenues, he said, and, despite the apparent financial win they might represent, problem gamblers are in fact bad for business. The goal is for people to have fun. And because of that, he said, a place like the Bellagio is "optimized for consumer experience over interference. They don't want to spend money on surveillance."

Jonas began with a slide listing some common ideas about how Las Vegas works, culled from movies like Ocean's 11 and the TV show Las Vegas. Does the Bellagio have a vault? (No.) Do casinos perform background checks on guests based on public records? (No.) Is there a gaming industry watch list you can put yourself on but not take yourself off? (Yes, for people who know they have a gambling addiction.) Do casinos deliberately hire ex-felons? (Yes, to rehabilitate them.) Do they really send private jets for high rollers? (Cue story.)

There was, he said, a casino high roller who had won some $18 million. A win like that is going to show up in a casino's quarterly earnings. So, yes, they sent a private jet to his town and parked a limo in front of his house for the weekend. If you've got the bug, we're here for you, that kind of thing. He took the bait, and lost $22 million.

Do they help you create cover stories? (Yes.) "What happens in Vegas stays in Vegas" is an important part of ensuring that people can have fun that does not come back to bite them when they go home. The casinos' problem is with identity, not disguises, because they are required by anti-money laundering rules to report it any time someone crosses the $10,000 threshold for cash transactions. So if you play at several different tables, then go upstairs and change disguises, and come back and play some more, they have to be able to track you through all that. ID, therefore, is extremely important. Disguises are welcome; fake ID is not.
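The tracking problem Jonas describes comes down to aggregating cash play per verified identity rather than per table or per appearance, so a change of disguise doesn't reset the count. A minimal sketch, with the $10,000 figure taken from the reporting rules mentioned above and all names and structure purely illustrative:

```python
from collections import defaultdict

# Cash transaction reporting threshold under US anti-money-laundering
# rules, as described above. Everything else here is a hypothetical
# illustration, not any casino's actual system.
CTR_THRESHOLD = 10_000

def flag_reportable(transactions):
    """transactions: iterable of (verified_id, cash_amount) pairs.

    Totals are keyed on the player's verified identity, so play at
    several tables - or in several disguises - still accumulates.
    Returns the set of identities whose cumulative cash play crosses
    the reporting threshold."""
    totals = defaultdict(int)
    reportable = set()
    for verified_id, amount in transactions:
        totals[verified_id] += amount
        if totals[verified_id] >= CTR_THRESHOLD:
            reportable.add(verified_id)
    return reportable

# One player moving between tables still accumulates toward the threshold:
plays = [("alice", 4_000), ("bob", 2_000), ("alice", 3_500), ("alice", 3_000)]
print(flag_reportable(plays))  # prints {'alice'}
```

Which is why, as Jonas says, the casino's problem is identity, not disguises: the aggregation only works if the key is reliable.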

Do they use facial recognition to monitor the doors to spot cheaters on arrival? (Well...)

Of course technology-that-is-indistinguishable-from-magic-because-it-actually-is-magic appears on every crime-solving TV show these days. You know, the stuff where Our Heroes start with a fuzzy CCTV image and they punch in on a tiny piece of it and blow it up. And then someone says, "Can you enhance that?" and someone else says, "Oh, yes, we have new software," and a second later a line goes down the picture filling in detail. And a second after that you can read the brand on the face of a wrist watch (Numb3rs) or the manufacturer's coding on a couple of pills (Las Vegas). Or they have a perfect matching system that can take a partial fingerprint lifted off a strand of hair or something and bang! the database can find not only the person's identity but their current home address and phone number (Bones). And who can ever forget the first episode of 24, when Jack Bauer, alarmed at the disappearance of his daughter, tosses his phone number to an underling and barks, "Find me all the Internet passwords associated with this phone number."

And yet...a surprising number of what ought to be the technically best-educated audience on the planet thought facial recognition was in operation to catch cheaters. Folks, it doesn't work in airports, either.

Which is the most interesting thing Jonas said: he now works for IBM (which bought his company) on privacy and civil liberties issues, including work on software to help the US government spot terrorists without invading privacy. It's an interesting concept, partly because security at airports and other locations is now so invasive. But also because if Las Vegas can find a way to deploy surveillance such that only the egregious problems are caught and everyone else just has a good time...why can't governments?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to (but please turn off HTML).

February 29, 2008

Phormal ware

In the last ten days or so a stormlet has broken out about the announcement that BT, Carphone Warehouse, and TalkTalk, who jointly cover about 70 percent of British Internet subscribers, have signed up for a new advertising service. The supplier, Phorm (previously, 121Media), has developed Open Internet Exchange (OIX), a platform to serve up "relevant" ads to ISPs' customers. Ad agencies and Web sites also sign up to the service which, according to Phorm's FAQ, can serve up ads to any Web site "in the regular places the website shows ads". Partners include most British national newspapers, iVillage, and MGM OMD.

A brief chat with BT revealed that the service, known to consumers as Webwise, will apply only to BT's retail customers, not its wholesale division. Consumers will be able to opt out, and BT is planning an educational exercise to explain the service.

Obviously all concerned hope Webwise will be acceptable to consumers, but to make it a little more palatable, staying opted in gets you warnings if you land on suspected phishing sites. I don't think improved security should, ethically, be tied to a person's ad-friendliness, but this is the world we live in.

"We've done extensive research with our customer base," says BT's spokesman, "and it's very clear that when customers know what is happening they're overwhelmingly in favor of it, particularly in terms of added security."

But the Net folk are suspicious folk, and words like "spyware" and "adware" are circling, partly because Phorm's precursor, 121Media, was blocked by Symantec and F-Secure as spyware. Plus, The Register discovered that BT had been sharing data with Phorm as long ago as last summer, and, apparently, lying about it.

Phorm's PR did not reply to a request for an interview, but a spokeswoman contacted briefly last week defended the company. "We are absolutely not and in no way an adware product at all."

The overlooked aspect: Phorm called in Privacy International's new commercial arm, 80/20, to examine its system.

PI's executive director, Simon Davies, one of the examiners, says, "Phorm has done its very best to eliminate and minimise the use of personal information and build privacy into the core of the technology. In that sense, it's a privacy-friendly technology, but that does not get us away from the intrusion aspect." In general, the principle is that ads shouldn't be served on an opt-out basis; users should have to opt in to receive them.

Tailoring advertising to the clickstream of user interests is of course endemic online now; it's how Google does AdSense, and it's why that company bought DoubleClick, which more or less invented the business of building up user profiles to create personalized ads. Phorm's service, however, does not build user profiles.

A cookie with a unique ID is stored on the user's system - but the ID is not associated with an individual or with the computer it's stored on. Say you're browsing car sites like Ford and Nissan. The ISP does not give Phorm personally identifiable information like IP addresses, but does share the information that the computer this cookie is on is looking at car sites right now. OIX serves up car ads. The service ignores niche sites, secure sites (HTTPS), and low-traffic sites. Firewalling between Phorm and the ISP means that the ISP doesn't know and can't deduce what the OIX platform knows about which ads are being served. Nothing is stored to create a profile. What Phorm offers advertisers instead is the knowledge that they are serving ads that reflect users' interests in real time.
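The flow described above can be sketched in a few lines: an opaque cookie ID, a category-of-the-moment signal from the ISP, and an ad lookup that never consults a stored profile. This is a minimal illustration of the idea, assuming invented ad categories and function names; it is not Phorm's actual system.

```python
import secrets

# Hypothetical ad inventory keyed by interest category - the only
# signal the ad platform sees, per the description above.
ADS_BY_CATEGORY = {
    "cars": "Generic car ad",
    "travel": "Generic travel ad",
}

def new_cookie_id():
    # An opaque random ID: nothing ties it to a person, an IP address,
    # or the machine it's stored on.
    return secrets.token_hex(16)

def serve_ad(cookie_id, current_category):
    # Match on the category of the moment only. The cookie ID is never
    # looked up against any stored history, so no profile accumulates.
    return ADS_BY_CATEGORY.get(current_category, "Untargeted ad")

cid = new_cookie_id()
print(serve_ad(cid, "cars"))  # prints Generic car ad
```

The point of the sketch is what's absent: there is no table mapping cookie IDs to past behavior, which is exactly the "no profile" claim at issue.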

The difference to Davies is that Google, which came last in Privacy International's privacy rankings, stores search histories and browsing data and ties them to personal identifiers, primarily login IDs and IP addresses. (Next month, the Article 29 Group will report its opinion as to whether IP addresses are personal information, so we will know better then which way the cookie crumbles.)

"The potential to develop a profile covertly is extremely limited, if not eliminated," says Davies.

Phorm itself says, "We really think what our stuff does dispels the myth that in order to provide relevance you have to store data."

I hate advertising as much as the next six people. But most ISPs are operating on razor-thin margins if they make money at all, and they're looking at continuously increasing demand for bandwidth. That demand can only get worse as consumers flock to the iPlayer and other sources of streaming video. The pressure on pricing is steadily downward with people like TalkTalk and O2 offering free or extremely cheap broadband as an add-on to mobile phone accounts. Meanwhile, the advertising revenues go to everyone but them. Is it surprising that they'd leap at this? Analysts estimate that BT will pick up £85 million in the first year. Nice if you can get it.

We all want low-cost broadband and free content. None of us wants ads. How exactly do we propose all this free stuff is going to be paid for?

As for Phorm, it's going to take a lot to make some users trust them. I'd say, though, that the jury is still out. Sometimes people do learn from past mistakes.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to (but please turn off HTML).

February 8, 2008

If you have ID cards, drink alcohol

One of the key identifiers of an addiction is that indulgence in it persists long after all the reasons for doing it have turned from good to bad.

A sobered-up Scottish alcoholic once told me the following exemplar of alcoholic thinking. A professor is lecturing to a class of alcoholics on the evils of drinking. To make his point, he takes two glasses, one filled with water, the other with alcohol. Into each glass he drops a live worm. The worm in the glass of water lives; the worm in the glass of alcohol dies.

"What," the professor asks, "can we learn from this?"

One of the alcoholics raises his hand. "If you have worms, drink alcohol."

In alcoholic thinking, of course, there is no circumstance in which the answer isn't "Drink alcohol."

So, too, with the ID card. The purpose as mooted between 2001 and 2004 was preventing benefit fraud and making life more convenient for UK citizens and residents. The plan promised perfect identification via the combination of a clean database (the National Identity Register) and biometrics (fingerprints and iris scans). The consultation document made a show of suggesting the cheaper alternative of a paper card with minimal data collection, but it was clear what they really wanted: the big, fancy stuff that would make them the envy of other major governments.

Opponents warned of the UK's poor track record with large IT projects, the privacy-invasiveness, and the huge amount such a system was likely to cost. Government estimates, now at £5.4 billion, have been slowly rising to meet Privacy International's original estimate of £6 billion.

By 2006, when the necessary legislation was passed, the government had abandoned the friendly "entitlement card" language and was calling it a national ID card. By then, also, the case had changed: less entitlement, more crime prevention.

It's 2008, and the wheels seem to be coming off. The government's original contention that the population really wanted ID cards has been shredded by the leaked documents of the last few weeks. In these, it's clear that the government knows the only way it will get people to adopt the ID card is by coercion, starting with the groups who are least able to protest by refusal: young people and foreigners.

Almost every element deemed important in the original proposal is now gone - the clean database populated through interviews and careful documentation (now the repurposed Department of Work and Pensions database); the iris scans (discarded); probably the fingerprints (too expensive except for foreigners). The one element that for sure remains is the one the government denied from the start: compulsion.

The government was always open about its intention for non-registration to become increasingly uncomfortable and eventually to make registration compulsory. But if the card is coming at least two years later than they intended, compulsion is ahead of schedule.

Of course, we've always maintained that the key to the project is the database, not the card. It's an indicator of just how much of a mess the project is that the Register, the heart of the system, was first to be scaled back because of its infeasibility. (I mean, really, guys. Interview and background-check the documentation of every one of 60 million people in any sort of reasonable time scale?)

The project is even fading in popularity with the very vendors who want to make money supplying the IT for it. How can you specify a system whose stated goals keep changing?

The late humorist and playwright Jean Kerr (probably now best known for her collection of pieces about raising five boys with her drama critic husband in a wacky old house in Larchmont, NY, Please Don't Eat the Daisies) once wrote a piece about the trials and tribulations of slogging through the out-of-town openings of one of her plays. In these pre-Broadway trial runs, lines get cut and revised; performances get reshaped and tightened. If the play is in trouble, the playwright gets no sleep for weeks. And then, she wrote, one day you look up at the stage, and, yes, the play is much better, and the performances are much better, and the audience seems to be having a good time. And yet - the play you're seeing on the stage isn't the play you had in mind at all.

It's one thing to reach that point in a project and retain enough perspective to be honest about it. It may be bad - but it isn't insane - to say, "Well, this play isn't what I had in mind, but you know, the audience is having a good time, and it will pay me enough to go away and try again."

But if you reach the point where the project you're pushing ahead clearly isn't any more the project you had in mind and sold hard, and yet you continue to pretend to yourself and everyone else that it is - then you have the kind of insanity problem where you're eating worms in order to prove you're not an alcoholic.

The honorable thing for the British government to do now is say, "Well, folks, we were wrong. Our opponents were right: the system we had in mind is too complicated, too expensive, and too unpopular because of its privacy-invasiveness. We will think again." Apparently they're so far gone that eating worms looks more sensible.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to (but please turn off HTML).

November 23, 2007

Road block

There are many ways for a computer system to fail. This week's disclosure that Her Majesty's Revenue and Customs has played lost-in-the-post with two CDs holding the nation's Child Benefit data is one of the stranger ones. The Child Benefit database includes names, addresses, identifying numbers, and often bank details, on all the UK's 25 million families with a child under 16. The National Audit Office requested a subset for its routine audit; the HMRC sent the entire database off by TNT post.

There are so many things wrong with this picture that it would take a village of late-night talk show hosts to make fun of them all. But the bottom line is this: when the system was developed no one included privacy or security in the specification or thought about the fundamental change in the nature of information when paper-based records are transmogrified into electronic data. The access limitations inherent in physical storage media must be painstakingly recreated in computer systems or they do not exist. The problem with security is it tends to be inconvenient.

With paper records, the more data you provide the more expensive and time-consuming it is. With computer records, the more data you provide the cheaper and quicker it is. The NAO's file of email relating to the incident (PDF) makes this clear. What the NAO wanted (so it could check that the right people got the right benefit payments): national insurance numbers, names, and benefit numbers. What it got: everything. If the discs hadn't gotten lost, we would never have known.

Ironically enough, this week in London also saw at least three conferences on various aspects of managing digital identity: Digital Identity Forum, A Fine Balance, and Identity Matters. All these events featured the kinds of experts the UK government has been ignoring in its mad rush to create and collect more and more data. The workshop on road pricing and transport systems at the second of them, however, was particularly instructive. Led by science advisor Brian Collins, it was most notable for the fact that its 15 or 20 participants couldn't agree on a single aspect of such a system.

Would it run on GPS or GSM/GPRS? Who or what is charged, the car or the driver? Do all roads cost the same or do we use differential pricing to push traffic onto less crowded routes? Most important, is the goal to raise revenue, reduce congestion, protect the environment, or rebalance the cost of motoring so the people who drive the most pay the most? The more purposes the system is intended to serve, the more complicated and expensive it will become, and the less likely it is to answer any of those goals successfully. This point has of course also been made about the National ID card by the same sort of people who have warned about the security issues inherent in large databases such as the Child Benefit database. But it's clearer when you start talking about something as limited as road charging.

For example: if you want to tag the car you would probably choose a dashboard-top box that uses GPS data to track the car's location. It will have to store and communicate location data to some kind of central server, which will use it to create a bill. The data will have to be stored for at least a few billing cycles in case of disputes. Security services and insurers alike would love to have copies. On the other hand, if you want to tag the driver it might be simpler just to tie the whole thing to a mobile phone. The phone networks are already set up to do hand-off between nodes, and tracking the driver might also let you charge passengers, or might let you give full cars a discount.
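The point about the dashboard-box design can be made concrete with a few lines of code. This is purely an illustrative sketch, not any real scheme: the `Ping` record, the road names, and the tariff figures are all invented. What it shows is that any itemized, disputable bill necessarily implies the server retaining a per-vehicle location trace.

```python
# Hypothetical sketch of the data a GPS-based road-pricing server would
# have to keep: every itemized billing record is also a location trace.
from dataclasses import dataclass

@dataclass
class Ping:
    vehicle_id: str
    road: str       # road segment the GPS fix was map-matched to
    km: float       # distance covered on that segment

# Invented differential tariff: busier roads cost more per kilometre.
TARIFF_PER_KM = {"M25": 0.12, "A40": 0.05, "back_street": 0.02}

def bill(pings, vehicle_id):
    """Return (itemized charges, total) for one vehicle.

    The itemized list is exactly what must be stored for a few billing
    cycles in case of dispute -- i.e. where the vehicle was, and when.
    """
    items = [(p.road, p.km, TARIFF_PER_KM[p.road] * p.km)
             for p in pings if p.vehicle_id == vehicle_id]
    return items, round(sum(cost for _, _, cost in items), 2)

pings = [Ping("car1", "M25", 10.0), Ping("car1", "A40", 4.0),
         Ping("car2", "M25", 2.0)]
items, total = bill(pings, "car1")
```

Note that the privacy problem is in the data model, not the arithmetic: the charging sum is trivial, but producing it requires exactly the movement records that security services and insurers would love to copy.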

The problem is that the discussion is coming from the wrong angle. We should not be saying, "Here is a clever technological idea. Oh, look, it makes data! What shall we do with it?" We should be defining the problem and considering alternative solutions. The people who drive most already pay most via the fuel pump. If we want people to drive less, maybe we should improve public transport instead. If we're trying to reduce congestion, getting employers to be more flexible about working hours and telecommuting would be cheaper, provide greater returns, and, crucially for this discussion, not create a large database system that can be used to track the population's movements.

(Besides, said one of the workshop's participants: "We live with the congestion and are hugely productive. So why tamper with it?")

It is characteristic of our age that the favored solution is the one that creates the most data and the biggest privacy risk. No one in the cluster of organisations opposing the ID card - No2ID, Privacy International, Foundation for Information Policy Research, or Open Rights Group - wanted an incident like this week's to happen. But it is exactly what they have been warning about: large data stores carry large risks that are poorly understood, and it is not enough for politicians to wave their hands and say we can trust them. Information may want to be free, but data want to leak.

November 9, 2007

Watching you watching me

A few months ago, a neighbour phoned me and asked if I'd be willing to position a camera on my windowsill. I live at the end of a small dead-end street (or cul-de-sac) that ends in a wall about shoulder height. The railway runs along the far side of the wall, and parallel to it, further away, is a long street with a row of houses facing the railway. The owners of those houses get upset because graffiti keeps appearing alongside the railway where they can see it and covers flat surfaces such as the side wall of my house. The theory is that kids jump over the wall at the end of my street, just below my office window, either to access the railway and spray paint or to escape after having done so. Hence the camera: point it at the wall and watch to see what happens.

The often-quoted number of times the average Londoner is caught on camera per day is scary: 200. (And that was a few years ago; it's probably gone up.) My street is actually one of those few that doesn't have cameras on it. I don't really care about the graffiti; I do, however, prefer to be on good terms with neighbours, even if they're all the way across the tracks. I also do see that it makes sense at least to try to establish whether the wall downstairs is being used as a hurdle in the getaway process. What is the right, privacy-conscious response to make?

I was reminded of this a few days ago when I was handed a copy of Privacy in Camera Networks: A Technical Perspective, a paper published at the end of July. (We at net.wars are nothing if not up-to-date.)

Given the amount of money being spent on CCTV systems, it's absurd how little research there is covering their efficacy, their social impact, or the privacy issues they raise. In this paper, the quartet of authors – Marci Lenore Meingast (UC Berkeley), Sameer Pai (Cornell), Stephen Wicker (Cornell), and Shankar Sastry (UC Berkeley) – are primarily concerned with privacy. They ask a question every democratic government deploying these things should have asked in the first place: how can the camera networks be designed to preserve privacy? For the purposes of preventing crime or terrorism, you don't need to know the identity of the person in the picture. All you want to know is whether that person is pulling out a gun or planting a bomb. For solving crimes after the fact, of course, you want to be able to identify people – but most people would vastly prefer that crimes were prevented, not solved.

The paper cites model legislation (PDF) drawn up by the Constitution Project. Reading it is depressing: so many of the principles in it are such logical, even obvious, derivatives of the principles that democratic governments are supposed to espouse. And yet I can't remember any public discussion of the idea that, for example, all CCTV systems should be accompanied by identification of and contact information for the owner. "These premises are protected by CCTV" signs are everywhere; but they are all anonymous.

Even more depressing is the suggestion that the proposals for all public video surveillance systems should specify what legitimate law enforcement purpose they are intended to achieve and provide a privacy impact assessment. I can't ever remember seeing any of those either. In my own local area, installing CCTV is something politicians boast about when they're seeking (re)election. Look! More cameras! The assumption is that more cameras equals more safety, but evidence to support this presumption is never provided and no one, neither opposing politicians nor local journalists, ever mounts a challenge. I guess we're supposed to think that they care about us because they're spending the money.

The main intention of Meingast, Pai, et al., however, is to look at the technical ways such networks can be built to preserve privacy. They suggest, for example, collecting public input via the Internet (using codes to identify the respondents on whom the cameras will have the greatest impact). They propose an auditing system whereby these systems and their usage are reviewed. As the video streams become digital, they suggest using layers of abstraction of the resulting data to limit what can be identified in a given image. "Information not pertinent to the task in hand," they write hopefully, "can be abstracted out leaving only the necessary information in the image." They go on into more detail about this, along with a lengthy discussion of facial recognition.

The most depressing thing of all: none of this will ever happen, and for two reasons. First, no government seems to have the slightest qualm of conscience about installing surveillance systems. Second, the mass populace don't seem to care enough to demand these sorts of protections. If these protections are to be put in place at all, it must be done by technologists. They must design these systems so that it's easier to use them in privacy-protecting ways than to use them in privacy-invasive ways. What are the odds?

As for the camera on my windowsill, I told my neighbour after some thought that they could have it there for a maximum of a couple of weeks to establish whether the end of my street was actually being used as an escape route. She said something about getting back to me when something or other happened. Never heard any more about it. As far as I am aware, my street is still unsurveilled.

October 12, 2007

The permission-based society

It was Edward Hasbrouck who drew my attention to a bit of rulemaking being proposed by the Transportation Security Administration. Under current rules, if you want to travel on a plane out of, around, into, or over the US you buy a ticket and show up at the airport, where the airline compares your name and other corroborative details to the no-fly list the TSA maintains. Assuming you're allowed onto the flight, unbeknownst to you, all this information has to be sent to the TSA within 15 minutes of takeoff (before, if it's a US domestic flight; after, if it's an international flight heading for the US).

Under the new rules, the information will have to arrive at the TSA 72 hours before the flight takes off – after all, most people have finalised their travel plans by that time, and only 7 to 10 percent of itineraries change after that – and the TSA has to send back an OK to the airline before you can be issued a boarding pass.

There's a whole lot more detail in the Notice of Proposed Rulemaking, but that's the gist. (They'll be accepting comments until October 22, if you would like to say anything about these proposals before they're finalised.)

There are lots of negative things to say about these proposals – the logistical difficulties for the travel industry, the inadequacy of the mathematical model behind this (which at the public hearing the ACLU's Barry Steinhardt compared to trying to find a needle in a haystack by pouring more hay on the stack), and the privacy invasiveness inherent in having the airlines collect the many pieces of data the government wants and, not unnaturally, retaining copies while forwarding it on to the TSA. But let's concentrate on one: the profound alteration such a scheme will make to American society at large. The default answer to the question of whether you had the right to travel anywhere, certainly within the confines of the US, has always been "Yes". These rules will change it to "No".

(The right to travel overseas has, at times, been more fraught. The folk scene, for example, can cite several examples of musicians who were denied passports by the US State Department in the 1950s and early 1960s because of their left-wing political beliefs. It's not really clear to me why the US wanted to keep people whose views it disapproved of within its borders but some rather hasty marriages took place in order to solve some of these immigration problems, though everyone's friends again now and it's fresh passports all round.)

Hasbrouck, Steinhardt, and EFF founder John Gilmore, who sued the government over the right to travel anonymously within the US, have all argued that the key issue here is the right to assemble guaranteed in the First Amendment. If you can't travel, you can't assemble. And if you have to ask permission to travel, your right of assembly is subject to disruption at any time. The secrecy with which the TSA surrounds its decision-making doesn't help.

Nor does the amount of personal data the TSA is collecting from airline passenger name records. The Identity Project's recent report on the subject highlights that these records may include considerable detail: what books the passenger is carrying, what answers they give when asked where they've been or are going, names and phone numbers given as emergency contacts, and so on. Despite the data protection laws, it isn't always easy to find out what information is being stored; when I made such a request of US Airways last year, the company refused to show me my PNR from a recent flight and gave as the reason: "Security." Civilisation as we know it is at risk if I find out what they think they know about me? We really are in trouble.

In Britain, the chief objections to the ID card and, more important, the underlying database, have of course been legion, but they have generally focused on the logistical problems of implementing it (huge cost, complex IT project, bound to fail) and its general privacy-invasiveness. But another thing the ID card – especially the high-tech, biometric, all-singing, all-dancing kind – will do is create a framework that could support a permission-based society, in which the ID card's interaction with systems determines what you're allowed to do, where you're allowed to go, and what purchases you're allowed to make. There was a novel that depicted a society like this: Ira Levin's This Perfect Day, in which these functions were all controlled by bracelets and scanners everywhere that lit up green to allow or red to deny permission. The inhabitants of that society were kept drugged so they wouldn't protest the ubiquitous controls. We seem to be accepting the beginnings of this kind of life stone-cold sober.

American children play a schoolyard game called "Mother, May I?" It's one of those games suitable for any number of kids, and it involves a ritual of asking permission before executing a command. It's a fine game, but surely it isn't how we want to live.

September 21, 2007

The summer of lost hats

I seem to have spent the summer dodging in and out of science fiction novels featuring four general topics: energy, security, virtual worlds, and what someone at the last conference called "GRAIN" technologies (genetic engineering, robotics, AI, and nanotechnology). So the summer started with doom and gloom and got progressively more optimistic. Along the way, I have mysteriously lost a lot of hats. The phenomena may not be related.

I lost the first hat in June, a Toyota Motor Racing hat (someone else's joke; don't ask) while I was reading the first of many very gloomy books about the end of the world as we know it. Of course, TEOTWAWKI has been oft-predicted, and there is, as Damian Thompson, the Telegraph's former religious correspondent, commented when I was writing about Y2K, a "wonderful and gleeful attention to detail" in these grand warnings. Y2K was a perfect example: one timetable posted online had the financial system collapsing around April 1999 and the cities starting to burn in October…

Energy books can be logically divided into three categories. One, apocalyptics: fossil fuels are going to run out (and sooner than you think), the world will continue to heat up, billions will die, and the few of us who survive will return to hunting, gathering, and dying young. Two, deniers: fossil fuels aren't going to run out, don't be silly, and we can tackle global warming by cleaning them up a bit. Here. Have some clean coal. Three, optimists: fossil fuels are running out, but technology will help us solve both that and global warming. Have some clean coal and a side order of photovoltaic panels.

I tend, when not wracked with guilt for having read 15 books and written 30,000 words on the energy/climate crisis and then spent the rest of the summer flying approximately 33,000 miles, toward optimism. People can change – and faster than you think. Ten years ago, you'd have been laughed off the British Isles for suggesting that in 2007 everyone would be drinking bottled water. Given the will, ten years from now everyone could have a solar collector on their roof.

The difficulty is that at least two of those takes on the future of energy encourage greater consumption. If we're all going to die anyway and the planet is going inevitably to revert to the Stone Age, why not enjoy it while we still can? All kinds of travel will become hideously expensive and difficult; go now! If, on the other hand, you believe that there isn't a problem, well, why change anything? The one group who might be inclined toward caution and saving energy is the optimists – technology may be able to save us, but we need time to create and deploy it. The more careful we are now, the longer we'll have to do that.

Unfortunately, that's cautious optimism. While technology companies, who have to foot the huge bills for their energy consumption, are frantically trying to go green for the soundest of business reasons, individual technologists don't seem to me to have the same outlook. At Black Hat and Defcon, for example (lost hats number two and three: a red Canada hat and a black Black Hat hat), among all the many security risks that were presented, no one talked about energy as a problem. I mean, yes, we have all those off-site backups. But you can take out a border control system as easily with an electrical power outage as you can by swiping an infected RFID passport across a reader to corrupt the database. What happens if all the lights go out, we can't get them back on again, and everything was online?

Reading all those energy books changes the lens through which you view technical developments somewhat. Singapore's virtual worlds are a case in point (lost hat: a navy-and-tan Las Vegas job): everyone is talking about what kinds of laws should apply to selling magic swords or buying virtual property, and all the time in the back of your mind is the blog posting that calculated that the average Second Life avatar consumes as much energy as the average Brazilian. And emits as much carbon as driving an SUV for 2,000 miles. Bear in mind that most SL avatars aren't fired up that often, and the suggestion that we could curb energy consumption by having virtual conferences instead of physical ones seems less realistic. (Though we could, at least, avoid airport security.) In this, as in so much else, the science fiction writer Vernor Vinge seems to have gotten there first: his book Marooned in Realtime looks at the plight of a bunch of post-Singularity augmented humans knowing their technology is going to run out.

It was left to the most science fictional of the conferences, last week's Center for Responsible Nanotechnology conference (my overview is here) to talk about energy. In wildly optimistic terms: technology will not only save us but make us all rich as well.

This was the one time all summer I didn't lose any hats (red Swiss everyone thought was Red Cross, and a turquoise Arizona I bought just in case). If you can keep your hat while all around you everyone is losing theirs…

August 10, 2007

Wall of sheep

Last week at Defcon my IM ID and just enough of the password to show they knew what it was appeared on the Wall of Sheep. This screen projection of the user IDs, partial passwords, and activities captured by the installed sniffer inevitably runs throughout the conference.

It's not that I forgot the sniffer was there, or that there's a risk in logging onto an IM client unencrypted over a Wi-Fi hot spot (at a hacker conference!), but that I had forgotten the client was set to log in automatically whenever it could. Easily done.

It's strange to remember now that once upon a time this crowd – or at least, type of crowd – was considered the last word in electronic evil. In 1995 the capture of Kevin Mitnick made headlines everywhere because he was supposed to be the baddest hacker ever. Yet other than gaining online access and free phone calls, Mitnick is not known to have ever profited from his crimes – he didn't sell copied source code to its owners' competitors, and he didn't rob bank accounts. We would be grateful – really grateful – if Mitnick were the worst thing we had to deal with online now.

Last night, the House of Lords Science and Technology Committee released its report on Personal Internet Security. It makes grim reading even for someone who's just been to Defcon and Black Hat. The various figures the report quotes, assembled after what seems to have been an excellent information-gathering process (that means, they name-check a lot of people I know and would have picked for them to talk to) are pretty depressing. Phishing has cost US banks around $2 billion, and although the UK lags well behind – £33.5 million in bank fraud in 2006 – here, too, it's on the rise. Team Cymru found (PDF) that on IRC channels dedicated to the underground you could buy credit card account information for between $1 (basic information on a US account) and $50 (full information for a UK account); $1,599,335.80 worth of accounts was for sale on a single IRC channel in one day. Those are among the few things that can be accurately measured: the police don't keep figures breaking out crimes committed electronically; there are no good figures on the scale of identity theft (interesting, since this is one of the things the government has claimed the ID card will guard against); and no one's really sure how many personal computers are infected with some form of botnet software – and available for control at four cents each.

The House of Lords recommendations could be summed up as "the government needs to do more". Most of them are unexceptional: fund more research into IT security, keep better statistics. Some measures will be welcomed by a lot of us: make banks responsible for losses resulting from electronic fraud (instead of allowing them to shift the liability onto consumers and merchants); criminalize the sale or purchase of botnet "services" and require notification of data breaches. (Now I know someone is going to want to say, "If you outlaw botnets, only outlaws will have botnets", but honestly, what legitimate uses are there for botnets? The trick is in defining them to include zombie PCs generating spam and exclude PCs intentionally joined to grids folding proteins.)

Streamlined Web-based reporting for "e-crime" could only be a good thing. Since the National High-Tech Crime Unit was folded into the Serious Organised Crime Agency there is no easy way for a member of the public to report online crime. Bringing in a central police e-crime unit would also help. The various kite mark schemes – for secure Internet services and so on – seem harmless but irrelevant.

The more contentious recommendations revolve around the idea that we the people need to be protected, and that it's no longer realistic to lay the burden of Internet security on individual computer users. I've said for years that ISPs should do more to stop spam (or "bad traffic") from exiting their systems; this report agrees with that idea. There will likely be a lot of industry ink spilled over the idea of making hardware and software vendors liable if "negligence can be demonstrated". What does "vendor" mean in the context of the Internet, where people decide to download software on a whim? What does it mean for open source? If I buy a copy of Red Hat Linux with a year's software updates, that company's position as a vendor is clear enough. But if I download Ubuntu and install it myself?

Finally, you have to twitch a bit when you read, "This may well require reduced adherence to the 'end-to-end' principle." That is the principle that holds that the network should carry only traffic, and that services and applications sit at the end points. The Internet's many experiments and innovations are due to that principle.

The report's basic claim is this: criminals are increasingly rampant and increasingly rapacious on the Internet. If this continues, people will catastrophically lose confidence in the Internet. So we must improve security by making the Internet safer. Couldn't we just make it safer by telling people to stop using it? That's what people tell you to do when you're going to Defcon.

August 3, 2007

The house always wins

Las Vegas really is the perfect place to put a security conference: don't security people always feel like an island of sanity surrounded by lunatic gamblers? Although, equally, it's probably true that Las Vegas casinos have some of the smartest security in the world when it comes to making sure that the house always wins.

A repeated source of humor this week at Black Hat has been the responses from various manufacturers when they're told that their systems are in fact hackable. My favorite was the presentation explaining how to hack the RDS-TMC radio service that delivers information about upcoming traffic jams and other disruptions to in-car satellite navigation systems. The industry's response to the news that Italian guys could effectively control traffic was pretty much that even if it was possible, which they seemed inclined to doubt, it would take a lot of knowledge, and anyway, it's illegal…

Adam Laurie got a similar response from RFID people when he showed you could in fact crack one of those all-singing, all-dancing new e-passports and, more than that, that you can indeed clone those supposedly "unique" RFID chips with a device small enough that you could pick up the information you need just standing next to someone in an elevator. (What a Las Vegas close-up magician could do with one of those…)

The industry's response to the news that Laurie could clone ID tags was to complain triumphantly that Laurie's clones "don't have the same form factor". You're an RFID chip reader. What do you see?
"I believe in full disclosure," said Laurie. "They must know you can program in any ID you want." But that's not what they tell the public.

And then there's mobile phone malware, which according to F-Secure's Mikko Hypponen is about where PC viruses were ten years ago. We have, he figures, a chance to stop them now, so we won't wind up ten years from now with all the same security risks that we face with PCs. Some of the biggest manufacturers have joined the Trusted Computing Group (an effort to secure computer systems that unfortunately has the problem that it treats the user as a potentially hostile invader).

But viruses and other bad things spread a lot faster between mobile phones because they are specifically designed for…communication. The average smartphone has Bluetooth, infrared, USB, and its network connection, and each of those is a handy way of getting a virus into the phone, not to mention also MMS, user downloads, and memory card slots. And, in future, probably WLAN, email, SMS, and even P2P. This is the bad side of having phones that can run third-party applications and that are designed to be, damn it, communications devices.

Viruses that spread by Bluetooth are particularly entertaining because of the way Bluetooth's software handles incoming connections. Say a nearby phone tries to send your phone a virus. Your phone puts up a message asking you to confirm that you want to accept it. You click No. The message instantly reappears (viruses don't like to take no for an answer). There is in fact a simple solution: walk out of range. But most users don't know to do this, and in the meantime until they say Yes, their phone is unusable. The first virus to appear in the wild, 2004's Cabir, spreads very easily if users do something risky – like turn on their phone.

This is obviously a design problem caused by a failure of imagination, even though anti-virus companies such as Kaspersky have been warning for at least a decade that as the computing power of mobile phones increased they would become vulnerable to the same problems as desktop computers.

The vast majority of mobile phone malware is written for Symbian phones, by the way. Palm, Windows Mobile, and other operating systems barely figure in F-Secure's statistics. Trojans are the biggest threat, and the biggest way phones get infected is via user downloads.

It would not noticeably ruin the user experience for mobile phone manufacturers to change the way Bluetooth handles such incoming requests.
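One possible version of such a change, sketched here purely as an illustration (the class, the addresses, and the five-minute cooldown are all invented, not anything a real Bluetooth stack specifies), is simply to stop re-prompting once a sender has been refused:

```python
# Hypothetical sketch: after the user declines a sender's transfer once,
# silently drop that sender's retries for a cooldown period instead of
# re-raising the prompt -- so a nearby virus can't make the phone unusable.
COOLDOWN = 300  # seconds; arbitrary choice for illustration

class PromptFilter:
    def __init__(self):
        self.declined = {}  # sender address -> time the user last refused

    def should_prompt(self, sender, now):
        """True if the incoming request should be shown to the user."""
        last = self.declined.get(sender)
        if last is not None and now - last < COOLDOWN:
            return False  # still in cooldown: ignore the retry quietly
        return True

    def record_decline(self, sender, now):
        self.declined[sender] = now

f = PromptFilter()
assert f.should_prompt("aa:bb", 0)  # first attempt: ask the user
f.record_decline("aa:bb", 0)        # user says No once
```

The user still sees one prompt per sender, so legitimate transfers work as before; only the repeated badgering disappears, which is why it wouldn't noticeably change the experience.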

It took the Meet the Feds panel to regain a sense of proportion. The most a mobile phone virus can do to a new phone equipped with a mobile wallet is steal your money and send out text messages to all your contacts that will alienate them forever, leaving you with a ruined life. (Take comfort from the words of the novelist Edward Whittemore, in his book Sinai Tapestry: "No one was safe, and there was no security – just life itself.")

Bad security is still bad security, and "the Feds" sure do a lot of it, and the rather stolid face they present to the public pushes us to regard them as comical. But they're gambling with far bigger consequences than any of us, as Chris Marshall of the NSA reminded everyone. He was out to dinner with his counterparts from a variety of countries, and they were discussing what "homeland security" really means. The representative from New Zealand spoke up: he has children living in New Zealand, Australia, the US, and France, where he also has grandchildren.

"Homeland security," he said simply, "is where my children are."

February 9, 2007

Getting out the vote

Voter-verified paper audit trails won't save us. That was the single clearest bit of news to come out of this week's electronic voting events.

This is rather depressing, because for the last 15 years it's looked as though VVPAT (as they are euphoniously calling it) might be something everyone could compromise on: OK, we'll let you have your electronic voting machines as long as we can have a paper backup that can be recounted in case of dispute. But no. According to Rebecca Mercuri in London this week (and others who have been following this stuff on the ground in the US), what we thought a paper trail meant is definitely not what we're getting. This is why several prominent activist organisations have come out against the Holt bill, HR 811, introduced in Congress this week, despite its apparent endorsement of paper trails.

I don't know about you, but when I imagined a VVPAT, what I saw in my mind's eye was something like an IBM punch card dropping individually into some kind of display where a voter would press a key to accept or reject. Instead, vendors (who hate paper trails) are providing cheap, flimsy, thermal paper in a long roll with no obvious divisions to show where individual ballots are. The paper is easily damaged, it's not clear whether it will survive the 22 months it's supposed to be stored, and the mess is not designed to ease manual recounts. Basically, this is paper that can't quite aspire to the lofty quality of a supermarket receipt.

The upshot is that yesterday you got a programme full of computer scientists saying they want to vote with pencils and paper. Joseph Kiniry, from University College Dublin, talked about using formal methods to create a secure system – and says he wants to vote on paper. Anne-Marie Oostveen told the story of the Dutch hacker group who bought up a couple of Nedap machines to experiment on and wound up publicly playing chess on them – and exposing their woeful insecurity – and concluded, "I want my pencil back." And so on.

The story is the same in every country. Electronic voting machines – or, more correctly, electronic ballot boxes – are proposed and brought in without public debate. Vendors promise the machines will be accurate, reliable, secure, and cheaper than existing systems. Why does anyone believe this? How can a voting computer possibly be cheaper than a piece of paper and a pencil? In fact, Jason Kitcat, a longtime activist in this area, noted that according to the Electoral Commission the costs of the 2003 pilots were astounding – in Sheffield £55 per electronic vote, and that's with suppliers waiving some charges they didn't expect either. Bear in mind, also, that the machines have an estimated life of only ten years.

Also the same: governments lack internal expertise on IT, basically because anyone who understands IT can make a lot more money in industry than in either government or the civil service.

And: everywhere vendors are secretive about the inner workings of their computers. You do not have to be a conspiracy theorist to see that privatizing democracy has serious risks.

On Tuesday, Southport LibDem MP John Pugh spoke of the present UK government's enchantment with IT. "The procurers who commission IT have a starry-eyed view of what it can do," he said. "They feel it's a very 'modern' thing." Vendors, also, can be very persuasive (I'd like to see tests on what they put in the ink in those brochures, personally). If, he said, Bill Gates were selling voting machines and came up against Tony Blair, "We would have a bill now."

Politicians are, probably, also the only class of people to whom quick counts appeal. The media, for example, ought to love slow counts that keep people glued to their TV sets, hitting the refresh button on their Web browsers, and buying newspapers throughout. Florida 2000 was a media bonanza. But it's got to be hard on the guys who can't sleep until they know whether they have a job next month.

I would propose the following principles to govern the choice of balloting systems:

- The mechanisms by which votes are counted should be transparent. Voters should be able to see that the vote they cast is the vote they intended to cast.

- Vendors should be contractually prohibited from claiming the right to keep secret their source code, the workings of their machines, or their testing procedures, and they should not be allowed to control the circumstances or personnel under which or by whom their machines are tested. (That's like letting the psychic set the controls of the million-dollar test.)

- It should always be possible to conduct a public recount of individual ballots.

Pugh made one other excellent point: paper-based voting systems are mature. "The old system was never perfect," he said, but over time "we've evolved a way of dealing with almost every conceivable problem." Agents have the right to visit every polling station and watch the count, recounts can consider every single spoiled ballot. By contrast, electronic voting presumes everything will go right.

Guys, it's a computer. Next!

January 26, 2007

Vote early, vote often...

It is a truth that ought to be universally acknowledged that the more you know about computer security the less you are in favor of electronic voting. We thought – optimists that we are – that the UK had abandoned the idea after all the reports of glitches from the US and the rather indeterminate results of a couple of small pilots a few years ago. But no: there are plans for further trials for the local elections in May.

It's good news, therefore, that London is to play host to two upcoming events to point out all the reasons why we should be cautious. The first, February 6, is a screening of the HBO movie Hacking Democracy, a sort of documentary thriller. The second, February 8, is a conference bringing together experts from several countries, most prominently Rebecca Mercuri, who was practically the first person to get seriously interested in the security problems surrounding electronic voting. Both events are being sponsored by the Open Rights Group and the Foundation for Information Policy Research, and will be held at University College London. Here is further information and links to reserve seats. Go, if you can. It's free.

Hacking Democracy (a popular download) tells the story of Bev Harris and Andy Stephenson. Harris was minding her own business in Seattle in 2000 when the hanging chad hit the Supreme Court. She began to get interested in researching voting troubles, and then one day found online a copy of the software that runs the voting machines provided by Diebold, one of the two leading manufacturers of such things. (And, by the way, the company whose CEO vowed to deliver Ohio to Bush.) The movie follows this story and beyond, as Harris and Stephenson dumpster-dive, query election officials, and document a steady stream of glitches that all add up to the same point: electronic voting is not secure enough to protect democracy against fraud.

Harris and Stephenson are not, of course, the only people working in this area. Among computer experts such as Mercuri, David Chaum, David Dill, Deirdre Mulligan, Avi Rubin, and Peter Neumann, there's never been any question that there is a giant issue here. Much argument has been spilled over the question of how votes are recorded; less so around the technology used by the voter to choose preferences. One faction – primarily but not solely vendors of electronic voting equipment – sees nothing wrong with Direct Recording Electronic, machines that accept voter input all day and then just spit out tallies. The other group argues that you can't trust a computer to keep accurate counts, and that you have to have some way for voters to check that the vote they thought they cast is the vote that was actually recorded. A number of different schemes have been proposed for this, but the idea that's catching on across the US (and was originally promoted by Mercuri) is adding a printer that spits out a printed ballot the voter can see for verification. That way, if an audit is necessary there is a way to actually conduct one. Otherwise all you get is the machine telling you the same number over again, like a kid who has the correct answer to his math homework but mysteriously can't show you how he worked the problem.
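The audit argument above can be reduced to a toy sketch, assuming a deliberately simplified two-candidate race (this is illustrative Python, not any real voting system's code): a DRE "recount" just replays the machine's internal number, while per-ballot printed records support an independent hand count that can expose a corrupted tally.

```python
def run_election(votes):
    """Record each ballot twice: in the machine's internal tally and as
    a printed, voter-verifiable paper record."""
    machine_tally = {}
    paper_trail = []
    for choice in votes:
        machine_tally[choice] = machine_tally.get(choice, 0) + 1
        paper_trail.append(choice)  # the slip the voter checks
    return machine_tally, paper_trail

def hand_count(paper_trail):
    """An independent recount that never consults the machine."""
    tally = {}
    for ballot in paper_trail:
        tally[ballot] = tally.get(ballot, 0) + 1
    return tally

votes = ["A"] * 60 + ["B"] * 40
machine_tally, paper_trail = run_election(votes)

# Simulate a corrupted tally (a bug, or tampering). Asking the machine
# to "recount" would simply repeat this wrong number; the paper trail
# exposes the discrepancy.
machine_tally["B"] += 15

assert hand_count(paper_trail) == {"A": 60, "B": 40}
assert hand_count(paper_trail) != machine_tally
```

Without the `paper_trail` list, there is simply nothing independent left to count, which is the kid-who-can't-show-his-work problem in miniature.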

This is where it's difficult to understand the appeal of such systems in the UK. Americans may be incredulous – I was – but a British voter goes to the polls and votes on a small square of paper with a stubby, little pencil. Everything is counted by hand. The UK can do this because all elections are very, very simple. There is only one election – local council, Parliament – at a time, and you vote for one of only a few candidates. In the US, where a lemon is the size of an orange, an orange is the size of a grapefruit, and a grapefruit is the size of a soccer ball, elections are complicated and on any given polling day there are a lot of them. The famous California governor's recall that elected Arnold Schwarzenegger, for example, had hundreds of candidates; even a more average election in a less referendum-happy state than California may have a dozen races, each with six to ten candidates. And you know Americans: they want results NOW. Like staying up for two or three days watching the election returns is a bad thing.

It is of course true that election fraud has existed in all eras; you can "lose" a box of marked paper ballots off the back of a truck, or redraw districts according to political allegiance, or "clean" people off the electoral rolls. But those types of fraud are harder to cover up entirely. A flawed count in an electronic machine run by software the vendor allows no one to inspect just vanishes down George Orwell's memory hole.

What I still can't figure out is why politicians are so enthusiastic about all this. Yes, secure machines with well-designed user interfaces might get rid of the problem of "spoiled" and therefore often uncounted ballots. But they can't really believe – can they? – that fancy voting technology will mean we're more likely to elect them? Can it?

December 29, 2006

Resolutions for 2007

A person can dream, right?

- Scrap the UK ID card. Last week's near-buried Strategic Action Plan for the National Identity Scheme (PDF) included two big surprises. First, that the idea of a new, clean, all-in-one National Identity Register is being scrapped in favor of using systems already in use in government departments; second, that foreign residents in the UK will be tapped for their biometrics as early as 2008. The other thing that's new: the bald, uncompromising statement that it is government policy to make the cards compulsory.

No2ID has pointed out the problems with the proposal to repurpose existing systems, chiefly that they were not built to provide the security the legislation promised. The notion is still that everyone will be re-enrolled with a clean, new database record (at one of 69 offices around the country), but we still have no details of what information will be required from each person or how the background checks will be carried out. And yet, this is really the key to the whole plan: the project to conduct background checks on all 60 million people in the UK and record the results. I still prefer my idea from 2005: have the ID card if you want, but lose the database.

The Strategic Action Plan includes the list of purposes of the card; we're told it will prevent illegal immigration and identity fraud, become a key "defence against crime and terrorism", "enhance checks as part of safeguarding the vulnerable", and "improve customer service".

Recall that none of these things was the stated purpose of bringing in an identity card when all this started, back in 2002. Back then, first it was to combat terrorism, then it was an "entitlement card" and the claim was that it would cut benefit fraud. I know only a tiny mind criticizes when plans are adapted to changing circumstances, but don't you usually expect the purpose of the plans to be at least somewhat consistent? (Though this changing intent is characteristic of the history of ID card proposals going back to the World Wars. People in government want identity cards, and try to sell them with the hot-button issue of the day, whatever it is.)

As far as customer service goes, William Heath has published some wonderful notes on the problem of trust in e-government that are pertinent here. In brief: trust is in people, not databases, and users trust only systems they help create. But when did we become customers of government, anyway? Customers have a choice of supplier; we do not.

- Get some real usability into computing. In the last two days, I've had distressed communications from several people whose computers are, despite their reasonable and best efforts, virus-infected or simply non-functional. My favourite recent story, though, was the US Airways telesales guy who claimed that it was impossible to email me a ticket confirmation because according to the information in front of him it had already been sent automatically and bounced back, and they didn't keep a copy. I have to assume their software comes with a sign that says, "Do not press this button again."

Jakob Nielsen published a fun piece this week, a list of top ten movie usability bloopers. Throughout movies, computers only crash when they're supposed to, there is no spam, on-screen messages are always easily readable by the camera, and time travellers have no trouble puzzling out long-dead computer systems. But of course the real reason computers are usable in movies isn't some marketing plot by the computer industry but the same reason William Goldman gave for the weird phenomenon that movie characters can always find parking spaces in front of their destination: it moves the plot along. Though if you want to see the ultimate in hilarious consumer struggles with technology, go back to the 1948 version of Unfaithfully Yours (out on DVD!) starring Rex Harrison as a conductor convinced his wife is having an affair. In one of the funniest scenes in cinema, ever, he tries to follow printed user instructions to record a message on an early gramophone.

- Lose the DRM. As Charlie Demerjian writes, the high-def wars are over: piracy wins. The more hostile the entertainment industries make their products to ordinary use, the greater the motivation to crack the protective locks and mass-distribute the results. It's been reasonably argued that Prohibition in the US paved the way for organized crime to take root because people saw bootleggers as performing a useful public service. Is that the future anyone wants for the Internet?

Losing the DRM might also help with the second item on this list, usability. If Peter Gutmann is to be believed, Vista will take a nosedive in that direction because of embedded copy protection requirements.

- Converge my phones. Please. Preferably so people all use just the one phone number, but all routing is least-cost to both them and me.

- One battery format to rule them all. Wouldn't life be so much easier if there were just one battery size and specification, and to make a bigger battery you'd just snap a bunch of them together?

Happy New Year!

October 6, 2006

A different kind of poll tax

Elections have always had two parts: the election itself, and the dickering beforehand (and occasionally afterwards) over who gets to vote. The latest move in that direction: at the end of September the House of Representatives passed the Federal Election Integrity Act of 2006 (H.R. 4844), which from 2010 will prohibit election officials from giving anyone a ballot who can't present a government-issued photo ID whose issuing requirements included proof of US citizenship. (This lets out driver's licenses, which everyone has, though I guess it would allow passports, which relatively few have.)

These days, there is a third element: specifying the technology that will tabulate the votes. Democracy depends on the voters' being able to believe that what determines the election is the voters' choices rather than the latter two.

The last of these has been written about a great deal in technology circles over the last decade. Few security experts are satisfied with the idea that we should trust computers to do "black box voting" where they count up and just let us know the results. Even fewer security experts are happy with the idea that so many politicians around the world want to embrace: Internet (and mobile phone) voting.

The run-up to this year's mid-term US elections has seen many reports of glitches. My favorite recent report comes from a test in Maryland, where it turned out that the machines under test did not communicate with each other properly when the touch screens were in use. If they don't communicate correctly, voters might be able to vote more than once. Attaching mice to the machines solves the problem – but the incident is exactly the kind of wacky glitch that's familiar from everyday computing life and that can take absurd amounts of time to resolve. Why does anyone think that this is a sensible way to vote? (Internet voting has all the same risks of machine glitches, and then a whole lot more.)

The 2000 US Presidential election isn’t as famous for the removal from the electoral rolls in Florida of a few hundred thousand voters as it is for hanging chad – but it's worth reading or watching what's been published on the subject. Of course, wrangling over who gets to vote didn't start then. Gerrymandering districts, fighting over giving the right to vote to women, slaves, felons, expatriates…

The latest twist in this fine, old activity is the push in the US towards requiring Voter ID. Besides the federal bill mentioned above, a couple of dozen states have passed ID requirements since 2000, though state courts in Missouri, Kentucky, Arizona, and California are already striking them down. The target here seems to be that bogeyman of modern American life, illegal immigrants.

Voter ID isn't as obviously a poll tax. After all, this is just about authenticating voters, right? Every voter a legal voter. But although these bills generally include a requirement to supply a voter ID free of charge to people too poor to pay for one, the supporting documentation isn't free: try getting a free copy of your birth certificate, for example. The combination of the costs involved in that aspect, plus the effort involved in getting the ID, is a burden that falls disproportionately on the usual already disadvantaged groups (the same ones stopped from voting in the past by road blocks, insufficient provision of voting machines in some precincts, and indiscriminate cleaning of the electoral rolls). Effectively, voter ID creates an additional barrier between the voter and the act of voting. It may not be the letter of a poll tax, but it is the spirit of one.

This is in fact the sort of point that opponents are making.

There are plenty of other logistical problems, of course, such as: what about absentee voters? I registered in Ithaca, New York, in 1972. A few months before federal primaries, the Board of Elections there mails me a registration form; returning it gets me absentee ballots for the Democratic primaries and the elections themselves. I've never known whether my vote is truly anonymous; nor whether it's actually counted. I take those things on trust, just as, I suppose, the Board of Elections trusts that the person sending back these papers is not some stray British person who does my signature really well. To insert voter ID into that process would presumably require turning expatriate voters over to, say, the US Embassies, who are familiar with authentication and checking identity documents.

Given that most countries have few such outposts, the barriers to absentee voting would be substantially raised for many expatriates. Granted, we're a small portion of the problem. But there's a direct clash between the trend to embrace remote voting – the entire state of Oregon votes by mail – and the desire to authenticate everyone.

We can fix most of the voting technology problems by requiring voter-verifiable, auditable, paper trails, as Rebecca Mercuri began pushing for all those years ago (and most computer scientists now agree with), and there seem to be substantial moves in that direction as state electors test the electronic equipment and scientists find more and more serious potential problems. Twenty-seven states now have laws requiring paper trails. But how we control who votes is the much more difficult and less talked-about frontier.

September 15, 2006

Mobile key infrastructure

Could mobile phones be the solution to online security problems? Fred Piper posed this question yesterday to a meeting of the UK branch of the Information Systems Security Association (something like half of whom he'd taught at one point or another).

It wasn't that Piper favored the idea. He doesn't, he said, have a mobile phone. He was put off the whole idea long ago by an ad he saw on TV that said the great thing about mobile phones was when you left the office it would go with you. He doesn't want to be that available. (This is, by the way, an old concern. I have a New Yorker cartoon from about the 1970s that shows a worried, harassed-looking businessman walking down the street being followed by a ringing telephone on a very long cord.)

But from his observation, mobile phones (PPT) are quietly sneaking their way into the security chain without anyone's thinking too much or too deeply about it. This trend he calls moving from two-factor authentication to two-channel authentication. You can see the sense of it. You want to do some online banking, so for extra security your bank could, in response to your entering your user name and password, send a code to your previously registered mobile phone, which you then type into the Web site (PDF) as an extra way of proving you're you.
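A two-channel flow of the kind Piper describes might look roughly like the sketch below. Everything here is an assumption for illustration, not any bank's actual system: the `send_sms` stub stands in for a real SMS gateway, and the six-digit format and two-minute validity window are invented policy.

```python
import hmac
import secrets
import time

OTP_TTL = 120     # assumed policy: a code stays valid for two minutes
pending = {}      # user -> (code, time issued)

def send_sms(number, text):
    # Stand-in for a real SMS gateway: the second, out-of-band channel.
    print(f"[SMS to {number}] {text}")

def start_login(user, phone_number):
    # Called after the password check succeeds on the web channel.
    code = f"{secrets.randbelow(10**6):06d}"   # unpredictable 6-digit code
    pending[user] = (code, time.time())
    send_sms(phone_number, f"Your one-time login code is {code}")

def verify(user, submitted):
    # The user types the code back into the web site. Each code is
    # single-use (popped) and time-limited.
    code, issued = pending.pop(user, (None, 0.0))
    if code is None or time.time() - issued > OTP_TTL:
        return False
    return hmac.compare_digest(code, submitted)  # constant-time compare
```

The point of the second channel is that an attacker who has phished the password alone now also needs control of the phone, which is exactly the extra hurdle Piper is describing; it does nothing, of course, against an attacker who controls both channels.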

One reason things are moving in this direction is that even though security is supposed to be getting better in some ways it's actually regressing. For one thing, these days impersonating someone is easier than cracking the technology – so impersonation has become the real threat.

For another thing, there are traditionally three factors that may be used in creating an authentication system: something you know (a PIN or credit card number), something you have (a physical credit, ATM, or access card), or something you are (a personal characteristic such as a biometric). In general, good security requires at least two such factors. That way, if one factor is compromised although the security system is weakened it's not broken altogether.

But, despite the encryption protecting credit card details online, since you are not required to present the physical card, most of the time our online transactions rely for authentication on a single factor: something we know. The upshot is that credit cards no longer are as secure as in the physical world, where they rely on two factors, the physical card and something you know (the PIN or the exact shape of your signature). "The credit card number has become an extended password," he said.

Mobile phones have some obvious advantages. Most people have one, so you're not asking them to buy special readers, as you would have to if you wanted to use a smart card as an authentication token. To the consumer, using a mobile phone for authentication seems like a free lunch. Most people carry their phones everywhere, so you're not asking them to keep track of anything more than they already are. The channel, as in the connection to the mobile phone, is owned by known entities and already secured by them. And mobile phones are intelligent devices (even if the people speaking into them on the Tube are not).

In addition, if you compare the cost of using mobile phones as a secure channel to exchange one-time passwords for specific sessions to the cost of setting up a public key infrastructure to do the same thing, it's clearly cheaper and less unwieldy.

There are some obvious disadvantages, too. There are black holes with no coverage. Increasingly, mobile phones will be multi-network devices. They will be able to communicate over the owned, relatively secure channel – but they will also be able to use insecure channels such as wi-fi. In addition, Bluetooth can add more risks.

Another possibility that occurs to me is that if mobile phones start being used in bank authentication systems we will see war-dialling of mobile phone numbers and phishing attacks on a whole new scale. Yes, such an attack would require far greater investment than today's phishing emails, but the rewards could be worth it. In a different presentation at the same meeting, Mike Maddison, a consultant with Deloitte, presented the results of surveys it's conducted of three industry sectors: financial services, telecommunications and media, and life sciences. All three say the same thing: attacks are becoming more sophisticated and more dangerous, and the teenaged hacker has been largely replaced by organised crime.

Piper was not proposing a "Mobile Key Infrastructure" as a solution. What he was suggesting is that phones are already being used in this way, and security professionals should be thinking about what it means and where the gotchas are going to be. In privacy circles, we talk a lot about mission creep. In computer software we talk about creeping featurism. I don't know if security folks have a standard phrase for what we're talking about here. But it seems to me that if you're going to build a security infrastructure it ought to be because you had a plan, not because a whole bunch of people converged on it.

August 18, 2006

Travel costs

I've never been much for conspiracy theories – in general, I tend to believe that I'm not important enough to be worth conspiring against – but if I were, this week would be a valid time. On Monday, they began lifting the draconian baggage restrictions imposed late last week. On Wednesday, we began seeing stories questioning the plausibility, chemistrywise, of the plot as we have been told it so far. On Thursday, the Guardian published a package of stories outlining the wonderful increased surveillance we have in store. Repeat after me: the timing is just coincidence. It is sheer paranoia to attribute to conspiracy what can be accounted for by coincidence.

One of the things I meant to mention in last week's net.wars but forgot is the US's new rules on passenger data, which require airlines to submit passenger records before the plane takes off instead of, as formerly, afterwards. Ed Hasbrouck has a helpful analysis of these new rules, their problems, and their probable costs. (Has anyone calculated the lost productivity cost of the hours in airport security?) The EU now wants to adopt those rules for its ownself, a sad reversal from the notion that the EU might decline to provide passenger data to a country that has so little privacy protection.

One possibility that's been raised on both sides of the Atlantic is a "trusted passenger" scheme, whereby frequent travelers can register to be fast-tracked through the airport. In a sense, most airports already have the beginnings of such a scheme: frequent flyers. As a Gold Preferred US Airways Dividend Miles member, you use the first-class check-in, and in some airports even speed through security via a special line. Do I love it? You betcha. Do I think it's good security? No. If I were a terrorist wanting to get some of my cellmates onto planes to wreak havoc, I would have them flying all over the place building up a stainless profile until everyone trusted them. Only then would they be ready for activation. Obviously the scheme the security services have in mind will be more sophisticated and involve far more background checking, but the problem of the sleeper remains. It's like people who used to talk about gaming the system by getting a "dope-dealer's haircut" before traveling internationally: short, neat, and business-like. That will be the "terrorist's travel identity": suit, tie, briefcase, laptop, frequent flyer gold status, documented blameless existence.

The UK is also talking about "positive profiling" (although Statewatch notes that no explicit references to this appear in the joint press statement), which I guess is supposed to be more sophisticated than "Let's strip search all the Asian passengers." The now-MP-formerly-one-of-my-favorite-actresses Glenda Jackson has published a fairly cogent set of counter-arguments, though I'll note picayunally that the algorithm for picking passengers to search randomly had better be less clearly visible than just picking every third passenger in the queue. (You must immediately! report anyone who asks to change places with you!) The Home Secretary, John Reid, has said that such profiling will not target racial or religious groups but will be based on biometrics – fingerprints, iris scans. We hope Reid is aware of the years of research into fingerprints (DOC) attempting to prove that you could identify criminality in a fingerprint.

Closer to net.wars' heart is ministers' intention to make the Web hostile to terrorists. For example: by blocking Web sites that incite acts of terrorism or contain instructions on how to make a bomb. Aside from the years of evidence that blocking does not work, it's hard to see how you can get rid of bomb-making instructions, such as they are, without also getting rid of pretty much any Web site devoted to chemistry or safety. Though if you're an arts-educated politician who is proud of knowing little of science, that may seem like a perfectly reasonable thing to do. Show me someone who's curious, who wants to know how things work, who likes to try making things and soldering things, and playing with electrical circuits, and I'll show you a dangerous specimen.

But beyond that, I'll bet professional terrorists do not learn how to make bombs by reading Wikipedia or typing "how make bomb" into Google.

I'm not sure how you make the Web hostile to terrorists without making it hostile to everyone. If you really want to make the Web hostile, the simplest way is simply to limit, by government fiat, the speed of the connection anyone is allowed to buy. Shove us all back to dial-up, and not only does the Web become hostile for terrorists trying to find information on how to make bombs, but you've pretty much solved music/video file-trading, too. Bonus!

We hear quoted a lot, now, the American master-of-all-trades Benjamin Franklin who probably said, "Those who would give up essential liberty to purchase a little temporary safety, deserve neither liberty nor safety." But the liberties people deem essential seem to be narrowing, and no one wants to believe that safety is temporary. No plane full of passengers declines screening, saying, "We'll take our chances."

If there's a conspiracy, I guess that means we're in on it.

August 11, 2006

Any liquids discovered must be removed from the passenger

The most interesting thing I heard anyone say yesterday came from Simon Sole, of Exclusive Analysis, a risk assessment company specializing in the Middle East and Africa, in an interview on CNBC. He said that it's no longer possible to track terrorists by following the money, because terrorism is now so cheap they don't need much: it cost, he said, only $4,000 to crash two Russian planes. That is an awfully long way from the kind of funding we used to hear the IRA had.

It is also clear that the arms race of trying to combat terrorism (by throwing out toothpaste in Grand Forks, North Dakota?) is speeding up. Eighteen years ago, the Lockerbie crash was caused by a bomb in checked baggage; that sparked better baggage screening. Five years ago, planes were turned into bombs by hijackers armed with small knives. The current chaos is being caused by a chemistry plot: carry on ordinary-looking items that aren't weapons in themselves and mix them into something dangerous. Banning people from carrying liquids is, obviously, a lot more complicated than banning knives when you already have metal detectors. Lacking scanning technology sophisticated enough to distinguish dangerous liquids from safe ones, banning specific liquids requires detailed, item-by-item searching. Which is why, presumably, the UK (unlike the US) has banished all hand luggage to the hold: maybe they just don't have the staffing levels to search everything in a reasonable amount of time.

The UK has always hated cabin baggage anyway, and the CAA has long restricted it to 6kg, where the US has always been far more liberal. So yesterday's extremity is the kind of panic response someone might make in a crisis when they already think carrying a laptop is a sign of poor self-control (a BA staffer once said that to me). Ban everything! Yes, even newspapers! Go!

Passengers in and leaving the US can still carry stuff on, just not liquids or gels. You may regard this as reckless or enlightened, depending on your nationality or point of view. Rich pickings in the airport garbage tonight.

The Times this morning speculates that the restrictions on hand luggage may become permanent. When you read a little more closely, they mean the restrictions on liquids, not necessarily all hand luggage. If I ran an airline, I'd be campaigning pretty heavily against yesterday's measures. For one thing, it's going to kill business travel, the industry's most profitable segment. If you can't read or use your laptop in-flight or while waiting in the airport club before departure, you're losing many hours of productivity. Plus, imagine being cabin staff trying to control a plane full of hundreds of bored, hungry, frustrated people, some of them adults.

Security expert Bruce Schneier links to a logical reason why the restrictions should be temporary: blocking a particular method only works until the terrorists switch to something else. Schneier also links to a collection of weapons made in prison under the most unpromising of circumstances with the most limited materials. View those, and you know the truth: it will never be possible to secure everything completely. Anything we do to secure air travel is a balance of trade-offs.

One reason people are so dependent on their carry-on luggage is that airline travel is uncomfortable and generally unpleasant. Passengers carry bottles of water because aircraft air is dry. I carry milk on selected flights because I hate synthetic creamer, and dried fruit and biscuits because recent cutbacks mean on some flights you go hungry. I also carry travel chopsticks (easier to eat with than a plastic fork), magazines to read and discard, headphones with earplugs in them, and I never check anything because I hate waiting for hours for luggage that may be lost, stolen, or tampered with. Better service would render a lot of this unnecessary.

I know these measures are for our own good: we don't want planes falling out of the sky and killing people. (Though we seem perfectly inured to the many thousands more deaths every year from car crashes.) But the extremity of the British measures seems like punishment: if you are so morally depraved as to want to travel by air, you deserve to be thoroughly miserable while doing it. In fact, part of air travel's increasingly lousy service is the cost of security. We lose twice.

I read – somewhere – about a woman of a much earlier generation who expressed sadness at the thought that today's weary wanderers would never know "the pleasure of traveling". We know the pleasure of being somewhere else. But we do not know the pleasure of the process of getting there as they did in a time when you were followed by trunks that were packed by servants, who came along to ensure that you were comfortable and your needs catered to. Of course, you had to be rich to afford all that. I bet it wasn't so much fun for the servants, or for the starving poor stuffed into the ship's hold. Even so, yesterday I thought about that a lot, and with the same kind of sadness.


August 4, 2006

Hard times at the identity corral

If there is one thing we always said about the ID card it's that it was going to be tough to implement. About ten days ago, the Sunday Times revealed how tough: manufacturers are oddly un-eager to bid to make something that a) the Great British Public is likely to hate, and b) they're not sure they can manufacture anyway. That suggests (even more strongly than before) that in planning the ID card the government operated like an American company filing a dodgy patent: if we specify it, they will come.

I sympathize with IBM and the other companies, I really do. Anyone else remember 1996, when nearly all the early stories coming out of the Atlanta Olympics blamed IBM prominently for every logistical snafu? Some really weren't IBM's fault (such as the traffic jams). Given the many failures of UK government IT systems, being associated with the most public, widespread, visible system of all could be real stock market poison.

But there's a secondary aspect to the ID card that I, at least, never considered before. It's akin to the effect often seen in the US when an amendment to the Constitution is proposed. Even if it doesn't get ratified in enough states – as, for example, the Equal Rights Amendment did not – the process of considering it often inspires a wave of related legislation. The fact that ID cards, biometric identifiers, and databases are being planned and thought about at such a high level seems to be giving everyone the idea that identity is the hammer for every nail.

Take, for example, the announcement a couple of days ago of NetIDme, a virtual ID card intended to help kids identify each other online and protect them from the pedophiles our society apparently now believes are lurking behind every electron.

There are a lot of problems with this idea, worthy though the intentions behind it undoubtedly are. For one thing, placing all your trust in an ID scheme like this is a risk in itself. To get one of these IDs, you fill out a form online and then a second one that's sent to your home address and must be counter-signed by a professional person (how like a British passport) and a parent if you're under 18. It sounds to me as though this system would be relatively easy to spoof, even if you assume that no professional person could possibly be a bad actor (no one has, after all, ever fraudulently signed passports). For another, no matter how valid the ID is when it's issued, in the end it's a computer file protected by a password; it is not physically tied to the holder in any way, any more than your Hotmail ID and password are. For a third thing, "the card removes anonymity," the father who designed the card, Alex Hewitt, told The Times. But anonymity can protect children as well as crooks. And you'd only have to infiltrate the system once to note down a long list of targets for later use.

But the real kicker is in NetIDme's privacy policy, in which the fledgling company makes it absolutely explicit that the database of information it will collect to issue IDs is an asset of a business: it may sell the database, the database will be "one of the transferred assets" if the company itself is sold, and you explicitly consent to the transfer of your data "outside of your country" to wherever NetIDme or its affiliates "maintain facilities". Does this sound like child safety to you?

But NetIDme and other systems – fingerprinting kids for school libraries, iris-scanning them for school cafeterias – have the advantage that they can charge for their authentication services. Customers (individuals, schools) have at least some idea of what they're paying for. This is not true for the UK's ID card, whose costs and benefits are still unclear, even after years of dickering over the legislation. A couple of weeks ago, it became known that as of October 5 British passports will cost £66, a 57 percent increase that No2ID attributes in part to the costs of infrastructure needed for ID cards but not for passports. But if you believe the LSE's estimates, we're not done yet. The most recent government estimates are that an ID card/passport will cost £93, up from £85 at the time of the LSE report. So, a little quick math: the LSE report also guessed that entry into the national register would cost £35 to £40 with a small additional charge for a card, so revising that gives us a current estimate of £38.15 to £43.60 for registration alone. If no one can be found to make the cards but the government tries to forge ahead with the database anyway, it will be an awfully hard sell. "Pay us £40 to give us your data, which we will keep without any very clear idea of what we're going to do with it, and in return maybe someday we'll sell you a biometric card whose benefits we don't know yet." If they can sell that, they may have a future in Alaska selling ice boxes to Eskimos.
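For the curious, that quick math is easy to reproduce. A minimal sketch, assuming a straight proportional uplift from the £85-era LSE figures to today's £93 estimate (the column's own £38.15 to £43.60 appears to round a flat 9 percent rise instead, so a strict proportional scaling lands a few pence higher):

```python
# Back-of-envelope check: scale the LSE's registration estimate by the
# same ratio as the overall ID card/passport price rise.
# Hypothetical method; the column's figures use slightly different rounding.
lse_low, lse_high = 35.0, 40.0    # LSE estimate for register entry (GBP)
old_cost, new_cost = 85.0, 93.0   # card/passport estimate, then vs. now

uplift = new_cost / old_cost      # about a 9.4 percent rise
low = round(lse_low * uplift, 2)
high = round(lse_high * uplift, 2)
print(low, high)                  # 38.29 43.76
```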


June 16, 2006

Security vs security, part II

It's funny. Half the time we hear that the security of the nation depends on the security of its networks. The other half of the time we're being told by governments that if the networks are too secure the security of the nation is at risk.

This schizophrenia was on display this week in a ruling by the US Court of Appeals in the District of Columbia, which ruled in favor of the Federal Communications Commission: yes, the FCC can extend the Communications Assistance for Law Enforcement Act to VoIP providers. Oh, yeah, and other people providing broadband Internet access, like universities.

Simultaneously, a clutch of experts – to wit, Steve Bellovin (Columbia University), Matt Blaze (University of Pennsylvania), Ernest Brickell (Intel), Clinton Brooks (NSA, retired), Vinton Cerf (Google), Whitfield Diffie (Sun), Susan Landau (Sun), Jon Peterson (NeuStar), and John Treichler (Applied Signal Technology) – released a paper explaining why requiring voice over IP to accommodate wiretapping is dangerous. Not all of these folks are familiar to me, but the ones who are could hardly be more distinguished, and it seems to me when experts on security, VoIP, Internet protocols, and cryptography all get together to tell you there's a problem, you (as in the FCC) should listen. Together, this week they released Security Implications of Applying the Communications Assistance to Law Enforcement Act to Voice over IP (PDF), which carefully documents the problems.

First of all – and they of course aren't the only ones to have noticed this – the Internet is not your father's PSTN. On the public switched telephone network, you have fixed endpoints, you have centralized control, and you have a single, continuously open circuit. The whole point of VoIP is that you take advantage of packet switching to turn voice calls into streams of data that are more or less indistinguishable from all the other streams of data whose packets are flying alongside. Yes, many VoIP services give you phone numbers that sound the same as geographically fixed numbers – but the whole point is that neither caller nor receiver need to wait by the phone. The phone is where your laptop is. Or, possibly, where your secretary's laptop is. Or you're using Skype instead of Vonage because your contact also uses Skype.

Nonetheless, as the report notes, the apparent simplicity of VoIP, its design that makes it look as though it functions the same as old-style telephones, means that people wrongly conclude that anything you can do on the PSTN you should be able to do just as easily with VoIP.

But the real problems lie in security. There's no getting round the fact that when you make a hole in something you've made a hole through which stuff leaks out. And where in the PSTN world you had just a few huge service providers and a single wire you could follow along and place your wiretap wherever was most secure, in the VoIP world you have dozens of small providers, and an unpredictable selection of switching and routing equipment. You can't be sure any wiretap you insert will be physically controlled by the VoIP provider, which may be one of dozens of small operators. Your targets can create new identities at no cost faster than you can say "pre-pay mobile phone". You can't be sure the signals you intercept can be securely transported to Wiretap Central. The smart terminals we use have a better chance of detecting the wiretap – which is both good and bad, in terms of civil liberties. Under US law, you're supposed to tap only the communications pertaining to the court authorization; difficult to do because of the foregoing. And then there's the hole itself which, as the IETF observed in 2000, could be exploited by someone else. Whose access to your communications do you fear more: government, crook, hacker, credit reporting agency, boss, child, parent, or spouse? Fun, isn't it?

And then there's the money. American ISPs can look forward to the cost of CALEA with all the enthusiasm that European ISPs had for data retention. Here, the government helpfully provided its own data: a VoIP provider paid $100,000 to a contractor to develop its CALEA solution, plus a monthly fee of $14,000 to $15,000 and, on top of that, $2,000 for each intercept.
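Those figures add up quickly. A hypothetical sketch of a small provider's first-year bill, using only the numbers quoted above (taking the midpoint of the monthly fee is my assumption):

```python
# First-year CALEA cost for one small VoIP provider, per the
# government's own figures quoted in the column.
setup = 100_000        # one-time contractor fee to develop the solution (USD)
monthly = 14_500       # midpoint of the $14,000-$15,000 monthly fee
per_intercept = 2_000  # additional charge for each intercept

def first_year_cost(intercepts: int) -> int:
    """Setup fee plus twelve months of service plus per-intercept charges."""
    return setup + 12 * monthly + intercepts * per_intercept

print(first_year_cost(0))   # 274000
```

Even a provider that is never served with a single intercept order is out $274,000 in the first year, which goes some way to explaining the consequences that follow.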

Two obvious consequences. First: VoIP will be primarily sold by companies overseas into the US because in general the first reason people buy VoIP is that it's cheap. Second: real-time communications will migrate to things that look a lot less like phone calls. The report mentions massively multi-player online role-playing games and instant messaging. Why shouldn't criminals adopt pink princess avatars and kill a few dragons while they plot?

It seems clear that all of this isn't any way to run a wiretap program, though even the report (two of whose authors, Landau and Diffie, have written a history of wiretapping) allows that governments have a legitimate need to wiretap, within limits. But the last paragraph sounds like a pretty good way to write a science fiction novel. In fact, something like the opening scenes of Vernor Vinge's new Rainbows End.
