
December 23, 2022

An inherently adverse environment

Earlier this year, I wrote a short story/provocation for the recent book 22 Ideas About the Future. My story imagined a future in which the British central government had undermined local authorities by allowing local communities to opt out and contract for their own services. One of the consequences was to carve London up into tiny neighborhoods, each with its own rules and sponsorships, making it difficult to plot a joined-up route across town. Like an idiot, I entirely overlooked the role facial recognition would play in such a scenario. Community blocs like these, some openly set up to exclude unwanted diversity, would absolutely grab at facial recognition to repel - or charge - unwelcome outsiders.

Most discussion of facial recognition to date has focused on privacy: that it becomes impossible to move around public spaces without being identified and tracked. We haven't thought enough about the potential use of facial recognition to underpin a broad permission-based society in which our presence in any space can be detected and terminated at any time. In such a society, we are all migrants.

That particular unwanted dystopian future is upon us. This week, we learned that a New Jersey lawyer was blocked from attending the Radio City Music Hall Christmas show with her daughter because the venue's facial recognition system identified her as a member of a law firm involved in litigation against Radio City's owner, MSG Entertainment. Security denied her entry, despite her protests that she was not involved in the litigation. Whether she was or wasn't shouldn't really matter; she had committed no crime, she was causing no disturbance, she was granted no due process, and she had no opportunity for redress.

Soon after she told her story, a second instance emerged: a male lawyer who was blocked from attending a New York Knicks basketball game at Madison Square Garden. Then, quickly, a third: a woman and her husband were removed from their seats at a Brandi Carlile concert, also at Madison Square Garden.

MSG later explained that litigation creates "an inherently adverse environment". I read that this way: the company has chosen to use developing technology in an abusive display of power. In other words, MSG is treating its venues as if they were the new-style airports Edward Hasbrouck has detailed, also covered here a few weeks back. In its original context, airport thinking is bad enough; expanded to the world's many privately-owned public venues, the potential is terrifying.

Early adopters of sharing data to exclude bad people talked about barring known shoplifters from chains of pubs or supermarkets, or catching and punishing criminals much more quickly. The MSG story means the mission has crept from "terrorist" to "don't like their employer" at unprecedented speed.

The right to navigate the world without interference is one that privileged folks have taken for granted. With some exceptions: in England, the right to ramble all parts of the countryside took more than a century to codify into law. To an American, exclusion from a public venue *feels* like it should be a Constitutional issue - but of course it's not, since the affected venues are owned by a private company. In the reactions I've seen to the MSG stories, people have called for a ban on live facial recognition. By itself that's probably not going to be enough, now that this compost heap of worms has been opened; we are going to need legislation to underpin the right to assemble in privately-owned public spaces. Such a right sort of exists already in the conditions baked into many relevant local licensing laws that require venue operators to be the real-world equivalent of common carriers in telecommunications, who are not allowed to pick and choose whose data they will carry.

In a fourth MSG incident, a lawyer who is suing Madison Square Garden for barring him from entering tricked the cameras at the MSG-owned Beacon Theater by disguising himself with a beard and a baseball cap. He didn't exactly need to, as his firm had won a restraining order requiring MSG to let its lawyers into its venues (the case continues).

In that case, MSG's lawyer told the court barring opposition lawyers was essential to protect the company: "It's not feasible for any entertainment venue to operate any other way."

Since when? At the New York Times, Kashmir Hill explains that the company adopted this policy last summer and feeds the photos displayed on law firms' websites into its facial recognition system to look for matches. But really the answer can only be: since the technology became available to enforce such a ban. It is a clear case where the availability of a technology leads to worse behavior on the part of its owner.

In 1996, the software engineer turned essayist and novelist Ellen Ullman wrote about exactly this with respect to databases: they infect their owners with the desire to use their new capabilities. In one of her examples, a man suddenly realized he could monitor what his long-trusted secretary did all day. In another, a system to help ensure AIDS patients were getting all the benefits they were entitled to slowly morphed into a system for checking entitlement. In the case of facial recognition, its availability infinitely extends the British Tories' concept of the hostile environment.


Illustrations: The Rockettes performing in 2008 (via skividal at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 2, 2022

Hearing loss

Some technologies fail because they aren't worth the trouble (3D movies). Some fail because the necessary infrastructure and underlying technologies aren't good enough yet (AI in the 1980s, pen computing in the 1990s). Some fail because the world goes another, simpler, more readily available way (Open Systems Interconnection). Some fail because they are beset with fraud (the fate that appears to be unfolding with respect to cryptocurrencies). And some fail even though they work as advertised and people want them and use them, because they make no money for their inventors and manufacturers to sustain their development.

The latter appears to be the situation with smart speakers, which in 2015 were going to take over the world, and today, in 2022, are installed in 75% of US homes. Despite this apparent success, they are losing money even for market leaders Amazon (third) and Google (second), as Business Insider reported this week. Amazon's Worldwide Digital division, which includes Prime Video as well as Echo smart speakers and Alexa voice technology, lost $3 billion in the first quarter of this year alone, primarily due to Alexa and other devices. The division will now be the biggest target for the layoffs the company announced last week.

The gist: they thought smart speakers would be like razors or inkjet printers, where you sell the hardware at or below cost and reap a steady income stream from selling razor blades or ink cartridges. Amazon thought people would buy their smart speakers, see something they liked, and order the speaker to put through the purchase. Instead, judging from the small sample I have observed personally, people use their smart speakers as timers, radios, and enhanced remote controls, and occasionally to get a quick answer from Wikipedia. And that's it. The friends I watched order their smart speaker to turn on the basement lights and manage their shopping list have, as far as I could tell on a recent visit, developed no new uses for their voice assistant in three years of being locked up at home with it.

The system has developed a new feature, though. It now routinely puts the shopping list items on the wrong shopping list. They don't know why.

In raising this topic at The Overspill, Charles Arthur referred back to a 2016 Wired article summarizing venture capitalist Mary Meeker's assessment in her annual Internet Trends report that voice was going to take over the world and the iPhone had peaked. In slides 115-133, Meeker outlined her argument: improving accuracy would be a game-changer.

Even without looking at recent figures, it's clear voice hasn't taken over. People do use speech when their hands are occupied, especially when driving or when the alternative is to type painfully into their smartphone - but keyboards still populate everyone's desks, and the only people I know who use speech for data entry are people for whom typing is exceptionally difficult.

One unforeseen deterrent may be that privacy emerged as a larger issue than early prognosticators expected. Repeated stories have raised awareness that the price of being able to use a voice assistant at will is that microphones in your home listen to everything you say, waiting for their cue to send your speech to a distant server to parse. Rising consciousness of the power of the big technology companies has made more of us aware that smart speakers are designed more to fulfill their manufacturers' desires to intermediate and monetize our lives than to help us.

The notion that consumers would want to use Amazon's Echo for shopping appears seriously deluded with hindsight. Even the most dedicated voice users I know want to see what they're buying. Years ago, I thought that as TV and the Internet converged we'd see a form of interactive product placement in which it would be possible to click to buy a copy of the shirt a football player was wearing during a game or the bed you liked in a sitcom. Obviously, this hasn't happened; instead a lot of TV has moved to streaming services without ads, and interactive broadcast TV is not a thing. But in *that* integrated world voice-activated shopping would work quite well, as in "Buy me that bed at the lowest price you can find", or "Send my brother the closest copy you can find of Novak Djokovic's dark red sweatshirt, size large, as soon as possible, all cotton if possible."

But that is not our world, and in our world we have to make those links and look up the details for ourselves. So voice does not work for shopping beyond adding items to lists. And if that doesn't work, what other options are there? As Ron Amadeo writes at Ars Technica, the queries for which Alexa is frequently used can't be monetized, and customers showed little interest in using Alexa to interact with other companies such as Uber or Domino's Pizza. And even Google, which is also cutting investment in its voice assistant, can't risk alienating consumers by using its smart speaker to play ads. Only Apple appears unaffected.

"If you build it, they will come," has been the driving motto of a lot of technological development over the last 30 years. In this case, they built it, they came, and almost everyone lost money. At what point do they turn the servers off?


Illustrations: Amazon Echo Dot.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter and/or Mastodon.

October 28, 2022

MAGICAL, Part 1

"What's that for?" I asked. The question referred to a large screen in front of me, with my newly-captured photograph in the bottom corner. Where was the camera? In the picture, I am trying to spot it.

The British Airways gate attendant at Chicago's O'Hare airport tapped the screen and a big green checkmark appeared.

"Customs." That was all the explanation she offered. It had all happened so fast there was no opportunity to object.

Behind me was an unforgiving line of people waiting to board. Was this a good time to stop to ask:

- What is the specific purpose of collecting my image?

- What legal basis do you have for collecting it?

- Who will be storing the data?

- How long will they keep it?

- Who will they share it with?

- Who is the vendor that makes this system and what are its capabilities?

It was not.

I boarded, tamely, rather than argue with a gate attendant who certainly didn't make the decision to install the system and was unlikely to know much about its details. Plus, we were in the US, where the principles of the data protection law don't really apply - and even if they did, they wouldn't apply at the border - even, it appears, in Illinois, the only US state to have a biometric privacy law.

I *did* know that US Customs and Border Protection had begun trialing facial recognition in selected airports in 2017. Long-time readers may remember a net.wars report from the 2013 Biometrics Conference about the MAGICAL [sic] airport, circa 2020, through which passengers flow unimpeded because their face unlocks all. Unless, of course, they're "bad people" who need to be kept out.

I think I even knew - because of Edward Hasbrouck's indefatigable reporting on travel privacy - that at various airports airlines are experimenting with biometric boarding. This process does away entirely with boarding cards; the airline captures biometrics at check-in and uses them to entirely automate the "boarding process" (a bit of airline-speak the late comedian George Carlin loved to mock). The linked explanation claims this will be faster because you can have four! automated lanes instead of one human-operated lane. (Presumably then the four lanes merge into a giant pile-up in the single-lane jetway.)

It was nonetheless startling to be confronted with it in person - and with no warning. CBP proposed taking non-US citizens' images in 2020, when none of us were flying, and Hasbrouck wrote earlier this year about the system's use in Seattle. There was, he complained, no signage to explain the system despite the legal requirement to do so, and the airport's website incorrectly claimed that Congress mandated capturing biometrics to identify all arriving and departing international travelers.

According to Biometric Update, as of last February, 32 airports were using facial recognition on departure, and 199 airports were using facial recognition on arrival. In total, 48 million people had their biometrics taken and processed in this way in fiscal 2021. Since the program began in 2018, the number of alleged impostors caught: 46.

"Protecting our nation, one face at a time," CBP calls it.

On its website, British Airways says passengers always have the ability to opt out except where biometrics are required by law. As noted, it all happened too fast. I saw no indication on the ground that opting out was possible, even though notice is required under the Paperwork Reduction Act (1980).

As Hasbrouck says, though, travelers, especially international travelers and even more so international travelers outside their home countries, go through so many procedures at airports that they have little way to know which are required by law and which are optional, and arguing may get you grounded.

He also warns that the system I encountered is only the beginning. "There is an explicit intention worldwide that's already decided that this is the new normal. All new airports will be designed and built with facial recognition built into them for all airlines. It means that those who opt out will find it more and more difficult and more and more delaying."

Hasbrouck, who is probably the world's leading expert on travel privacy, sees this development as dangerous. Largely, he says, it's happening unopposed because the government's desire for increased surveillance serves the airlines' own desire to cut costs through automating their business processes - which include herding travelers onto planes.

"The integration of government and business is the under-noticed aspect of this. US airports are public entities but operate with the thinking of for-profit entities - state power merged with the profit motive. State *monopoly* power merged with the profit motive. Automation is the really problematic piece of this. Once the infrastructure is built it's hard for airline to decide to do the right thing." That would be the "right thing" in the sense of resisting the trend toward "pre-crime" prediction.

"The airline has an interest in implying to you that it's required by government because it pressures people into a business process automation that the airline wants to save them money and implicitly put the blame on the government for that," he says. "They don't want to say 'we're forcing you into this privacy-invasive surveillance technology'."


Illustrations: Edward Hasbrouck in 2017.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 14, 2022

Signaled

A while back, I was trying to get a friend to install the encrypted messaging app Signal.

"Oh, I don't want another messaging app."

Well, I said, it's not *another* messaging app. Use it to replace the app you currently use for texting (SMS) and it will just sit there showing you your text messages. But whenever you encounter another Signal user those messages will be encrypted. People sometimes accepted this; more often, they wanted to know why I couldn't just use WhatsApp, like their school group, tennis club, other friends... (Well, see, it may be encrypted, but it's still owned by the Facebook currently known as Meta.)

This week I learned that soon I won't be able to make this argument any more, because...Signal will be dropping SMS support for Android users sometime in the next few months. I don't love either the plan or the vagueness of its timing. (For reasons I don't entirely understand, this doesn't apply to the nether world of iPhone users.)

The company's blog posting lists several reasons. Apparently the app's SMS integration is confusing to many users, who are unclear about when their messages are encrypted and when they're not. Whether this is true is being disputed in the related forum thread discussing this decision. On the bah! side is "even my grandmother can use it" (snarl) and on the other the valid evidence of the many questions users have posted about this over the years in the support forums. Maybe solvable with some user interface tweaks?

Second, the pricing differential between texting and Signal messages, which transit the Internet as data, has reversed since Signal began. Where data plans used to be rare and expensive, and SMS texts cheap or bundled with phone service, today data plans are common, and SMS has become expensive in some parts of the world. There, the confusion between SMS and Signal messaging really matters. I can't argue with that except to note that equally it's a problem that does *not* apply in many countries. Again, perhaps solvable with user settings...but it's fair enough to say that supporting this may not be the best use of Signal's limited resources. I don't have insight into the distribution of Signal's global user base, and users in other countries are likely to be facing bigger risks than I am.

Third is sort of a purity argument: it's inherently contradictory to include an insecure protocol in an app intended to protect security and privacy. "Inconsistent with our values." The forum discussion is split on this. While many agree with this position, many of the rest of us live in a world that includes lots of people who do not use, and do not want to use (see above), Signal, and it is vastly more convenient to have a single messaging app that handles both.

Signal may not like to stress this aspect, but one problem with trusting an encrypted messaging app in the first place is that the privacy and security are only as good as your correspondents' intentions. Maybe all your contacts set their messages to disappear after a week, password-protect and encrypt their message database, and assign every contact an alias. Or, maybe they don't password-protect anything, never delete anything, and mirror the device to three other computers, all of which they leave lying around in public. You cannot know for sure. So a certain level of insecurity is baked into the most secure installations no matter what you do. I don't see SMS as the biggest problem here.

I think this decision is going to pose real, practical problems for Signal in terms of retaining and growing its user base; it surely does not want the app's presence on a phone to become governments' watch-this-person flag. At least in Western countries, SMS is inescapable. It would be better if two-factor authentication used a less hackable alternative, but at the moment SMS is the widespread vector of corporate choice. We consumers don't actually get to choose to dump it until they do. A switch is apparently happening very slowly behind the scenes in the form of RCS, which I don't even know if my aged phone supports. In the meantime, Signal becomes the "another messaging app" we began with - and historically, diminished convenience has been one of the biggest blocks to widespread adoption of privacy-enhancing technologies.

Signal's decision raises the possibility that we are heading into a time when texting people becomes far more difficult. It may become like the early days, when you could only text people using the same phone company as you - for example, Apple has yet to adopt RCS. Every new contact will have to start with a negotiation by email or phone: how do I text you? In *addition* to everything else.

The Internet isn't splintering (yet); email may be despised, but every service remains interoperable. But the mobile world looks like breaking into silos. I have family members who don't understand why they can't send me iMessages or FaceTime me (no iPhone?), and friends I can't message unless I want to adopt WhatsApp or Telegram (groan - another messaging app?).

Signal may well be right that this move is a win for security, privacy, and user clarity. But for communication? In *this* house, it's a frustrating regression.

Illustrations: Midjourney's rendering of "railway signal tracks crossing".

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 26, 2022

Zero day

Years ago, an alarmist book about cybersecurity threats concluded with the suggestion that attackers' expertise at planting backdoors could result in a "zero day" when, at an attacker-specified time, all the world's computers could be shut down simultaneously.

That never seemed likely.

But if you *do* want to take down all of the computers in an area, the easiest way is to cut off the electricity supply. Which, if the worst predictions for this year's winter in Britain come true, is what could happen, no attacker required. All you need is a government that insists, despite expert warnings, that there will be plenty of very expensive energy to go round for those who can afford it - even while the BBC reports that in some areas of West London the power grid is so stretched by data centers' insatiable power demands that new homes can't be built.

Lack of electrical power is something even those rich enough not to have to choose between eating and heating can't ignore - particularly because they're also most likely to be dependent on broadband for remote working. But besides that: no power means no Internet: no way for kids to do their schoolwork or adults to access government sites to apply for whatever grants become available. Exponentially increasing energy prices already threaten small businesses, charities, care homes, child care centers, schools, food banks, hospitals, and libraries, as well as households. It won't be much consolation if we all wind up "saving" money because there's no power available to pay for.

***

In an earlier, analog era, parents taking innocent nude photos of their kids were sometimes prosecuted when they tried to have them developed at the local photo shop. In the 2021 equivalent, Kashmir Hill reports at the New York Times, Google flagged pictures two fathers took of their young sons' genitalia in order to help doctors diagnose an infection, labeled them child sexual abuse material, ordered them deleted, suspended the fathers' accounts, and reported them to the police.

It's not surprising that Google has automated content moderation systems dedicated to identifying abuse images, which are illegal almost everywhere. What *has* taken people aback, however, is these fathers' complete inability to obtain redress, even after the police exonerated them. Most of us would expect Google to have a "human in the loop" review process to whom someone who's been wrongfully accused can appeal.

In reality, though, the result is more likely to be like what happened in the so-called Twitter joke trial. In that case, a frustrated would-be airline passenger trying to visit his girlfriend posted on Twitter that he might blow up the airport if he still couldn't get a flight. Everyone who saw the tweet, from the airport's security staff to police, agreed he was harmless - and yet no one was willing to be the person who took the risk of signing off on it, just in case. With suspected child abuse, the same applies: no one wants to risk being the person who wrongly signs off on dropping the accusations. Far easier to trust the machine, and if it sets off a cascade of referrals that cost an innocent parent their child (as well as all their back Gmail, contacts list, and personal data), well...it's not your fault. This goes double for a company like Google, whose bottom line depends on providing as little customer service as possible.

***

Even though all around us are stories about the risks of trusting computers not to fail, last week saw a Twitter request for the loan of a child. For the purpose of: having it run in front of a Tesla operating on Full Self-Driving to prove the car would stop. At the Guardian, Arwa Mahdawi writes that said poster did find a volunteer, albeit with this caveat: "They just have to convince their wife." Apparently several wives were duly persuaded, and the children got to experience life as crash test dummies - er, beta testers. Fortunately, none were harmed.

Reportedly, Google/YouTube is acting promptly to get the resulting videos taken down, though it is not reporting the parents, who, as a friend quipped, are apparently unaware that the Darwin Award isn't meant to be aspirational.

***

The last five years of building pattern recognition systems - facial recognition, social scoring, and so on - have seen a lot of evidence-based pushback against claims that these systems are fairer because they eliminate human bias. In fact they codify it because they are trained on data with the historical effects of those biases already baked in.

This week saw a disturbing watershed: bias has become a selling point. An SFGate story by Joshua Bote (spotted at BoingBoing) highlights Sanas, a Bay Area startup that offers software intended to "whiten" call center workers' voices by altering their accents into "standard American English". Having them adopt obviously fake English pseudonyms apparently wasn't enough.

Such a system, as Bote points out, will reinforce existing biases. If it works, it's perfectly designed to expand prejudice and entitlement along the lines of "Why should I have to deal with anyone whose voice or demeanor I don't like?" It's worse than virtual reality, which is at least openly a fictional simulation; it puts a layer of fake over the real world and makes us all less tolerant. This idea needs to fail.


Illustrations: One of the Tesla crashes investigated in New York Times Presents, discussed here in June.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 15, 2022

Online harms

An unexpected bonus of the gradual-then-sudden disappearance of Boris Johnson's government, followed by his own resignation, is that the Online Safety bill is being delayed until after Parliament's September return with a new prime minister and, presumably, cabinet.

This is a bill almost no one likes - child safety campaigners think it doesn't go far enough; digital and human rights campaigners - Big Brother Watch, Article 19, Electronic Frontier Foundation, Open Rights Group, Liberty, a coalition of 16 organizations (PDF) - oppose it because it threatens freedom of expression and privacy while failing to tackle genuine harms such as the platforms' business model; and technical and legal folks because it's largely unworkable.

The DCMS Parliamentary committee sees it as wrongly conceived. The UK Independent Reviewer of Terrorism Legislation, Jonathan Hall QC, says it's muddled and confused. Index on Censorship calls it fundamentally broken, and The Economist says it should be scrapped. The minister whose job it has been to defend it, Nadine Dorries (C-Mid Bedfordshire), remains in place at the Department for Culture, Media, and Sport, but her insistence that resigning-in-disgrace Johnson was brought down by a coup probably won't do her any favors in the incoming everything-that-goes-wrong-was-Johnson's-fault era.

In Wednesday's Parliamentary debate on the bill, the most interesting speaker was Kirsty Blackman (SNP-Aberdeen North), whose Internet usage began 30 years ago, when she was younger than her children are now. Among her passionate pleas that her children should be protected from some of the high-risk encounters she experienced was this: "Every person, nearly, that I have encountered talking about this bill who's had any say over it, who continues to have any say, doesn't understand how children actually use the Internet." She called this the bill's biggest failing. "They don't understand the massive benefits of the Internet to children."

This point has long been stressed by academic researchers Sonia Livingstone and Andy Phippen, both of whom actually do talk to children. "If the only horse in town is the Online Safety bill, nothing's going to change," Phippen said at last week's Gikii, noting that Dorries' recent cringeworthy TikTok "rap" promoting the bill focused on platform liability. "The liability can't be only on one stakeholder." His suggestion: a multi-pronged harm reduction approach to online safety.

UK politicians have publicly wished to make "Britain the safest place in the world to be online" all the way back to Tony Blair's 1997-2007 government. It's a meaningless phrase. Online safety - however you define "safety" - is like public health; you need it everywhere to have it anywhere.

Along those lines, "Where were the regulators?" Paul Krugman asked in the New York Times this week, as the cryptocurrency crash continues. The cryptocurrency market, which is now down to $1 trillion from its peak of $3 trillion, is recapitulating all the reasons why we regulate the financial sector. Given the ongoing collapses, it may yet fully vaporize. Krugman's take: "It evolved into a sort of postmodern pyramid scheme". The crash, he suggests, may provide the last, best opportunity to regulate it.

The wild rise of "crypto" - and the now-defunct Theranos - was partly fueled by high-trust individuals who boosted the apparent trustworthiness of dubious claims. The same, we learned this week was true of Uber 2014-2017, Based on the Uber files,124,000 documents provided by whistleblower Mark MacGann, a lobbyist for Uber 2014-2016, the Guardian exposes the falsity of Uber's claims that its gig economy jobs were good for drivers.

The most startling story - which transport industry expert Hubert Horan had already published in 2019 - is the news that the company paid academic economists six-figure sums to produce reports it could use to lobby governments to change the laws it disliked. Other things we knew about - for example, Greyball, the company's technology for denying regulators and police rides so they couldn't document Uber's regulatory violations, and Uber staff's abuse of customer data - are now shown to have been more widely used than we knew. Further appalling behavior, such as that of former CEO Travis Kalanick, who was ousted in 2017, has been thoroughly documented in the 2019 book, Super Pumped, by Mike Isaac, and the 2022 TV series based on it, Super Pumped.

But those scandals - and Thursday's revelation that 559 passengers are suing the company for failing to protect them from rape and assault by drivers - aren't why Horan described Uber as a regulatory failure in 2019. For years, he has been indefatigably charting Uber's eternal unprofitability. In his latest, he notes that Uber has lost over $20 billion since 2015 while cutting driver compensation by 40%. The company's share price today is less than half its 2019 IPO price of $45 - and a third of its 2021 peak of $60. The "misleading investors" kind of regulatory failure.

So, returning to the Online Safety bill, if you undermine existing rights and increase the large platforms' power by devising requirements that small sites can't meet *and* do nothing to rein in the platforms' underlying business model...the regulatory failure is built in. This pause is a chance to rethink.

Illustrations: Boris Johnson on his bike (European Cyclists Federation via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 24, 2022

Creepiness at scale

This week, Amazon announced a prospective new feature for its Alexa "smart" speakers: the ability to mimic anyone's voice from less than one minute of recording. Amazon is, incredibly, billing this as the chance to memorialize a dead loved one as a digital assistant.

As someone commented on Twitter, technology companies are not *supposed* to make ideas from science fiction dystopias into reality. As so often, Philip K. Dick got here first; in his 1969 novel Ubik, a combination of psychic powers and cryonics lets (rich) people visit and consult their dead, whose half-life fades with each contact.

Amazon can call this preserving "memories", but at The Overspill Charles Arthur is likely closer to reality, calling it "deepfake for voice". Except that where deepfakes emerged from a Reddit group and require some technical effort, Amazon's functionality will be right there in millions of people's homes, planted by one of the world's largest technology companies. Questions abound: who gets access to the data and models, and will Amazon link it to its Ring doorbell network and thousands of partnerships with law enforcement?

The answers, like the service, are probably years off. The lawsuits may not be.

This piece began as some notes on the company that so far has been the technology industry's creepiest: the facial image database company Clearview AI. Clearview, which has built its multibillion-item database by scraping images off social media and other publicly accessible sites, has fallen foul of regulators in the UK, Australia, France, Italy, Canada, and Illinois. In a world full of intrusive companies collecting mass amounts of personal data about all of us, Clearview AI still stands out.

It has few, if any, defenders outside its own offices. For one thing, unlike Facebook or Google, it offers us - citizens, consumers - nothing in return for our data, which it appropriates wholesale. It is the ultimate two-sided market in which we are nothing but salable data points. It came to public notice in January 2020, when Kashmir Hill exposed its existence and asked if this was the company that was going to end privacy.

Clearview, which bills itself as "building a secure world one face at a time", defends itself against both data protection and copyright laws by arguing that scraping and storing billions of images from what law enforcement likes to call "open source intelligence" is legitimate because the images are posted in public. Even if that were how data protection laws work, it's not how copyright works! Both Twitter and Facebook told Clearview to stop scraping their sites shortly after Hill's article appeared in 2020, as did Google, LinkedIn, and YouTube. It's not clear if the company stopped or deleted any of the data.

Among regulators, Canada was first, starting federal and provincial investigations in June 2020, when Clearview claimed its database held 3 billion images. In February 2021, the Canadian Privacy Commissioner, Daniel Therrien, issued a public warning that the company could not use facial images of Canadians without their explicit consent. Clearview, which had been selling its service to the Royal Canadian Mounted Police among dozens of others, opted to leave the country and mount a court challenge - but not to delete images of Canadians, as Therrien had requested.

In December 2021, the French data protection authority, CNIL, ordered Clearview to delete all the data it holds relating to French citizens within two months, and threatened further sanctions and administrative fines if the company failed to comply within that time.

In March 2022, with Clearview openly targeting 100 billion images and commercial users, Italian DPA Garante per la protezione dei dati personali fined Clearview €20 million, ordered it to delete any data it holds on Italians, and banned it from further processing of Italian citizens' biometrics.

In May 2022, the UK's Information Commissioner's Office fined the company £7.5 million and ordered it to delete the UK data it holds.

All these cases are based on GDPR and find the same complaints: Clearview has no legal basis for holding the data, and it is in breach of data retention rules and subjects' rights. Clearview appears not to care, taking the view that it is not subject to GDPR because it's not a European company.

It couldn't make that argument to the state of Illinois. In early May 2022, Clearview and the American Civil Liberties Union settled a court action filed in May 2020 under Illinois' Biometric Information Privacy Act. Result: Clearview has accepted a ban on selling its services or offering them for free to most private companies *nationwide* and a ban on selling access to its database to any private or state or local government entity, including law enforcement, in Illinois for five years. Clearview has also developed an opt-out form for Illinois residents to use to withdraw their photos from searches, and will continue to try to filter out photographs taken in or uploaded from Illinois. On its website, Clearview paints all this as a win.

Eleven years ago, Google's then-CEO, Eric Schmidt, thought automating facial recognition was too creepy to pursue and synthesizing a voice from recordings took months. The problem is no longer that potentially dangerous technology has developed faster than laws can be formulated to control it. It's that we now have well-funded companies that don't care about either.


Illustrations: HAL, from 2001: A Space Odyssey.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 27, 2022

Well may the bogeyman come

It's only an accident of covid that this year's Computers, Privacy, and Data Protection conference - delayed from late January - coincided with the fourth anniversary of the EU's General Data Protection Regulation. Yet its failures and frustrations were on everyone's mind as they considered new legislation forthcoming from the EU: the Digital Services Act, the Digital Markets Act, and, especially, the AI Act.

Two main frustrations: despite GDPR, privacy invasions continue to expand, and, related, enforcement has been extremely limited. The first is obvious to everyone here. For the second...as Max Schrems explained in a panel on GDPR enforcement, none of the cross-border cases his NGO, noyb, filed on May 25, 2018, the day GDPR came into force, have been decided, and even decisions on simpler cases have failed to deal with broader questions.

In one of his examples, Spain rejected a complaint because it wasn't doing historic cases and Austria claimed the case was solved because the organization involved had changed its procedures. "But my rights were violated then." There was no redress.

Schrems is the data protection bogeyman; because legal actions he has brought have twice struck down US-EU agreements to enable data flows, the possibility of "Schrems III" if the next version gets it wrong is frequently mentioned. This particular panel highlighted numerous barriers that block effective action.

Other speakers highlighted numerous gaps between countries that impede cross-border complaints: some authorities have tight deadlines that expire while other authorities are working to more leisurely schedules; there are many conflicts between national procedural laws; each data protection authority has its own approach and requirements; and every cross-border complaint must be time-consumingly translated into English, even when both relevant authorities speak, say, German. "Getting an answer to a two-minute question takes four months," Nina Herbort said, highlighting the common underlying problem: underresourcing.

"Weren't they designed to fail?" Finn Myrstad asked.

Even successful enforcement has largely been limited to levying fines - and despite some of the eye-watering numbers they're still just a cost of doing business to major technology platforms.

"We have the tools for structural sanctions," Johnny Ryan said in a discussion on judicial actions. Some of that is beginning to happen. A day earlier, the UK'a Information Commissioner's Office fined Clearview AI £7.5 million and ordered it to delete the images it holds of UK residents. In February, Canada issued a similar order; a few weeks ago, Illinois permanently banned the company from selling its database to most private actors and businesses nationwide, and barred it from selling its service to any entity within Illinois for five years. Sanctions like these hurt more than fines as does requiring companies to delete the algorithms they've based on illegally acquired data.

Other suggestions included building sovereignty by ensuring that public procurement does not default to off-the-shelf products from a few foreign companies but is built on local expertise, an approach advocated by Jan-Philipp Albrecht, the former MEP, who told a panel on the impact of Schrems II that he is now building up cloud providers using locally-built hardware and open source software for the province of Schleswig-Holstein. Quang-Minh Lepescheux suggested requiring transparency in how people are trained to use automated decision making systems and forcing technology providers to accept third-party testing. Cristina Caffarra, probably the only antitrust lawyer in sight, wants privacy advocates and antitrust lawyers to work together; the economists inside competition authorities insist that more data means better products, so it's good for consumers. Rebecca Slaughter wants to give companies the clarity they say they want (until they get it): clear, regularly updated rules banning a list of practices, with a catchall. Ryan also noted that some sanctions can vastly improve enforcement efficiency: there's nothing to investigate after banning a company from making acquisitions. Enforcing purpose limitation and banning the single "OK to everything" is more complicated, but "Purpose limitation is Kryptonite to Big Tech when it's misusing data."

Any and all of these are valuable. But new kinds of thinking are also needed. The more complex issue, and another major theme, was the limitations of focusing on personal data and individual rights. This was long predicted as a particular problem for genetic data - the former science journalist Tom Wilkie was the first to point out the implications, sounding a warning in his book Perilous Knowledge, published in 1994, at the beginning of the Human Genome Project. Singling out individuals who have been harmed can easily obfuscate collective damage. The obvious example is Cambridge Analytica and Facebook; the damage to national elections can't be captured one Friends list at a time, controls on the increasing use of aggregated data require protection at scale, and, perversely, monitoring for bias and discrimination requires data collection.

In response to a panel on harmful patterns in recent privacy proposals, an audience member suggested the African philosophy of ubuntu as a useful source of ideas for thinking about collective and, even more important, *interdependent* data. This is where we need to go. Many forms of data - including both genetic data and financial data - cannot be thought of any other way.


Illustrations: The Norwegian Consumer Council receives EPIC's International Privacy Champion award at CPDP 2022.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 20, 2022

Mona Lisa smile

A few weeks ago, Zoom announced that it intends to add emotion detection technology to its platform. According to Mark DeGeurin at Gizmodo, in response, 27 human rights groups from across the world, led by Fight for the Future, have sent an open letter demanding that the company abandon this little plan, calling the software "invasive" and "inherently biased". On Twitter, I've seen it called "modern phrenology", a deep insult for those who remember the pseudoscience of studying the bumps on people's heads to predict their personalities.

It's an insult, but it's not really wrong. In 2019, Angela Chen at MIT Technology Review highlighted a study showing that facial expressions on their own are a poor guide to what someone is feeling. Cultures, context, personal style all affect how we present ourselves, and the posed faces AI developers use as part of their training of machine learning systems are even worse indicators, since few of us really know how our faces look under the influence of different emotions. In 2021, Kate Crawford, author of Atlas of AI, used the same study to argue in The Atlantic that the evidence that these systems work at all is "shaky".

Nonetheless, Crawford goes on to report, this technology is being deployed in hiring systems and added into facial recognition. A few weeks ago, Kate Kaye reported at Protocol that Intel and virtual school software provider Classroom Technologies are teaming up to offer a version that runs on top of Zoom.

Cue for a bit of nostalgia: I remember the first time I heard of someone proposing computer emotion detection over the Internet. It was the late 1990s, and the source - or the perpetrator, depending on your point of view - was Rosalind Picard at the MIT Media Lab. Her book on the subject, Affective Computing, came out in 1997.

Picard's main idea was that to be truly intelligent - or at least, seem that way to us - computers would have to learn to recognize emotions and produce appropriate responses. One of the potential applications I remember hearing about was online classrooms, where the software could monitor students' expressions for signs of boredom, confusion, or distress and alert the teacher - exactly what Intel and Classroom Technologies want to do now. I remember being dubious: shouldn't teachers be dialed in on that sort of thing? Shouldn't they know their students well enough to notice? OK, remote, over a screen, maybe dozens or hundreds of students at a time...not so easy.... (Of course, the expensive schools offer mass online education schemes to exploit their "brands", but they still keep the small, in-person classes that create those "brands" by churning out prime ministers and Silicon Valley dropouts.)

That wasn't Picard's main point, of course. In a recent podcast interview, she explains her original groundbreaking insight: that computers need to have emotional intelligence in order to make them less frustrating for us to deal with. If computers can capture the facial expressions we choose to show, the changes in our vocal tones, our gestures and muscle tension, perhaps they can respond more appropriately - or help humans to do so. Twenty-five years later, the ideas in Picard's work are now in use in media companies, ad agencies, and call centers - places where computer-human communication happens.

It seems a doubtful proposition. Humans learn from birth to read faces, and even we have argued for centuries over the meaning of the expression on the face of the Mona Lisa.

In 1997, Picard did not foresee the creepiness and giant technology exploiters. It's hard to know whether to be more alarmed about the technology's inaccuracy or its potential improvement. While it's inaccurate and biased, the dangers are the consequences of mistakes in interpretation; a student marked "inattentive", for example, may be penalized in their grade. But improving and debiasing the technology opens the way for fine-tuned manipulation and far more pervasive and intimate surveillance as it becomes embedded in every company, every conference, every government agency, every doctor's office, all of law enforcement. Meanwhile, the technological imperative of improving the system will require the collection of more and more data: body movements, heart rates, muscle tension, posture, gestures, surroundings.

I'd like to think that by this time we are smarter about how technology can be abused. I'm sure many of Zoom's corporate clients want emotion recognition technology; as in so many other cases, we are pawns because we're largely not the ones paying the bills or making the choice of platform. There's an analogy here to Elon Musk's negotiations with Twitter shareholders; the millions who use the service every day and find it valuable have no say in what will happen to it. If Zoom adopts emotion recognition, how long before law enforcement starts asking for user data in order to feed it into predictive policing systems? One of this week's more startling revelations was Aaron Gordon's report at Vice that San Francisco police are using driverless cars as mobile surveillance cameras, taking advantage of the fact that they are continuously recording their surroundings.

Sometimes the only way to block abuse of technology is to retire the idea entirely. If you really want to know what I'm thinking and feeling, just ask. I promise I'll tell you.


Illustrations: The emotional enigma that is the Mona Lisa.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 6, 2022

Heartbeat

Three months ago, for a book Cybersalon is producing, called Twenty-Two Ideas About the Future, I wrote a provocation about a woman living in Heartbeat Act Texas who discovers she's pregnant. When she forgets to disable its chip, the "smart" home pregnancy test uploads the news to the state's health agency, which promptly shares it far and wide. Under the 2021 law's sanctions on intermediaries, payment services, travel companies, and supermarkets all fear being sued, and so they block her from doing anything that might lead to liability, like buying alcohol, cigarettes, or a bus ticket to the state line, or paying a website for abortion pills.

It wasn't supposed to come true, and certainly not so soon.

As anyone who's seen any form of news this week will know, in a leaked draft of the US Supreme Court's decision in Dobbs v. Jackson Women's Health Organization, its author, Justice Samuel Alito, argues that the Court's 1973 decision in Roe v. Wade was "wrongly decided". This is not the place to defend the right to choose or deplore the dangers of valuing the potential life of a fetus over the actual life of the person carrying it (Louisiana legislators have advanced a bill classifying abortion as homicide). But it is the place to consider the privacy loss if the decision proceeds as indicated, and not just in the approximately half of US states predicted to jump at the opportunity to adopt forced-childbirth policies.

On my shelf is Alan E. Nourse's 1965 book Intern, by Doctor X, an extraordinarily frank diary Nourse kept throughout his 1956 internship. Here he is during his OB/GYN rotation: "I don't know who the OB men have to answer to around here when they get back suspicious pathology reports...somebody must be watching them." In an update, he says the hospital's Tissue Committee reviewed pathology reports on all dilation and curettage procedures; the first "suspicious" report attracted a private warning, the second a censure, and the third permanent expulsion from the hospital staff.

I first read that when I was 12, and I did not understand that he was talking about abortion. Although D&Cs were and are routine, necessary procedures, in that time and place each one was also suspect, like travelers today boarding a plane. Every miscarriage had to be cleared of suspicion, a process unlikely to help any of the estimated 1 million per year who grieve pregnancy loss. Elsewhere, he notes the number of patients labeled "NO INFORMATION"; they were giving their babies up for adoption. Then, it was sufficient to criminalize the doctors.

Part of Alito's argument is that abortion is not mentioned in either the Constitution or the First, Fourth, Fifth, Ninth, or Fourteenth Amendments Roe cited. Neither, he says, is privacy; that casual little aside is the Easter egg pointing to future human rights rollbacks.

The US has insufficient privacy law, even in the health sector. Worse, the data collected by period trackers, fitness gizmos, sleep monitoring apps, and the rest is not classed as health data to be protected under HIPAA. In 2015, employers' access to such data through "wellness" programs began raising privacy concerns; all types of employee monitoring have expanded since the pandemic began. Finally, as Johana Bhuiyan reported at the Guardian last month, US law enforcement has easy access to the consumer data we trustingly provide to companies like Apple and Meta. And even when we don't provide it, others do: in 2016, anti-choice activists were caught snapping pictures of women entering clinics, noting license plate numbers, and surveilling their smartphones via geofencing to target those deemed to be "abortion-minded".

"Leaving it to the states" - Alito writes of states' rights, not of women's rights - means any woman of child-bearing age at risk of living under a prohibitive regime dare not confide in any of these technologies. Also dangerous: insurance companies, support groups for pregnancy loss or for cancer patients whose treatment is incompatible with continuing a pregnancy, centers for health information, GPS-enabled smartphones, even search engines. Heterosexual men can look forward to diminished sex lives dominated by fear of pregnancy (although note that no one's threatening to criminalize ejaculating inside a vagina) and women may struggle to find doctors willing to treat them at all.

My character struggled to travel out of state. This was based on 1980s Ireland, where ending a pregnancy required a trip to England; in 1992 courts famously barred a raped 14-year-old from traveling. At New York Magazine, Irin Carmon finds that some Republican politicians are indeed thinking about this.

Encryption, VPNs, Tor - women will need the same tools that aid dissidents in authoritarian countries. The company SafeGraph, Joseph Cox reports at Vice, sells location data showing who has visited abortion clinics. In response, SafeGraph promised to stop. By then Cox had found another one.

At Gizmodo, Shoshana Wodinsky has the advice on privacy protection my fictional character needed. She dares not confide in anyone she knows lest she put them at risk of becoming an attackable intermediary, yet everyone she *doesn't* know has already been informed.

This is the exact near-future Parmy Olson outlines at Bloomberg, quoting US senator Ron Wyden (D-OR): "...every digital record - from web searches, to phone records and app data - will be weaponized in Republican states as a way to control women's bodies."


Illustrations: Map of the US states with "trigger laws" waiting to come into force if Roe v. Wade is overturned (via M. Bitton at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 15, 2022

The data of sport

vlcsnap-2022-04-15-13h01m46s668.pngIn 1989, at 5-6 in the third and final set of the French Open women's singles final, 20-year-old Steffi Graf abruptly ran off-court. Soon afterwards, her opponent, Arantxa Sanchez-Vicario, completed one of the biggest upsets in the history of women's tennis.

Why did Graf do it? the press demanded to know in the post-match interview. When Graf finally (and slightly crankily) explained that she had her period, some journalists - Michael Mewshaw cites Italian Hall of Fame journalist Gianni Clerici for one - followed up by printing her (presumably imagined) menstrual cycle in the newspapers.

Mewshaw recounted this incident in June 2021 to illustrate the unpleasantness that can attend sports press conferences, in sympathy with Naomi Osaka. However, he could as easily have been writing about the commodification of athletes and their personal information. Graf got no benefit from journalists' prurient curiosity. But bettors, obsessive fans, and commentators could imagine they were being sold insight into her on-court performance. Ick.

This week, the Australian Academy of Science launched a discussion paper on the use of athlete data in professional sport, chaired by Julia Powles and Toby Walsh. Powles and Walsh have also provided a summary at The Conversation.

The gist: the amount and variety of data collected about athletes has exploded using the justification of improving athletic performance and reducing injury risk. It's being collected and saved with little oversight and no clarity about how it's being used or who gets access to it; the overriding approach is to collect everything possible and save it in case a use is found. "It's rare for sports scientists and support staff to be able to account for it, and rarer still for sports governing bodies and athletes themselves," they write.

In the Academy's launch panel, Powles commented that athletes are "at the forefront of data gathering and monitoring", adding that such monitoring will eventually be extended to the rest of us as it filters from professional sports to junior sports, and onward from there.

Like Britain's intensively monitored children, athletes have little power to object: they have already poured years of their own and their family's resources into their obsession. Who would risk the chance of big wins to argue when their coach or team manager fits them with sensors tracking their sleep, heart rate, blood oxygenation, temperature, and muscle twitches and says it will help them? The field, Kathryn Henne observed, is just an athlete's workplace.

In at least one case - concussion in American football - data analysis has proved the risk to athletes. But, Powles noted, the report finds that it's really the aggregate counts that matter: how many meters you ran, not what your muscles were doing while you ran them. Much of the data being collected lies fallow, and no theory exists for testing its value.

Powles' particular concern is twofold. First, the report finds that the data is not flowing to sports scientists and others who really understand athletes (and therefore does not actually further the goal of helping them) but toward data scientists and other dedicated data-crunchers who have no expertise in sports science. Second, she deplores the resulting opportunity costs.

"What else aren't we spending money on?" she asked. Healthier environments and providing support are things we know work; why not pursue them instead of "technology dreams"? Her biggest surprise, she said, was discovering how cash-strapped most sports are. Even tennis: the stars make millions, but the lower ranks starve.

Professional athletes have always had to surrender aspects of their privacy in order to play their sport, beginning with the long, unpleasant history of gender testing, which began with men-only games in which competitors appeared nude, and continued in 1968 with requiring athletes wishing to compete in women's sports to prove they qualify. Then came anti-doping, which presumes everyone is guilty except when testing finds them innocent: urine tests under observation and blood tests for more sophisticated doping agents like EPO. In 2004, the anti-doping authorities initiated the "Whereabouts rule", which requires athletes to provide their location every day to facilitate no-notice out-of-competition testing. More recently, sporting authorities have begun collecting and storing blood and other parameters to populate the "athlete biological passport" with the idea that longitudinal profiling will highlight changes indicative of doping. An athlete who objects to any of this is likely to be publicly accused of cheating; sympathy is in short supply.

The report adds to those obvious invasions the ongoing blurring of the line between health data - which apparently is determined by the involvement of a doctor - and what the authors call "performance data". This was raised as an issue at the Privacy Health Summit back in 2014, where panelists noted that the range of sensitive data being collected by then-new Fitbits, sleep apps, and period trackers wasn't covered by the US health information law, HIPAA.

Athletes are the commodities in all this. It's not a big stretch to imagine the use of this data turning hostile, particularly as it extends to junior sports, where it can be notoriously difficult to pick future winners. Sports hold our interest because they provide the unexpected. Data-crunching by its nature tries to eliminate it. As Powles put it, "The story of sport is not just the runs and the goals." But that's what data can count.


Illustrations: Arantxa Sanchez-Vicario holding the 1989 French Open women's singles trophy.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 25, 2022

Dangerous corner

War_damages_in_Mariupol,_12_March_2022_(01).jpgIf there is one thing the Western world has near-universally agreed in the last month, it's that in the Russian invasion of Ukraine, the Ukrainians are the injured party. The good guys.

If there's one thing that privacy advocates and much of the public agree on, it's that Clearview AI, which has amassed a database of (it claims) 10 billion facial images by scraping publicly accessible social media without the subjects' consent and sells access to it to myriad law enforcement organizations, is one of the world's creepiest companies. This assessment is exacerbated by the fact that the company and its CEO refuse to see anything wrong about their unconsented repurposing of other people's photos; it's out there for the scraping, innit?

Last week, Reuters reported that Clearview AI was offering Ukraine free access to its technology. Clearview's suggested uses: vetting people at checkpoints; debunking misinformation on social media; reuniting separated family members; and identifying the dead. Clearview's CEO, Hoan Ton-That, told Reuters that the company has 2 billion images of Russians scraped from Russian Facebook clone VKontakte.

This week, it's widely reported that Ukraine is accepting the offer. At Forbes, Tom Brewster reports that Ukraine is using the technology to identify the dead.

Clearview AI has been controversial ever since January 2020, when Kashmir Hill reported its existence in the New York Times, calling it "the secretive company that might end privacy as we know it". Social media sites LinkedIn, Twitter, and YouTube all promptly sent cease-and-desist notices. A month later, Kim Lyons reported at The Verge that its 2,200 customers included the FBI, Interpol, the US Department of Justice, Immigration and Customs Enforcement, a UAE sovereign wealth fund, the Royal Canadian Mounted Police, and college campus police departments.

In May 2021, Privacy International filed complaints in five countries. In response, Canada, Australia, the UK, France, and Italy have all found Clearview to be in breach of data protection laws and ordered it to delete all the photos of people that it has collected in their territories. Sweden, Belgium, and Canada have declared law enforcement use of Clearview's technology to be illegal.

Ukraine is its first known use in a war zone. In a scathing blog posting, Privacy International says, "...the use of Clearview's database by authorities is a considerable expansion of the realm of surveillance, with very real potential for abuse."

Brewster cites critics, who lay out familiar privacy issues. Misidentification in a war zone could lead to death if a live soldier's nationality is wrongly assessed (especially common when the person is non-white) and unnecessary heartbreak for dead soldiers' families. Facial recognition can't distinguish civilians and combatants. In addition, the use of facial recognition by the "good guys" in a war zone might legitimize the technology. This last seems to me unlikely; we all recognize the difference between what's acceptable in peacetime and in an extreme context. The issue here is the *company*, not the technology, as PI accurately pinpoints: "...it seems no human tragedy is off-limits to surveillance companies looking to sanitize their image."

Jack McDonald, a senior lecturer in war studies at King's College London who researches the relationship between ethics, law, technology, and war, sees the situation differently.

Some of the fears Brewster cites, for example, are far-fetched. "They're probably not going to be executing people at checkpoints." If facial recognition finds a match in those situations, they'll more likely make an arrest and do a search. "If that helps them to do this, there's a very good case for it, because Russia does appear to be flooding the country with saboteurs." Cases of misidentification will be important, he agrees, but consider the scale of harm in the conflict itself.

McDonald notes, however, that the use of biometrics to identify refugees is an entirely different matter and poses huge problems. "They're two different contexts, even though they're happening in the same space."

That leaves the use Ukraine appears to be most interested in: identifying dead bodies. This, McDonald explains, represents a profound change from the established norms, which are embedded in social and institutional structures; identification of the dead has typically been closely guarded. Even though the standard of certainty is much lower, facial recognition offers the possibility of doing identification at scale. In both cases, the people making the identification typically have to rely on photographs taken elsewhere in other contexts, along with dental records and, if all else fails, public postings.

The reality of social media is already changing the norms. In this first month of the war, Twitter users posting pictures of captured Russian soldiers are typically reminded that it is technically against the Geneva Convention to do so. The extensive documentation - video clips, images, first-person reports - that is being posted from the conflict zones on services like TikTok and Twitter is a second front in its own right. In the information war, using facial recognition to identify the dead is strategic.

This is particularly true because of censorship in Russia, where independent media have almost entirely shut down and citizens have only very limited access to foreign news. Dead bodies are among the only incontrovertible sources of information that can break through the official denials. The risk that inaccurate identification could fuel Russian propaganda remains, however.

Clearview remains an awful idea. But if I thought it would help save my country from being destroyed, would I care?


Illustrations: War damage in Mariupol, Ukraine (Ministry of Internal Affairs of Ukraine, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 11, 2022

Freedom fries

"Someone ratted me out," a friend complained recently. They meant: after a group dinner, one of the participants had notified everyone to say they'd tested positive for covid a day later, and a third person had informed the test and trace authorities and now my friend was getting repeated texts along the lines of "isolate and get tested". Which they found invasive and offensive, and...well, just plain *unreasonable*.

Last night, Boris Johnson casually said in Parliament that he thought we could end all covid-related restrictions in a couple of weeks. Today there's a rumor that the infection survey that has produced the most reliable data on the prevalence and location of covid infections may be discontinued soon. There have been rumors, too, of charging for covid tests.

Fifteen hundred people died of covid in this country in the past week. Officially, there were more than 66,000 new infections yesterday - and that doesn't include all the people who felt like crap and didn't do a test, or did do a test and didn't bother to report the results (because the government's reporting web form demands a lot of information each time that it only needs if you tested positive), or didn't know they were infected. If he follows through, Johnson's announcement would mean that if said dinner happened a month from now, my friend wouldn't be told to isolate. They could be exposed, perhaps infected, and mingle as normal in complete ignorance. The tradeoff is the risk for everyone else: how do we decide when it's safe enough to meet? Is the plan to normalize high levels of fatalities?

Brief digression: no one thinks Johnson's announcement is a thought-out policy. Instead, given the daily emergence of new stories about rule-breaking parties at 10 Downing Street during lockdown, his comment is widely seen as an attempt to distract us and quiet fellow Conservatives who might vote to force him out of office. Ironically, a key element in making the party stories so compelling is the hundreds of pictures from CCTV, camera phones, social media, Johnson's official photographer... Teenagers have known for a decade to agree to put cameras away at parties, but British government officials are apparently less afraid that anything bad will happen to them if they're caught.

At the beginning of the pandemic, we wrote about the inevitable clash between privacy and the needs of public health and epidemiology. Privacy was indeed much discussed then, at the design stage for contact tracing apps, test and trace, and other measures. Democratic countries had to find a balance between the needs of public health and human rights. In the end, Google and Apple wound up largely dictating the terms on which contact tracing apps could operate on their platforms.

To the chagrin of privacy activists, "privacy" has rarely been a good motivator for activism. The arguments are too complicated, though you can get some people excited over "state surveillance". In this pandemic, the big rallying cry has been "freedom", from the media-friendly Freedom Day, July 19, 2021, when Johnson removed that round of covid restrictions, to anti-mask and anti-vaccination protesters, such as the "Freedom Convoy" currently blocking up normally bland, government-filled downtown Ottawa, Ontario, and an increasing number of other locations around the world. Understanding what's going on there is beyond the scope of net.wars.

More pertinent is the diverging meaning of "freedom". As the number of covid prevention measures shrinks, the freedom available to vulnerable people shrinks in tandem. I'm not talking about restrictions like how many people may meet in a bar, but simple measures like masking on public transport, or getting restaurants and bars to provide information about their ventilation that would make assessing risk easier.

Elsewise, we have many people who seem to define "freedom" to mean "It's my right to pretend the pandemic doesn't exist". Masks, even on other people, then become intolerable reminders that there is a virus out there making trouble. In that scenario, however, self-protection, even for reasonably healthy people who just don't want to get sick, becomes near-impossible. The "personal responsibility" approach doesn't work in a situation where what's most needed is social collaboration.

The people landed with the most risk can do the least about it. As the aftermath of Hurricane Sandy highlighted, the advent of the Internet has opened up a huge divide between the people who have to go to work and the people who can work anywhere. I can Zoom into my friend's group dinner rather than attend in person, but the caterers and waitstaff can't. If "your freedom ends where my nose begins" (Zechariah Chafee Jr, it says here) applies to physical violence, shouldn't it include infection by virus?

Many human rights activists warned against creating second-class citizens via vaccination passports. The idea was right, but privacy was the wrong lens, because we still view it predominantly as a right for the individual. You want freedom? Instead of placing the burden on each of us, as health psychologist Susan Michie has been advocating for months, make the *places* safer - set ventilation standards, have venues publish their protocols, display CO2 readings, install HEPA air purifiers. Less risk, greater freedom, and you'd get some privacy, too - and maybe fewer of us would be set against each other in standoffs no one knows how to fix.


Illustrations: Trucks protesting in Ottawa, February 2022 (via ΙΣΧΣΝΙΚΑ-888 at Wikimedia, CC-BY-SA-4.0).


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 3, 2021

Trust and antitrust

coyote-roadrunner-cliff.pngFour years ago, 2021's new Federal Trade Commission chair, Lina Khan, made her name by writing an antitrust analysis of Amazon that made three main points: 1) Amazon is far more dangerously dominant than people realize; 2) antitrust law, which for the last 30 years has used consumer prices as its main criterion, needs reform; and 3) two inventors in a garage can no longer upend dominant companies because they'll either be bought or crushed. She also accused Amazon of leveraging the Marketplace sellers data it collects to develop and promote competing products.

For context, that was the year Amazon bought Whole Foods.

What made Khan's work so startling is that throughout its existence Amazon has been easy to love: unlike Microsoft (system crashes and privacy), Google (search spam and privacy), or Facebook (so many issues), Amazon sends us things we want when we want them. Amazon is the second-most trusted institution in America after the military, according to a 2018 study by Georgetown University and NYU. Rounding out the top five: Google, local police, and colleges and universities. The survey may need some updating.

And yet: recent stories suggest our trust is out of date.

This week, a study by the Institute for Local Self-Reliance claims that Amazon's 20-year-old Marketplace takes even higher commissions - 34% - than the 30% Apple and Google are being investigated for taking from their app stores. The study estimates that Amazon will earn $121 billion from these fees in 2021, double its 2019 takings, and that Amazon's 2020 operating profits from Marketplace reached $24 billion. The company responded to TechCrunch that some of those fees are optional add-ons, while report author Stacy Mitchell counters that "add-ons" such as better keyword search placement and using Amazon's shipping and warehousing have become essential because of the way the company disadvantages sellers who don't "opt" for them. In August, Amazon passed Walmart as the world's largest retailer outside of China. It is the only source of income for 22% of its sellers and the single biggest sales channel for many more; 56% of items sold on Amazon are from third-party sellers.

I started buying from Amazon so long ago that I have an insulated mug they sent every customer as a Christmas gift. Sometime in the last year, I started noticing the frequency of unfamiliar brand names in search results for things like cables, USB sticks, or socks. Smartwool I recognize, but Yuedge, KOOOGEAR, and coskefy? I suddenly note a small (new?) tickbox on the left: "our brands". And now I see: "our brands" this time are ouhos, srclo, SuMade, and Sunew. Is it me, or are these names just plain weird?

Of course I knew Amazon owned Zappos, IMDB, Goodreads, and Abe Books, but this is different. Amazon now has hundreds of house brands, according to a study The Markup published in October. The main finding: Amazon promotes its own brands at others' expense, and being an Amazon brand or Amazon-exclusive is more important to your product's prominence than its star ratings or reviews. Amazon denies doing this. It's a classic antitrust conflict of interest: shoppers rarely look beyond the first five listed products, and the platform owner has full control over the order. The Markup used public records to identify more than 150 Amazon brands and developed a browser add-on that highlights them for you. Personally, I'm more inclined to just shop elsewhere.

Also often overlooked is Amazon's growing advertising business. Insider Intelligence estimates its digital ad revenues in 2021 at $24.47 billion - 55.5% higher than 2020, and representing 11.6% (and rising) of the (US) digital advertising market. In July, noting its rise, CNBC surmised that Amazon's first-party relationship with its customers relieves it of common technology-company privacy issues. This claim - perhaps again based on the unreasonable trust so many of us place in the company - has to be wrong. Amazon collects vast arrays of personal data from search and purchase records, Alexa recordings, home camera videos, and health data from fitness trackers. We provide it voluntarily, but we don't sign blank checks for its use. Based on confidential documents, Reuters reports that Amazon's extensive lobbying operation has "killed or undermined" more than three dozen privacy bills in 25 US states. (The company denies the story and says it has merely opposed poorly crafted privacy bills.)

Privacy may be the thing that really comes to bite the company. A couple of weeks ago, Will Evans reported at Reveal News, based on a lengthy study of leaked internal documents, that Amazon's retail operation has so much personal data that it has no idea what it has, where it's stored, or how many copies are scattered across its IT estate: "sprawling, fragmented, and promiscuously shared". The very long story is that prioritizing speed of customer service has its downside, in that the company became extraordinarily vulnerable to insider threats such as abuse of access.

Organizations inevitably change over time, particularly when they're as ambitious as this one. The systems and culture that are temporary in startup mode become entrenched and patched, but never fixed. If trust is the land mass we're running on, what happens is we run off the edge of a cliff like Wile E. Coyote without noticing that the ground we trust isn't there any more. Don't look down.


Illustrations: Wile E. Coyote runs off a cliff, while the roadrunner watches.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 10, 2021

Globalizing Britain

Chatsworth_Cascade_and_House_-_geograph.org.uk_-_2191570.jpgBrexit really starts now. It was easy to forget, during the dramas that accompanied the passage of the Withdrawal Agreement and the disruption of the pandemic, that the really serious question had still not been answered: given full control, what would Britain do with it? What is a reshaped "independent global Britain" going to be when it grows up? Now is when we find out, as this government, which has a large enough majority to do almost anything it wants, pursues the policies it announced in the Queen's Speech last May.

Some of the agenda is depressingly cribbed from the current US Republican playbook. First and most obvious in this group is the Elections bill. The most contentious change is requiring voter ID at polling stations (even though there was a total of one conviction for voter fraud in 2019, the year of the last general election). What those in other countries may not realize is how many eligible voters in Britain lack any form of photo ID. The Guardian reports that 11 million people - a fifth of eligible voters - have neither driver's license nor passport. Naturally they are disproportionately from black and Asian backgrounds, older and disabled, and/or poor. The expected general effect, especially coupled with the additional proposal to remove the 15-year cap on voting while expatriate, is to put the thumb on the electoral scale to favor the Conservatives.

More nettishly, the government is gearing up for another attack on encryption, pulling out all the same old arguments, mixed with some "going dark" rhetoric copied from the FBI. As Gareth Corfield explains at The Register, the current target is Facebook, which intends to roll out end-to-end encryption for messaging and other services.

This is also the moment for the Online Safety bill (previously online harms). The push against encryption, which includes funding technical development, is part of that, because the bill makes service providers responsible for illegal content users post - and also, as Heather Burns points out at the Open Rights Group, legal but harmful content. Burns also details the extensive scope of the bill's age verification plans.

These moves are not new or unexpected. Slightly more so was the announcement that the UK will review data protection law with an eye to diverging from the EU; it opened the consultation today. This is, as many have pointed out before, dangerous for UK businesses that rely on data transfers to the EU for survival. The EU's decision a few months ago to grant the UK an adequacy decision - that is, the EU's acceptance of the UK's data protection laws as providing equivalent protection - will last for four years. It seems unlikely the EU will revisit it before then, but even before divergence Ian Brown and Douwe Korff have argued that the UK's data protection framework should be ruled inadequate. It *sounds* great when they say it will mean getting rid of the incessant cookie pop-ups, but at risk are privacy protections that have taken years to build. The consultation document wants to promise everything: "even better data protection regime" and "unlocking the power of data" appear in the same paragraph, and the new regime will also both be "pro-growth and innovation-friendly" and "maintain high data protection standards".

Recent moves have not made it easier to trust this government with respect to personal data: first, the postponed-for-now medical data fiasco, and second, this week's revelation that the government is increasingly using our data and hiring third-party marketing firms to target ads and develop personalized campaigns to manipulate the country's behavior. This "influence government" is the work of the ten-year-old Behavioural Insights Team - the "nudge unit", whose thinking is summed up in its behavioral economy report.

Then there's the Police, Crime, Sentencing, and Courts bill currently making its way through Parliament. This one has been the subject of street protests across the UK because of provisions that permit police and Home Secretary Priti Patel to impose various limits on protests.

Patel's Home Office also features in another area of contention, the Nationality and Borders bill. This bill would make criminal offenses out of arriving in the UK without permission and helping an asylum seeker enter the UK. The latter raises many questions, and the Law Society lists many legal issues that need clarification. Accompanying this is this week's proposal to turn back migrant boats, which breaks maritime law.

A few more entertainments lurk: for one, the review of network neutrality announced by Ofcom, the communications regulator. At this stage, it's unclear what dangers it holds, but it's another thing to watch, along with the ongoing consultation on digital identity.

More expected, no less alarming, this government also has an ongoing independent review of the 1998 Human Rights Act, which Conservatives such as former prime minister Theresa May have long wanted to scrap.

Human rights activists in this country aren't going to get much rest between now and (probably) 2024, when the next general election is due. Or maybe ever, looking at this list. This is the latest step in a long march, and it reminds us that underneath Britain's democracy lies its ancient feudalism.


Illustrations: Derbyshire stately home Chatsworth (via Trevor Rickards at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 27, 2021

The threat we left behind

afghan-455th_ESFG_scanning_iris.JPGBe careful what systems you build with good intentions. The next owner may not be so kind.

It has long been a basic principle among privacy activists that a significant danger in embedding surveillance technologies is regime change: today's government is benign, but tomorrow's may not be, so let's not build the technologies that could support a police state for that hostile government to wield. Equally - although it's often politic not to say this explicitly - the owner may remain the same but their own intentions may change as the affordances of the system give them new ideas about what it's possible for them to know.

I would be hard-pressed to produce evidence of a direct connection, but one of the ideas floating around Virtual Diplomacy, a 1997 conference that brought together the Internet and diplomacy communities, was that the systems that are privacy-invasive in Western contexts could save lives and avert disasters on the ground in crisis situations. Not long afterwards, the use of biometric identification and other technologies were being built into refugee systems in the US and EU.

In a 2018 article for The New Humanitarian, Paul Currion observes that the systems' development was "driven by the interests of national governments, technology companies, and aid agencies - in that order". Refugees quoted in the article express trust in the UN, but not much understanding of the risks of compliance.

Currion dates the earliest use of "humanitarian biometrics" to 2003 - and identifies the location of that groundbreaking use as...Afghanistan, which used iris testing to verify the identities of Afghans returning from Pakistan to prevent fraud. In 2006, the now just-departed president Ashraf Ghani wrote a book pinpointing biometric identification as the foundation of Afghanistan's social policy. Afghanistan, the article concludes, is "the most biometrically identifiable country in the world" - and, it adds, "although UNHCR and the Afghan government have both invested heavily in biometric databases, the US military has been the real driving force." It bases this latter claim on a 2014 article in Public Intelligence that studies US military documents on the use of biometrics in Afghanistan.

These are the systems that now belong to the Taliban.

Privacy International began warning of the issues surrounding privacy and refugees in the mid-2000s. In 2011, by which time it had been working with UNHCR to improve its practices for four years, PI noted how little understanding there was among funders and the public of why privacy mattered to refugees.

Perhaps it's the word: "privacy" sounds like a luxury, a nice-to-have rather than a necessity, and anyway, how can people held in camps waiting to be moved on to their next location care about privacy when what they need is safety, food, shelter, and a reunion with the rest of their families? PI's answer: "Putting it bluntly, getting privacy wrong will get people arrested, imprisoned, tortured, and may sometimes lead to death." Refugees are at risk from both the countries they're fleeing *from* and the countries they're fleeing *to*, which may welcome and support them - or reject, return, deport, or imprison them, or hold them in bureaucratic purgatory. (As I type this, HIAS president and CEO Mark Hetfield is telling MSNBC that the US's 14-step checking process is stopping Afghan-Americans from getting their families out.)

As PI goes on to explain, there is no such thing as "meaningful consent" in these circumstances. At The New Humanitarian, in a June 2021 article, Zara Rahman agrees. She was responding to a Human Rights Watch report that the United Nations High Commissioner for Refugees had handed a detailed biometric database covering hundreds of thousands of Rohingya refugees to the Myanmar government from which they fled. HRW accused the agency of breaking its own rules for collecting and protecting data, and failing to obtain informed consent; UNHCR denies this charge. But you're desperate and in danger, and UNHCR wants your fingerprint. Can you really say no?

In many countries UNHCR is the organization that determines refugee status. Personal information is critical to this process. The amount of information has increased in some areas to include biometrics; as early as 2008 the US was considering using genetic information to confirm family relationships. More important, UNHCR is not always in control of the information it collects. In 2013, PI published a detailed analysis of refugee data collection in Syria. Last week, it published an even more detailed explanation of the systems built in Afghanistan over the last 20 years and that now have been left behind.

Shortly after the current crisis began, April Glaser and Sephora Smith reported at NBC News that Afghans were hastily deleting photographs and documents on their phones that might link them to Westerners, international human rights groups, the Afghan military, or the recently-departed Afghan government. It's an imperfect strategy: instructions on how to do this in local Afghan languages are not always available, and much of the data and the graph of their social connections are stored on social media that don't necessarily facilitate mass deletions. Facebook has released tools to help, including a one-click locking button and pop-up instructions on Instagram. Access Now also offers help and is telling international actors to close down access to these databases before leaving.

This aspect of the Afghan crisis was entirely avoidable.


Illustrations: Afghan woman being iris-scanned for entry into the Korean hospital at Bagram Airfield, Afghanistan, 2012 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 13, 2021

Legacy

QRCode-2-Structure.pngThe first months of the pandemic saw a burst of energetic discussion about how to make it an opportunity to invest in redressing inequalities and rebuilding decaying systems - public health, education, workers' rights. This always reminded me of the great French film director François Truffaut, who, in his role as the director of the movie-within-the-movie in Day for Night, said, "Before starting to shoot, I hope to make a fine film. After the problems begin, I lower my ambition and just hope to finish it." It seemed more likely that if the pandemic went on long enough - back then the journalist Laurie Garrett was predicting a best case of three years - early enthusiasm for profound change would drain away to leave most people just wishing for something they could recognize as "normal". Drinks at the pub!

We forget what "normal" was like. London today seems busy. But with still no tourists, it's probably a tenth as crowded as in August 2019.

Eighteen months (so far) has been long enough to make new habits driven by pandemic-related fears, if not necessity, begin to stick. As it turns out, the pandemic's new normal is really not the abrupt but temporary severance of lockdown, which brought with it fears of top-down government-driven damage to social equity and privacy: covid legislation, immunity passports, and access to vaccines. Instead, the dangerous "new normal" is the new habits building up from the bottom. If Garrett was right, and we are at best halfway through this, these are likely to become entrenched. Some are healthy: a friend has abruptly realized that his grandmother's fanaticism about opening windows stemmed from living through the 1918 Spanish flu pandemic. Others...not so much.

One of the first non-human casualties of the pandemic has been cash, though the loss is unevenly spread. This week, a friend needed more than five minutes to painfully single-finger-type masses of detail into a pub's app, the only available option for ordering and paying for a drink. I see the convenience for the pub's owner, who can eliminate the costs of cash (while assuming the costs of credit cards and technological intermediation) and maybe thin the staff, but it's no benefit to a customer who'd rather enjoy the unaccustomed sunshine and chat with a friend. "They're all like this now," my friend said gloomily. Not where I live, fortunately.

Anti-cash campaigners have long insisted that cash is dirty and spreads disease; but, as we've known for a year, covid rarely spreads through surfaces, and (as Dave Birch has been generous enough to note) a recent paper finds that cash is sometimes cleaner. But still: try to dislodge the apps.

A couple of weeks ago, Erin Woo at the New York Times highlighted cash-free moves. In New York City, QR codes have taken over in restaurants and stores as contact-free menus and ordering systems. In the UK, QR codes mostly appear as part of the Test and Trace contact tracing app; the idea is you check in when you enter any space, be it restaurant, cinema, or (ludicrously) botanic garden, and you'll be notified if it turns out it was filled with covid-infected people when you were there.

Whatever the purpose, the result is tight links between offline and online behavior. Pre-pandemic, these were growing slowly and insidiously; now they're growing like an invasive weed at a time when few of us can object. The UK ones may fall into disuse alongside the app itself. But Woo cites Bloomberg: half of all US full-service restaurant operators have adopted QR-code menus since the pandemic began.

The pandemic has also helped entrench workplace monitoring. By September 2020, Alex Hern was reporting at the Guardian that companies were ramping up their surveillance of workers in their homes, using daily mandatory videoconferences, digital timecards in the form of cloud logins, and forced participation on Slack and other channels.

Meanwhile at NBC News, Olivia Solon reports that Teleperformance, one of the world's largest call center companies, to which companies like Uber, Apple, and Amazon outsource customer service, has inserted clauses in its employment contracts requiring workers to accept in-home cameras that surveil them, their surroundings, and family members under 18. Solon reports that the anger over this is enough to get these workers thinking about unionizing. Teleperformance is global; it's trying this same gambit in other countries.

Nearer to home, all along, there's been a lot of speculation about whether anyone would ever again accept commuting daily. This week, the Guardian reports that only 18% of workers have gone back to their offices since UK prime minister Boris Johnson ended all official restrictions on July 19. Granted, it won't be clear for some time whether this is a new habit or simply caution in the face of the fact that Britain's daily covid case numbers are still 25 times what they were a year ago. In the US, Google is suggesting it will cut pay for staff who resist returning to the office, on the basis that their cost of living is less. Without knowing the full financial position, doesn't it sound like Google is saving money twice?

All these examples suggest that what were temporary accommodations are hardening into "the way things are". Undoing them is a whole new set of items for last year's post-pandemic to-do list.


Illustrations: Graphic showing the structure of QR codes (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 6, 2021

Privacy-preserving mass surveillance

new-22portobelloroad.jpgEvery time it seems like digital rights activists need to stop quoting George Orwell so much, stuff like this happens.

In an abrupt turnaround, on Thursday Apple announced the next stage in the decades-long battle over strong cryptography: after years of resisting law enforcement demands, the company is U-turning to backdoor its cryptography to scan personal devices and cloud stores for child abuse images. EFF sums up the problem nicely: "even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor". Or, more simply, a hole is a hole. Most Orwellian moment: Nicholas Weaver framing it on Lawfare as "privacy-sensitive mass surveillance".

Smartphones, particularly Apple phones, have never really been *our* devices in the way that early personal computers were, because the supplying company has always been able to change the phone's software from afar without permission. Apple's move makes this reality explicit.

The bigger question is: why? Apple hasn't said. But the pressure has been mounting on all the technology companies in the last few years, as an increasing number of governments have been demanding the right of access to encrypted material. As Amie Stepanovich notes on Twitter, another factor may be the "online harms" agenda that began in the UK but has since spread to New Zealand, Canada, and others. The UK's Online Safety bill is already (controversially) in progress, as Ross Anderson predicted in 2018. Child exploitation is a terrible thing; this is still a dangerous policy.

Meanwhile, 2021 is seeing some of the AI hype of the last ten years crash into reality. Two examples: health and autonomous vehicles. At MIT Technology Review, Will Douglas Heaven notes the general failure of AI tools in the pandemic. Several research studies - in the British Medical Journal, Nature, and from the Turing Institute (PDF) - find that none of the hundreds of algorithms were of any clinical use and some were actively harmful. The biggest problem appears to have been poor-quality training datasets, leading the AI to either identify the wrong thing, miss important features, or appear deceptively accurate. Finally, even IBM is admitting that Watson, its Jeopardy! champion, has not become a successful AI medical diagnostician. Medicine is art as well as science; who knew? (Doctors and nurses, obviously.)

As for autonomous vehicles, at Wired, Andrew Kersley reports that Amazon is abandoning its drone delivery business. The last year has seen considerable consolidation among entrants in the market for self-driving cars, as the time and resources it will take to achieve them continue to expand. Google's Waymo is nonetheless arguing that the UK should not cap the number of self-driving cars on public roads, and the UK-grown Oxbotica is proposing a code of practice for deployment. However, as Christian Wolmar predicted in 2018, the cars are not here. Even some Tesla insiders admit that.

The AI that has "succeeded" - in the narrow sense of being deployed, not in any broader sense - has been the (Orwellian) surveillance and control side of AI - the robots that screen job applications, the automated facial recognition, the AI-driven border controls. The EU, which invests in this stuff, is now proposing AI regulations; if drafted to respect human rights, they could be globally significant.

However, we will also have to ensure the rules aren't abused against us. Also this week, Facebook blocked the tool a group of New York University social scientists were using to study the company's ad targeting, along with the researchers' personal accounts. The "user privacy" excuse: Cambridge Analytica. The 2015 scandal around CA's scraping a bunch of personal data via an app users voluntarily downloaded eventually cost Facebook $5 billion in its 2019 settlement with the US Federal Trade Commission that also required it to ensure this sort of thing didn't happen again. The NYU researchers' Ad Observatory was collecting advertising data via a browser extension users opted to install. They were, Facebook says, scraping data. Potato, potahto!

People who aren't Facebook's lawyers see the two situations as entirely different. CA was building voter profiles to study how to manipulate them. The Ad Observatory was deliberately avoiding collecting personal data; instead, they were collecting displayed ads in order to study their political impact and identify who pays for them. Potato, *tomahto*.

One reason for the universal skepticism is that this move has companions - Facebook has also limited journalists' access to CrowdTangle, a data tool that helped establish that far-right news content generates higher numbers of interactions than other types and suffers no penalty for being full of misinformation. In addition, at the Guardian, Chris McGreal finds that InfluenceMap reports that fossil fuel companies are using Facebook ads to promote oil and gas use as part of remediating climate change (have some clean coal).

Facebook's response has been to claim it's committed to transparency and blame the FTC. The FTC was not amused: "Had you honored your commitment to contact us in advance, we would have pointed out that the consent decree does not bar Facebook from creating exceptions for good-faith research in the public interest." The FTC knows Orwellian fiction when it sees it.


Illustrations: Orwell's house on Portobello Road, complete with CCTV camera.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 23, 2021

Immune response

Thumbnail image for china-alihealth.jpegThe slight reopening of international travel - at least inbound to the UK - is reupping discussions of vaccination passports, which we last discussed here three months ago. In many ways, the discussion recapitulates not only the ID card battles of 2006-2010 but also last year's concerns about contact tracing apps.

We revisit so soon for two reasons. First, the UK government has been sending out conflicting messages for the last month or more. Vaccination passports may - or may not - be required for university attendance and residence; they may be required for domestic venues - and football games! - in September. One minister - foreign secretary Dominic Raab - says the purpose would be to entice young people to get vaccinated, an approach that apparently worked in France, where proposing to require vaccination passports in order to visit cafes caused an Eiffel Tower-shaped spike in people presenting for shots. Others seem to think that certificates of either vaccination or negative tests will entice people to go out more and spend money. Or maybe the UK won't do them at all; if enough people are vaccinated, why would we need proof of any one individual's status? Little has been said about whatever the government may have learned from the test events that were supposed to show if it was safe to resume mass entertainment gatherings.

Second, a panel discussion last month hosted by Allyson Pollock raised some new points. Many of us have thought of covid passports for international travel as roughly equivalent to proof of vaccination for yellow fever. However, Linnet Taylor argues that the only time someone in a high-income country needs one is if they're visiting a country where the disease is endemic. By contrast, every country has covid, and large numbers - children, especially - either can't access or do not qualify for covid vaccinations. The problems that disparity caused for families led Israel to rethink its Green Pass, which expired in June and was not renewed. Therefore, Taylor said, it's more relevant to think about lowering the prevalence of the disease than to try to distinguish between vaccinated and unvaccinated. The chief result of requiring vaccination passports for international travel, she said, will be to add extra barriers for those traveling from low-income countries to high-income countries and cement into place global health inequality and unequal access to vaccines. She concluded that giving the responsibility to technology companies merely shows we have "no plan to solve them any other way".

It also brings other risks. Michael Veale and Seda F. Gürses explain why the computational infrastructure required to support online vaccination verification undercuts public health objectives. Ellen Ullman wrote about this in 1997: computer logic eliminates fuzzy human accommodations, and its affordances foster administrative change from help to surveillance and inclusion to exclusion. No one using the system - that is, people going to pubs and concerts - will have any control over what it's doing.

Last year, Westerners were appalled at the passport-like controls China put in place. This year, New York state is offering the Excelsior Pass. Once you load the necessary details into the pass, a mobile phone app, scanning it gains you admission to a variety of venues. IBM, which built the system, is supposedly already investigating how it can be expanded.

As Veale pointed out, a real-time system to check vaccination certificates will also know everywhere each individual certificate has been checked, adding inevitable intrusion far beyond the vaccinated-yes/no binary. Two stories this week bear Veale out. The first is the New York Times story that highlighted the privacy risks of QR codes that are proliferating in the name of covid safety. Again, the average individual has no way to tell what data is incorporated into the QR code or what's being saved.
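For what it's worth, the payload of most of these codes is nothing more than a URL, which anyone can inspect; what cannot be seen is what the server at the other end records each time that URL is scanned. A minimal sketch, assuming the third-party pyzbar library (a wrapper around the zbar decoder) and a hypothetical saved photo of a check-in poster:

    from urllib.parse import urlparse, parse_qs

    from PIL import Image
    from pyzbar.pyzbar import decode   # pip install pillow pyzbar

    # Decode the QR symbol in the image; .data holds the raw payload bytes.
    payload = decode(Image.open("checkin_poster.png"))[0].data.decode("utf-8")
    print(payload)             # often something like https://example.com/checkin?venue=12345&src=poster

    url = urlparse(payload)
    print(url.netloc)          # who you are being handed to
    print(parse_qs(url.query)) # which identifiers travel with you

The identifiers in the query string are visible; the linkage the operator builds between them, your device, and the time of your visit is not.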

The second story is the outing of Monsignor Jeffrey Burrill by The Pillar, a Medium newsletter that covers the Catholic Church. The Pillar says its writers legally obtained 24 months' worth of supposedly anonymized, aggregated app signal data. Out of that aggregated mass they used known locations Burrill frequents to pick out a phone ID with matching history, and used that to track the phone's use of the LGBTQ dating app Grindr and visits to gay nightclubs. Burrill resigned shortly after being informed of the story.

More important is the conclusion Bruce Schneier draws: location data cannot be successfully anonymized. So checking vaccination passports in fact means building the framework of a comprehensive tracking system, whether or not that's the intention.
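The Pillar's method is, in essence, a linkage attack, and it needs very little code. A rough sketch over an entirely hypothetical data layout: given a pile of "anonymized" pings keyed only by advertising ID, a handful of places and days the target is known to have been is usually enough to single out one ID - and that ID then indexes everything else the dataset holds.

    from collections import defaultdict

    def round_cell(lat, lon):
        """Snap a coordinate to a grid cell of roughly 100 m."""
        return (round(lat, 3), round(lon, 3))

    def candidate_ids(pings, known_visits):
        """
        pings: iterable of (ad_id, lat, lon, day) from an "anonymized" dataset.
        known_visits: [(lat, lon, day), ...] - places and days the target is
        known to have been, e.g. from public appearances.
        Returns the ad IDs whose history contains every known visit.
        """
        seen = defaultdict(set)                          # ad_id -> {(cell, day), ...}
        for ad_id, lat, lon, day in pings:
            seen[ad_id].add((round_cell(lat, lon), day))
        wanted = {(round_cell(lat, lon), day) for lat, lon, day in known_visits}
        return [ad_id for ad_id, visits in seen.items() if wanted <= visits]

With three or four known visits the candidate list usually collapses to a single ID, which is exactly why aggregation and pseudonyms are not anonymization.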

Like contact tracing apps before them, vaccination passports are a mirage that seems to offer the prospect of living - in this case, to people who've been vaccinated against covid - as if the pandemic does not exist. Whether it "works" depends on what your goal is. If it's to create an airport-style fast track through everyday life, well, maybe. If it's to promote public health, then safety measures such as improved ventilation, moving events outdoors, masks, and so on are likely a better bet. If we've learned anything from the last year and a half, it should be that no one can successfully create an individual bubble in which they can pretend the pandemic is over even while it rages in the rest of the world.


Illustrations: China's Alipay Health Code in March, 2020 (press photo).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 9, 2021

The border-industrial complex*

Rohingya_Refugee_Camp_26_(sep_2020).jpgMost people do not realize how few rights they have at the border of any country.

I thought I did know: not much. EFF has campaigned for years against unwarranted US border searches of mobile phones, where "border" legally extends 100 miles into the country. If you think, well, it's a big country, it turns out that two-thirds of the US population lives within that 100 miles.

No one ever knows what the border of their own country is like for non-citizens. This is one reason it's easy for countries to make their borders hostile: non-citizens have no vote and the people who do have a vote assume hostile immigration guards only exist in the countries they visit. British people have no idea what it's like to grapple with the Home Office, just as most Americans have no experience of ICE. Datafication, however, seems likely to eventually make the surveillance aspect of modern border passage universal. At Papers, Please, Edward Hasbrouck charts the transformation of travel from right to privilege.

In the UK, the Open Rights Group and the3million have jointly taken the government to court over provisions in the post-Brexit GDPR-enacting Data Protection Act (2018) that exempted the Home Office from subject access rights. The Home Office invoked the exemption in more than 70% of the 19,305 data access requests made to its office in 2020, while losing 75% of the appeals against its rulings. In May, ORG and the3million won on appeal.

This week's announced Nationality and Borders Bill proposes to make it harder for refugees to enter the country and, according to analyses by the Refugee Council and Statewatch, make many of them - and anyone who assists them - into criminals.

Refugees have long had to verify their identity in the UK by providing biometrics. On top of that, the cash support they're given comes in the form of prepaid "Aspen" cards, which means the Home Office can closely monitor both their spending and their location, and cut off assistance at will, as Privacy International finds. Scotland-based Positive Action calls the results "bureaucratic slow violence".

That's the stuff I knew. I learned a lot more at this week's workshop run by Security Flows, which studies how datafication is transforming borders. The short version: refugees are extensively dataveilled by both the national authorities making life-changing decisions about them and the aid agencies supposed to be helping them, like the UN High Commissioner for Refugees (UNHCR). Recently, Human Rights Watch reported that UNHCR had broken its own policy guidelines by passing data to Myanmar that had been submitted by more than 830,000 ethnic Rohingya refugees who registered in Bangladeshi camps for the "smart" ID cards necessary to access aid and essential services.

In a 2020 study of the flow of iris scans submitted by Syrian refugees in Jordan, Aalborg associate professor Martin Lemberg-Pedersen found that private companies are increasingly involved in providing humanitarian agencies with expertise, funding, and new ideas - but that those partnerships risk turning their work into an experimental lab. He also found that UN agencies' legal immunity coupled with the absence of common standards for data protection among NGOs and states in the global South leave gaps he dubs "loopholes of externalization" that allow the technology companies to evade accountability.

At the 2020 Computers, Privacy, and Data Protection conference, a small group huddled to brainstorm about researching the "creepy" AI-related technologies the EU was funding. Border security offers a rare opportunity for deploying such technologies: invisible to most people and justified by "national security". Home Secretary Priti Patel's proposal to penalize the use of illegal routes to the UK is an example, making desperate people into criminals - people like many of the parents I knew growing up in 1960s New York.

The EU's immigration agencies are particularly obscure. I had encountered Warsaw-based Frontex, the European Border and Coast Guard Agency, which manages operational control of the Schengen Area, but not EU-LISA, which since 2012 has managed the relevant large-scale IT systems: SIS II, VIS, EURODAC, and ETIAS (like the US's ESTA). Unappetizing alphabet soup whose errors few know how to challenge.

The behind-the-scenes picture the workshop described sees the largest suppliers of ICT, biometrics, aerospace, and defense provide the consultants who help define work plans and formulate the calls to which their companies then respond. Javier Sánchez-Monedero's 2018 paper for the Data Justice Lab begins to trace those vendors, a mix of well-known names and unknowns. A forthcoming follow-up focuses on the economics and lobbying behind all these databases.

In a recent paper on financing border wars, Mark Akkerman analyzes the economic interests behind border security expansion and observes, "Migration will be one of the defining human rights issues of the 21st century." We know it will increase, increasingly driven by climate change; the fires that engulfed the Canadian village of Lytton, BC on July 1 made 1,000 people homeless, and that's just the beginning.

It's easy to ignore the surveillance and control directed at refugees in the belief that they are not us. But take the UK's drive to create a hostile environment by pushing border checks into schools, workplaces, and health services as your guide, and it's obvious: their surveillance will be your surveillance.

*Credit the phrase "border-industrial complex" to Luisa Izuzquiza.

Illustrations: Rohingya refugee camp in Bangladesh, 2020 (by Rocky Masum, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 2, 2021

This land

An aging van drives off down a highway into a fantastical landscape of southwestern mountains and mesquite. In 1977, that could have been me, or any of my folksinging friends as we toured the US, working our way into debt (TM Andy Cohen). In 2020, however, the van is occupied by Fern (Frances McDormand), one of the few fictional characters in the film Nomadland, directed by Chloé Zhao, and based on the book by Jessica Bruder, which itself grew out of her 2014 article for Harper's magazine.

Nomadland captures two competing aspects of American life. First, the middle-class dream of the nice house with the car in the driveway, a chicken in a pot inside, and secure finances. Anyone who rejects this dream must be dangerous. But deep within also lurks the other American dream, of freedom and independence, which in the course of the 20th century moved from hopping freight trains to motor vehicles and hitting the open road.

For many of Nomadland's characters, living on the road begins as a necessary accommodation to calamity but becomes a choice. They are "retirees" who can't afford to retire, who balk at depending on the kindness of relatives, and have carved out a circuit of seasonal jobs. Echoing many of the vandwellers Bruder profiles, Fern tells a teen she used to tutor, "I'm not homeless - just houseless."

Linda May, for example, began working at the age of 12, but discovered at 62 that her social security benefits amounted to $550 a month (the fate that perhaps awaits the people Barbara Ehrenreich profiles in Nickel and Dimed). Others lost their homes in the 2008 crisis. Fern, whose story frames the movie, lost job and home in Empire, Nevada when the gypsum factory abruptly shut down, another casualty of the 2008 financial crisis. Six months later, the zipcode was scrubbed. This history appears as a title at the beginning of the movie. We watch Fern select items and lock a storage unit. It's go time.

Fern's first stop is the giant Amazon warehouse in Fernley, Nevada, where the money is good and a full-service parking space is included. Like thousands of other workampers, she picks stock and packs boxes for the Christmas rush until, come January, it's time to gracefully accept banishment. People advise her: go south, it's warmer. Shivering and scraping snow off the van, Fern soon accepts the inevitable. I don't know how cold she is, but it brought flashbacks to a few of those 1977 nights in my pickup-truck-with-camper-top when I slept in a full set of clothes and a hat while the shampoo solidified. I was 40 years younger than Fern, and it was never going to be my permanent life. On the other hand: no smartphone.

At the Rubber Tramp Rendezvous near Quartzsite, Arizona, Fern finds her tribe: Swankie, Bob Wells, and the other significant fictional character, Dave (David Strathairn). She traces the annual job circuit: Amazon, camp hosting, beet harvesting in Nebraska, Wall Drug in South Dakota. Old hands teach her skills she needs: changing tires, inventing and building things out of scrap, remodeling her van, keeping on top of rust. She learns what size bucket to buy and that you must be ready to solve your own emergencies. Finally, she learns to say "See you down the road" instead of "Goodbye".

Earlier this year, at Silicon Flatiron's Privacy at the Margins, Tristia Bauman, executive director of the National Homelessness Law Center, explained that many cities have broadly-written camping bans that make even the most minimal outdoor home impossible. Worse, those policies often allow law enforcement to seize property. It may be stored, but often people still don't get it back; the fees for retrieving a towed-away home (that is, van) can easily be out of reach. This was in my mind when Bob talked about fearing the knock on the van that means someone in authority wants you gone.

"I've heard it's depressing," a friend said, when I recommended the movie. Viewed one way, absolutely. These aging Baby Boomers never imagined doing the hardest work of their lives in their "golden years", with no health insurance, no fixed abodes, and no prospects. It's not that they failed to achieve the American Dream. It's that they believed in the American Dream and then it broke up with them.

And yet "depressing" is not how I or my companion saw it, because of that *other* American Dream. There's a sense of ownership of both the land and your own life that comes with living on the road in such a spacious and varied country, as Woody Guthrie knew. Both Guthrie in the 1940s and Zhao now unsparingly document the poverty and struggles of the people they found in those wide-open spaces - but they also understand that here a person can breathe and find the time to appreciate the land's strange, secret wonders. Secret, because most of us never have the time to find them. This group does, because when you live nowhere you live everywhere. We get to follow them to some of these places, share their sense of belonging, and admire their astoundingly adaptable spirit. Despite the hardships they unquestionably face, they also find their way to extraordinary moments of joy.

See you down the road.

Illustrations: Fern's van, heading down the road.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 4, 2021

Data serfs

It is shameful that the UK government has apparently refused to learn anything over decades of these discussions, and is now ordering GPs in England to send their patient data to NHSx beginning on July 1 and continuing daily thereafter. GPs are unhappy about this. Patients - that is, the English population - have until June 23 to opt out. Government information has been so absent that if it were not for medConfidential we might not even know it was happening. The opt-out process is a dark pattern; here's how.

The pandemic has taught us a lot about both upsides and downsides of sharing information. The downside is the spread of covid conspiracy theories, refusal to accept public health measures, and death threats to public health experts.

But there's so much more upside. The unprecedented speed with which we got safe and effective vaccinations was enormously boosted by the Internet. The original ("ancestral") virus was genome-sequenced and shared across the world within days, enabling everyone to get cracking. While the heavy reliance on preprint servers meant some errors have propagated, rapid publication and direct access to experts has done far more good than harm overall.

Crowdsourcing is also proving its worth: by collecting voluntary symptom and test/vaccination status reports from 4.6 million people around the UK, the Covid Symptom Study, to which I've contributed daily for more than a year, has identified additional symptoms, offered early warning of developing outbreaks, and assessed the likelihood of post-vaccination breakthrough covid infections. The project is based on an app built by the startup Joinzoe in collaboration with 15 charities and academic research organizations. From the beginning it has seemed an obviously valuable effort worth the daily five seconds it takes to report - and worth giving up a modest amount of data privacy for - because the society-wide benefit is so obvious. The key points: the data they collect is specific, they show their work and how my contribution fits in, I can review what I've sent them, and I can stop at any time. In the blog, the project publishes ongoing findings, many of which have generated journal papers for peer review.

The government plans meet none of these criteria. The data grab is comprehensive, no feedback loop is proposed, and the subject access rights enshrined in data protection law are not available. How could it be more wrong?

Established in 2019, NHSx is the "digital arm" of the National Health Service. It's the branch that commissioned last year's failed data-collecting contact tracing app ("failed", as in many people correctly warned that its centralized design was risky and wouldn't work). NHSx is all data and contracts. It has no direct relationship with patients, and many people don't know it exists. This is the organization that is demanding the patient records of 56 million people, a policy Ross Anderson dates to 1992.

If Britain has a national religion it's the NHS. Yes, it's not perfect, and yes, there are complaints - but it's a lot like democracy: the alternatives are worse. The US, the only developed country that has refused a national health system, is near-universally pitied by those outside it. For those reasons, no politician is ever going to admit to privatizing the NHS, and many citizens suspect - particularly of Conservatives - that this is what they secretly want to do.

Brexit has heightened these fears, especially among those of us who remember 2014, when NHS England announced care.data, a plan to collect and potentially sell NHS patient data to private companies; that plan was rapidly canceled with a promise to retreat and rethink. Reconstructing the UK's economy post-EU membership has always been seen as involving a trade deal with the US, which is likely to demand free data flows and, most people believe, access to the NHS for its private medical companies. Already, more than 50 GPs' practices (1%) are managed by Operose, a subsidiary of US health insurer Centene.

Seven years later, the new plan is the old plan, dusted off, renamed, and expanded. The story here is the same: it's not that people aren't willing to share data; it's that we're not willing to hand over full control. The Joinzoe app has worked because every day each contributor remakes the decision to participate and because the researchers provide a direct feedback loop that shows how the data is being used and the results. NHSx isn't offering any of that. It is assuming the right to put our most sensitive personal data into a black box it owns and controls and keep doing so without granting us any feedback or recourse. This is worse than advertisers pretending that we make free choices to accept tracking. No one in this country has asked for their relationship with their doctor to be intermediated by a bunch of unknown data managers, however well-meaning. If their case for the medical and economic benefits is so strong (and really, it is, *when done right*), why not be transparent and open about it?

The pandemic has made the case for the value of pooling medical data. But it has also been a perfect demonstration of what happens when trust seeps out of a health system - as it does when governments feudally treat citizens as data serfs. *Both* lessons should be learned.


Illustrations: Asklepios, Greek god of medicine.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 28, 2021

Judgments day

This has been quite a week for British digital rights campaigners, who have won two significant cases against the UK government.

First is a case regarding migrants in the UK, brought by the Open Rights Group and the3million. The case challenged a provision in the Data Protection Act (2018) that exempted the Home Office from subject access requests, meaning that migrants refused settled status or immigration visas had no access to the data used to decide their cases, placing them at an obvious disadvantage. ORG and the3million argued successfully in the Court of Appeal that this was unfair, especially given that nearly half the appeals against Home Office decisions before the law came into effect were successful.

This is an important win, but small compared to the second case.

Eight years after Edward Snowden revealed the extent of government interception of communications, the reverberations continue. This week, the Grand Chamber of the European Court of Human Rights found Britain's data interception regime breached the rights to privacy and freedom of expression. Essentially, as Haroon Siddique sums it up at the Guardian, the court found deficiencies in three areas. First, bulk interception was authorized by the secretary of state but not by an independent body such as a court. Second, the application for a warrant did not specify the kinds of communication to be examined. Third, search terms linked to an individual were not subject to prior authorization. The entire process, the court ruled, must be subject to "end-to-end safeguards".

This is all mostly good news. Several of the 18 applicants (16 organizations and two individuals) argue the ruling didn't go far enough because it didn't declare bulk interception illegal in and of itself. Instead, it merely condemned the UK's implementation. Privacy International expects that all 47 members of the Council of Europe, all signatories to the European Convention on Human Rights, will now review their surveillance laws and practices and bring them into line with the ruling, giving the win much broader impact.

Particularly at stake for the UK is the adequacy decision it needs to permit seamless sharing of data with EU member states under the General Data Protection Regulation. In February the EU issued a draft decision that would grant adequacy for four years. This judgment highlights the ways the UK's regime is non-compliant.

This case began as three separate cases filed between 2013 and 2015; they were joined together by the court. PI, along with ACLU, Amnesty International, Liberty, and six other national human rights organizations, was among the first group of applicants. The second included Big Brother Watch, Open Rights Group, and English PEN; the third added the Bureau of Investigative Journalism.

Long-time readers will know that this is not the first time the UK's surveillance practices have been ruled illegal. In 2008, the European Court of Human Rights ruled against the UK's DNA database. More germane, in 2014, the CJEU invalidated the Data Retention Directive as a disproportionate intrusion on fundamental human rights, taking down with it the UK's supporting legislation. At the end of 2014, to solve the "emergency" created by that ruling, the UK hurriedly passed the Data Retention and Investigatory Powers Act (DRIPA). The UK lost the resulting legal case in 2016, when the CJEU largely struck it down again.

Currently, the legislation that enables the UK's communications surveillance regime is the Investigatory Powers Act (2016), which built on DRIPA and its antecedents, plus the Terrorism Prevention and Investigation Measures Act (2011), whose antecedents go back to the Anti-Terrorism, Crime, and Security Act (2001), passed two months after 9/11. In 2014, I wrote a piece explaining how the laws fit together.

Snowden's revelations were important in driving the post-2013 items on that list; the IPA was basically designed to put the practices he disclosed on a statutory footing. I bring up this history because I was struck by a comment in Judge Paulo Pinto de Albuquerque's partly dissenting opinion: "The RIPA distinction was unfit for purpose in the developing Internet age and only served the political aim of legitimising the system in the eyes of the British public with the illusion that persons within the United Kingdom's territorial jurisdiction would be spared the governmental 'Big Brother'".

What Albuquerque is criticizing here, I think, is the distinction made in RIPA between metadata, which the act allowed the government to collect, and content, which was protected. Campaigners like the late Caspar Bowden frequently warned that metadata is often more revealing than content. In 2015, Steve Bellovin, Matt Blaze, Susan Landau, and Stephanie Pell showed that the distinction is no longer meaningful (PDF) in any case.

I understand that in military-adjacent circles Snowden is still regarded as a traitor. I can't judge the legitimacy of all his revelations, but in at least one category it was clear from the beginning that he was doing the world a favor: alerting us to the intelligence services' compromising of crucial parts of the security systems that protect all of us. In ruling that the UK practices he disclosed are illegal, the ECtHR has gone a long way toward vindicating him as a whistleblower in a second category.


Illustrations: Map of cable data by Greg Mahlknecht, map by Openstreetmap contributors (CC-by-SA 2.0), from the Privacy International report on the ruling.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 20, 2021

Ontology recapitulates phylogeny

I may be reaching the "get off my lawn!" stage of life, except the things I'm yelling at are not harmless children but new technologies, many of which, as Charlie Stross writes, leak human stupidity into our environment.

Case in point: a conference this week chose for its platform an extraordinarily frustrating graphic "virtual congress center" that was barely more functional than Second Life (b. 2003). The big board displaying the agenda was not interactive; road signs and menu items pointed to venues by name, but didn't show what was going on in them. Yes, there was a reception desk staffed with helpful avatars. I do not want to ask for help, I want simplicity. The conference website advised: "This platform requires the installation of a dedicated software in your computer and a basic training." Training? To watch people speak on my computer screen? Why can't I just "click here to attend this session" and see the real, engaged faces of speakers, instead of motionless cartoon avatars?

This is not a new-technology issue but a usability issue that hasn't changed since Donald Norman's 1988 The Design of Everyday Things sought to do away with user manuals.

I tell myself that this isn't just another clash between generational habits.

Even so, if current technology trends continue I will be increasingly left behind, not just because I don't *want* to join in but because, through incalculable privilege, much of the time I don't *need* to. My house has no smart speakers, I see no reason to turn on open banking, and much of the time I can leave my mobile phone in a coat pocket, ignored.

But Out There in the rest of the world, where I have less choice, I read that Amazon is turning on Sidewalk, a proprietary mesh network that uses Bluetooth and 900MHz radio connections to join together Echo speakers, Ring cameras, and any other compatible device the company decides to produce. The company is turning this thing on by default (free software update!), though if you're lucky enough to read the right press articles you can turn it off. When individuals roam the streets piggybacking on open wifi connections, they're dubbed "hackers". But a company - just ask forgiveness, not permission, yes?

The idea appears to be that the mesh network will improve the overall reliability of each device when its wifi connection is iffy. How it changes the range and detail of the data each device collects is unclear. Connecting these devices into a network is a step change in physical tracking; CNet suggests that a Tile tag attached to a dog, while offering the benefit of an alert if the dog gets loose, could also provide Amazon with detailed tracking of all your dog walks. Amazon says the data is protected with three layers of encryption, but protection from outsiders is not the same as protection from Amazon itself. Even the minimal data Amazon says in its white paper (PDF) it receives - the device serial number and application server ID - reveal the type of device and its location.

We have always talked about smart cities as if they were centrally planned, intended to offer greater efficiency, smoother daily life, and a better environment, and built with some degree of citizen acceptance. But the patient public deliberation that image requires does not fit the "move fast and break things" ethos that continues to poison organizational attitudes. Google failed to gain acceptance for its Toronto plan; Amazon is just doing it. In London in 2019, neither private operators nor police bothered to inform or consult anyone when they decided to trial automated facial recognition.

In the white paper, Amazon suggests benefits such as finding lost pets, diagnostics for power tools, and supporting lighting where wifi is weak. Nice use cases, but note that the benefits accrue to the devices' owner while the costs belong to neighbors who may not have actively consented, but simply not known they had to change the default settings in order to opt out. By design, neither device owners nor server owners can see what they're connected to. I await the news of the first researcher to successfully connect an unauthorized device.

Those external costs are minimal now, but what happens when Amazon is inevitably joined by dozens more similar networks, like the collisions that famously plague the more than 50 companies that dig up London streets? It's disturbingly possible to look ahead and see our public spaces overridden by competing organizations operating primarily in their own interests. In my mind, Amazon's move opens up the image of private companies and government agencies all actively tracking us through the physical world the way they do on the web and fighting over the resulting "insights". Physical tracking is a sizable gap in GDPR.

Again, these are not new-technology issues, but age-old ones of democracy, personal autonomy, and the control of public and private spaces. As Nicholas Couldry and Ulises A. Mejias wrote in their 2020 book The Costs of Connection, this is colonialism in operation. "What if new ways of appropriating human life, and the freedoms on which it depends, are emerging?" they asked. Even if Amazon's design is perfect, Sidewalk is not a comforting sign.


Illustrations: A mock-up from Google's Sidewalk Labs plan for Toronto.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 14, 2021

Pre-crime

Much is being written about this week's Queen's speech, which laid out plans to restrict protests (the Police, Crime, Sentencing, and Courts bill), relax planning measures to help developers override communities, and require photo ID in order to vote even though millions of voters have neither passport nor driver's license and there was just one conviction for voting fraud in the 2019 general election. We, however, will focus here on the Online Safety bill, which includes age verification and new rules for social media content moderation.

At Politico, technology correspondent Mark Scott picks three provisions: the exemption granting politicians free rein on social media; the move to require moderation of content that is not illegal or criminal (however unpleasant it may be); and the carve-outs for "recognised news publishers". I take that to mean they wanted to avoid triggering the opposition of media moguls like Rupert Murdoch. Scott read it as "journalists".

The carve-out for politicians directly contradicts a crucial finding in last week's Facebook oversight board ruling on the suspension of former US president Donald Trump's account: "The same rules should apply to all users of the platform; but context matters when assessing issues of causality and the probability and imminence of harm. What is important is the degree of influence that a user has over other users." Politicians, in other words, may not be more special than other influencers. Given the history of this particular government, it's easy to be cynical about this exemption.

In 2019, Heather Burns, now policy manager for the Open Rights Group, predicted this outcome while watching a Parliamentary debate on the white paper: "Boris Johnson's government, in whatever communication strategy it is following, is not going to self-regulate its own speech. It is going to double down on hard-regulating ours." At ORG's blog, Burns has critically analyzed the final bill.

Few have noticed the not-so-hidden developing economic agenda accompanying the government's intended "world-leading package of online safety measures". Jen Persson, director of the children's rights advocacy group DefendDigitalMe, is the exception, pointing out that in May 2020 the Department of Culture, Media, and Sport released a report that envisions the UK as a world leader in "Safety Tech". In other words, the government views online safety (PDF; see Annex C) as not just an aspirational goal for the country's schools and citizens but also as a growing export market the UK can lead.

For years, Persson has been tirelessly highlighting the extent to which children's online use is monitored. Effectively, monitoring software watches every use of any school-owned device and any session in which the child is logged into their school G Suite account; some types can even record photos of the child at home, a practice that became notorious when it was tried in Pennsylvania.

Meanwhile, outside of DefendDigitalMe's work - for example its case study of eSafe and discussion of NetSupport DNA and this discussion of school safeguarding - we know disturbingly little about the different vendors, how they fit together in the education ecosystem, how their software works, how capabilities vary from vendor to vendor, how well they handle multiple languages, what they block, what data they collect, how they determine risk, what inferences are drawn and retained and by whom, and the rate of errors and their consequences. We don't even really know if any of it works - or what "works" means. "Safer online" does not provide any standard against which the cost to children's human rights can be measured. Decades of government policy have all trended toward increased surveillance and filtering, yet wherever "there" is we never seem to arrive. DefendDigitalMe has called for far greater transparency.

Persson notes both mission creep and scope creep: "The scope has shifted from what was monitored to who is being monitored, then what they're being monitored for." The move from harmful and unlawful content to lawful but "harmful" content is what's being proposed now, and along with that, Persson says, "children being assessed for potential risk". The controversial Prevent program is about this: monitoring children for signs of radicalization. For their safety, of course.

Veteran UK children's rights campaigners have long said that successive UK governments use children as test subjects for the controversial policies they wish to impose on adults, normalizing them early. Persson suggests the next market for safetytech could be employers monitoring employees for mental health issues. I imagine elderly people will be next.

DCMS's comments support market expansion: "Throughout the consultations undertaken when compiling this report there was a sector consensus that the UK is likely to see its first Safety Tech unicorn (i.e. a company worth over $1bn) emerge in the coming years, with three other companies also demonstrating the potential to hit unicorn status within the early 2020s. Unicorns reflect their namesake - they are incredibly rare, and the UK has to date created 77 unicorn businesses across all sectors (as of Q4 2019)." (Are they counting the much-litigated Autonomy?)

There's something peculiarly ghastly about this government's staking the UK's post-Brexit economic success on exporting censorship and surveillance to the rest of the world, especially alongside its stated desire to opt out of parts of human rights law. This is what "global Britain" wants to be known for?

Illustrations: Unicorn sculpture at York Crown Court (by Tim Green via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 23, 2021

Fast, free, and frictionless

"I want solutions," Sinan Aral challenged at yesterday's Social Media Summit, "not a restatement of the problems". Don't we all? How many person-millennia have we spent laying out the issues of misinformation, disinformation, harassment, polarization, platform power, monopoly, algorithms, accountability, and transparency? Most of these have been debated for decades. The big additions of the last decade are the privatization of public speech via monopolistic social media platforms, the vastly increased scale, and the transmigration from purely virtual into physical-world crises like the January 6 Capitol Hill invasion and people refusing vaccinations in the middle of a pandemic.

Aral, who leads the MIT Initiative on the Digital Economy and is author of the new book The Hype Machine, chose his panelists well enough that some actually did offer some actionable ideas.

The issues, as Aral said, are all interlinked (see also 20 years of net.wars). Maria Ressa connected the spread of misinformation to system design that enables distribution and amplification at scale. These systems are entirely opaque to us even while we are open books to them, as Guardian journalist Carole Cadwalladr noted, adding that while US press outrage is the only pressure that moves Facebook to respond, it no longer even acknowledges questions from anyone at her newspaper. Cadwalladr also highlighted the Securities and Exchange Commission's complaint, which says clearly: Facebook misled journalists and investors. This dismissive attitude also shows in a leaked email in which Facebook plans to "normalize" the leak of 533 million users' data.

This level of arrogance is the result of concentrated power, and countering it will require antitrust action. That in turn leads back to questions of design and free speech: what can we constrain while respecting the First Amendment? Where is the demarcation line between free speech and speech that, like crying "Fire!" in a crowded theater, can reasonably be regulated? "In technology, design precedes everything," Roger McNamee said; real change for platforms at global or national scale means putting policy first. His Exhibit A of the level of cultural change that's needed was February's fad, Clubhouse: "It's a brand-new product that replicates the worst of everything."

In his book, Aral opposes breaking up social media companies as was done in cases such as Standard Oil and AT&T. Zephyr Teachout agreed in seeing breakup, whether horizontal (Facebook divests WhatsApp and Instagram, for example) or vertical (Google forced to sell Maps), as just one tool.

The question, as Joshua Gans said, is, what is the desired outcome? As Federal Trade Commission nominee Lina Khan wrote in 2017, assessing competition by the effect on consumer pricing is not applicable to today's "pay-with-data-but-not-cash" services. Gans favors interoperability, saying it's crucial to restoring consumers' lost choice. Lock-in is your inability to get others to follow when you want to leave a service, a problem interoperability solves. Yes, platforms say interoperability is too difficult and expensive - but so did the railways and telephone companies, once. Break-ups were a better option, Albert Wenger added, when infrastructures varied; today's universal computers and data mean copying is always an option.

Unwinding Facebook's acquisition of WhatsApp and Instagram sounds simple, but do we want three data hogs instead of one, like cutting off one of the Lernaean Hydra's heads? One idea that emerged repeatedly is slowing down "fast, free, and frictionless"; Yael Eisenstat wondered why we allow experimental technology to deploy at global scale but demand that policy be painfully perfected before it can act.

MEP Marietje Schaake (Democrats 66-NL) explained the EU's proposed Digital Markets Act, which aims to improve fairness by setting rules and responsibilities up front rather than relying on the too-slow process of punishing bad behavior after the fact. Current proposals would bar platforms from combining user data from multiple sources without permission, self-preferencing, and spying (say, Amazon exploiting marketplace sellers' data), and would require data portability and interoperability for ancillary services such as third-party payments.

The difficulty with data portability, as Ian Brown said recently, is that even services that let you download your data offer no way to use data you upload. I can't add the downloaded data from my current electric utility account to the one I switch to, or send my Twitter feed to my Facebook account. Teachout finds that interoperability isn't enough because "You still have acquire, copy, kill" and lock-in via existing contracts. Wenger argued that the real goal is not interoperability but programmability, citing open banking as a working example. That is also the open web, where a third party can write an ad blocker for my browser, but Facebook, Google, and Apple built walled gardens. As Jared Sine told this week's antitrust hearing, "They have taken the Internet and moved it into the app stores."

Real change will require all four of the levers Aral discusses in his book - money, code, norms, and laws, which Lawrence Lessig's 1999 book, Code and Other Laws of Cyberspace, called market, software architecture, norms, and laws - pulling together. The national commission on democracy and technology Aral is calling for will have to be very broadly constituted in terms of disciplines and national representation. As Safiya Noble said, diversifying the engineers in development teams is important, but not enough: we need "people who know society and the implications of technologies" at the design stage.


Illustrations: Sinan Aral, hosting the summit.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 16, 2021

Frenemies

This week, an update to the UK's contact tracing app (which, confusingly, is labeled "NHS" but is actually instead part of the private contractor-run test and trace system) was blocked by Google and Apple because it broke their terms and conditions. What the UK wanted: people who tested positive to upload their collected list of venue check-ins, now that the latest national lockdown is easing. Under Google's and Apple's conditions, to which the government had agreed: banned. Oops.

The previouslies: this time last year, it was being widely suggested that contact tracing apps could save us. In May 2020, the BMJ blog called downloading the app a "moral obligation".

That reaction was part of a battle over privacy. Step One: Western horror at the Chinese Alipay Health Code app that assigned everyone a traffic light code based on their recent movements and contacts and determined which buildings and public places they could enter - the permission-based society at a level that would surely be unacceptable in a Western democracy. Step Two: the UK, like France, designed its own app to collect users' data for centralized analysis, tracking, and tracing. Privacy advocates argued that this design violated data protection law and that public health goals could be met by less invasive means. Technical advisers warned it wouldn't work. Step Three: Google and Apple built a joint "exposure notification" platform to underpin these contact tracing apps and set the terms: no centralized data collection. Data must remain local unless the user opts to upload it. The UK and France grumpily switched when they discovered everyone else was right: their design didn't work. Later, the two companies embedded exposure notification into their operating systems so public health departments didn't have to build their own app.

Make no mistake: *contact tracing* works. It's a well-established practice in public health emergencies. But we don't know if contact tracing *apps* work, where "work" means "reduce infections" as opposed to working technically, being well-designed, or even rejecting these silly privacy considerations. Most claimed success for these apps came shortly after release and was measured in download numbers, on the basis that the apps will only work if enough people use them. The sole exception appears to be Singapore, where claimed download rates approach 60% and authorities report the app has halved the time to complete contact tracing from four days to two.

In June, Italian biologist Emanuele Rizzo warned in the British Medical Journal that the apps are poorly suited for the particular characteristics of how the coronavirus spreads and the heightened risk for older people, who are least likely to have smartphones. In October, AI researcher Allison Gardner wrote at The Conversation that the worldwide average for downloading these apps was an inadequate 20%.

The UK was slow to get its contact tracing app working, and by the time it did we were locking down for the winter. Even so, last summer most UK venues posted QR codes for visitors to scan to log their visit. If someone who was in that venue tests positive, it's reported to a database, from which your phone retrieves it and alerts you if you were there at the same time, so you can get tested and, if necessary, self-isolate.
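
In rough outline - and this is a minimal sketch with invented data structures, not the actual NHS app's code - the decentralized version of that check-in matching keeps the comparison on the handset: the phone stores its own check-in log, periodically downloads the list of venue/time windows flagged as risky, and alerts the user only when the two overlap.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CheckIn:
    venue_id: str        # the ID encoded in the venue's QR poster
    arrived: datetime
    left: datetime

@dataclass
class RiskWindow:
    venue_id: str        # published by the health authority
    start: datetime
    end: datetime

def exposure_alerts(my_checkins, risk_windows):
    """Return my check-ins that overlap a published risk window.
    The matching happens entirely on the phone; nothing is uploaded
    unless the user later chooses to share their own check-in history."""
    return [c for c in my_checkins
            for w in risk_windows
            if c.venue_id == w.venue_id and c.arrived < w.end and c.left > w.start]
```

The design choice Google and Apple enforced is visible in the last comment: the sensitive list stays local, and only the (far less revealing) list of risky venues travels over the network.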

Of course, for the last five months nothing's been open. Check-ins and contact tracing apps aren't much use when no one is going anywhere. But during the period when people tried this out, there were many reported problems, such as that the app may decide exposure has taken place when you and the infected person only overlapped briefly. It remains simpler, probably overall cheaper, and more future-proof to improve ventilation and make venues safer.

Google's and Apple's action means, I suppose, that I am supposed to be grateful, however grumpily, to Big Tech for protecting me against government intrusion. What I want, though, is to be able to trust the health authorities, so this sort of issue only arises when absolutely necessary. Depending on the vagaries of private companies' business models to protect us is not a solution.

This is a time when many are not happy with either company. Google's latest wheeze is to replace third-party cookies with Federated Learning of Cohorts (FLoC), which assigns Chrome users to categories it then uses to target ads. EFF has a new tool that shows if you've been "FLoCed" (Firefox users need not apply). Google calls this setup a privacy sandbox, and claims it will be more privacy-protective than the present all-tracking, by-everyone, all-the-time situation. EFF calls this "old tracking" versus "new tracking", and argues for a third option: *not* tracking, and letting users decide what information to share and with whom.

Apple, meanwhile, began blocking tracking via third-party cookies last year, with dramatic results, and rejects apps that aren't compliant, though some companies are finding workarounds. This year, new Apple rules requiring privacy labels that identify the categories of data apps collect have exposed the extent of data collection via Google's Chrome browser and search app.

The lesson to be drawn here is not that these companies are reinventing themselves as privacy protectors. The lesson to be drawn is that each wants to be the *only* one to invade our privacy. It's only a coincidence that the result was that they refused to accommodate government demands.


Illustrations: Empty central London in lockdown in November 2020.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 2, 2021

Medical apartheid

Ever since 1952, when Clarence Willcock took the British government to court to force the end of wartime identity cards, UK governments have repeatedly tried to bring them back, always claiming they would solve the most recent public crisis. The last effort ended in 2010 after a five-year battle. This backdrop is a key factor in the distrust that's greeting government proposals for "vaccination passports" (previously immunity passports). Yesterday, the Guardian reported that British prime minister Boris Johnson backs certificates that show whether you've been vaccinated, have had covid and recovered, or had a test. An interim report will be published on Monday; trials later this month will see attendees at football matches required to produce proof of negative lateral flow tests 24 hours before the game and on entry.

Simultaneously, England chief medical officer Chris Whitty told the Royal Society of Medicine that most experts think covid will become like the flu, a seasonal disease that must be perennially managed.

Whitty's statement is crucial because it means we cannot assume that the forthcoming proposal will be temporary. A deeply flawed measure in a crisis is dangerous; one that persists indefinitely is even more so. Particularly when, as this morning, culture secretary Oliver Dowden tries to apply spin: "This is not about a vaccine passport, this is about looking at ways of proving that you are covid secure." Rebranding as "covid certificates" changes nothing.

Privacy advocates and human rights NGOs saw this coming. In December, Privacy International warned that a data grab in the guise of immunity passports will undermine trust and confidence while they're most needed. "Until everyone has access to an effective vaccine, any system requiring a passport for entry or service will be unfair." We are a long, long way from that universal access and likely to remain so; today's vaccines will have to be updated, perhaps as soon as September. There is substantial, but not enough, parliamentary opposition.

A grassroots Labour discussion Wednesday night showed this will become yet another highly polarized debate. Opponents and proponents combine issues of freedom, safety, medical efficacy, and public health in unpredictable ways. Many wanted safety - "You have no civil liberties if you are dead," one person said; others foresaw segregation, discrimination, and exclusion; still others cited British norms in opposing making compulsory either vaccinations or carrying any sort of "papers" (including phone apps).

Aside from some specific use cases - international travel, a narrow range of jobs - vaccination passports in daily life are a bad idea medically, logistically, economically, ethically, and functionally. Proponents' concerns can be met in better - and fairer - ways.

The Independent SAGE advisory group, especially Susan Michie, has warned repeatedly that vaccination passports are not a good solution for daily life. The added pressure to accept vaccination will increase distrust, Michie says, particularly among victims of structural racism.

Instead of trying to identify which people are safe, she argues that the government should be guiding employers, businesses, schools, shops, and entertainment venues to make their premises safer - see for example the CDC's advice on ventilation and list of tools. Doing so would not only help prevent the spread of covid and keep *everyone* safe but also help prevent the spread of flu and other pathogens. Vaccination passports won't do any of that. "It again puts the burden on individuals instead of spaces," she said last night in the Labour discussion. More important, high-risk individuals and those who can't be vaccinated will be better protected by safer spaces than by documentation.

In the same discussion, Big Brother Watch's Silkie Carlo predicted that it won't make sense to have vaccination passports and then use them in only a few places. "It will be a huge infrastructure with checkpoints everywhere," she predicted, calling it "one of the civil liberties threats of all time" and "medical apartheid" and imagining two segregated lines of entry to every venue. While her vision is dramatic, parts of it don't go far enough: imagine when this all merges with systems already in place to bar access to "bad people". Carlo may sound unduly paranoid, but it's also true that for decades successive British governments at every decision point have chosen the surveillance path.

We have good reason to be suspicious of this government's motives. Throughout the last year, Johnson has been looking for a magic bullet that will fix everything. First it was contact tracing apps (failed through irrelevance), then test and trace (failing in the absence of "and isolate and support"), now vaccinations. Other than vaccinations, which have gone well because the rollout was given to the NHS, these failed high-tech approaches have handed vast sums of public money to private contractors. If by "vaccination certificates" the government means the cards the NHS gives fully-vaccinated individuals listing the shots they've had, the dates, and the manufacturer and lot number, well, fine. Those are useful for the rare situations where proof is really needed and for our own information in case of future issues; they're simple and not particularly expensive. If the government means a biometric database system that, as Michie says, individualizes the risk while relieving venues of responsibility, just no.

Illustrations: The Swiss Cheese Respiratory Virus Defence, created by virologist Ian McKay.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 19, 2021

Dystopian non-fiction

How dumb do you have to be to spend decades watching movies and reading books about science fiction dystopias with perfect surveillance and then go on and build one anyway?

*This* dumb, apparently, because that's what Shalini Kantayya discovers in her documentary Coded Bias, which premiered at the 2020 Sundance Film Festival. I had missed it until European Digital Rights (EDRi) arranged a streaming this week.

The movie deserves the attention paid to The Social Dilemma. Consider the cast Kantayya has assembled: "math babe" Cathy O'Neil, data journalism professor Meredith Broussard, sociologist Zeynep Tufekci, Big Brother Watch executive director Silkie Carlo, human rights lawyer Ravi Naik, Virginia Eubanks, futurist Amy Webb, and "code poet" Joy Buolamwini, who is the film's main protagonist and provides its storyline, such as it is. This film wastes no time on technology industry mea non-culpas, opting instead to hear from people who together have written a year's worth of reading on how modern AI disassembles people into piles of data.

The movie is framed by Buolamwini's journey, which begins in her office at MIT. At nine, she saw a presentation on TV from MIT's Media Lab, and, entranced by Cynthia Breazeal's Kismet robot, she instantly decided: she was going to be a robotics engineer and she was going to MIT.

When she eventually arrived, she says, she imagined that coding was detached from the world - until she started building the Aspire Mirror and had to get a facial detection system working. At that point, she discovered that none of the computer vision tracking worked very well...until she put on a white mask. She started examining the datasets used to train the facial algorithms and found that every system she tried showed the same results: top marks for light-skinned men, inferior results for everyone else, especially the "highly melanated".
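
The analytical move behind that finding is simple to state: instead of reporting one headline accuracy figure, score the system separately for each demographic group. A minimal, hypothetical sketch (invented data format, not Buolamwini's actual evaluation code) looks like this:

```python
from collections import defaultdict

# results: (demographic_group, predicted_label, true_label) triples, e.g.
# ("darker-skinned women", "male", "female") for a gender classifier's mistake.
def accuracy_by_group(results):
    """Report accuracy per demographic group rather than one overall number."""
    totals, correct = defaultdict(int), defaultdict(int)
    for group, predicted, actual in results:
        totals[group] += 1
        correct[group] += (predicted == actual)
    return {group: correct[group] / totals[group] for group in totals}
```

Disaggregated this way, the gaps between the best-served and worst-served groups become visible instead of being averaged away.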

Teaming up with Deborah Raji, in 2018 Buolamwini published a study (PDF) of racial and gender bias in Amazon's Rekognition system, then being trialed with law enforcement. The company's response leads to a cameo, in which Buolamwini chats with Timnit Gebru about the methods technology companies use to discredit critics. Poignantly, today's viewers know that Gebru, then still at Google, was only months away from becoming the target of exactly that behavior, fired over her own critical research on the state of AI.

Buolamwini's work leads Kantayya into an exploration of both algorithmic bias generally, and the uncontrolled spread of facial recognition in particular. For the first, Kantayya surveys scoring in recruitment, mortgage lending, and health care, and visits the history of discrimination in South Africa. Useful background is provided by O'Neil, whose Weapons of Math Destruction is a must-read on opaque scoring, and Broussard, whose Artificial Unintelligence deplores the math-based narrow conception of "intelligence" that began at Dartmouth in 1956, an arrogance she discusses with Kantayya on YouTube.

For the second, a US unit visits Brooklyn's Atlantic Plaza Towers complex, where the facial recognition access control system issues warnings for tiny infractions. A London unit films the Oxford Circus pilot of live facial recognition that led Carlo, with Naik's assistance, to issue a legal challenge in 2018. Here again the known future intervenes: after the pandemic stopped such deployments, BBW ended the challenge and shifted to campaigning for a legislative ban.

Inevitably, HAL appears to remind us of what evil computers look like, along with a red "I'm an algorithm" blob with a British female voice that tries to sound chilling.

But HAL's goals were straightforward: it wanted its humans dead. The motives behind today's algorithms are opaque. Amy Webb, whose book The Big Nine profiles the nine companies - six American, three Chinese - who are driving today's AI, highlights the comparison with China, where the government transparently tells citizens that social credit is always watching and bad behavior will attract penalties for your friends and family as well as for you personally. In the US, by contrast, everyone is being scored all the time by both government and corporations, but no one is remotely transparent about it.

For Buolamwini, the movie ends in triumph. She founds the Algorithmic Justice League and testifies in Congress, where she is quizzed by Alexandria Ocasio-Cortez (D-NY) and Jamie Raskin (D-MD), who looks shocked to learn that Facebook has patented a system for recognizing and scoring individuals in retail stores. Then she watches as facial recognition is banned in San Francisco, Somerville, Massachusetts, and Oakland, and the electronic system is removed from the Brooklyn apartment block - for now.

Earlier, however, Eubanks, author of Automating Inequality, issued a warning that seems prescient now, when the coronavirus has exposed all our inequities and social fractures. When people cite William Gibson's "The future is already here - it's just not evenly distributed", she says, they typically mean that new tools spread from rich to poor. "But what I've found is the absolute reverse, which is that the most punitive, most invasive, most surveillance-focused tools that we have, they go into poor and working communities first." Then they get ported out, if they work, to those of us with higher expectations that we have rights. By then, it may be too late to fight back.

See this movie!


Illustrations: Joy Buolamwini, in Coded Bias.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 5, 2021

Covid's children

I wonder a lot about how the baby downstairs will develop differently because of his September 2020 birth date. In his first five months, the only humans who have been in close contact with him are his two parents, a smattering of doctors and nurses, and a stray neighbor who occasionally takes him for walks. Walks, I might add, in which he never gets out of his stroller but in which he exhibits real talent for staring contests (though less for intelligent conversation). His grandparents he only knows through video calls. His parents think he's grasped that they're real, though not present, people. But it's hard to be sure.

The effects of the pandemic are likely to be clear a lot sooner for the older children and young people whose lives and education have been disrupted over the past year. This week, as part of the LSE Post-Covid World Festival, Sonia Livingstone (for whose project I wrote some book reviews a few years ago) led a panel to discuss those effects.

Few researchers in the UK - Livingstone, along with Andy Phippen, is one of the exceptions, as is, less formally, filmmaker and House of Lords member Beeban Kidron, whose 2013 film InRealLife explores teens' use of the Internet - ever bother to consult children to find out what their online experiences and concerns really are. Instead, the agenda shaped by politicians and policy makers centers on adults' fears, particularly those that can be parlayed into electoral success. The same people who fret that social media is posing entirely new problems today's adults never encountered as children refuse to find out what those problems look like to the people actually experiencing them. Worse, the focus is narrow: protecting children from pornography, grooming, and radicalization is everywhere, but protecting them from data exploitation is barely discussed. In the UK, as Jen Persson, founder of defenddigitalme, keeps reminding us, collecting children's data is endemic in education.

This was why the panel was interesting: all four speakers are involved in projects aimed at understanding and amplifying children's and young people's own concerns. From that experience, all four - Konstantinos Papachristou, the youth lead for the #CovidUnder19 project; Maya Götz, who researches children, youth, and television; Patricio Cuevas-Parra, who is part of a survey of 10,000 children and young people; and Laurie Day - highlighted similar issues of lack of access and inequality - not just to the Internet but also to vaccines and good information.

In all countries, the shift to remote learning has been abrupt, exposing infrastructure issues that were always urgent, but never quite urgent enough to fix. Götz noted that in some Asian countries and Chile she's seeing older technologies being pressed into service to remedy some of this - technologies like broadcast TV and radio; even in the UK, after the first lockdown showed how many low-income families could not afford sufficient data plans, the BBC began broadcasting curriculum-based programming.

"Going back to normal," Day said, "needs a rethink of what support is needed." Yet for some students the move to online learning has been liberating, lightening social and academic pressures and giving space to think about their values and the opportunity to be creative. We don't hear so much about that; British media focus on depression and loss.

By the time the baby downstairs reaches school age, the pandemic will be over, but its footprint will be all over how his education proceeds.

Persson, who focuses on the state's use of data in education, says that one consequence of the pandemic is that Microsoft and Google have entrenched themselves much more deeply into the UK's education infrastructure.

"With or without covid, schools are dependent on them for their core infrastructure now, and that's through platforms joining up their core personal data about students and staff - email addresses, phone numbers, names, organizational data - and joining all that up," she says. Parents are encouraged to link to their children's accounts, and there is, for the children concerned, effectively, "no privacy". The software, she adds, was really designed for business and incompletely adapted for education. For example, while there are controls schools can use for privacy protection, the defaults, as always, are towards open sharing. In her own children's school, which has 2,000 students, the software was set up so every user could see everyone else's email address.

"It's a huge contrast to [the concern about] online harms, child safety, and the protection mantra that we have to watch everything because the world is so unsafe," she says. Partly, this is also a matter of perception: policy makers tend to focus on "stranger danger" and limiting online content rather than ID theft, privacy, and how all this collected data may be used in the future. The European Digital Rights Initiative (EDRi) highlights the similar thinking behind European Commission proposals to require the platforms to scan private communications as part of combating child sexual abuse online.

All this awaits the baby downstairs. The other day, an 18-month-old girl ran up to him, entranced. Her mother pulled her back before she could touch him or the toys tied to his stroller. For now, he, like other pandemic babies, is surrounded by an invisible barrier. We won't know for several decades what the long-term effect will be.


Illustrations: Sonia Livingstone's LSE panel.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 19, 2021

Vaccine connoisseurs

This is one of those weeks when numerous stories update. Australia's dispute over sharing news has spawned deals that are bad for everyone except Facebook, Google, and Rupert Murdoch; the EU is beginning the final stages of formulating the ePrivacy Regulation; the UK awaits its adequacy decision on data protection; 3D printed guns are back; and the arrival of covid vaccines has revived the push for some form of vaccination certificate, which may (or may not) revive governments' desires for digital identities tied to each of us via biometrics and personal data.

To start with Australia: after the lower house of the Australian parliament passed the law requiring Google and Facebook to negotiate licensing fees with publishers, Facebook began blocking Australian users from sharing "news content" - and the rest of the world from sharing links to Australian publishers - without waiting for final passage. The block is as overbroad as you might expect.

Google has instead announced a three-year deal under which it will pay Rupert Murdoch's News Corporation for the right to showcase its output - which is almost universally paywalled.

Neither announcement is good news. Google's creates a damaging precedent of paying for links, and small public interest publishers don't benefit - and any publisher that does becomes even more dangerously dependent on the platforms to keep them solvent. On Twitter, Kate Crawford calls Facebook's move deplatforming at scale.

Next, as Glyn Moody helpfully explains, where GDPR protects personal data at rest, the ePrivacy Regulation covers personal data in transit. It has been pending since 2017, when the European Commission published a draft, which the European Parliament then amended. Massive amounts of lobbying and internal squabbling over the text within the Council of the EU have finally been resolved, so the three legs of this legislative stool can begin negotiations. Moody highlights two areas to watch: provisions exempting metadata from the prohibition on changing use without consent, and the rules regarding cookie walls. As negotiations proceed, however, there may be more.

As a no-longer EU member, the UK will have to actively adopt this new legislation. The UK's motivation to do so is simple: it wants - or should want - an adequacy decision. That is, for data to flow between the UK and the EU, the EU has to agree that the UK's privacy framework matches the EU's. On Tuesday, The Register reported that such a decision is imminent, a small piece of good news for British businesses in the sea of Brexit issues arising since January 1.

The original panic over 3D-printed guns was in 2013, when the US Department of Justice ordered the takedown of Defcad. In 2018, Defcad's owner, Cody Wilson, won his case against the DoJ in a settlement. At the time, 3D-printed plastic guns were too limited to worry about, and even by 2018 3D printing had failed to take off at the consumer level. This week Gizmodo reported that home-printing alarmingly functional weapons may now be genuinely possible for someone with the necessary obsession, home equipment, and technical skill.

Finally, ever since the beginning of this pandemic there has been concern that public health would become the vector for vastly expanded permanent surveillance that would be difficult to dislodge later.

The arrival of vaccinations has brought the weird new phenomenon of the vaccine connoisseur. They never heard of mRNA until a couple of months ago, but if you say you've been vaccinated they'll ask which one. And then say something like, "Oh, that's not the best one, is it?" Don't be fussy! If you're offered a vaccination, just take it. Every vaccine should help keep you alive and out of the hospital; like Willie Nelson's plane landings you can walk away from, they're *all* perfect. All will also need updates.

Israel is first up with vaccination certificates, saying that these will be issued to everyone after their second shot. The certificate will exempt them from some of the requirements for testing and isolation associated with visiting public places.

None of the problems surrounding immunity passports (as they were called last spring) has changed. We are still not sure whether the vaccines halt transmission or how long they last, and access is still enormously limited. Certificates will almost certainly be inescapable for international travel, as for other diseases like yellow fever and smallpox. For ordinary society, however, they would be profoundly discriminatory. In agreement on this: Ada Lovelace Institute, Privacy International, Liberty, Germany's ethics council. At The Lancet some researchers suggest they may be useful when we have more data, as does the Royal Society; others reject them outright.

There is an ancillary concern. Ever since identity papers were withdrawn after the end of World War II, UK governments have repeatedly tried to reintroduce ID cards. The last attempt, which ended in 2010, came close. There is therefore legitimate concern about immunity passports as ID cards, a concern not allayed by the government's policy paper on digital identities, published last week.

What we need is clarity about what problem certificates are intended to solve. Are they intended to allow people who've been vaccinated greater freedom consistent with the lower risks they face and pose? Or is the point "health theater" for businesses? We need answers.


Illustrations: International vaccination certificates (from SimonWaldherr at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 22, 2021

Surveillance without borders

This time last year, the Computers, Privacy, and Data Protection conference was talking about inevitable technology. Two thousand people from all over the world enclosed in two large but unventilated spaces arguing closely over buffets and snacks for four days! I remember occasional nods toward a shadow out there on the Asian horizon, but it was another few weeks before the cloud of dust indicating the coronavirus's gallop westward toward London became visible to the naked eye. This week marks a year since I've traveled more than ten miles from home.

The virus laughs at what we used to call "inevitable". It also laughs at what we think of as "borders".

The concept of "privacy" was always going to have to expand. Europe's General Data Protection Regulation came into force in May 2018; by CPDP 2019 the conference had already moved on to consider its limitations in a world where privacy invasion was going physical. Since then, Austrian lawyer Max Schrems has poked holes in international data transfers, police and others began rolling out automated facial recognition without the least care for public consent...and emergency measures to contain the public health crisis have overwhelmed hard-won rights.

This year two themes are emerging. First is that, as predicted, traditional ideas about consent simply do not work in a world where technology monitors and mediates our physical movements, especially because most citizens don't know to ask what the "legal basis for processing" is when their local bar demands their name and address for contact tracing and claims the would-be drinker has no discretion to refuse. Second is the need for enforcement. This is the main point Schrems has been making through his legal challenges to the Safe Harbor agreement ("Schrems I") and then to its replacement, the EU-US Privacy Shield agreement ("Schrems II"). Schrems is forcing data protection regulators to act even when they don't want to.

In his panel on data portability, Ian Brown pointed out a third problem: access to tools. Even where companies have provided the facility for downloading your data, none provide upload tools, not even archives for academic papers. You can have your data, but you can't use it anywhere. By contrast, he said, open banking is actually working well in the UK. EFF's Christoph Schmon added a fourth: the reality that it's "much easier to monetize hate speech than civil discourse online".

Artist Jonas Staal and lawyer Jan Fermon have an intriguing proposal for containing Facebook: collectivize it. In an unfortunately evidence-free mock trial, witnesses argued that it should be neither nationalized nor privately owned nor broken up, but transformed into a space owned and governed by its 2.5 billion users. Fermon found a legal basis in the right to self-determination, "the basis of all other fundamental rights". In reality, given Facebook's wide-ranging social effects, non-users, too, would have to become part-owners. Lawyers love governing things. Most people won't even read the notes from a school board meeting.

Schmon favored finding ways to make it harder to monetize polarization, chiefly through moderation. Jennifer Cobbe, in a panel on algorithm-assisted decision making, suggested stifling some types of innovation. "Government should be concerned with general welfare, public good, human rights, equality, and fairness" and adopt technology only where it supports those values. Transparency is only one part of the answer - and it must apply to all parts of systems such as those controlling whether someone stays in jail or is released on parole, not just the final decision-making bit.

But the world in which these debates are taking place is also changing, and not just because of the coronavirus. In a panel on intelligence agencies and fundamental rights, for example, MEP Sophie in't Veld (NL) pointed out the difficulties of exercising meaningful oversight when talk begins about increasing cross-border cooperation. In her view, the EU pretends "national security" is outside its interests, but 20 years of legislation offers national security as a justification for bloc-wide action. The result is to leave national authorities to make their own decisions, and "There is little incentive for national authorities to apply safeguards to citizens from other countries." Plus, lacking an EU-wide definition of "national security", member states can claim "national security" for almost any exemption. "The walls between law enforcement and the intelligence agencies are crumbling."

A day later, Petra Molnar put this a different way: "Immigration management technologies are used as an excuse to infringe on people's rights". Molnar works to highlight the use of refugees and asylum-seekers as experimental subjects for new technologies - drones, AI lie detectors, automated facial recognition; meanwhile the technologies are blurring geographical demarcations, pushing the "border" away from its physical manifestation. Conversely, current UK policy moves the "border" into schools, rental offices, and hospitals by requiring teachers, landlords, and medical personnel to check immigration status.

Edin Omanovic pointed out a contributing factor: "People are concerned about the things they use every day" - like WhatsApp - "but not bulk data interception". Politicians have more to gain by signing off on more powers than by imposing limits - but the narrowness of their definition of "security" means that despite powers, access to technology, and top-class universities, "We've had 100,000 deaths because we were unprepared for the pandemic we knew was coming and possible."


Illustrations: Sophie in't Veld (via Arnfinn Petersen at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 11, 2020

Facebook in review

Led by New York attorney general Letitia James, this week 46 US states, plus Guam and Washington, DC, and, separately, the Federal Trade Commission filed suits against Facebook alleging that it has maintained an illegal monopoly while simultaneously reducing privacy protections and services to boost its bottom line. The four missing states: Alabama, Georgia, South Carolina, and South Dakota.

As they say, we've had this date from the beginning.

It's seemed likely for months that legal action against Facebook was on the way. There were the we-mean-business Congressional hearings and the subsequent committee report, followed by the suit against Google the Department of Justice filed in October.

Facebook seems peculiarly deserving. It began in 2004 as a Harvard-only network, using its snob appeal to expand to the other Ivy League schools, then thousands of universities and high schools, and finally the general public. Mass-market adoption grew in tandem with the post-2009 explosion of smartphones. By then, Facebook had frequently tweaked its privacy settings and repeatedly annoyed users with new privacy-invasive features in the arrogant (and sadly correct) belief they'd never leave. By 2010, Zuckerberg was claiming that "privacy is no longer a social norm", adding that were he starting then he would make everything public by default, like Twitter.

It's hard to pick Facebook's creepiest moments out of so many, but here are a few: in 2011 it began auto-recognizing user photographs, in 2012 it dallied with in-network "democracy" - a forerunner of today's unsatisfactory oversight board - and in 2014 it tested emotionally manipulating its users.

In 2011, based on the rise and fall of earlier services like CompuServe, AOL, Geocities, LiveJournal, and MySpace - you can practically carbon-date people by their choice of social media - some of us wrongly surmised that perhaps Facebook had peaked. "The [online] party keeps moving" is certainly true; what was different was that Zuckerberg knew it and launched his program of aggressive and defensive acquisitions.

The 2012 $1 billion acquisition of Instagram and 2014 $19 billion purchase of WhatsApp are the heart of the suits. The lawsuits suggest that without Facebook's intervention we'd have social media successfully competing on privacy. In his summary, Matt Stoller credits this idea to Dina Srinivasan, who argued in 2019 that Facebook saw off then-dominant MySpace by presenting itself as "privacy-centered" at a time when the press was claiming that MySpace's openness made it unsafe for children. Once in pole position, Facebook began gradually pushing greater openness on its users - bait and switch, I called it in 2010.

I'm less convinced that MySpace's continued existence could have curbed Facebook's privacy invasion. In 2004, the year of Facebook's birth, Australian privacy activist Roger Clarke surveyed the earliest social networks - chiefly Plaxo - and predicted that all social networks would inevitably exploit their users. "The only logical business model is the value of consumers' data," he told me for the Independent (TXT). I think, therefore, that the privacy-destructive race to the bottom-of-the-business-model was inevitable given the US's regulatory desert. Google began heading that way soon after its 2004 IPO; by 2006 privacy advocates were already warning of its danger.

Srinivasan details Facebook's progressive privacy invasion: the co-option of millions of third-party sites, via logins and the Like button, to promote its service and collect and leverage vast amounts of personal data, while it became a vector for the unscrupulous to hack elections. This is all without considering non-US issues such as Free Basics, which has made Facebook effectively the only Internet service in parts of the world. Facebook also had Silicon Valley's venture capital ethos at its back, as well as a share structure that awards Zuckerberg full and permanent control.

In a useful paper on nascent competitors, Tim Wu and C. Scott Hemphill discuss how to spot anticompetitive acquisitions. As I recall, though, many - notably the ever-prescient Jeff Chester - protested the WhatsApp and Instagram acquisitions at the time; the EU only agreed because Facebook promised not to merge the user databases, and issued a €110 million fine when it realized the company lied. Last year Facebook announced it would merge the databases, which critics saw as a preemptive move to block a potential breakup. Allowing the mergers to go ahead seems less dumb, however, if you remember that it took until 2017 and Lina Khan to realize that the era of two guys in a garage up-ending entrenched monopolists was over.

The suits ask the court to find Facebook guilty under Section 2 of the Sherman Act (which is a felony) and Section 7 of the Clayton Act, block it from making further acquisitions valued at $10 million or above, and require it to divest or restructure illegally acquired companies or current Facebook assets or business lines. Restoring some competition to the Internet ecosystem in general and social media in particular seems within reach of this action - though there are many other cases that also need attention. It won't be enough to fix the damage to democracy and privacy, but perhaps the change in attitude it represents will ensure the next Facebook doesn't become a monster.


Illustrations: Mark Zuckerberg's empty chair at last year's Grand Committee hearing.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 4, 2020

Scraped

Somehow I had missed the hiQ Labs v. LinkedIn case until this week, when I struggled to explain on Twitter why condemning web scraping is a mistake. Over the years, many have made similar arguments to ban ordinary security tools and techniques because they may also be abused. The usual real world analogy is: we don't ban cars just because criminals can use them to escape.

The basics: hiQ, which styles itself as a "talent management company", used automated bots to scrape public LinkedIn profiles and turn the analysis into a service advising companies on what training they should invest in or which employee might be on the verge of leaving. All together now: *so* creepy! LinkedIn objected that the practice violates its terms of service and harms its business. In return, hiQ accused LinkedIn of purely anti-competitive motives, and claimed it only objected now because it was planning its own version.

LinkedIn wanted the court to rule that hiQ's scraping its profiles constitutes felony hacking under the Computer Fraud and Abuse Act (1986). Meanwhile, hiQ argued that because the profiles it scraped are public, no "hacking" was involved. EFF, along with DuckDuckGo and the Internet Archive, which both use web scraping as a basic tool, filed an amicus brief arguing correctly that web scraping is a technique in widespread use to support research, journalism, and legitimate business activities. Sure, hiQ's version is automated, but that doesn't make it different in kind.

There are two separate issues here. The first is web scraping itself, which, as EFF says, has many valid uses that don't involve social media or personal data. The TrainTimes site, for example, is vastly more accessible than the National Rail site it scrapes and re-presents. Over the last two decades, the same author, Matthew Somerville, has built numerous other such sites that avoid the heavy graphics and scripts that make so many information sites painful to use. He has indeed gotten in trouble for it sometimes; in this example, the Odeon movie theaters objected to his making movie schedules more accessible. (Query: what is anyone going to do with the Odeon movie schedule beyond choosing which ticket to buy?)
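
To make the technique concrete, here is a minimal sketch of the kind of benign scraping Somerville's sites do: fetch a public page and re-present only the useful text. It assumes Python with the requests and BeautifulSoup libraries; the URL and the "departures" table structure are invented placeholders, not any real site's markup.

    import requests
    from bs4 import BeautifulSoup

    def fetch_departures(url="https://example.org/station/PAD"):
        # Fetch the public page, identifying the bot politely.
        resp = requests.get(url, headers={"User-Agent": "timetable-demo/0.1"}, timeout=10)
        resp.raise_for_status()
        # Parse the HTML and keep only the text of each table row.
        soup = BeautifulSoup(resp.text, "html.parser")
        rows = []
        for tr in soup.select("table.departures tr"):
            cells = [td.get_text(strip=True) for td in tr.find_all("td")]
            if cells:
                rows.append(cells)
        return rows

    if __name__ == "__main__":
        for row in fetch_departures():
            print(" | ".join(row))

The same few lines, pointed at profile pages and run at scale, are essentially all that hiQ's "automated bots" amount to; the mechanism is identical, which is why the legal question has to turn on use rather than technique.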

As EFF writes in its summary of the case, web scraping has also been used by journalists to investigate racial discrimination on Airbnb and find discriminatory pricing on Amazon; in the early days of the web, civic-minded British geeks used web scraping to make information about Parliament and its debates more accessible. Web scraping should not be illegal!

However, that doesn't mean that all information that can be scraped should be scraped or that all information that can be scraped should be *legal* to scrape. Like so many other basic techniques, web scraping has both good and bad uses. This is where the tricky bit lies.

Intelligence agency personnel these days talk about OSINT - "open source intelligence". "Open source" in this context (not software!) means anything they can find and save, which includes anything posted publicly on social media. Journalists also tend to view anything posted publicly as fair game for quotation and reproduction - just look at the Guardian's live blog any day of the week. Academic ethics require greater care.

There is plenty of abuse-by-scraping. As Olivia Solon reported last year, IBM scraped Flickr users' innocently posted photographs and repurposed them into a database to train facial recognition algorithms, later used by Immigration and Customs Enforcement to identify people to deport. (In June, the protests after George Floyd's murder led IBM to pull back on selling facial recognition "for mass surveillance or racial profiling".) Clearview AI scraped billions of photographs off social media and collated them into a database service to sell to law enforcement. It's safe to say that no one posted their profile on LinkedIn with the intention of helping a third-party company get paid by their employer to spy on them.

Nonetheless, those abuse cases do not make web scraping "hacking" or a crime. They are difficult to rectify in the US because, as noted in last week's review of 30 years of data protection, the US lacks relevant privacy laws. Here in the UK, since the data Somerville was scraping was not personal, his complainants typically argued that he was violating their copyright. The hiQ case, if brought outside the US, would likely be based in data protection law.

In 2019, the Ninth Circuit ruled in favor of hiQ, saying it did not violate CFAA because LinkedIn's servers were publicly accessible. In March, LinkedIn asked the Supreme Court to review the case. SCOTUS could now decide whether scraping publicly accessible data is (or is not) a CFAA violation.

What's wrong in this picture is the complete disregard for the users in the case. As the National Review says, a ruling for hiQ could deprive users of all control over their publicly posted information. So, call a spade a spade: at its heart this case is about whether LinkedIn has an exclusive right to abuse its users' data or whether it has to share that right with any passing company with a scraping bot. The profile data hiQ scraped is public, to be sure, but to claim that opens it up for any and all uses is no more valid than claiming that because this piece is posted publicly it is not copyrighted.


Illustrations: I simply couldn't think of one.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 27, 2020

Data protection in review

"A tax on small businesses," a disgusted techie called data protection, circa 1993. The Data Protection Directive became EU law in 1995, and came into force in the UK in 1998.

The narrow data protection story of the last 25 years, like that of copyright, falls into three parts: legislation, government bypasses to facilitate trade, and enforcement. The broader story, however, includes a power struggle between citizens and both public and private sector organizations; a brewing trade war; and the difficulty of balancing conflicting human rights.

Like free software licenses, data protection laws seed themselves across the world by requiring that compliance follow the data wherever it travels. Adopting this approach therefore set the EU on a collision course with the US, where the data-driven economy was already taking shape.

Ironically, privacy law began in the US, with the Fair Credit Reporting Act (1970), which gives Americans the right to view and correct the credit files that determine their life prospects. It was joined by the Privacy Act (1974), which covers personally identifiable information held by federal agencies, and the Electronic Communications Privacy Act (1986), which restricts government wiretaps on transmitted and stored electronic data. Finally, the 1996 Health Insurance Portability and Accountability Act protects health data (with now-exploding exceptions). In other words, the US's consumer protection-based approach leaves huge unregulated swathes of the economy. The EU's approach, by contrast, grew out of the clear historical harms of the Nazis' use of IBM's tabulation software and the Stasi's endemic spying on the population, and regulates data use regardless of sector or actor, minus a few exceptions for member state national security and airline passenger data. Little surprise that the results are not compatible.

In 1999, Simon Davies saw this as impossible to solve, in Scientific American (TXT): "They still think that because they're American they can cut a deal, even though they've been told by every privacy commissioner in Europe that Safe Harbor is inadequate...They fail to understand that what has happened in Europe is a legal, constitutional thing, and they can no more cut a deal with the Europeans than the Europeans can cut a deal with your First Amendment." In 2000, he looked wrong: the compromise Safe Harbor agreement enabled EU-US data flows.

In 2008, the EU began discussing an update to encompass the vastly changed data ecosystem brought by Facebook, YouTube, and Twitter, the smartphone explosion, new types of personally identifiable information, and the rise and fall of what Andres Guadamuz last year called "peak cyber-utopianism". By early 2013, it appeared that reforms might weaken the law, not strengthen it. Then came Snowden, whose revelations reanimated privacy protection. In 2016, the upgraded General Data Protection Regulation was passed despite a massive opposing lobbying operation. It came into force in 2018, but even now many US sites still block European visitors rather than adapt because "you are very important to us".

Everyone might have been able to go on pretending the fundamental incompatibility didn't exist but for two things. The first is the 2014 European Court of Justice decision requiring Google to honor "right to be forgotten" requests (aka Costeja). Americans still see Costeja as a terrible abrogation of free speech; Europeans more often see it as a balance between conflicting rights and a curb on the power of large multinational companies to determine your life.

The second is Austrian lawyer Max Schrems. While still a student, Schrems saw that Snowden's revelations utterly up-ended the Safe Harbor agreement. He filed a legal case - and won it in 2015, just as GDPR was being finalized. The EU and US promptly negotiated a replacement, Privacy Shield. Schrems challenged again. And won again, this year. "There must be no Schrems III!", EU politicians said in September. In other words: some framework must be found to facilitate transfers that passes muster within the law. The US's approach appears to be trying to get data protection and localization laws barred via trade agreements despite domestic opposition. One of the Trump administration's first acts was to require federal agencies to exempt foreigners from Privacy Act protections.

No country is more affected by this than the UK, which as a new non-member can't trade without an adequacy decision and no longer gets the member-state exception for its surveillance regime. This dangerous high-wire moment for the UK traps it in that EU-US gap.

Last year, I started hearing complaints that "GDPR has failed". The problem, in fact, is enforcement. Schrems took action because the Irish Data Protection Regulator, in pole position because companies like Facebook have sited their European headquarters there, was failing to act. The UK's Information Commissioner's Office was under-resourced from the beginning. This month, the Open Rights Group sued the ICO to force it to act on the systemic breaches of the GDPR it acknowledged in a June 2019 report (PDF) on adtech.

Equally a problem are the emerging limitations of GDPR and consent, which are entirely unsuited for protecting privacy in the onrushing "smart" world in which you are at the mercy of others' Internet of Things. The new masses of data that our cities and infrastructure will generate will need a new approach.


Illustrations: Max Schrems in 2015.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 6, 2020

Crypto in review

By my count, this is net.wars number 990; the first one appeared on November 2, 2001. If you added in its predecessors - net.wars-the-book and its sequel, From Anarchy to Power, as well as the more direct precursors, the news analysis pieces I wrote for the Daily Telegraph between 1997 and early 2001 - you'd get a different number I don't know how to calculate. Therefore: this is net.wars #990, and the run-up to 1,000 seems a good moment to review some durable themes of the last 20 years via what we wrote at the time.

net.wars #1 has, sadly, barely aged; it could almost be published today unchanged. It was a ticked-off response to former Home Secretary Jack Straw, who weeks after the 9/11 attacks told Britain's radio audience that the people who had opposed key escrow were now realizing they'd been naive. We were not! The issue Straw was talking about was the use of strong cryptography, and "key escrow" was the rejected plan to require each individual to deposit a copy of their cryptographic key with a trusted third party. "Trusted", on its surface, meant someone *we* trusted to guard our privacy; in subtext it meant someone the government trusted to disclose the key when ordered to do so - the digital equivalent of being required to leave a copy of the key to your house with the local police in case they wanted to investigate you. The last half of the 1990s saw an extended public debate that concluded with key escrow being dropped for the final version of the Regulation of Investigatory Powers Act (2000) in favor of requiring individuals to produce cleartext when law enforcement requires it. A 2014 piece for IEEE Security & Privacy explains RIPA and its successors and the communications surveillance framework they've created.

With RIPA's passage, a lot of us thought the matter was settled. We were so, so wrong. It did go quiet for a decade. Surveillance-related public controversy appeared to shift, first to data retention and then to ID cards, which were proposed soon after the 2005 attacks on London's tube and finally canned in 2010 when the incoming coalition government found a note from the outgoing chief secretary to the Treasury: "There's no money".

As the world discovered in 2013, when Edward Snowden dropped his revelations of government spying, the security services had taken the crypto debate into their own hands, undermining standards and making backroom access deals. The Internet community reacted quickly, first with advice and then with technical remediation.

In a sense, though, the joke was on us. For many netheads, crypto was a cause in the 1990s; the standard advice was that we should all encrypt all our email so the important stuff wouldn't stand out. To make that a reality, however, crypto software had to be frictionless to use - and the developers of the day were never interested enough in usability to make it so. In 2011, after I was asked to write an instruction manual for installing PGP (or GPG), the lack of usability was maddening enough for me to write: "There are so many details you can get wrong to mess the whole thing up that if this stuff were a form of contraception desperate parents would be giving babies away on street corners."
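
For readers who never tried it, a sketch of the ceremony involved gives the flavor. This uses the python-gnupg wrapper over a local GnuPG install; the names, passphrases, and throwaway keyring directory are placeholders, and GnuPG 2.1+ may additionally need loopback pinentry configured before unattended key generation works.

    import gnupg

    gpg = gnupg.GPG(gnupghome="/tmp/demo-keyring")  # throwaway keyring for the demo

    # Step 1: each correspondent must first generate a key pair...
    key = gpg.gen_key(gpg.gen_key_input(
        name_email="alice@example.com",
        passphrase="correct horse battery staple",
        key_type="RSA",
        key_length=2048,
    ))

    # Step 2: ...export the public half and somehow get it to the other side...
    public_key = gpg.export_keys(key.fingerprint)
    gpg.import_keys(public_key)  # both sides collapsed into one keyring here

    # Step 3: ...and only then encrypt to the recipient and decrypt with the passphrase.
    encrypted = gpg.encrypt("meet at noon", key.fingerprint, always_trust=True)
    decrypted = gpg.decrypt(str(encrypted), passphrase="correct horse battery staple")
    print(decrypted.data.decode())

Even this toy version skips fingerprint verification, key distribution, expiry, and revocation - every one of them another place to mess the whole thing up.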

The only really successful crypto at that point was in backend protocols - SSL and its successor TLS, which secure ecommerce transactions and web connections (the HTTPS in the browser's address bar) and other communications in transit - and the encryption built into mobile phone standards. Much has changed since, most notably Facebook's and Apple's decisions to protect user messages and data, at a stroke turning crypto on for billions of users. The result, as Ross Anderson predicted in 2018, was to shift the focus of governments' demands for access toward hacking devices rather than cracking individual messages.

The arguments have not changed in all those years; they were helpfully collated by a group of senior security experts in 2015 in the report Keys Under Doormats (PDF). Encryption is mathematics; you cannot create a hole that only "good guys" can use. Everyone wants uncrackable encryption for themselves - but to be able to penetrate everyone else's. That scenario is no more possible than the suggestion some of Donald Trump's team are making that the same votes that are electing Republican senators and Congresspeople are not legally valid when applied to the presidency.

Nonetheless, we've heard repeated calls from law enforcement for breakable encryption: in 2015, 2017, and, most recently, six weeks ago. In between, while complaining that communications were going dark, in 2016 the FBI tried to force Apple to crack its own phones to enable an investigation. When the FBI found someone to crack it to order, Apple turned on end-to-end encryption.

I no longer believe that this dispute can be settled. Because it is built on logic proofs, mathematics will always be hard, non-negotiable, and unyielding, and because of their culture and responsibilities security services and law enforcement will always want more access. For individuals, before you adopt security precautions, think through your threat model and remember that most attacks will target the endpoints, where cleartext is inevitable. For nations, remember whatever holes you poke in others' security will be driven through in your own.


Illustrations: The late Caspar Bowden (1961-2015), who did so much to improve and explain surveillance policy in general and crypto policy in particular (via rama at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 21, 2020

The end of choice

At the Congressional hearings a few weeks ago, all four CEOs who appeared - Mark Zuckerberg (Facebook), Jeff Bezos (Amazon), Sundar Pichai (Google), and Tim Cook (Apple) - said essentially the same thing in their opening statements: they have lots of competitors, they have enabled millions of people to build small businesses on their platforms, and they do not have monopoly power. The first of these is partly true, the second is true, and the third...well, it depends which country you're talking about, how you look at it, and what you think they're competing for. In some countries outside the US, for example, Facebook *is* the Internet because of its Free Basics program.

In the weeks since: Google still intends to buy Fitbit, which for $2.1 billion would give it access to a huge pile of health-data-that's-not-categorized-as-health data; both the US and the EU are investigating.

In California, an appeals court has found that Amazon can be liable for defective products sold by third-party sellers.

Meanwhile, Apple, which this week became the first company in history to hit a $2 trillion market cap, deleted Epic's hugely popular game Fortnite from the App Store because its latest version breaks Apple's rules by allowing players to bypass the Apple payment system (and 30% commission) to pay Epic directly for in-game purchases. In response, Epic has filed suit - and, writes Matt Stoller, if a company with Epic's clout can't force Apple to negotiate terms, who can? Stoller describes the Apple-Epic suit as certainly about money but even more about "the right way to run an economy". Stoller goes on to find this thread running through other current disputes, and believes this kind of debate leads to real change.

At Stratechery Ben Thompson argues that the Democrats didn't prove their case. Most interesting of the responses to the hearings, though, is an essay by Benedict Evans, who argues that breaking up the platforms will achieve nothing. Instead, he says, citing relevant efforts by the EU and UK competition authorities, better to dig into how the platforms operate and write rules to limit the potential for abuse. I like this idea, in part because it is genuinely difficult to see how break-ups would work. However, the key issue is enforcement; the EU made not merging databases a condition of Facebook's acquisition of WhatsApp - and three years later Facebook decided to do it anyway. The resulting fine of €110 million was less than 1% of the $19 billion purchase price.

In 1998, when the Evil Borg of Tech was Microsoft, it, too, was the subject of antitrust actions. Echoing the 1984 breakup of AT&T, people speculated about creating "Baby Bills", either by splitting the company between operating systems and productivity software or by splitting it into clones and letting them compete with each other. Instead, in 2004 the EU ordered Microsoft to unbundle its media player and, in 2009, Internet Explorer to avoid new fines. The company changed, but so did the world around it: the web, online services, free software, smartphones, and social media all made Microsoft less significant. Since 2010, the landscape has changed again. As the legal scholar Lina Khan wrote in 2017, two guys in a garage can no longer knock off the current crop by creating the next new big technology.

Today's expanding hybrid cyber-physical systems will entrench choices none of us made into infrastructure none of us can avoid. In 2017, for example, San Diego began installing "smart" streetlights intended to do all sorts of good things: drive down energy costs, monitor air pollution, point out empty parking spaces, and so on. The city also thought it might derive some extra income from allowing third parties to run apps on its streetlight network. Instead, as Tekla S. Perry reported at IEEE Spectrum in January, to date the system's sole use has been to provide video footage to law enforcement, which has used it to solve serious crimes but also to investigate vandalism and illegal dumping.

In the UK, private developers and police have been rolling out automated facial recognition without notifying the public; this week, in a case brought by Liberty, the UK Court of Appeal ruled that its use breaches privacy rights and data protection and equality laws. This morning, I see that, undeterred, Lincolnshire Police will trial a facial recognition system that is supposed to be able to detect people's moods.

The issue of monopoly power is important. But even if we find a way to ensure fair competition we won't have solved a bigger problem that is taking shape: individuals increasingly have no choice about whether to participate in the world these companies are building. For decades we have had no choice about being credit-scored. Three years ago, despite the fatuous comments of senior politicians, it was obvious that the only people who can opt out of using the Internet are those who are economically inactive or highly privileged; last year journalist Kashmir Hill proved the difficulty of doing without GAFA. The pandemic response is making opting out either antisocial, a health risk, or both. And increasingly, going out of your house means being captured on video and analyzed whether you like it or not. No amount of controlling individual technology companies will solve this loss of agency. That is up to us.

Illustrations: Orwell's house at 22 Portobello Road, London, complete with CCTV camera.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 26, 2020

Mysticism: curmudgeon

"Not voting, or not for us?" the energetic doorstep canvasser asked when I started closing the door as soon as I saw her last November. "Neither," I said. "I just don't want to have the conversation." She nodded and moved on. That's the only canvasser I've seen in years. Either they have me written down as a pointless curmudgeon or they (like so many others) don't notice my very small street.

One of the open questions of the three years since Carole Cadwalladr broke the Cambridge Analytica story is how much impact data profiling had on the 2016 EU referendum vote and US presidential election. We know that thousands of ads were viewed millions of times and aimed at promoting division and that they were precisely targeted. But did they make the crucial difference?

We'll never really know. For its new report, Who Do They Think We Are?, the Open Rights Group set out to explore a piece of this question by establishing what data the British political parties hold on UK voters and where they get it. This week, Pascal Crowe, who leads the data and democracy project, presented the results to date.

You can still participate via tools to facilitate subject access requests and analyze the results. The report is based on the results of SARs submitted by 496 self-selected people, 344 of whom opted into sharing their results with ORG. The ability to do this derives from changes brought in by the General Data Protection Regulation, which eliminated the fees, shrank the response time to 30 days, removed the "in writing" requirement, and widened the range of information organizations were required to supply.

ORG's main findings from the three parties from which it received significant results:

- Labour has compiled up to 100 pages of data per individual, broken down into over 80 categories from sources including commercial suppliers, the electoral register, data calculated in-house, and the subjects themselves. The data included estimates of how long someone had lived at their address, their income, number of children, and scores on issues such as staying in the EU, supporting the Scottish National Party, and switching to vote for another party. Even though participants submitted identification along with their request, they all were asked again for further documentation. None received a response within the statutory time limit.

- The Lib Dems referred ORG to their privacy policy for details of their sources; the data was predominantly from the electoral rolls and includes fields indicating the estimated number of different families in a home, the likelihood that they favored remaining in the EU, or were a "soft Tory". The Lib Dems outsource some of their processing to CACI.

- The Conservatives also use the electoral rolls and buy data from Experian, but outsource a lot of profiling to the political consultancy Hanbury Strategy. Their profiles include estimates of how long someone has lived at their current address, number of children, age, employment status, income, educational level, preferred newspaper, and first language. Plus "mysticism", an attempt to guess the individual's religion.

There are three separate issues here. The first is whether the political parties have the legal right to engage in this extensive political profiling. The second is whether voters find the practice acceptable or disquieting. The third is the one we began with: does it work to deliver election results?

Regarding the first, there's no question that these profiles contain personal and sensitive data. ORG is doubtful about the parties' claim that "democratic engagement" provides a legal basis, and recommends three remedies: the Information Commissioner's Office should provide guidance and enforcement; the UK should implement the collective redress provision in GDPR that would allow groups like ORG to represent the interests of an ill-informed public; and the political parties should move to a consent-based opt-in model.

More interesting, ORG found that people simply did not recognize themselves in the profiles the parties collected, which were full of errors - even information as basic as gender and age. Under data protection law, correcting such errors is a fundamental right, but the bigger question is how all this data is helping the parties if it's so badly wrong (and whether we should be more scared if it were accurate). For this reason, Crowe suggested the parties would be better served by returning to the traditional method of knocking on every door, not just the doors of those the parties think already agree with them. The data they collected in such an exercise would be right - and consent would be unambiguous. My canvasser, even after five seconds, knows more about me than a pile of data does.

For the third question, this future was predicted: in 2011, Jeff Chester worried greatly about the potential of profiling to enable political manipulation. Even before that, it was the long-running theme inside the TV series Mad Men, which pits advertising as persuasion and emotional engagement (the Don Draper or knocking-on-doors approach) against advertising as a numbers game in which you just need media space targeted at exactly the right selection of buyers (the Harry Crane and Facebook/Google approach). Draper, who ruled the TV show's 1960s, has lost ground to the numbers guys ever since, culminating in Facebook, which allows the most precise audience targeting we've ever known. Today, he'd be 94 and struggling to convince 20-somethings addicted to data-wrangling that he still knows how to sell things.


Illustrations: Carole Cadwalladr (via MollyMEP at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 22, 2020

The pod exclusion

This week it became plain that another bit of the Internet is moving toward the kind of commercialization and control the Internet was supposed to make difficult in the first place: podcasts. The announcement that one of the two most popular podcasts, the Joe Rogan Experience, will move both new episodes and its 11-year back catalogue to Spotify exclusively in a $100 million multiyear deal is clearly a step change. Spotify has also been buying up podcast networks, and at the Verge, Ashley Carman suggests the podcast world will bifurcate into twin ecosystems, Spotify versus Everyone Else.

Like a few hundred million other people, I am an occasional Rogan listener, my interest piqued by a web forum mention of his interview with Jeff Novitzky, the investigator in the BALCO doping scandal. Other worth-the-time interviews from his prolific output include Lawrence Lessig, epidemiologist Michael Osterholm (particularly valuable because of its early March timing), Andrew Yang, and Bernie Sanders. Parts of Twitter despise him; Rogan certainly likes to book people (usually, but not always, men - for example Roseanne Barr) who are being pilloried in the news and jointly chew over their situation. Even his highest-profile interviewees rarely find, anywhere else, the two to three hours Rogan spends letting them talk quietly about their thinking. He draws them out by not challenging them much, and his predilection for conspiracy theories and interest in unproven ideas about nutrition make it advisable to be selective and look for countervailing critiques.

It's about 20 years since I first read about Dave Winer's early experiments in "audio blogging", renamed "podcast" after the 2001 release of the iPod eclipsed all previously existing MP3 players. The earliest podcasts tended to be the typical early-stage is-this-thing-on? that leads the unimaginative to dismiss the potential. But people with skills honed in radio were obviously going to do better, and within a few years (to take one niche example) the skeptical world was seeing weekly podcasts like Skepchick (beginning 2005) and The Pod Delusion (2009-2014). By 2014, podcast networks were forming, and an estimated 20% of Americans were listening to podcasts at least once a month.

That era's podcasts, although high-quality, were - and in some cases still are - produced by people seeking to educate or promote a cause, and were not generally money-making enterprises in their own right. The change seems to have begun around 2010, as the accelerating rise of smartphones made podcasts as accessible as radio for mobile listening. I didn't notice until late 2016, when the veteran screenwriter and former radio announcer and DJ Ken Levine announced on his daily 11-year-old blog that he was starting up Hollywood & Levine and I discovered the ongoing influx of professional comedians, actors, and journalists into podcasting. Notably, they all carried ads for the same companies - at the minimum, SquareSpace and Blue Apron. Like old-time radio, these minimal-production ads were read by the host, sometimes making the whole affair feel uncomfortably fake. Per the Wall Street Journal, US advertising revenue from podcasting was $678.7 million last year, up 42% over 2018.

No wonder advertisers like podcasts: users can block ads on a website or read blog postings via RSS, but no matter how you listen to a podcast the ads remain in place, and if you, like most people, listen to podcasts (like radio) when your hands are occupied, you can't easily skip past them. For professional communicators, podcasts therefore provide direct access to revenues that blogging had begun to offer before it was subsumed by social media and targeted advertising.

The Rogan deal seems a watershed moment that will take all this to a new level. The key element really isn't the money, as impressive as it sounds at first glance; it's the exclusive licensing. Rogan built his massive audience by publishing his podcast in both video and audio formats widely on multiple platforms, primarily his own websites and YouTube; go to any streaming site and you're likely to find it listed. Now, his audience is big enough that Spotify apparently thinks that paying for exclusivity will net the company new subscribers. If you prefer downloads to streaming, however, you'll need a premium subscription. Rogan himself apparently thinks he will lose no control over his show; he distrusts YouTube's censorship.

At his blog on corporate competition, Matt Stoller proclaims that the Rogan deal means the death of independent podcasting. While I agree that podcasts circa 2017-2020 are in a state similar to the web in the 2000s, I don't agree this means the death of all independent podcasting - but it will be much harder for independent creators to find audiences and revenues as Spotify becomes the primary gatekeeper. This is what happened with blogs between 2008 and 2015 as social media took over.

Both Carman's and Stoller's predictions are grim: that podcasts will go the way of today's web and become a vector for data collection and targeted advertising. Carman, however, imagines some survival for a privacy-protecting, open ecosystem of podcasts. I want to believe this. But, like blogging now, that ecosystem will likely have to find a new business model.


Illustrations: 1930s vacuum tube radio (via Joe Haupte).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 15, 2020

Quincunx

In the last few weeks, unlike any other period in the 965 (!) previous weeks of net.wars columns, there were *five* pieces of (relatively) good news in the (relatively) restricted domain of computers, freedom, and privacy.

One: Google sibling Sidewalk Labs has pulled out of the development it had planned with Waterfront Toronto. This project has been contentious ever since the contract was signed in 2017 to turn a 12-acre section of Toronto's waterfront into a data-driven, sensor-laden futuristic city. In 2018, leading Canadian privacy pioneer Ann Cavoukian quit the project after Sidewalk Labs admitted that, instead of ensuring the data it collected wouldn't be identifiable, it would grant third parties access to it. At a panel on smart city governance at Computers, Privacy, and Data Protection 2019, David Murakami Wood gave the local back story (go to 43:30) on the public consultations and the hubris on display. Now, blaming the pandemic-related economic conditions, Sidewalk Labs has abandoned the plan altogether; its public opponents believe the scheme was never really viable in the first place. This is good news, because although technology can help with some of urban centers' many problems, it should always be in the service of the public, not an opportunity for a private company to seize control.

Two: The Internet Corporation for Assigned Names and Numbers has rejected the Internet Society's proposal to sell PIR, the owner of the .org generic top-level domain, to the newly created private equity firm Ethos Capital, Timothy B. Lee reports at Ars Technica. Among its concerns, ICANN cited the $360 million in debt that PIR would have been required to take on, Ethos' lack of qualifications to run such a large gTLD, and the lack of transparency around the whole thing. The decision follows an epistolary intervention by California's Attorney General, who warned ICANN that the deal "puts profit above the public interest" and that ICANN was "abandoning its core duty to protect the public interest". As the overseer of both ICANN (a California non-profit) and the sale, the AG was in a position to make its opinion hurt. At the time the sale was announced, the Internet Society claimed there were other suitors. Perhaps now we'll find out who those were.

Three: The textbook publishers Cengage and McGraw-Hill have abandoned their plan to merge, saying that antitrust enforcers' requirements that they divest their overlapping businesses made the merger uneconomical. The plan had attracted pushback from students, consumer groups, libraries, universities, and bookstores, as well as lawmakers and antitrust authorities.

Four: Following a similar ruling from the UK Intellectual Property Office, the US Patent and Trademark Office has rejected two patents listing the Dabus AI system as the inventor. The patent offices argue that innovations must be attributed to humans in order to avoid the complications that would arise from recognizing corporations as inventors. There's been enough of a surge in such applications that the World Intellectual Property Organization held a public consultation on this issue that closed in February. Here again my inner biological supremacist asserts itself: I'd argue that the credit for anything an AI creates belongs with the people who built the AI. It's humans all the way down.

Five: The US Supreme Court has narrowly upheld the right to freely share the official legal code of the state of Georgia. Carl Malamud, who's been liberating it-ought-to-be-public data for decades - he was the one who first got Securities and Exchange Commission company reports online in the 1990s, and on and on - had published the Official Code of Georgia Annotated. The annotations in question, which include summaries of judicial opinions, citations, and other information about the law, are produced by LexisNexis under contract to the state of Georgia. No one claimed the law itself could be copyrighted, but the state argued it owned copyright in the annotations, with LexisNexis as its contracted commercial publisher. The state makes no other official version of its code available, meaning that someone consulting the non-annotated free version LexisNexis does make available would be unaware of later court decisions rejecting parts of some of the laws the legislature passed. So Malamud paid the hundreds of dollars to buy a full copy of the official annotated version, and published it in full on his website for free access. The state sued. Public.Resource lost in the lower courts but won on appeal - and, in a risky move, urged the Supreme Court to take the case and set the precedent. The vote went five to four. The impact will be substantial. Twenty-two other states publish their legal code under similar arrangements with LexisNexis. They will now have to rethink.

All these developments offer wins for the public in one way or another. None should be cause for complacence. Sidewalk Labs and other "surveillance city" purveyors will try again elsewhere with less well-developed privacy standards - and cities still have huge problems to solve. The future of .org, the online home for the world's non-profits and NGOs, is still uncertain. Textbook publishing is still disturbingly consolidated. The owners of AIs will go on seeking ways to own their output. And ensuring that copyright does not impede access to the law that governs those 23 American states does not make those laws any more just. But, for a brief moment, it's good.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


May 1, 2020

Appified

Around 2010, when smartphones took off (Apple's iPhone user base grew from 8 million in 2009 to 100 million in early 2011), "There's an app for that" was a joke widely acknowledged as true. Faced with a pandemic, many countries are looking to develop apps that might offer shortcuts to reaching some variant of "old normal". The UK is no exception, and much of this week has been filled with debate about the nascent contact tracing app being developed by the National Health Service's digital arm, NHSx. The logic is simple: since John Snow investigated cholera in 1854, contact tracing has remained slow, labor-intensive, and dependent on infected individuals' ability to remember all their contacts. With a contagious virus that spreads promiscuously to strangers who happen to share your space for a time, individual memory isn't much help. Surely we can do better. We have technology!

In 2011, Jon Crowcroft and Eiko Yoneki had that same thought. Their Fluphone proved the concept, even helping identify asymptomatic superspreaders through the social graph of contacts developing the illness.

In March, China's Alipay Health got our attention. This all-seeing, all-knowing, data-mining, risk score-outputting app whose green, yellow, and red QR codes are inspected by police at Chinese metro stations, workplaces, and other public areas seeks to control the virus's movements by controlling people's access. The widespread Western reaction, to a first approximation: "Ugh!" We are increasingly likely to end up with something similar, but with very different enforcement and a layer of "democratic voluntary" - *sort* of China, but with plausible deniability.

Or we may not. This is a fluid situation!

This week has been filled with debate about why the UK's National Health Service's digital arm (NHSx) is rolling its own app when Google and Apple are collaborating on a native contact-tracing platform. Italy and Spain have decided to use it; Germany, which was planning to build its own app, pivoted abruptly, and Australia and Singapore (whose open source app, TraceTogether, was finding some international adoption) are switching. France balked, calling Apple "uncooperative".

France wants a centralized system, in which matching exposure notifications is performed on a government-owned central server. That means trusting the government to protect it adequately and not start saying, "Oooh, data, we could do stuff with that!" In a decentralized system, the contact matching is performed on the device itself, with the results released to health officials if the user decides to do so. Apple and Google are refusing to support centralized systems, largely because in many of the countries where iOS and Android phones are sold it poses significant dangers for the population. Essentially, the centralized ones ask you for a lot more trust in your government.
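
To make the distinction concrete, here is a minimal sketch of the decentralized idea - not the actual NHSx or Apple/Google protocol; the function names and identifier sizes are hypothetical:

```python
# Hypothetical sketch of decentralized exposure matching.
import secrets

def new_rolling_id() -> bytes:
    """Each phone broadcasts short-lived random identifiers over Bluetooth."""
    return secrets.token_bytes(16)

def check_exposure(ids_heard, infected_ids) -> bool:
    """On-device matching: the phone compares identifiers it has overheard
    against the list uploaded by people who tested positive and chose to
    share theirs. No central server ever learns who was near whom."""
    return any(identifier in ids_heard for identifier in infected_ids)

# Usage: phone A overheard phone B's identifier; B later reports infection.
b_id = new_rolling_id()          # identifier phone B was broadcasting
heard_by_a = {b_id}              # phone A overheard it while nearby
print(check_exposure(heard_by_a, [b_id]))  # True: match found on-device
```

In a centralized design, by contrast, every phone's overheard identifiers are uploaded and the equivalent of this matching runs on the government's server - which is exactly the extra trust being argued over.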

All this led to Parliament's Human Rights Committee, which spent the week holding hearings on the human rights implications of contact tracing apps. (See Michael Veale's and Orla Lynskey's written evidence and oral testimony.) In its report, the committee concluded that the level of data being collected isn't justifiable without clear efficacy and benefits; rights-protecting legislation is needed (helpfully, Lilian Edwards has spearheaded an effort to produce model safeguarding legislation); an independent oversight body is needed, along with a Digital Contact Tracing Human Rights Commissioner; the app's efficacy and data security and privacy should be reviewed every 21 days; and the government and health authorities need to embrace transparency. Elsewhere, Marion Oswald writes that trust is essential, and the proposals have yet to earn it.

The specific rights discussion has been accompanied by broader doubts about the extent to which any app can be effective at contact tracing and the other flaws that may arise. As Ross Anderson writes, there remain many questions about practical applications in the real world. In recent blog postings, Crowcroft mulls modern contact tracing apps based on what they learned from Fluphone.

The practical concerns are even greater when you look at Ashkan Soltani's Twitter feed, in which he's turning his honed hacker sensibilities on these apps, making it clear that there are many more ways for these apps to fail than we've yet recognized. The Australian app, for example, may interfere with Bluetooth-connected medical devices such as glucose monitors. Drug interactions matter; if apps are now medical devices, then their interactions must be studied, too. Soltani also raises the possibility of using these apps for voter suppression. The hundreds of millions of downloads necessary to make these apps work mean even small flaws will affect large numbers of people.

All of these are reasons why Apple and Google are going to wind up in charge of the technology. Even the UK is now investigating switching. Fixing one platform is a lot easier than debugging hundreds, for example, and interoperability should aid widespread use, especially when international travel resumes - currently irrelevant, but still on people's minds. In this case, Apple's and Google's technology, like the Internet itself originally, is a vector for spreading the privacy and human rights values embedded in its design, and countries are changing plans to accept it - one more extraordinary moment among so many.

Illustrations: Alipay Health Code in action (press photo).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 12, 2020

Privacy matters

Sometime last week, Laurie Garrett, the Pulitzer Prize-winning author of The Coming Plague, proposed a thought experiment to her interviewer on MSNBC. She had been describing the lockdown procedures in place in China, and mulling how much more limited the options available to the US are for mitigating the spread. Imagine, she said (more or less), the police out on the interstate pulling over a truck driver "with his gun rack" and demanding a swab, running a test, and then and there ordering the driver to abandon the truck and putting him in isolation.

Um...even without the gun rack detail...

The 1980s AIDS crisis may have been the first time my generation became aware of the tension between privacy and epidemiology. Understanding what was causing the then-unknown "gay cancer" involved tracing contacts, asking intimate questions, and, once it was better understood, telling patients to contact their former and current sexual partners. At a time when many gay men were still closeted, this often meant painful conversations with wives as well as ex-lovers. (Cue a well-known joke from 1983: "What's the hardest part of having AIDS? Trying to convince your wife you're Haitian.")

The descriptions emerging of how China is working to contain the virus indicate a level of surveillance that - for now - is still unthinkable in the West. In a Hangzhou project, for example, citizens are required to install the Alipay Health Code app on their phones that assigns them a traffic light code based on their recent contacts and movements - which in turn determines which public and private spaces they're allowed to enter. Paul Mozur, who co-wrote that piece for the New York Times with Raymond Zhong and Aaron Krolik, has posted on Twitter video clips of how this works on the ground, while Ryutaro Uchiyama marvels at Singapore's command and open publication of highly detailed data. This is a level of control that severely frightened people, even in the West, might accept temporarily or in specific circumstances - we do, after all, accept being data-scanned and physically scanned as part of the price of flying. I have no difficulty imagining we might accept barriers and screening before entering nursing homes or hospital wards, but under what conditions would the citizens of democratic societies accept being stopped randomly on the street and having their phones scanned for location and personal contact histories?

China has automated just such a system. Quite reasonably, at the Guardian Lily Kuo wonders if the system will be made permanent, essentially hijacking this virus outbreak in order to implement a much deeper system of social control than existed before. Along with all the other risks of this outbreak - deaths, widespread illness, overwhelmed hospitals and medical staff, widespread economic damage, and the mental and emotional stress of isolation, loss, and lockdown - there is a genuine risk that "the new normal" that emerges post-crisis will have vastly more surveillance embedded in it.

Not everyone may think this is bad. On Twitter, Stewart Baker, whose long-held opposition to "warrant-proof" encryption we noted last week, suggested it was time for him to revive his "privacy kills" series. What set him off was a New York Times piece about a Washington-based lab that was not allowed to test swabs it had collected from flu patients for coronavirus, on the basis that the patients would have to give consent for the change of use. Yes, the constraint sounds stupid and, given the situation, was clearly dangerous. But it would be more reasonable to say that either *this* interpretation or *this* set of rules needs to be changed than to conclude unilaterally that "privacy is bad". Making an exemption for epidemics and public health emergencies is a pretty easy fix that doesn't require up-ending all patient confidentiality on a permanent basis. The populations of even the most democratic, individualistic countries are capable of understanding the temporary need for extreme measures in a crisis. Even the famously national ID-shy UK accepted identity papers during wartime (and then rejected them after the war ended (PDF)).

The irony is that lack of privacy kills, too. At The Atlantic, Zeynep Tufekci argues that extreme surveillance and suppression of freedom of expression paradoxically results in what she calls "authoritarian blindness": a system designed to suppress information can't find out what's really going on. At The Bulwark, Robert Tracinski applies Tufekci's analysis to Donald Trump's habit of labeling anything he doesn't like "fake news" and blaming any events he doesn't like on the "deep state" and concludes that this, too, engenders widespread and dangerous distrust. It's just as hard for a government to know what's really happening when the leader doesn't want to know as when the leader doesn't want anyone *else* to know.

At this point in most countries it's early stages, and as both the virus and fear of it spread, people will be willing to consent to any measure that they believe will keep them and their loved ones safe. But, as Access Now agrees, there will come a day when this is past and we begin again to think about other issues. When that day comes, it will be important to remember that privacy is one of the tools needed to protect public health.


Illustrations: Alipay Health Code in action (press photo).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 6, 2020

Transitive rage

"Something has changed," a privacy campaigner friend commented last fall, observing that it had become noticeably harder to get politicians to understand and accept the reasons why strong encryption is a necessary technology to protect privacy, security, and, more generally, freedom. This particular fight had been going on since the 1990s, but some political balance had shifted. Mathematical reality of course remains the same. Except in Australia.

At the end of January, Bloomberg published a leaked draft of the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act (EARN IT), backed by US Senators Lindsey Graham (R-SC) and Richard Blumenthal (D-CT). In its analysis the Center for Democracy and Technology finds the bill authorizes a new government commission, led by the US attorney general, to regulate online speech and, potentially, ban end-to-end encryption. At Lawfare, Stewart Baker, a veteran opponent of strong cryptography, dissents, seeing the bill as combating child exploitation by weakening the legal liability protection afforded by Section 230. Could the attorney general mandate that encryption never qualifies as "best practice"? Yes, even Baker admits, but he still thinks the concerns voiced by CDT and EFF are overblown.

In our real present, our actual attorney general, William Barr, believes "warrant-proof encryption" is dangerous. His office is actively campaigning in favor of exactly the outcome CDT and EFF fear.

Last fall, my friend connected the "change" to recent press coverage of the online spread of child abuse imagery. Several - such as Michael H. Keller and Gabriel J.X. Dance's November story - specifically connected encryption to child exploitation, complaining that Internet companies fail to use existing tools, and that Facebook's plans to encrypt Messenger, "the main source of the imagery", will "vastly limit detection".

What has definitely changed is *how* encryption will be weakened. The 1990s idea was key escrow, a scheme under which individuals using encryption software would deposit copies of their private keys with a trusted third party. After years of opposition, the rise of ecommerce and its concomitant need to secure in-transit financial details eventually led the UK government to drop key escrow before the passage of the Regulation of Investigatory Powers Act (2000), which closed that chapter of the crypto debates. RIPA and its current successor, the Investigatory Powers Act (2016), require individuals to decrypt information or disclose keys to government representatives. There have been three prosecutions.

In 2013, we learned from Edward Snowden's revelations that the security services had not accepted defeat but had gone dark, deliberately weakening standards. The result: the Internet engineering community began the work of hardening the Internet as much as they could.

In those intervening years, though, outside of a few very limited cases - SSL, used to secure web transactions - very few individuals actually used encryption. Email and messaging remained largely open. The hardening exercise Snowden set off eventually included companies like Facebook, which turned on end-to-end encryption for all of WhatsApp in 2016, overnight turning 1 billion people into crypto users and making real the long-ago dream of the crypto nerds of being lost in the noise. If 1 billion people use messaging and only a few hundred use encryption, the encryption itself is a flag that draws attention. If 1 billion people use encrypted messaging, those few hundred are indistinguishable.

In June 2018, at the 20th birthday of the Foundation for Information Policy Research, Ross Anderson predicted that the battle over encryption would move to device hacking. The reasoning is simple: if they can't read the data in transit because of end-to-end encryption, they will work to access it at the point of consumption, since it will be cleartext at that point. Anderson is likely still to be right - the IPA includes provisions allowing the security services to engage in "bulk equipment interference", which means, less politely, "hacking".

At the same time, however, it seems clear that those governments that are in a position to push back at the technology companies now figure that a backdoor in the few giant services almost everyone uses brings back the good old days when GCHQ could just put in a call to BT. Game the big services, and the weirdos who use Signal and other non-mainstream services will stick out again.

At Stanford's Center for Internet and Society, Riana Pfefferkorn believes the DoJ is opportunistically exploiting the techlash much the way the security services rushed through historically and politically unacceptable surveillance provisions in the first few shocked months after the 9/11 attacks. Pfefferkorn calls it "transitive rage": Congresspeople are already mad at the technology companies for spreading false news, exploiting personal data, and not paying taxes, so encryption is another thing to be mad about - and pass legislation to prevent. The IPA and Australia's Assistance and Access Act are suddenly models. Plus, as UN Special Rapporteur David Kaye writes in his book Speech Police: The Global Struggle to Govern the Internet, "Governments see that company power and are jealous of it, as they should be."

Pfefferkorn goes on to point out the inconsistency of allowing transitive rage to dictate banning secure encryption. It protects user privacy, sometimes against the same companies they're mad at. We'll let Alec Muffett have the last word, reminding us that tomorrow's children's freedom is also worth protecting.


Illustrations: GCHQ's Bude listening post, at dawn (by wizzlewick at Wikimedia, CC3.0).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 14, 2020

Pushy algorithms

One consequence of the last three and a half years of British politics, which saw everything sucked into the Bermuda Triangle of Brexit debates, is that things that appeared to have fallen off the back of the government's agenda are beginning to reemerge like so many sacked government ministers hearing of an impending cabinet reshuffle and hoping for reinstatement.

One such is age verification, which was enshrined in the Digital Economy Act (2017) and last seen being dropped to wait for the online harms bill.

A Westminster Forum seminar on protecting children online, held shortly before the UK's December 2019 general election, reflected that uncertainty. "At one stage it looked as if we were going to lead the world," Paul Herbert lamented before predicting it would be back "sooner or later".

The expectation for this legislation was set last spring, when the government released the Online Harms white paper. The idea was that a duty of care should be imposed on online platforms, effectively defined as any business-owned website that hosts "user-generated content or user interactions, for example through comments, forums, or video sharing". Clearly they meant to target everyone's current scapegoat, the big social media platforms, but "comments" is broad enough to include any ecommerce site that accepts user reviews. A second difficulty is the variety of harms they're concerned about: radicalization, suicide, self-harm, bullying. They can't all have the same solution even if, like one bereaved father, you blame "pushy algorithms".

The consultation exercise closed in July, and this week the government released its response. The main points:

- There will be plentiful safeguards to protect freedom of expression, including distinguishing between illegal content and content that's legal but harmful; the new rules will also require platforms to publish and transparently enforce their own rules, with mechanisms for redress. Child abuse and exploitation and terrorist speech will have the highest priority for removal.

- The regulator of choice will be Ofcom, the agency that already oversees broadcasting and the telecommunications industry. (Previously, enforcing age verification was going to be pushed to the British Board of Film Classification.)

- The government is still considering what liability may be imposed on senior management of businesses that fall under the scope of the law, which it believes is less than 5% of British businesses.

- Companies are expected to use tools to prevent children from accessing age-inappropriate content "and protect them from other harms" - including "age assurance and age verification technologies". The response adds, "This would achieve our objective of protecting children from online pornography, and would also fulfill the aims of the Digital Economy Act."

There are some obvious problems. The privacy aspects of the mechanisms proposed for age verification remain disturbing. The government's 5% estimate of businesses that will be affected is almost certainly a wild underestimate. (Is a Patreon page with comments the responsibility of the person or business that owns it, or of Patreon itself?) At the Guardian, Alex Hern explains the impact on businesses. The nastiest tabloid journalism is not within scope.

On Twitter, technology lawyer Neil Brown identifies four fallacies in the white paper: the "Wild West web"; that privately operated computer systems are public spaces; that those operating public spaces owe their users a duty of care; and that the offline world is safe by default. The bigger issue, as a commenter points out, is that the privately operated computer systems the UK government seeks to regulate are foreign-owned. The paper suggests enforcement could include punishing company executives personally and ordering UK ISPs to block non-compliant sites.

More interesting and much less discussed is the push for "age-appropriate design" as a method of harm reduction. This approach was proposed by Lorna Woods and Will Perrin in January 2019. At the Westminster eForum, Woods explained, "It is looking at the design of the platforms and the services, not necessarily about ensuring you've got the latest generation of AI that can identify nasty comments and take it down."

It's impossible not to sympathize with her argument that the costs of move fast and break things are imposed on the rest of society. However, when she started talking about doing risk assessments for nascent products and services I could only think she's never been close to software developers, who've known for decades that from the instant software goes out into the hands of users they will use it in ways no one ever imagined. So it's hard to see how it will work, though last year the ICO proposed a code of practice.

The online harms bill also has to be seen in the context of all the rest of the monitoring that is being directed at children in the name of keeping them - and the rest of us - safe. DefendDigital.me has done extensive work to highlight the impact of such programs as Prevent, which requires schools and libraries to monitor children's use of the Internet to watch for signs of radicalization, and the more than 20 databases that collect details of every aspect of children's educational lives. Last month, one of these - the Learning Records Service - was caught granting betting companies access to personal data about 28 million children. DefendDigital.me has called for an Educational Rights Act. This idea could be usefully expanded to include children's online rights more broadly.


Illustrations: Time magazine's 1995 "Cyberporn" cover, which marked the first children-Internet panic.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 6, 2020

Mission creep

"We can't find the needles unless we collect the whole haystack," a character explains in the new play The Haystack, written by Al Blyth and in production at the Hampstead Theatre through March 7. The character is Hannah (Sarah Woodward), and she is the director of a surveillance effort being coded and built by Neil (Oliver Johnstone) and Zef (Enyi Okoronkwo), familiarly geeky types whose preferred day-off activities are the cinema and the pub, rather than catching up on sleep and showers, as Hannah pointedly suggests. Zef has a girlfriend (and a "spank bank" of downloaded images) and is excited to work in "counter-terrorism". Neil is less certain, less socially comfortable, and, we eventually learn, more technically brilliant; he must come to grips with all three characteristics in his quest to save Cora (Rona Morison). Cue Fleabag: "This is a love story."

The play is framed by an encrypted chat between Neil and Denise, Cora's editor at the Guardian (Lucy Black). We know immediately from the technological checklist they run down in making contact that there has been a catastrophe, which we soon realize surrounds Cora. Even though we're unsure what it is, it's clear Neil is carrying a load of guilt, which the play explains in flashbacks.

As the action begins, Neil and Zef are waiting to start work as a task force seconded to Hannah's department to identify the source of a series of Ministry of Defence leaks that have led to press stories. She is unimpressed with their youth, attire, and casual attitude - they type madly while she issues instructions they've already read - but changes abruptly when they find the primary leaker in seconds. Two stories remain; because both bear Cora's byline she becomes their new target. Both like the look of her, but Neil is particularly smitten, and when a crisis overtakes her, he breaks every rule in the agency's book by grabbing a train to London, where, calling himself "Tom Flowers", he befriends her in a bar.

Neil's surveillance-informed "god mode" choices of Cora's favorite music, drinks, and food when he meets her recall the movie Groundhog Day, in which Phil (Bill Murray) slowly builds up, day by day, the perfect approach to the woman he hopes to seduce. In another cultural echo, the tense beginning is sufficiently reminiscent of the opening of Laura Poitras's film about Edward Snowden, CitizenFour, that I assumed Neil was calling from Moscow.

The haystack is required, Hannah explains at the beginning of Act Two, because the terrorist threat has changed from organized groups to home-grown "lone wolves", and threats can come from anywhere. Her department must know *everything* if it is to keep the nation safe. The lone-wolf theory is the one surveillance justification Blyth's characters don't chew over in the course of the play; for an evidence-based view, consult the VOX-Pol project. In a favorite moment, Neil and Hannah demonstrate the frustrating disconnect between technical reality and government targets. Neil correctly explains that terrorists are so rare that, given the UK's 66 million population, no matter how much you "improve" the system's detection rate it will still be swamped by false positives. Hannah, however, discovers he has nonetheless delivered. The false positive rate is 30% less! Her bosses are thrilled! Neil reacts like Alicia Florrick in The Good Wife after one of her morally uncomfortable wins.
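
The arithmetic behind Neil's point is worth spelling out. Here is a back-of-envelope version with hypothetical numbers: 100 genuine threats in a population of 66 million, a detector that never misses them, and a false-positive rate of just 0.1%:

```python
# Base-rate illustration with invented numbers.
population = 66_000_000
true_threats = 100
false_positive_rate = 0.001

false_alarms = (population - true_threats) * false_positive_rate
share_real = true_threats / (true_threats + false_alarms)

print(f"{false_alarms:,.0f} innocent people flagged")          # ~66,000
print(f"{share_real:.3%} of flagged people are real threats")  # ~0.151%
```

Cut the false-positive rate by 30% and you still drown in tens of thousands of false alarms - which is why Neil's "improvement" delights Hannah's bosses without changing the underlying problem.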

Related: it is one of the great pleasures of The Haystack that its three female characters (out of a total of five) are smart, tough, self-reliant, ambitious, and good at their jobs.

The Haystack is impressively directed by Roxana Silbert. It isn't easy to make typing look interesting, but this play manages it, partly by the well-designed use of projections to show both the internal and external worlds they're seeing, and partly by carefully-staged quick cuts. In one section, cinema-style cross-cutting creates a montage that fast-forwards the action through six months of two key relationships.

Technically, The Haystack is impressive; Zef and Neil speak fluent Python, algorithms, and Bash scripts, and laugh realistically over a journalist's use of Hotmail and Word with no encryption ("I swear my dad has better infosec"), while the projections of their screens are plausible pieces of code, video games, media snippets, and maps. The production designers and Blyth, who has a degree in econometrics and a background as a research economist, have done well. There were just a few tiny nitpicks: Neil can't trace Cora's shut-down devices "without the passwords" (huh?); and although Neil and Zef also use Tor, at one point they use Firefox (maybe) and Google (doubtful). My companion leaned in: "They wouldn't use that." More startling, for me, the actors who play Neil and Zef pronounce "cache" as "cachet"; but this is the plaint of a sound-sensitive person. And that's it, for the play's 1:50 length (trust me; it flies by).

The result is an extraordinary mix of a well-plotted comic thriller that shows the personal and professional costs of both being watched and being the watcher. What's really remarkable is how many of the touchstone digital rights and policy issues Blyth manages to pack in. If you can, go see it, partly because it's a fine introduction to the debates around surveillance, but mostly because it's great entertainment.


Illustrations: Rona Morison, as Cora, in The Haystack.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 24, 2020

The inevitability narrative

"We could create a new blueprint," Woody Hartzog said in a rare moment of hope on Wednesday at this year's Computers, Privacy, and Data Protection in a panel on facial recognition. He went on to stress the need to move outside the model that has governed privacy for the last two decades: get consent, roll out technology. Not necessarily in that order.

A few minutes earlier, he had said, "I think facial recognition is the most dangerous surveillance technology ever invented - so attractive to governments and industry to deploy in many ways and so ripe for abuse, and the mechanisms we have so weak to confront the harms it poses that the only way to mitigate the harms is to ban it."

This week, a leaked draft white paper revealed that the EU is considering, as one of five options, banning the use of facial recognition in public places. In general, the EU has been pouring money into AI research, largely in pursuit of economic opportunity: if the EU doesn't develop its own AI technologies, the argument goes, Europe will have to buy them from China or the United States. Who wants to be sandwiched between those two?

This level of investment is not available to most of the world's countries, as Julia Powles elsewhere pointed out with respect to AI more generally. Her country, Australia, is destined to be a "technology importer and data exporter", no matter how the three-pronged race comes out. "The promises of AI are unproven, and the risks are clear," she said. "The real reason we need to regulate is that it imposes a dramatic acceleration on the conditions of the unrestrained digital extractive economy." In other words, the companies behind AI will have even greater capacity to grind us up as dinosaur bones and use the results to manipulate us to their advantage.

At this event last year there was a general recognition that, less than a year after the passage of the general data protection regulation, it wasn't going to be an adequate approach to the growth of tracking through the physical world. This year, the conference is awash in AI to a truly extraordinary extent. Literally dozens of sessions: if it's not AI in policing, it's AI and data protection, ethics, human rights, algorithmic fairness, or AI embedded in autonomous vehicles. Hartzog's panel was one of at least half a dozen on facial recognition, which is AI plus biometrics plus CCTV and other cameras. As interesting are the omissions: in two full days I have yet to hear anything about smart speakers or Amazon Ring doorbells, both proliferating wildly in the soon-to-be non-EU UK.

These technologies are landing on us shockingly fast. This time last year, automated facial recognition wasn't even on the map. It blew up just last May, when Big Brother Watch pushed the issue into everyone's consciousness by launching a campaign to stop the police from using what is still a highly flawed technology. But we can't lean too heavily on the ridiculous - 98%! - inaccuracy of its real-world trials, because as it becomes more accurate it will become even more dangerous to anyone on the wrong list. Here, it has become clear that it's being rapidly followed by "emotional recognition", a build-out of technology pioneered 25 years ago at MIT by Rosalind Picard under the rubric "affective computing".

"Is it enough to ban facial recognition?" a questioner asked. "Or should we ban cameras?"

Probably everyone here is carrying at least two cameras (pause to count: two on phone, one on laptop).

Everyone here is also conscious that last week, Kashmir Hill broke the story that the previously unknown, Peter Thiel-backed company Clearview AI had scraped 3 billion facial images off social media and other sites to create a database that enables its law enforcement customers to grab a single photo and get back matches from dozens of online sites. As Hill reminds, companies like Facebook have been able to do this since 2011, though at the time - just eight and a half years ago! - this was technology that Google (though not Facebook) thought was "too creepy" to implement.

In the 2013 paper A Theory of Creepy, Omer Tene and Jules Polonetsky cite three kinds of "creepy" that apply to new technologies or new uses: it breaks traditional social norms; it shows the disconnect between the norms of engineers and those of the rest of society; or applicable norms don't exist yet. AI often breaks all three. Automated, pervasive facial recognition certainly does.

And so it seems legitimate to ask: do we really want to live in a world where it's impossible to go anywhere without being followed? "We didn't ban dangerous drugs or cars," has been a recurrent rebuttal. No, but as various speakers reminded, we did constrain them to become much safer. (And we did ban some drugs.) We should resist, Hartzog suggested, "the inevitability narrative".

Instead, the reality is that, as Lokke Moerel put it, "We have this kind of AI because this is the technology and expertise we have."

One panel pointed us at the AI universal guidelines, and encouraged us to sign. We need that - and so much more.


Illustrations: Orwell's house at 22 Portobello Road, London, complete with CCTV camera.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 27, 2019

Runaway

For me, the scariest presentation of 2019 was a talk given by Cornell University professor Vitaly Shmatikov about computer models. It's partly a matter of reframing the familiar picture; for years, Bill Smart and Cindy Grimm have explained to attendees at We Robot that we don't really know what it is that neural nets are learning when they're deep learning.

In Smart's example, changing a few pixels in an image can change the machine learning algorithm's perception of it from "Abraham Lincoln" to "zebrafish". Misunderstanding what's important to an algorithm is the kind of thing research scientist Janelle Shane exploits when she pranks neural networks and asks them to generate new recipes or Christmas carols from a pile of known examples. In her book, You Look Like a Thing and I Love You, she presents the inner workings of many more examples.
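
For readers who want to see how such a perturbation is found, here is a minimal sketch, assuming a PyTorch image classifier; it illustrates the general fast-gradient-sign idea rather than the specific Lincoln/zebrafish demo:

```python
# Hypothetical sketch: nudge each pixel slightly in the direction that
# most increases the model's loss. To a human the image looks unchanged,
# but the predicted class can flip.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).detach()
```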

All of this explains why researchers Kate Crawford and Trevor Paglen's ImageNet Roulette experiment tagged my Twitter avatar as "the Dalai Lama". I didn't dare rerun it, because how can you beat that? The experiment over, would-be visitors are now redirected to Crawford's and Paglen's thoughtful examination of the problems they found in the tagging and classification system that's being used in training these algorithms.

Crawford and Paglen write persuasively about the world view captured by the inclusion of categories such as "Bad Person" and "Jezebel" - real categories in the Person classification subsystem. The aspect has gone largely unnoticed until now because conference papers focused on the non-human images in ten-year-old ImageNet and its fellow training databases. Then there is the *other* problem, that the people's pictures used to train the algorithm were appropriated from search engines, photo-sharing sites such as Flickr, and video of students walking their university campuses. Even if you would have approved the use of your forgotten Flickr feed to train image recognition algorithms, I'm betting you wouldn't have agreed to be literally tagged "loser" so the algorithm can apply that tag later to a child wearing sunglasses. Why is "gal" even a Person subcategory, still less the most-populated one? Crawford and Paglen conclude that datasets are "a political intervention". I'll take "Dalai Lama", gladly.

Again, though, all of this fits with and builds upon an already known problem: we don't really know which patterns machine learning algorithms identify as significant. In his recent talk to a group of security researchers at UCL, however, Shmatikov, whose previous work includes training an algorithm to recognize faces despite obfuscation, outlined a deeper problem: these algorithms "overlearn". How do we stop them from "learning" (and then applying) unwanted lessons? He says we can't.

"Organically, the model learns to recognize all sorts of things about the original data that were not intended." In his example, in training an algorithm to recognize gender using a dataset of facial images, alongside it will learn to infer race, including races not represented in the training dataset, and even identities. In another example, you can train a text classifier to infer sentiment - and the model also learns to infer authorship.

Options for counteraction are limited. Censoring unwanted features doesn't work because a) you don't know what to censor; b) you can't censor something that isn't represented in the training data; and c) that type of censoring damages the algorithm's accuracy on the original task. "Either you're doing face analysis or you're not." Shmatikov and Congzheng Song explain their work more formally in their paper Overlearning Reveals Sensitive Attributes.

"We can't really constrain what the model is learning," Shmatikov told a group of security researchers at UCL recently, "only how it is used. It is going to be very hard to prevent the model from learning things you don't want it to learn." This drives a huge hole through GDPR, which relies on a model of meaningful consent. How do you consent to something no one knows is going to happen?

What Shmatikov was saying, therefore, is that from a security and privacy point of view, the typical question we ask, "Did the model learn its task well?", is too limited. "Security and privacy people should also be asking: what else did the model learn?" Some possibilities: it could have memorized the training data; discovered orthogonal features; performed privacy-violating tasks; or incorporated a backdoor. None of these are captured in assessing the model's accuracy in performing the assigned task.

My first reaction was to wonder whether a data-mining company like Facebook could use Shmatikov's explanation as an excuse when it's accused of allowing its system to discriminate against people - for example, in digital redlining. Shmatikov thought not, at least, not more than their work helps people find out what their models are really doing.

"How to force the model to discover the simplest possible representation is a separate problem worth invdstigating," he concluded.

So: we can't easily predict what computer models learn when we set them a task involving complex representations, and we can't easily get rid of these unexpected lessons while retaining the usefulness of the models. I was not the only person who found this scary. We are turning these things loose on the world and incorporating them into decision making without the slightest idea of what they're doing. Seriously?


Illustrations: Vitaly Shmatikov (via Cornell).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 21, 2019

The choices of others

For the last 30 years, I've lived in the same apartment on a small London street. So small, in fact, that even though London now has so many CCTV cameras - an estimated 627,707 - that the average citizen is captured on camera 300 times a day, it remains free of these devices. Camera surveillance and automated facial recognition are things that happen when I go out to other places.

Until now.

It no longer requires state-level resources to put a camera in place to watch your front door. This is a function that has been wholly democratized. And so it is that my downstairs neighbors, whose front door is side by side with mine, have inserted surveillance into the alleyway we share via an Amazon Ring doorbell.

Now, I understand there are far worse things, both as neighbors go and as intrusions go. My neighbors are mostly quiet. We take in each other's packages. They would never dream of blocking up the alleyway with stray furniture. And yet it never occurred to them that a 180-degree camera watching their door is, given the workings of physics and geography, also inevitably watching mine. And it never occurred to them to ask me whether I minded.

I do mind.

I have nothing to hide, and I mind.

Privacy advocates have talked and written for years about the many ways that our own privacy is limited by the choices of others. I use Facebook very little - but less-restrained friends nonetheless tag me in photographs, and in posts about shared activities. My sister's decision to submit a DNA sample to a consumer DNA testing service in order to get one of those unreliable analyses of our ancestry inevitably means that if I ever want to do the same thing the system will find the similarity and identify us as relatives, even though it may think she's my aunt.

We have yet to develop social norms around these choices. Worse, most people don't even see there's a problem. My neighbor is happy and enthusiastic about the convenience of being able to remotely negotiate with package-bearing couriers and be alerted to possible thieves. "My office has one," he said, explaining that they got it after being burgled several times to help monitor the premises.

We live down an alleyway so out of the way that both we and couriers routinely leave packages on our doorsteps all day.

I do not want to fight with my neighbor. We live in a house with just two flats, one up, one down, on a street with just 20 households. There is no possible benefit to be had from being on bad terms. And yet.

I sent him an email: would he mind walking me through the camera's app so I can see what it sees? In response, he sent a short video; the image above, taken from it, shows clearly that the camera sees all the way down the alleyway in both directions.

So I have questions: what does Amazon say about what data it keeps and for how long? If the camera and microphone are triggered by random noises and movements, how can I tell whether they're on and if they're recording?

Obviously, I can read the terms and conditions for myself, but I find them spectacularly unclear. Plus, I didn't buy this device or agree to any of this. The document does make mention of being intended for monitoring a single-family residence, but I don't think this means Amazon is concerned that people will surveil their neighbors; I think it means they want to make sure they sell a separate doorbell to every home.

Examination of the video and the product description reveals that camera, microphone, and recording are triggered by movement next to his - and therefore also next to my - door. So it seems likely that anyone with access to his account can monitor every time I come or go, and all my visitors. Will my privacy advocate friends ever visit me again? How do my neighbors not see why I think this is creepy?

Even more disturbing is the cozy relationship Amazon has been developing with police, especially in the US, where the company has promoted the doorbells by donating units for neighborhood watch purposes, effectively allowing police to build private surveillance networks with no public oversight. The Sun reports similar moves by UK police forces.

I don't like the idea of the police being able to demand copies of recordings of innocent people - couriers, friends, repairfolk - walking down our alleyway. I don't want surveillance-by-default. But as far as I can tell, this is precisely what this doorbell is delivering.

A lawyer friend corrects my impression that GDPR does not apply. The Information Commissioner's Office is clear that cameras should not be pointed at other people's property or shared spaces, and under GDPR my neighbor is now a data controller. My friends can make subject access requests. Even so: do I want to pick a fight with people who can make my life unpleasant? All over the country, millions of people are up against the reality that no matter how carefully they think through their privacy choices they are exposed by the insouciance of other people and robbed of agency not by police or government action but by their intimate connections - their neighbors, friends, and family.

Yes, I mind. And unless my neighbor chooses to care, there's nothing I can practically do about it.

Illustrations: Ring camera shot of alleyway.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 1, 2019

Nobody expects the Spanish Inquisition

So can we stop now with the fantasy that data can be anonymized?

Two things sparked this train of thought. The first was seeing that researchers at the Mayo Clinic have shown that commercial facial recognition software accurately identified 70 of a sample set of 84 (that's 83%) MRI brain scans. For ten additional subjects, the software placed the correct identification in its top five choices. Yes, on reflection, it's obvious that you can't scan a brain without including its container, and that bone structure defines a face. It's still a fine example of data that is far more revealing than you expect.

The second was when Phil Booth, the executive director of medConfidential, on Twitter called out the National Health Service for weakening the legal definition of "anonymous" in its report on artificial intelligence (PDF).

In writing the MRI story for the Wall Street Journal (paywall), Melanie Evans notes that people have also been reidentified from activity patterns captured by wearables, a cautionary tale now that Google's owner, Alphabet, seeks to buy Fitbit. Cautionary, because the biggest contributor to reidentifying any particular dataset is other datasets to which it can be matched.

The earliest scientific research on reidentification I know of was Latanya Sweeney's 1997 success in identifying then-governor William Weld's medical record by matching the "anonymized" dataset of records of visits to Massachusetts hospitals against the voter database for Cambridge, which anyone could buy for $20. Sweeney has since found that 87% of Americans can be matched from just their gender, date of birth, and zip code. More recently, scientists at Louvain and Imperial College found that just 15 attributes can identify 99.8% of Americans. Scientists have reidentified individuals from anonymized shopping data, and by matching mobile phone logs against transit trips. Combining those two datasets identified 95% of the Singaporean population in 11 weeks; add GPS records and you can do it in under a week.
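To see how little machinery such a linkage attack needs, here is a minimal sketch in Python - toy records, invented names and column labels, not the actual Massachusetts data - of the kind of join Sweeney performed: match an "anonymized" hospital extract against a purchasable voter roll on the quasi-identifiers they share.

    # Toy illustration of a linkage attack: join an "anonymized" medical
    # extract to a voter roll on shared quasi-identifiers. All records,
    # names, and column labels here are invented for illustration.
    import pandas as pd

    hospital = pd.DataFrame([   # no names - nominally "anonymized"
        {"zip": "02138", "birth_date": "1950-01-02", "sex": "M", "diagnosis": "hypertension"},
        {"zip": "02139", "birth_date": "1982-03-14", "sex": "F", "diagnosis": "asthma"},
    ])

    voters = pd.DataFrame([     # purchasable: names attached to the same fields
        {"name": "A. Example", "zip": "02138", "birth_date": "1950-01-02", "sex": "M"},
        {"name": "B. Sample",  "zip": "02139", "birth_date": "1982-03-14", "sex": "F"},
    ])

    # One merge re-attaches identities to the "anonymous" medical records.
    reidentified = hospital.merge(voters, on=["zip", "birth_date", "sex"])
    print(reidentified[["name", "diagnosis"]])

The point is not the three lines of data wrangling; it is that withholding names was never the hard part of the attacker's job.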

This sort of thing shouldn't be surprising any more.

The legal definition that Booth cited is Recital 26 of the General Data Protection Regulation, which specifies in much more detail how to assess the odds ("all the means likely to be used", "account should be taken of all objective factors") of successful reidentification.

Instead, here's the passage he highlighted from the NHS report as defining "anonymized" data (page 23 of the PDF, 44 of the report): "Data in a form that does not identify individuals and where identification through its combination with other data is not likely to take place."

I love the "not likely". It sounds like one of the excuses that's so standard that Matt Blaze put them on a bingo card. If you asked someone in 2004 whether it was likely that their children's photos would be used to train AI facial recognition systems that in 2019 would be used to surveil Chinese Muslims and out pornography actors in Russia. And yet here we are. You can never reliably predict what data will be of what value or to whom.

At this point, until proven otherwise it is safer to assume that there really is no way to anonymize personal data and make it stick for any length of time. It's certainly true that in some cases the sensitivity of any individual piece of data - say your location on Friday at 11:48 - vanishes quickly, but the same is not true of those data points when aggregated over time. More important, patient data is not among those types and never will be. Health data and patient information are sensitive and personal not just for the life of the patient but for the lives of their close relatives on into the indefinite future. Many illnesses, both mental and physical, have genetic factors; many others may be traceable to conditions prevailing where you live or grew up. Either way, your medical record is highly revealing - particularly to insurance companies interested in minimizing their risk of payouts or an employer wishing to hire only robustly healthy people - about the rest of your family members.

Thirty years ago, when I was first encountering large databases and what happens when you match them together, I came up with a simple privacy-protecting rule: if you do not want the data to leak, do not put it in the database. This still seems to me definitive - but much of the time we have no choice.

I suggest the following principles and assumptions.

One: Databases that can be linked, will be. The product manager's comment Ellen Ullman reported in 1997 still pertains: "I've never seen anyone with two systems who didn't want us to hook them together."

Two: Data that can be matched, will be.

Three: Data that can be exploited for a purpose you never thought of, will be.

Four: Stop calling it "sharing" when the entities "sharing" your personal data are organizations, especially governments or commercial companies, not your personal friends. What they're doing is *disclosing* your information.

Five: Think collectively. The worst privacy damage may not be to *you*.

The bottom line: we have now seen so many examples of "anonymized" data that can be reidentified that the claim that any dataset is anonymized should be considered as extraordinary a claim as saying you've solved Brexit. Extraordinary claims require extraordinary proof, as the skeptics say.

Addendum: if you're wondering why net.wars skipped the 50th anniversary of the first ARPAnet connection: first of all, we noted it last week; second of all, whatever headline writers think, it's not the 50th anniversary of the Internet, whose beginnings, as we wrote in 2004, are multiple. If you feel inadequately served, I recommend this from 2013, in which some of the Internet's fathers talk about all the rules they broke to get the network started.


Illustrations: Monty Python performing the Spanish Inquisition sketch in 2014 (via Eduardo Unda-Sanzana at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 18, 2019

I never paid for it in my life

lanier-lrm-2017.jpgSo Jaron Lanier is back, arguing that we should be paid for our data. He was last seen in net.wars two years back, arguing that if people had started by charging for email we would not now be the battery fuel for "behavior modification empires". In a 2018 TED talk, he continued that we should pay for Facebook and Google in order to "fix the Internet".

Lanier's latest disquisition goes like this: the big companies are making billions from our data. We should have some of it. That way lies human dignity and the feeling that our lives are meaningful. And fixing Facebook!

The first problem is that fixing Facebook is not the same as fixing the Internet, a distinction Lanier surely understands. The Internet is a telecommunications network; Facebook is a business. You can profoundly change a business by changing who pays for its services and how, but changing a telecommunications network that underpins millions of organizations and billions of people in hundreds of countries is a wholly different proposition. If you mean, as Lanier seems to, that what you want to change is people's belief that content on the Internet should be free, then what you want to "fix" is the people, not the network. And "fixing" people at scale is insanely hard. Just ask health professionals or teachers. We'd need new incentives.

Paying for our data is not one of those incentives. Instead of encouraging people to think more carefully about privacy, being paid to post to Facebook would encourage people to indiscriminately upload more data. It would add payment intermediaries to today's merry band of people profiting from our online activities, thereby creating a whole new class of metadata for law enforcement to claim it must be able to access.

A bigger issue is that even economists struggle to understand how to price data; as Diane Coyle asked last year, "Does data age like fish or like wine?" Google's recent announcement that it would allow users to set their browser histories to auto-delete after three or 12 months has been met by the response that such data isn't worth much three months on, though the privacy damage may still be incalculable. We already do have a class of people - "influencers" - who get paid for their social media postings, and as Chris Stokel-Walker portrays some of their lives, it ain't fun. Basically, while paying us all for our postings would put a serious dent into the revenues of companies like Google and Facebook, it would also turn our hobbies into jobs.

So a significant issue is that we would be selling our data with no concept of its true value or what we were actually selling to companies that at least know how much they can make from it. Financial experts call this "information asymmetry". Even if you assume that Lanier's proposed "MID" intermediaries that would broker such sales will rapidly amass sufficient understanding to reverse that, the reality remains that we can't know what we're selling. No one happily posting their kids' photos to Flickr 14 years ago thought that in 2014 Yahoo, which owned the site from 2005 to 2015, was going to scrape the photos into a database and offer it to researchers to train their AI systems that would then be used to track protesters, spy on the public, and help China surveil its Uighur population.

Which leads to this question: what fire sales might a struggling company with significant "data assets" consider? Lanier's argument is entirely US-centric: data as commodity. This kind of thinking has already led Google to pay homeless people in Atlanta to scan their faces in order to create a more diverse training dataset (a valid goal, but oh, the execution).

In a paywalled paper for Harvard Business Review, Lanier apparently argues that instead he views data as labor. That view, he claims, opens the way to collective bargaining via "data labor unions" and mass strikes.

Lanier's examples, however, are all drawn from active data creation: uploading and tagging photos, writing postings. Yet much of the data the technology companies trade in is stuff we unconsciously create - "data exhaust" - as we go through our online lives: trails of web browsing histories, payment records, mouse movements. At Tech Liberation, Will Rinehart critiques Lanier's estimates, both the amount (Lanier suggests a four-person household could gain $20,000 a year) and the failure to consider the differences between and interactions among the three classes of volunteered, observed, and inferred data. It's the inferences that Facebook and Google really get paid for. I'd also add the difference between data we can opt to emit (I don't *have* to type postings directly into Facebook knowing the company is saving every character) and data we have no choice about (passport information to airlines, tax data to governments). The difference matters: you can revise, rethink, or take back a posting; you have no idea what your unconscious mouse movements reveal and no ability to edit them. You cannot know what you have sold.

Outside the US, the growing consensus is that data protection is a fundamental human right. There's an analogy to be made here between bodily integrity and personal integrity more broadly. Even in the US, you can't sell your kidney. Isn't your data just as intimate a part of you?


Illustrations: Jaron Lanier in 2017 with Luke Robert Mason (photo by Eva Pascoe).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 27, 2019

Balancing acts

800px-Netherlands-4589_-_Lady_of_Justice_&_William_of_Orange_Coat-o-Arms_(12171086413).jpgThe Court of Justice of the European Union had an important moment on Tuesday, albeit overshadowed by another court elsewhere, ruling that the right to be forgotten can be limited to the EU. To recap: in 2014, in its ruling in Google Spain v. AEPD and Mario Costeja González ("Costeja"), the CJEU required Google to delist results returned by searches on a person's name under certain circumstances. Costeja had complained that the fact that a newspaper record of the foreclosure on his house in 1998 was the first thing people saw when they searched for him gave them a false impression. In an effort to balance freedom of expression and privacy, the court's ruling left the original newspaper announcement intact, but ordered Google to remove the link from its index of search results. Since then, Google says it has received 845,501 similar requests representing 3.3 million links, of which it has dereferenced 45%.

Well, now. Left unsettled was the question of territorial jurisdiction: one would think that a European court doesn't have the geographical reach to require Google to remove listings worldwide - but if Google doesn't, then the ability to switch to a differently-located version of the search engine trivially defeats the ruling. What is a search engine to do?

This is a dispute we've seen before, beginning in 2000, when, in a case brought by the Ligue contre le racisme et l'antisémitisme et Union des étudiants juifs de France (LICRA), a French tribunal ordered Yahoo to block sales of Nazi memorabilia on its auction site. Yahoo argued that it was a US company, therefore the sales were happening in the US, and don't-break-the-Internet; the French court claimed jurisdiction anyway. Yahoo appealed *in the US*, where the case was dismissed for lack of jurisdiction. Eventually, Yahoo stopped selling the memorabilia everywhere, and the fuss died down.

Costeja offered the same conundrum with a greater degree of difficulty; the decision has been subsumed into GDPR as Article 17, "right to erasure". Google began delisting Costeja's unwanted result, along with those many others, from EU versions of its search engine but left them accessible in the non-EU domains. The French data protection regulator, CNIL, however, felt this didn't go far enough and in May 2015 it ordered Google to expand dereferencing to all its servers worldwide. Google's version of compliance was to deny access to the listings to anyone coming from the country where the I-want-to-be-forgotten complaint originated. In March 2016 CNIL fined Google €100,000 (pocket change!), saying that the availability of content should not depend on the geographic location of the person seeking to view it. In response to Google's appeal, the French court referred several questions to CJEU, leading to this week's ruling.

The headlines announcing this judgment - for example, the Guardian's - give the impression that the judgment is more comprehensive than it is. Yes, the court ruled that search engines are not required to delist results worldwide in right to be forgotten cases, citing the need to balance the right to be forgotten against other fundamental rights such as freedom of expression. But it also ruled that search engines are not prohibited from doing so. The judgment suggests that they should take into account the details of the particular case and the complainant, as well as the need to balance data protection and privacy rights against the public interest.

The remaining ambiguity means we should expect there will be another case along any minute. Few are going to be much happier than they were in 2013, when putting the right to be forgotten into law was proposed, or in 2014, when Costeja was decided, or shortly afterwards, when Google first reported on its delisting efforts. Freedom of speech advocates and journalists are still worried that the system is an invitation to censorship, as it has proved to be in at least one case; the French regulator, and maybe some other privacy advocates and data protection authorities, is still unhappy; and we still have a situation where a private company is being asked to make even more nuanced decisions on our behalf. The reality, however, is that given the law there is no solution, only compromise.

This is a good moment for a couple of other follow-ups:

- Mozilla has announced it will not turn on DNS-over-HTTPS by default in Firefox in the UK. This is in response to the complaints noted in May that DoH will break workarounds used in the UK to block child abuse images.

- Uber and Transport for London aren't getting along any better than they were in 2017, when TfL declined to renew its license to operate. Uber made a few concessions, and on appeal it was granted a 15-month extension. With that on the verge of running out, TfL has given the company two months to produce additional information before it makes a final decision. As Hubert Horan continues to point out, the company's aggressive regulation-breaking approach is a strategy, not the work of a rogue CEO, and its long-term prospects remain those of a company with "terrible underlying economics".


Illustrations: Justitia outside the Delft Town Hall, the Netherlands (via Dennis Jarvis at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 16, 2019

The law of the camera

compressed-King's_Cross_Western_Concourse-wikimedia.jpgAs if cued by the end of last week's installment, this week the Financial Times (paywalled), followed by many others, broke the news that Argent LLP, the lead developer in regenerating the Kings Cross district of London in the mid-2000s, is using facial recognition to surveil the entire area. The 67-acre site includes two mainline railway stations, a major Underground interchange station, schools, retailers, the Eurostar terminal, a local college, ten public parks, 20 streets, 50 buildings, 1,900 homes...and, because it happens to be there, Google's UK headquarters. (OK, Google: how do you like it when you're on the receiving end instead of dishing it out?)

So, to be clear: this system has been installed - doubtless "for your safety" - even though over and over these automated facial recognition systems are being shown to be almost laughably inaccurate: in London, Big Brother Watch found a 95% inaccuracy rate (PDF); in California, the ACLU found that the software incorrectly matched one in five lawmakers to criminals' mugshots. US cities - San Francisco, Oakland, Somerville, Massachusetts - are legislating bans as a result. In London, however, Canary Wharf, another large development area, told the BBC and the Financial Times that it is considering following Kings Cross's lead.

Inaccuracy is only part of the problem with the Kings Cross situation - and the deeper problem will persist even if and when the systems become accurate enough for prime time (which will open a whole new can of worms). The deeper problem is the effective privatization of public space: here, a private entity has installed a facial recognition system with no notice to any of the people being surveilled, with no public debate, and, according to the BBC, no notice to either local or central government.

To place this in context, it's worth revisiting the history of the growth of CCTV cameras in the UK, the world leader (if that's the word you want) in this area. As Simon Davies recounts in his recently-published memoir about his 30 years of privacy campaigning (and as I also remember), the UK began embracing CCTV in the mid-1990s (PDF), fueled in part by the emotive role it played in catching the murderers in the 1993 Jamie Bulger case. Central government began offering local councils funding to install cameras. Deployment accelerated after 9/11, but the trend had already been set.

By 2012, when the Protection of Freedoms Act was passed to create the surveillance camera commissioner's office, public resistance had largely vanished. At the first Surveillance Camera Conference, in 2013, representatives from several local councils said they frequently received letters from local residents requesting additional cameras. They were not universally happy about this; around that time the responsibility for paying for the cameras and the systems to run them was being shifted to the councils themselves, and many seemed to be reconsidering their value. There has never been much research assessing whether the cameras cut crime; what there is suggests CCTV diverts it rather than stops it. A 2013 briefing paper by the College of Policing (PDF) says CCTV provides a "small, but statistically significant, reduction in crime", though it notes that effectiveness depends on the type of crime and the setting. "It has no impact on levels of violent crime," the paper concludes. A 2014 summary of research to date notes the need to balance privacy concerns and assess cost-effectiveness. Adding on highly unreliable facial recognition won't change that - but it will entrench unnecessary harassment.

The issue we're more concerned about here is the role of private operators. At the 2013 conference, public operators complained that their private counterparts, operating at least ten times as many cameras, were not required to follow the same rules as public bodies (although many did). Reliable statistics are hard to find. A recent estimate claims London hosts 627,707 CCTV cameras, but it's fairer to say that not even the Surveillance Camera Commissioner really knows. It is clear, however, that the vast majority of cameras are privately owned and operated.

Twenty years ago, Davies correctly foresaw that networking the cameras would enable tracking people across the city. Neither he nor the rest of us saw that (deeply flawed) facial recognition would arrive this soon, if only because it's the result of millions of independent individual decisions to publicly post billions of facial photographs. This is what created the necessary mass of training data that, as Olivia Solon has documented, researchers have appropriated.

For an area the size and public importance of Kings Cross to be monitored via privately-owned facial recognition systems that have attracted enormous controversy in the public sector is profoundly disturbing. You can sort of see the logic: Kings Cross station is now a large shopping mall surrounding a major train station, so what's the difference between that and a shopping mall without one? But effectively, in setting the rules of engagement for part of our city that no one voted to privatize, Argent is making law, a job no one voted to give it. A London - or any other major city - carved up into corporately sanitized districts connected by lawless streets is not where any of us asked to live.


Illustrations: The new Kings Cross Western Concourse (via Colin on Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 9, 2019

Collision course

800px-Kalka-Shimla_Railway_at_night_in_Solan_-_approaching_train.JPGThe walk from my house to the tube station has changed very little in 30 years. The houses and their front gardens look more or less the same, although at least two have been massively remodeled on the inside. More change is visible around the tube station, where shops have changed hands as their owners retired. The old fruit and vegetable shop now sells wine; the weird old shop that sold crystals and carved stones is now a chain drug store. One of the hardware stores is a (very good) restaurant and the other was subsumed into the locally-owned health food store. And so on.

In the tube station itself, the open platforms have been enclosed with ticket barriers and the second generation of machines has closed down the ticket office. It's imaginable that had the ID card proposed in the early 2000s made it through to adoption the experience of buying a ticket and getting on the tube could be quite different. Perhaps instead of an Oyster card or credit card tap, we'd be tapping in and out using a plastic ID smart card that would both ensure that only I could use my free tube pass and ensure that all local travel could be tracked and tied to me. For our safety, of course - as we would doubtless be reminded via repetitive public announcements like the propaganda we hear every day about the watching eye of CCTV.

Of course, tracking still goes on via Oyster cards, credit cards, and, now, wifi, although I do believe Transport for London when it says its goal is to better understand traffic flows through stations in order to improve service. However, what new, more intrusive functions TfL may choose - or be forced - to add later will likely be invisible to us until an expert outsider closely studies the system.

In his recently published memoir, the veteran campaigner and Privacy International founder Simon Davies tells the stories of the ID cards he helped to kill: in Australia, in New Zealand, in Thailand, and, of course, in the UK. What strikes me now, though, is that what seemed like a win nine years ago, when the incoming Conservative-Liberal Democrat alliance killed the ID card, is gradually losing its force. (This is very similar to the early 1990s First Crypto Wars "win" against key escrow; the people who wanted it have simply found ways to bypass public and expert objections.)

As we wrote at the time, the ID card itself was always a brightly colored decoy. To be sure, those pushing the ID card played on it, and on British wartime associations, to swear blind that no one would ever be required to carry the card or forced to produce it. This was an important gambit because, to much of the population at the time, being forced to carry and show ID was the end of the freedoms two world wars were fought to protect. But it was always obvious to those who were watching technological development that what mattered was the database, because identity checks would be carried out online, on the spot, via wireless connections and handheld computers. All that was needed was a way of capturing a biometric that could be sent into the cloud to be checked. Facial recognition fits perfectly into that gap: no one has to ask you for papers - or a fingerprint, iris scan, or DNA sample. So even without the ID card we *are* now moving stealthily into the exact situation that would have prevailed if we had. Increasing numbers of police departments - South Wales, London, LA, India, and, notoriously, China - are deploying it, as Big Brother Watch has been documenting for the UK. There are many more remotely observable behaviors to be pressed into service, enhanced by AI, as the ACLU's Jay Stanley warns.

The threat now of these systems is that they are wildly inaccurate and discriminatory. The future threat of these systems is that they will become accurate and discriminatory, allowing much more precise targeting that may even come to seem reasonable *because* it only affects the bad people.

This train of thought occurred to me because this week Statewatch released a leaked document indicating that most of the EU would like to expand airline-style passenger data collection to trains and even roads. As Daniel Boffey explains at the Guardian (and as Edward Hasbrouck has long documented), the passenger name records (PNRs) airlines create for every journey include as many as 42 pieces of information: name, address, payment card details, itinerary, fellow travelers... This is information that gets mined in order to decide whether you're allowed to fly. So what this document suggests is that many EU countries would like to turn *all* international travel into a permission-based system.

What is astonishing about all of this is the timing. One of the key privacy-related objections to building mass surveillance systems is that you do not know who may be in a position to operate them in future or what their motivations will be. So at the very moment that many democratic countries are fretting about the rise of populism and the spread of extremism, those same democratic countries are proposing to put in place a system that extremists who get into power can operate in anti-democratic ways. How can they possibly not see this as a serious systemic risk?


Illustrations: The light of the oncoming train (via Andrew Gray at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 26, 2019

Hypothetical risks

Great Hack - data connections.png"The problem isn't privacy," the cryptography pioneer Whitfield Diffie said recently. "It's corporate malfeasance."

This is obviously right. Viewed that way, when data profiteers claim that "privacy is no longer a social norm", as Facebook CEO Mark Zuckerberg did in 2010, the correct response is not to argue about privacy settings or plead with users to think again, but to find out if they've broken the law.

Diffie was not, but could have been, talking specifically about Facebook, which has blown up the news this week. The first case grabbed most of the headlines: the US Federal Trade Commission fined the company $5 billion. As critics complained, the fine was insignificant to a company whose Q2 2019 revenues were $16.9 billion and whose quarterly profits are approximately equal to the fine. Medium-term, such fines have done little to dent Facebook's share prices. Longer-term, as the cases continue to mount up...we'll see. Also this week, the US Department of Justice launched an antitrust investigation into Apple, Amazon, Alphabet (Google), and Facebook.

The FTC fine and ongoing restrictions have been a long time coming; EPIC executive director Marc Rotenberg has been arguing ever since the Cambridge Analytica scandal broke that Facebook had violated the terms of its 2011 settlement with the FTC.

If you needed background, this was also the week when Netflix released the documentary, The Great Hack, in which directors Karim Amer and Jehane Noujaim investigate the role Cambridge Analytica and Facebook played in the 2016 EU referendum and US presidential election votes. The documentary focuses primarily on three people: David Carroll, who mounted a legal action against Facebook to obtain his data; Brittany Kaiser, a director of Cambridge Analytica who testified against the company; and Carole Cadwalladr, who broke the story. In his review at the Guardian, Peter Bradwell notes that Carroll's experience shows it's harder to get your "voter profile" out of Facebook than from the Stasi, as per Timothy Garton Ash. (Also worth viewing: the 2006 movie The Lives of Others.)

Cadwalladr asks in her own piece about The Great Hack and in her 2019 TED talk, whether we can ever have free and fair elections again. It's a difficult question to answer because although it's clear from all these reports that the winning side of both the US and UK 2016 votes used Facebook and Cambridge Analytica's services, unless we can rerun these elections in a stack of alternative universes we can never pinpoint how much difference those services made. In a clip taken from the 2018 hearings on fake news, Damian Collins (Conservative, Folkstone and Hythe), the chair of the Digital, Culture, Media, and Sport Committee, asks Chris Wylie, a whistleblower who worked for Cambridge Analytica, that same question (The Great Hack, 00:25:51). Wylie's response: "When you're caught doping in the Olympics, there's not a debate about how much illegal drug you took or, well, he probably would have come in first, or, well, he only took half the amount, or - doesn't matter. If you're caught cheating, you lose your medal. Right? Because if we allow cheating in our democratic process, what about next time? What about the time after that? Right? You shouldn't win by cheating."

Later in the film (1:08:00), Kaiser, testifying to DCMS, sums up the problem this way: "The sole worth of Google and Facebook is the fact that they own and possess and hold and use the personal data from people all around the world." In this statement, she unknowingly confirms the prediction made by the veteran Australian privacy advocate Roger Clarke, who commented in a 2009 interview about his 2004 paper, Very Black "Little Black Books", warning about social networks and privacy: "The only logical business model is the value of consumers' data."

What he got wrong, he says now, was that he failed to appreciate the importance of micro-pricing, highlighted in 1999 by the economist Hal Varian. In his 2017 paper on the digital surveillance economy, Clarke explains the connection: large data profiles enable marketers to gauge the precise point at which buyers begin to resist and pitch their pricing just below it. With goods and services, this approach allows sellers to extract greater overall revenue from the market than pre-set pricing would; with politics, you're talking about a shift from public sector transparency to private sector black-box manipulation. Or, as someone puts it in The Great Hack, a "full-service propaganda machine". Load, aim at "persuadables", and set running.
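To make that mechanism concrete, here is a deliberately crude, entirely hypothetical sketch of the logic Clarke describes - every field, weight, and number below is invented, and real systems infer willingness-to-pay from far richer profiles than this:

    # Hypothetical sketch of profile-based micro-pricing: estimate the point
    # at which this particular buyer starts to resist, then pitch the price
    # just below it. All fields, weights, and numbers are invented.
    BASE_PRICE = 40.0

    def estimated_resistance_point(profile: dict) -> float:
        """Crude willingness-to-pay guess from behavioural signals."""
        wtp = BASE_PRICE
        if profile.get("device") == "new_flagship_phone":
            wtp *= 1.3
        if profile.get("recent_luxury_purchases", 0) > 2:
            wtp *= 1.2
        if profile.get("price_comparison_visits", 0) > 5:
            wtp *= 0.8
        return wtp

    def personalised_price(profile: dict) -> float:
        # Just below the estimated resistance point, never below the floor.
        return round(max(BASE_PRICE, estimated_resistance_point(profile) * 0.95), 2)

    print(personalised_price({"device": "new_flagship_phone", "recent_luxury_purchases": 3}))
    print(personalised_price({"device": "old_budget_phone", "price_comparison_visits": 9}))

Two shoppers, same product, different prices - and, applied to political messaging rather than goods, different realities.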

Less noticed than either of these is the Securities and Exchange Commission settlement with Facebook, also announced this week. While the fine is relatively modest - a mere $100 million - the SEC has nailed the company's conflicting statements. On Twitter, Jason Kint has helpfully highlighted the SEC's statements laying out the case that Facebook knew in 2016 that it had sold Cambridge Analytica some of the data underlying the 30 million personality profiles CA had compiled - and then "misled" both the US Congress and its own investors. Besides the fine, the SEC has permanently enjoined Facebook from further violations of the laws it broke in continuing to refer to actual risks as "hypothetical". The mills of trust have been grinding exceeding slow; they may yet grind exceeding small.


Illustrations: Data connections in The Great Hack.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 28, 2019

Failure to cooperate

sweat-nottage.jpgIn her 2015 Pulitzer Prize-winning play, Sweat, on display nightly in London's West End until mid-July, Lynn Nottage explores class and racial tensions in the impoverished, post-industrial town of Reading, PA. In scenes alternating between 2000 and 2008, she explores the personal-level effects of twin economic crashes, corporate outsourcing decisions, and tribalism: friends become opposing disputants; small disagreements become violent; and the prize for "winning" shrinks to scraps. Them who has, gets; and from them who have little, it is taken.

Throughout, you wish the characters would recognize their real enemies: the company whose steel tubing factory has employed them for decades, their short-sighted union, and a system that structurally short-changes them. The pain of the workers when they are locked out is that of an unwilling divorce, abruptly imposed.

The play's older characters, who would be in their mid-60s today, are of the age to have been taught that jobs were for life. They were promised pensions and could look forward to wage increases at a steady and predictable pace. None are wealthy, but in 2000 they are financially stable enough to plan vacations, and their children see summer jobs as a viable means of paying for college and climbing into a better future. The future, however, lies in the Spanish-language leaflets the company is distributing to frustrated immigrants the union has refused to admit and who will work for a quarter the price. Come 2008, the local bar is run by one of those immigrants, who of necessity caters to incoming hipsters. Next time you read an angry piece attacking Baby Boomers for wrecking the world, remember that it's a big demographic and only some were the destructors. *Some* Baby Boomers were born wreckage, some achieved it, and some had it thrust upon them.

We leave the characters there in 2008: hopeless, angry, and alienated. Nottage, who has a history of researching working class lives and the loss of heavy industry, does not go on to explore the inner workings of the "digital poorhouse" they're moving into. The phrase comes from Virginia Eubanks' 2018 book, Automating Inequality, which we unfortunately missed reviewing before now. If Nottage had pursued that line, she might have found what Eubanks finds: a punitive, intrusive, judgmental, and hostile benefits system. Those devastated factory workers must surely have done something wrong to deserve their plight.

Eubanks presents three case studies. In the first, struggling Indiana families navigate the state's new automated welfare system, a $1.3 billion, ten-year privatization effort led by IBM. Soon after its 2006 launch, it began sending tens of thousands of families notices of refusal on this Kafkaesque basis: "Failure to cooperate". Indiana eventually canceled IBM's contract, and the two have been suing each other ever since. Not represented in court is, as Eubanks says, the incalculable price paid in the lives of the humans the system spat out.

In the second, "coordinated entry" matches homeless Los Angelenos to available resources in order of vulnerability. The idea was that standardizing the intake process across all possible entryways would help the city reduce waste and become more efficient while reducing the numbers on Skid Row. The result, Eubanks finds, is an unpredictable system that mysteriously helps some and not others, and that ultimately fails to solve the underlying structural problem: there isn't enough affordable housing.

In the third, a Pennsylvania predictive system is intended to identify children at risk of abuse. Such systems are proliferating widely and controversially for varying purposes, and all raise concerns about fairness and transparency: custody decisions (Durham, England), gang membership and gun crime (Chicago and London), and identifying children who might be at risk (British local councils). All these systems gather and retain, perhaps permanently, huge amounts of highly intimate data about each family. The result in Pennsylvania was to deter families from asking for the help they're actually entitled to, lest they become targets to be watched. Some future day, those same records may pop when a hostile neighbor files a minor complaint, or haunt their now-grown children when raising their own children.

All these systems, Eubanks writes, could be designed to optimize access to benefits instead of optimizing for efficiency or detecting fraud. I'm less sanguine. In prior art, Danielle Citron has written about the difficulties of translating human law accurately into programming code, and the essayist Ellen Ullman warned in 1996 that even those with the best intentions eventually surrender to computer system imperatives of improving data quality, linking databases, and cross-checking, the bedrock of surveillance.

Eubanks repeatedly writes that middle class people would never put up with this level of intrusion. They may have no choice. As Sweat highlights, many people's options are shrinking. Refusal is only possible for those who can afford to buy their help, an option increasingly reserved for a privileged few. Poor people, Eubanks is frequently told, are the experimental models for surveillance that will eventually be applied to all of us.

In 2017, Cathy O'Neil argued in Weapons of Math Destruction that algorithmic systems can be designed for fairness. Eubanks' analysis suggests that view is overly optimistic: the underlying morality dates back centuries. Digitization has, however, exacerbated its effects, as Eubanks concludes. County poorhouse inmates at least had the community of shared experience. Its digital successor squashes and separates, leaving each individual to drink alone in that Reading bar.


Illustrations: Sweat's London production poster.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 7, 2019

The right to lie

Sand_Box-wikimedia.JPGPrivacy, pioneering activist Simon Davies writes in his new book, Privacy: A Personal Chronicle, "varies widely according to context and environment to the extent that even after decades of academic interest in the subject, the world's leading experts have been unable to agree on a single definition." In 2010, I suggested defining it as being able to eat sand without fear. The reference was to the prospect that detailed electronic school records present to small children and their parents: permanently stored data on everything they do. It didn't occur to me at the time, but in a data-rich future when eating sand has been outlawed (because some pseudoscientist believes it leads to criminality) and someone asks, "Did you eat sand as a child?", saying no because you forgot the incident (because you were *three* and now you're 65) will make you a dangerous liar.

The fact that even innocent pastimes - like eating sand - look sinister when the beholder is already prejudiced is the kind of reason why sometimes we need privacy even from the people we're supposed to be able to trust. This year's Privacy Law Scholars conference tossed up two examples, provided by Najarian Peters, whose project examines the reasons why black Americans adopt educational alternatives - home-schooling, "un-schooling" (children follow their own interests, Summerhill-style), and self-directed education (children direct their own activities) - and Carleen M. Zubrzycki, who has been studying privacy from doctors. Cue Greg House: Everybody lies. Judging from the responses Zubrzycki is getting from everyone she talks to about her projects, House is right, but, as he would not accept, we have our reasons.

Sometimes lying is essential to get a new opinion untainted by previous incorrect diagnoses or dismissals (women in pain, particularly). In some cases, the problem isn't the doctor but the electronic record and the wider health system that may see it. In some cases, lying may protect the doctor, too; under the new, restrictive Alabama law that makes performing an abortion after six weeks a felony, doctors would depend on their patients' silence. This last topic raised a question: given that women are asked the date of their last period at every medical appointment, will states with these restrictive laws (if they are allowed to stand) begin demanding to inspect women's menstrual apps?

The intriguing part of Peters' project is that most discussions of home-schooling and other alternative approaches to education focus on the stereotype of parents who don't want their kids to learn about evolution, climate change, or sex. But her interviewees have a different set of concerns: they want a solid education for their children, but they also want to protect them from prejudice, stigmatization, and the underachievement that comes with being treated as though you can't achieve much. The same infraction that is minor for a white kid may be noted and used to confirm teachers' prejudices against a black child. And so on. It's another reminder of how little growing up white in America may tell you about growing up black in America.

Zubrzycki and Peters were not alone in finding gaps in our thinking: Anne Toomey McKenna, Amy C. Gaudion, and Jenni L. Evans have discovered that existing laws do not cover the use of data collected by satellites and aggregated via apps - think last year's Strava incident, in which a heat map published by the company from aggregated data exposed the location of military bases and the identities of personnel - while PLSC co-founder Chris Hoofnagle began the initial spadework on the prospective privacy impacts of quantum computing.

Both of these are gaps in current law. GDPR covers processing data; it says little about how the predictions derived from that data may be used. GDPR also doesn't cover the commercial aggregation of satellite data, an intersectional issue requiring expertise in both privacy law and satellite technology. Yet all data may eventually be personal data, as 100,000 porn stars may soon find out. (Or they may not; the claim that a programmer has been able to use facial recognition to match porn performers to social media photographs is considered dubious, at least for now.) For this reason, Margot Kaminski is proposing "binary governance", in which one prong governs the use of data and the other ensures due process.

Tl;dr: it's going to be rough. Quantum computing is expected to expose things that today can successfully be hidden - including stealth surveillance technologies. It's long been mooted, for example, that quantum computing will render all of today's encryption crackable, opening up all our historical encrypted data. PLSC's discussion suggests it will also vastly increase the speed of communications. More interesting was a comment from Pam Dixon, whose research shows that high-speed biometric analysis is already beginning to happen, as companies in China find new, much faster, search methods that are bringing "profound breakthroughs" in mass surveillance.

"The first disruption was the commodification of data and data breakers," she said. "What's happening now is the next phase, the commodification of prediction. It's getting really cheap." If the machine predicts that you fit the profile of people who ate sand, what will it matter if you say you didn't? Even if it's true.


Illustrations: Sand box (via Janez Novak at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 24, 2019

Name change

Dns-rev-1-wikimedia.gifIn 2014, six months after the Snowden revelations, engineers began discussing how to harden the Internet against passive pervasive surveillance. Among the results have been efforts like Let's Encrypt, EFF's Privacy Badger, and HTTPS Everywhere. Real inroads have been made into closing some of the Internet's affordances for surveillance and improving security for everyone.

Arguably the biggest remaining serious hole is the domain name system, which was created in 1983. The DNS's historical importance is widely underrated; it was essential in making email and the web usable enough for mass adoption before search engines. Then it stagnated. Today, this crucial piece of Internet infrastructure still behaves as if everyone on the Internet can trust each other. We know the Internet doesn't live there any more; in February the Internet Corporation for Assigned Names and Numbers, which manages the DNS, warned of large-scale spoofing and hijacking attacks. The NSA is known to have exploited it, too.

The problem is the unprotected channel between the computer into which we type humanly-readable names such as pelicancrossing.net and the computers that translate those names into numbered addresses the Internet's routers understand, such as 216.92.220.214. The fact that routers all trust each other is routinely exploited for the captive portals we often see when we connect to public wi-fi systems. These are the pages that universities, cafes, and hotels set up to redirect Internet-bound traffic to their own page so they can force us to log in, pay for access, or accept terms and conditions. Most of us barely think about it, but old-timers and security people see it as a technical abuse of the system.

Several hijacking incidents raised awareness of DNS's vulnerability as long ago as 1998, when security researchers Matt Blaze and Steve Bellovin discussed it at length at Computers, Freedom, and Privacy. Twenty-one years on, there have been numerous proposals for securing the DNS, most notably DNSSEC, which offers an upwards chain of authentication. However, while DNSSEC solves validation, it still leaves the connection open to logging and passive surveillance, and the difficulty of implementing it has meant that since 2010, when ICANN signed the global DNS root, uptake has barely reached 14% worldwide.

In 2018, the IETF adopted DNS-over-HTTPS as a standard. Essentially, this sends DNS requests over the same secure channel browsers use to visit websites. Adoption is expected to proceed rapidly because it's being backed by Mozilla, Google, and Cloudflare, who jointly intend to turn it on by default in Chrome and Firefox. In a public discussion at this week's Internet Service Providers Association conference, a fellow panelist suggested that moving DNS queries to the application level opens up the possibility that two different apps on the same device might use different DNS resolvers - and get different responses to the same domain name.
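For the curious, here is a minimal sketch of what such a lookup looks like from an application's point of view, using Cloudflare's public JSON endpoint for DoH (one of the resolvers named above); the endpoint and parameters follow Cloudflare's published documentation at the time of writing, so treat this as illustrative rather than definitive. The query and answer travel inside an ordinary HTTPS connection, so an on-path observer sees only encrypted traffic to the resolver, not the name being resolved.

    # Minimal sketch of a DNS-over-HTTPS lookup via Cloudflare's public
    # JSON API (per its published documentation at the time of writing).
    import requests

    def doh_lookup(name, record_type="A"):
        resp = requests.get(
            "https://cloudflare-dns.com/dns-query",
            params={"name": name, "type": record_type},
            headers={"accept": "application/dns-json"},
            timeout=10,
        )
        resp.raise_for_status()
        return [answer["data"] for answer in resp.json().get("Answer", [])]

    # Two applications on the same device could each point a function like
    # this at a different resolver and, in principle, get different answers.
    print(doh_lookup("pelicancrossing.net"))

Note that nothing here consults the operating system's DNS settings at all, which is exactly why captive portals and ISP-level filters never get a look in.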

Britain's first public notice of DoH came a couple of weeks ago in the Sunday Times, which billed it as "Warning over Google Chrome's new threat to children". This is a wild overstatement, but it's not entirely false: DoH will allow users to bypass the parts of Britain's filtering system that depend on hijacking DNS requests to divert visitors to blank pages or warnings. An engineer would probably argue that if Britain's many-faceted filtering system is affected it's because the system relies on workarounds that shouldn't have existed in the first place. In addition, because DoH sends DNS requests over web connections, the traffic can't be logged or distinguished from the mass of web traffic, so it will also render moot some of the UK's (and EU's) data retention rules.

For similar reasons, DoH will break captive portals in unfriendly ways. A browser with DoH turned on by default will ignore the hotel/cafe/university settings and instead direct DNS queries via an encrypted channel to whatever resolver it's been set to use. If the network requires authentication via a portal, the connection will fail - a usability problem that will have to be solved.

There are other legitimate concerns. Bypassing the DNS resolvers run by local ISPs in favor of those belonging to, say, Google, Cloudflare, and Cisco, which bought OpenDNS in 2015, will weaken local ISPs' control over the connections they supply. This is both good and bad: ISPs will be unable to insert their own ads - but they also can't use DNS data to identify and block malware as many do now. The move to DoH risks further centralizing the Internet's core infrastructure and strengthening the power of companies most of us already feel have too much control.

The general consensus, however, is that like it or not, this thing is coming. Everyone is still scrambling to work out exactly what to think about it and what needs to be done to mitigate accompanying risks, as well as find solutions to the resulting problems. It was clear from the ISPA conference panel that everyone has mixed feelings, though the exact mix of those feelings - and which aspects are identified as problems - differs among ISPs, rights activists, and security practitioners. But it comes down to this: whether you like this particular proposal or not, the DNS cannot be allowed to remain in its present insecure state. If you don't want DoH, come up with a better proposal.


Illustrations: DNS diagram (via Б.Өлзий at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 17, 2019

Genomics snake oil

DNA_Double_Helix_by_NHGRI-NIH-PD.jpgIn 2011, as part of an investigation she conducted into the possible genetic origins of the streak of depression that ran through her family, the Danish neurobiologist Lone Frank had her genome sequenced and interviewed many participants in the newly-opening field of genomics that followed the first complete sequencing of the human genome. In her resulting book, My Beautiful Genome, she commented on the "Wild West" developing around retail genetic testing being offered to consumers over the web. Absurd claims such as using DNA testing to find your perfect mate or direct your child's education abounded.

This week, at an event organized by Breaking the Frame, New Zealand researcher Andelka M. Phillips presented the results of her ongoing study of the same landscape. The testing is just as unreliable, the claims even more absurd - choose your diet according to your DNA! find out what your superpower is! - and the number of companies she's collected has reached 289 while the cost of the tests has shrunk and the size of the databases has ballooned. Some of this stuff makes astrology look good.

To be perfectly clear: it's not, or not necessarily, the gene sequencing itself that's the problem. To be sure, the best lab cannot produce a reading that represents reality from poor-quality samples. And many samples are indeed poor, especially those snatched from bed sheets or excavated from garbage cans to send to sites promising surreptitious testing (I have verified these exist, but I refuse to link to them) to those who want to check whether their partner is unfaithful or whether their child is in fact a blood relative. But essentially, for health tests at least, everyone is using more or less the same technology for sequencing.

More crucial is the interpretation and analysis, as Helen Wallace, the executive director of GeneWatch UK, pointed out. For example, companies differ in how they identify geographical regions and frame populations, and in the makeup of their databases of reference contributions. This is how a pair of identical Canadian twins got varying and non-matching test results from five companies, one Ashkenazi Jew got six different ancestry reports, and, according to one study, up to 40% of DNA results from consumer genetic tests are false positives. As I type, the UK Parliament is conducting an inquiry into commercial genomics.

Phillips makes the data available to anyone who wants to explore it. Meanwhile, so far she's examined the terms of service and privacy policies of 71 companies, and finds them filled with technology company-speak, not medical information. They do not explain these services' technical limitations or the risks involved. Yet it's so easy to think of disastrous scenarios: this week, an American gay couple reported that their second child's birthright citizenship is being denied under new State Department rules. A false DNA test could make a child stateless.

Breaking the Frame's organizer, Dave King, believes that a subtle consequence of the ancestry tests - the things everyone was quoting in 2018 that tell you that you're 13% German, 1% Somalian, and whatever else - is to reinforce the essentially racist notion that "Germanness" has a biological basis. He also particularly disliked the services claiming they can identify children's talents; these claim, as Phillips highlighted, that testing can save parents money they might otherwise waste on impossible dreams. That way lies Gattaca and generations of children who don't get to explore their own abilities because they've already been written off.

Even more disturbing questions surround what happens with these large databases of perfect identifiers. In the UK, last October the Department of Health and Social Care announced its ambition to sequence 5 million genomes. Included was the plan, beginning in 2019, to offer whole genome sequencing to all seriously ill children and adults with specific rare diseases or hard-to-treat cancers as part of their care. In other words, the most desperate people are being asked first, a prospect Phil Booth, coordinator of medConfidential, finds disquieting. As so much of this is still research, not medical care, he said, like the late despised care.data, it "blurs the line around what is your data, and between what the NHS was and what some would like it to be". Exploitation of the nation's medical records as raw material for commercial purposes is not what anyone thought they were signing up for. And once you have that giant database of perfect identifiers...there's the Home Office, which has already been caught using the NHS to hunt illegal immigrants and DNA testing immigrants.

So Booth asked this: why now? Genetic sequencing is 20 years old, and to date it has yet to come close to being ready to produce the benefits predicted for it. We do not have personalized medicine, or, except in a very few cases (such as a percentage of breast cancers), drugs tailored to genetic makeup. "Why not wait until it's a better bet?" he asked. Instead of spending billions today - billions that, as an audience member pointed out, would produce better health more widely if spent on improving the environment, nutrition, and water - the proposal is to spend them on a technology that may still not be producing results 20 years from now. Why not wait, say, ten years and see if it's still worth doing?


Illustrations: DNA double helix (via Wikimedia)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 10, 2019

Slime trails

ghostbusters-murray-slime.pngIn his 2000 book, Which Lie Did I Tell?, the late, great screenwriter William Goldman called the brilliant 1963 Stanley Donen movie Charade a "money-loser". Oh, sure, it was a great success - for itself. But it cost Hollywood hundreds of millions of dollars in failed attempts to copy its magical romantic-comedy-adventure-thriller mixture. (Goldman's own version, 1992's The Year of the Comet, was - his words - "a flop".) In this sense, Amazon may be the most expensive company ever launched in Silicon Valley because it encouraged everyone to believe losing money in 17 of its first 18 years doesn't matter.

Uber has been playing up this comparison in the run-up to its May 2019 IPO. However, two things make it clear the comparison is false. First - duh - losing money just isn't a magical sign of a good business, even in the Internet era. Second, Amazon had scale on its side, as well as a pioneering infrastructure it was able later to monetize. Nothing about transport scales, as Hubert Horan laid out in 2017; even municipalities can't make Uber cheaper than public transit. Horan's analysis of Uber's IPO filing is scathing. Investment advisers love to advise investing in companies that make popular products, but *not this time*.

Meanwhile, network externalities abound. The Guardian highlights the disparity between Uber's drivers, who have been striking this week, and its early investors, who will make billions even while the company says it intends to continue slicing drivers' compensation. The richest group, says the New York Times, have already decamped to lower-tax states.

If Horan is right, however, the impending shift of billions of dollars from drivers and greater fools to already-wealthy early investors will arguably be a regulatory failure on the part of the Securities and Exchange Commission. I know the rule of the stock market is "buyer beware", but without the trust conferred by regulators there will *be* no buyers, not even pension funds. Everyone needs government to ensure fair play.

Somewhere in one of his 500-plus books, the science fiction writer Isaac Asimov commented that he didn't like to fly because in case of a plane crash his odds of survival were poor. "It's not sporting." In fact, most passengers in plane crashes survive unharmed - though obviously not in the recent Boeing crashes. Blame, as Madeline Elish correctly predicted in her paper on moral crumple zones, is being sprayed widely - at faulty sensors, at software, and particularly at the humans who build and operate these things, pilots included.

The reality seems more likely to be a perfect storm comprising numerous components: 1) the same kind of engineering-management disconnect that doomed Challenger in 1986, 2) trying to compensate with software for a hardware problem, 3) poorly thought-out cockpit warning light design, 4) the number and complexity of vendors involved, and 5) receding regulators. As hybrid cyber-physical systems become more pervasive, it seems likely we will see many more situations where small decisions made by different actors will collide to create catastrophes, much like untested drug interactions.

Again, regulatory failure is the most alarming. Any company can screw up. The failure of any complex system can lead to companies all blaming each other. There are always scapegoats. But in an industry where public perception of safety is paramount, regulators are crucial in ensuring trust. The flowchart at the Seattle Times says it all about how the FAA has abdicated its responsibility. It's particularly infuriating because many in the cybersecurity industry cite aviation as a fine example of what an industry can do to promote safety and security when the parties recognize their collective interests are best served by collaborating and sharing data. Regulators who audit and test provide an essential backstop.

The 6% of the world that flies relies on being able to trust regulators to ensure their safety. Even if the world's airlines now decide that they can't trust the US system, where are they going to go for replacement aircraft? Their own governments will have to step in where the US is failing, as the EU already does in privacy and antitrust. Does the environment win, if people decide it's too risky to fly? Is this a plan?

I want regulators to work. I want to be able to fly with reasonable odds of survival, have someone on the job to detect financial fraud, and be able to trust that medical devices are safe. I don't care how smart you are, no consumer can test these things for themselves, any more than we can tell if a privacy policy is worth the electrons it's printed on.

On that note, last week on Twitter Demos researcher Carl Miller, author of The Death of the Gods, made one of his less-alarming suggestions. Let's replace "cookie": "I'm willing to bet we'd be far less willing to click yes, if the website asked if we [are] willing to have a 'slime trail', 'tracking beacon' or 'surveillance agent' on our browser."

I like "slime trail", which extends to cover the larger use of "cookie" in "cookie crumbs" to describe the lateral lists that show the steps by which you arrived at the current page. Now, when you get a targeted ad, people will sympathize as you shout, "I've been slimed!"


Illustrations: Bill Murray, slimed in Ghostbusters (1984).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 26, 2019

This house

2001-hal.pngThis house may be spying on me.

I know it listens. Its owners say, "Google, set the timer for one minute," and a male voice sounds: "Setting the timer for one minute."

I think, one minute? You need a timer for one minute? Does everyone now cook that precisely?

They say, "Google, turn on the lamp in the family room." The voice sounds: "Turning on the lamp in the family room." The lamp is literally sitting on the table right next to the person issuing the order.

I think, "Arm, hand, switch, flick. No?"

This happens every night because the lamp is programmed to turn off earlier than we go to bed.

I do not feel I am visiting the future. Instead, I feel I am visiting an experiment that years from now people will look back on and say, "Why did they do that?"

I know by feel how long a minute is. A child growing up in this house would not. That child may not even know how to operate a light switch, even though one of the house's owners is a technical support guy who knows how to build and dismember computers, write code, and wire circuits. Later, this house's owner tells me, "I just wanted a reminder."

It's 16 years since I visited Microsoft's and IBM's visions of the smart homes they thought we might be living in by now. IBM imagined voice commands; Microsoft imagined fashion advice-giving closets. The better parts of the vision - IBM's dashboard with a tick-box so your lawn watering system would observe the latest municipal watering restrictions - are sadly unavailable. The worse parts - living in constant near-darkness so the ubiquitous projections are readable - are sadly closer. Neither envisioned giant competitors whose interests are served by installing in-house microphones on constant alert.

This house inaudibly alerts its owner's phones whenever anyone approaches the front door. From my perspective, new people mysteriously appear in the kitchen without warning.

This house has smartish thermostats that display little wifi icons to indicate that they're online. This house's owners tell me these are Ecobee Linux thermostats; the wifi connection lets them control the heating from their phones. The thermostats are not connected to Google.

None of this is obviously intrusive. This house looks basically like a normal house. The pile of electronics in the basement is just a pile of electronics. Pay no attention to the small blue flashing lights behind the black fascia.

One of this house's owners tells me he has deliberately chosen a male voice for the smart speaker so as not to suggest that women are or should be subservient to men. Both owners are answered by the same male voice. I can imagine personalized voices might be useful for distinguishing who asked what, particularly in a shared house or a company, and ensuring only the right people got to issue orders. Google says its speakers can be trained to recognize six unique voices - a feature I can see would be valuable to the company as a vector for gathering more detailed information about each user's personality and profile. And, yes, it would serve users better.

Right now, I could come down in the middle of the night and say, "Google, turn on the lights in the master bedroom." I actually did something like this once by accident years ago in a friend's apartment that was wirelessed up with X10 controls. I know this system would allow it because I used the word "Google" carelessly in a sentence while standing next to a digital photo frame, and the unexpected speaker inside it woke up to say, "I don't understand". This house's owner stared: "It's not supposed to do that when Google is not the first word in the sentence". The photo frame stayed silent.

I think it was just marking its territory.

Turning off the fan in their bedroom would be more subtle. They would wake up more slowly, and would probably just think the fan had broken. This house will need reprogramming to protect itself from children. Once that happens, guests will be unable to do anything for themselves.

This house's owners tell me there are many upgrades they could implement, and they will, but managing them takes skill and thought: segmenting and securing the network, setting up local data storage. Keeping Google and Amazon at bay requires an expert.

This house's owners do not get their news from their smart speakers, but it may be only a matter of time. At a recent Hacks/Hackers, Nic Newman gave the findings of a recent Reuters Institute study: smart speakers are growing faster than smartphones at the same stage, they are replacing radios, and "will kill the remote control". So far, only 46% use them to get news updates. What was alarming was the gatekeeper control providers have: on a computer, the web could offer 20 links; on a smartphone there's room for seven, voice...one. Just one answer to, "What's the latest news on the US presidential race?"

At OpenTech in 2017, Tom Steinberg observed that now that his house was equipped with an Amazon Echo, homes without one seemed "broken". He predicted that this would become such a fundamental technology that "only billionaires will be able to opt out". Yet really, the biggest advance since the beginning of remote controls is that now your garage door opener can collect your data and send it to Google.

My house can stay "broken".


Illustrations: HAL (what else?).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 8, 2019

Pivot

parliament-whereszuck.jpgWould you buy a used social media platform from this man?

"As I think about the future of the internet, I believe a privacy-focused communications platform will become even more important than today's open platform," Mark Zuckerberg wrote this week at the Facebook blog, also summarized at the Guardian.

Zuckerberg goes on to compare Facebook and Instagram to "the digital equivalent of a town square".

So many errors, so little time. Neither Facebook nor Instagram is open. "Open information," Rufus Pollock explained last year in The Open Revolution, "...can be universally and freely used, built upon, and shared." While, "In a Closed world information is exclusively 'owned' and controlled, its attendant wealth and power more and more concentrated".

The alphabet is open. I do not need a license from the Oxford English Dictionary to form words. The web is open (because Tim Berners-Lee made it so). One of the first social media, Usenet, is open. Particularly in the early 1990s, Usenet really was the Internet's town square.

*Facebook* is *closed*.

Sure, anyone can post - but only in the ways that Facebook permits. Running apps requires Facebook's authorization, and if Facebook makes changes, SOL. Had Zuckerberg said - as some have paraphrased him - "town hall", he'd still be wrong, but less so: even smaller town halls have metal detectors and guards to control what happens inside. However, they're publicly owned. Under the structure Zuckerberg devised when it went public, even the shareholders have little control over Facebook's business decisions.

So, now: this week Zuckerberg announced a seeming change of direction for the service. Slate, the Guardian, and the Washington Post all find skepticism among privacy advocates that Facebook can change in any fundamental way, and they wonder about the impact on Facebook's business model of the shift to focusing on secure private messaging instead of the more public newsfeed. Facebook's former chief security officer Alex Stamos calls the announcement a "judo move" that removes both the privacy complaints (Facebook now can't read what you say to your friends) and allows the site to say that complaints about circulating fake news and terrorist content are outside its control (Facebook now can't read what you say to your friends *and* doesn't keep the data).

But here's the thing. Facebook is still proposing to unify the WhatsApp, Instagram, and Facebook user databases. Zuckerberg's stated intention is to build a single unified secure messaging system. In fact, as Alex Hern writes at the Guardian, that's the one concrete action Zuckerberg has committed to, and it was announced back in January, to immediate privacy queries from the EU.

The point that can't be stressed enough is that although Facebook is trading away the ability to look at the content of what people post, it will retain oversight of all the traffic data. We have known for decades that metadata is even more revealing than content; I remember the late Caspar Bowden explaining the issues in detail in 1999. Even if Facebook's promise to vape the messages extends to keeping no copies for itself (a stretch, given that we found out in 2013 that the company keeps every character you type), it will still be able to keep its insights into the connections between people and the conclusions it draws from them. Or, as Hern also writes, Zuckerberg "is offering privacy on Facebook, but not necessarily privacy from Facebook".

Siva Vaidhyanathan, author of Antisocial Media, seems to be the first to get this, and to point out that Facebook's supposed "pivot" is really just a decision to become more dominant, like China's WeChat. WeChat thoroughly dominates Chinese life: it provides messaging, payments, and a de facto identity system. This is where Vaidhyanathan believes Facebook wants to go, and if encrypting messages means it can't compete in China...well, WeChat already owns that market anyway. Let Google get the bad press.

Facebook is making a tradeoff. The merged database will give it the ability to inspect redundancy - are these two people connected on all three services or just one? - and therefore far greater certainty about which contacts really matter and to whom. The social graph that emerges from this exercise will be smaller because duplicates will have been merged, but far more accurate. The "pivot" does, however, look like it might enable Facebook to wriggle out from under some of its numerous problems - uh, "challenges". The calls for regulation and content moderation focus on the newsfeed. "We have no way to see the content people write privately to each other" ends both discussions, quite possibly along with any liability Facebook might have if the EU's copyright reform package passes with Article 11 (the "link tax") intact.
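
To make the redundancy idea concrete, here is a minimal sketch - invented names and connections, nothing to do with Facebook's actual systems - of merging contact graphs from three services and asking which relationships are confirmed on more than one:

    # A toy sketch - invented names and edges, not Facebook code - of checking
    # how many relationships are confirmed on more than one service after a merge.

    def edges(pairs):
        """Normalize (a, b) contact pairs so direction doesn't matter."""
        return {tuple(sorted(p)) for p in pairs}

    facebook  = edges([("ana", "ben"), ("ana", "caz"), ("ben", "dee")])
    whatsapp  = edges([("ana", "ben"), ("ben", "dee"), ("caz", "dee")])
    instagram = edges([("ana", "ben"), ("ana", "caz")])

    merged = facebook | whatsapp | instagram
    confirmed = {e for e in merged
                 if sum(e in g for g in (facebook, whatsapp, instagram)) >= 2}

    print(len(merged), "distinct relationships after merging")    # 4
    print(len(confirmed), "confirmed on two or more services")    # 3

The merged set is smaller than the three graphs added together, because duplicates collapse into a single edge, yet the multiply-confirmed edges are exactly the ones an advertiser would trust most - the tradeoff described above.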

Even calls that the company should be broken up - appropriate enough, since the EU only approved Facebook's acquisition of WhatsApp when the company swore that merging the two databases was technically impossible - may founder against a unified database. Plus, as we know from this week's revelations, the politicians calling for regulation depend on it for re-election, and in private they accommodate it, as Carole Cadwalladr and Duncan Campbell write at the Guardian and Bill Goodwin writes at Computer Weekly.

Overall, then, no real change.


Illustrations: The international Parliamentary committee, with Mark Zuckerberg's empty seat.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 22, 2019

Metropolis

Metropolis-openingshot.png"As a citizen, how will I know I live in a smarter city, and how will life be different?" This was probably the smartest question asked at yesterday's Westminster Forum seminar on smart cities (PDF); it was asked by Tony Sceales, acting as moderator.

"If I feel safe and there's less disruption," said Peter van Manen. "You won't necessarily know. Thins will happen as they should. You won't wake up and say, 'I'm in the city of the future'," said Sam Ibbott. "Services become more personalized but less visible," said Theo Blackwell the Chief Digital Office for London.

"Frictionless" said Jacqui Taylor, offering it as the one common factor she sees in the wildly different smart city projects she has encountered. I am dubious that this can ever be achieved: one person's frictionless is another's desperate frustration: streets cannot be frictionless for *both* cars and cyclists, just as a city that is predicted to add 2 million people over the next ten years can't simultaneously eliminate congestion. "Working as intended" was also heard. Isn't that what we all wish computers would do?

Blackwell had earlier mentioned the "legacy" of contactless payments for public transport. To Londoners smushed into stuffed Victoria Line carriages in rush hour, the city seems no smarter than it ever was. No amount of technological intelligence can change the fact that millions of people all want to go home at the same time or the housing prices that force them to travel away from the center to do so. We do get through the ticket barriers faster.

"It's just another set of tools," said Jennifer Schooling. "It should feel no different."

The notion that you won't know as the city you live in smartens up should set off alarm bells. The fairest explanation for that hiddenness is the reality that, as Sara Degli Esposti pointed out at this year's Computers, Privacy, and Data Protection, this whole area is a business-to-business market. "People forget that, especially at the European level. Users are not part of the picture, and that's why we don't see citizens engaged in smart city projects. Citizens are not the market. This isn't social media."

She was speaking at CPDP's panel on smart cities and governance, convened by the University of Stirling's William Webster, who has been leading a research project, CRISP, to study these technologies. CRISP asked a helpfully different question: how can we use smart city technologies to foster citizen engagement, coproduction of services, development of urban infrastructure, and governance structures?

The interesting connection is this: it's no surprise when CPDP's activists, regulators, and academics talk about citizen engagement and participation, or deplore a model in which smart cities are a business-led excuse for corporate and government surveillance. The surprise comes when, two weeks later, the same themes arise among Westminster Forum's private and public sector speakers and audience. These are the people who are going to build these new programs and services, and they, too, are saying they're less interested in technology and more interested in solving the problems that keep citizens awake at night: health, especially.

A paradigm shift appears to be under way as municipalities begin to consider seriously where and on what to spend their funds.

However, the shift may be solely European. At CPDP, Canadian surveillance studies researcher David Murakami Wood told the story of Toronto, where (Google owner) Alphabet's subsidiary Sidewalk Labs swooped in, circa 2017, with proposals to redevelop the city's Quayside area in partnership with Waterfront Toronto. The project has been hugely controversial - there were hearings this week in Ottawa, the national capital.

As Murakami Wood tells it, for Sidewalk Labs the area is a real-world experiment using real people's lives as input to create products the company can later sell elsewhere. The company has made clear it intends to keep all the data the infrastructure generates on its servers in the US, as well as all the intellectual property rights. This, Murakami Wood argued, is the real cost of the "free" infrastructure. It is also, as we're beginning to see elsewhere, the extension of online tracking - or, as Murakami Wood put it, surveillance capitalism - into the physical world: cultural appropriation at municipal scale by a company that has no track record of building buildings, or even of publishing detailed development plans. Small wonder that Murakami Wood laughed when he heard Sidewalk Labs CEO Dan Doctoroff impress a group of enthusiastic young Canadian bankers with the news that the company had been studying cities for *two years*.

Putting these things together, we have, as Andrew Adams suggested, three paradigms, which we might call US corporate, Chinese authoritarian, and, emerging, European participatory and cooperative. Is this the choice?

Yes and no. Companies obviously want to develop systems once, sell them everywhere. Yet the biggest markets are one-off outliers. "Croydon," said Blackwell, "is the size of New Orleans." In addition, approaches vary widely. Some places - Webster mentioned Glasgow - are centralized command and control; others - Brazil - are more bottom-up. Rick Robinson finds that these do not meet in the middle.

The clear takeaway overall is that local context is crucial in shaping smart city projects, and despite some common factors each one is different. We should build on that.


Illustrations: Fritz Lang's Metropolis (1927).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 1, 2019

Beyond data protection

3rd-cpdp2019-sign.jpgFor the group assembled this week in Brussels for Computers, Privacy, and Data Protection, the General Data Protection Regulation that came into force in May 2018 represented the culmination of years of effort. The mood, however, is not so much self-congratulatory as "what's next?".

The first answer is a lot of complaints. An early panel featured a number of these. Max Schrems, never one to shirk, celebrated GDPR day in 2018 by joining with La Quadrature du Net to file two complaints against Google, WhatsApp, Instagram, and Facebook over "forced consent". Last week, he filed eight more complaints against Amazon, Apple, Spotify, Netflix, YouTube, SoundCloud, DAZN, and Flimmit regarding their implementation of subject access rights. A day or so later, the news broke: the French data protection regulator, CNIL, has fined Google €50 million (PDF) on the basis of their complaint - the biggest fine so far under the new regime that sets the limit at 4% of global turnover. Google is considering an appeal.

It's a start. We won't know for probably five years whether GDPR will have the intended effect of changing the balance of power between citizens and data-driven companies (even though one site is already happy to call it a failure). Meanwhile, one interesting new development is Apple's crackdown on Facebook and then Google for abusing its enterprise app system to collect comprehensive data on end users. While Apple is certainly far less dependent on data collection than the rest of GAFA/FAANG, this action is a little like those types of malware that download anti-virus software to clean your system of the competition.

The second - more typical of a conference - is to stop and think: what doesn't GDPR cover? The answers are coming fast: AI, automated decision-making, household or personal use of data, and (oh, lord) blockchain. And, a questioner asked late on Wednesday, "Is data protection privacy, data, or fairness?"

Several of these areas are interlinked: automated decision-making is currently what we mean when we say "AI", and we talk a lot about the historical bias stored in data and the discrimination that algorithms derive from training data and bake into their results. Discussions of this problem, Ansgar Koene said, tend to portray accuracy and fairness as a tradeoff, with accuracy presented as a scientifically neutral reality and fairness as a fuzzy human wish. Instead, he argued, accuracy depends on values we choose to judge it by. Why shouldn't fairness just be one of those values?
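
As a minimal illustration of that point - invented numbers, not Koene's work - here is fairness computed as just another metric alongside accuracy, in this case the gap in positive-prediction rates between two groups:

    # Toy illustration (invented data): score a classifier on accuracy *and*
    # a simple fairness measure - demographic parity difference, i.e. the gap
    # in positive-prediction rates between two groups.

    records = [  # (group, prediction, actual)
        ("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 1, 1),
        ("B", 0, 1), ("B", 0, 0), ("B", 1, 1), ("B", 0, 0),
    ]

    accuracy = sum(pred == actual for _, pred, actual in records) / len(records)

    def positive_rate(group):
        preds = [pred for g, pred, _ in records if g == group]
        return sum(preds) / len(preds)

    parity_gap = abs(positive_rate("A") - positive_rate("B"))

    print(f"accuracy: {accuracy:.2f}")      # 0.75 - how often it's right
    print(f"parity gap: {parity_gap:.2f}")  # 0.50 - 0.0 would mean equal outcome rates

Both figures are simply values computed on the same predictions; which one a system is tuned to optimize is a choice, not a law of nature.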

A bigger limitation - which we've written about here since 2015 - is that privacy law tends to focus on the individual. Seda Gürses noted that focusing on the algorithm - how to improve it and reduce its bias - similarly ignores the wider context and network externalities. Optimize the Waze algorithm so each driver can reach their destination in record time, and the small communities whose roads were not built for speedy cut-throughs bear the costs of the extra traffic, noise, and pollution those drivers generate. Next-generation privacy will have to reflect that wider context; as Dennis Hirsch put it, social protection rather than individual control. As Schrems' and others' complaints show, individual control is rarely ours on today's web in any case.

Privacy law is not the only regulation that suffers from this problem. At Tuesday's pre-conference Privacy Camp, several speakers deplored the present climate in which platforms' success in removing hate speech, terrorist content, and unauthorized copyright material is measured solely in numbers: how many pieces, how fast. Such a regime does not foster thoughtful consideration, nuance, respect for human rights, or the creation of a robust system of redress for the wrongly accused. "We must move away from the idea that illegal content can be perfectly suppressed and that companies are not trying hard enough if they aren't doing it," Mozilla Internet policy manager Owen Bennett said, going on to advocate for a wider harm reduction approach.

The good news, in a way, is that privacy law has fellow warriors: competition, liability, and consumer protection law. The first two of those, said Mireille Hildebrandt, need to be rethought, in part because some problems will leave us no choice. She cited, for example, the energy market: as we are forced to move to renewables, both supply and demand will fluctuate enormously. "Without predictive technology I don't see how we can solve it." Continuously predicting the energy use of each household will, she wrote in a paper in 2013 (PDF), pose new threats to privacy, data protection, non-discrimination, and due process.

One of the more interesting new (to me, at least) players on this scene is Algorithm Watch, which has just released a report on algorithmic decision-making in the EU that recommends looking at other laws that are relevant to specific types of decisions, such as applying equal pay legislation to the gig economy. Data protection law doesn't have to do it all.

Some problems may not be amenable to law at all. Paul Nemitz posed this question: given that machine learning training data is always historical, and that therefore the machines are always perforce backward-looking, how do we as humans retain the drive to improve if we leave all our decisions to machines? No data protection law in the world can solve that.

Illustrations: The CPDP 2019 welcome sign in Brussels.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 17, 2019

Misforgotten

European_Court_of_Justice_(ECJ)_in_Luxembourg_with_flags.jpg"It's amazing. We're all just sitting here having lunch like nothing's happening, but..." This was on Tuesday, as the British Parliament was getting ready to vote down the Brexit deal. This is definitely a form of privilege, but it's hard to say whether it's confidence born of knowing your nation's democracy is 900 years old, or aristocrats-on-the-verge denial as when World War I or the US Civil War was breaking out.

Either way, it's a reminder that for many people historical events proceed in the background while they're trying to get lunch or take the kids to school. This despite the fact that all of us in the UK and the US are currently hostages to a paralyzed government. The only winner in either case is the politics of disgust, and the resulting damage will be felt for decades. Meanwhile, everything else is overshadowed.

One of the more interesting developments of the past digital week is the European advocate general's preliminary opinion that the right to be forgotten, part of data protection law, should not be enforceable outside the EU. In other words, Google, which brought the case, should not have to prevent access to material to those mounting searches from the rest of the world. The European Court of Justice - one of the things British prime minister Theresa May has most wanted the UK to leave behind since her days as Home Secretary - typically follows these preliminary opinions.

The right to be forgotten is one piece of a wider dispute that one could characterize as the Internet versus national jurisdiction. The broader debate includes who gets access to data stored in another country, who gets to crack crypto, and who gets to spy on whose citizens.

This particular story began in France, where the Commission Nationale de l'Informatique et des Libertés (CNIL), the French data protection regulator, fined Google €100,000 for selectively removing a particular person's name from its search results on just its French site. CNIL argued that instead the company should delink it worldwide. You can see their point: otherwise, anyone can bypass the removal by switching to .com or .co.jp. On the other hand, following that logic imposes EU law on other countries, overriding protections such as the US First Amendment. Americans in particular tend to regard the right to be forgotten with the sort of angry horror of Lady Bracknell contemplating a handbag. Google applied to the European Court of Justice to override CNIL and vacate the fine.

A group of eight digital rights NGOs, led by Article 19 and including Derechos Digitales, the Center for Democracy and Technology, the Clinique d'intérêt public et de politique d'Internet du Canada (CIPPIC), the Electronic Frontier Foundation, Human Rights Watch, Open Net Korea, and Pen International, welcomed the ruling. Many others would certainly agree.

The arguments about jurisdiction and censorship were, like so much else, foreseen early. By 1991 or thereabouts, the question of whether the Internet would be open everywhere or devolve to lowest-common-denominator censorship was frequently debated, particularly after the United States v. Thomas case that featured a clash of community standards between Tennessee and California. If you say that every country has the right to impose its standards on the rest of the world, it's unclear what would be left other than a few Disney characters and some cat videos.

France has figured in several of these disputes: in (I think) the first international case of this kind, in 2000, it was a French court that ruled that the sale of Nazi memorabilia on Yahoo!'s site was illegal; after trying to argue that France was trying to rule over something it could not control, Yahoo! banned the sales on its French auction site and then, eventually, worldwide.

Data protection law gave these debates a new and practical twist. The origins of this particular case go back to 2014, when the European Court of Justice ruled in Google Spain v AEPD and Mario Costeja González that search engines must remove links to web pages that turn up in a name search and contain information that is irrelevant, inadequate, or out of date. The ruling arguably sought to weigh redressing the imbalance of power between individuals and the corporations publishing information about them against free expression. Finding this kind of difficult balance, the law scholar Judith Rauhofer argued at that year's Computers, Freedom, and Privacy, is what courts *do*. The court required search engines to remove from the results of a *name* search the link to the original material; it did not require the original websites to remove it entirely or require the link's removal from other search results. The ruling removed, if you like, a specific type of power amplification, but not the signal.

How far the search engines have to go is the question the ECJ is now trying to settle. This is one of those cases where no one gets everything they want because the perfect is the enemy of the good. The people who want their past histories delinked from their names don't get a complete solution, and no one country gets to decide what people in other countries can see. Unfortunately, the real winner appears to be geofencing, which everyone hates.


Illustrations: The European Court of Justice in Luxembourg (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 10, 2019

Secret funhouse mirror room

Lost_City_-_Fun_House.jpg"Here," I said, handing them an old pocket watch. "This is your great-grandfather's watch." They seemed a little stunned.

As you would. A few weeks earlier, one of them had gotten a phone call from a state trooper. A cousin they'd never heard of had died, and they might be the next of kin.

"In this day and age," one of them told me apologetically, "I thought it must be a scam."

It wasn't. Through the combined offices of a 1940 divorce and a lifetime habit of taciturnity on personal subjects, a friend I'd known for 45 years managed to die without ever realizing his father had an extensive tree of living relatives. They would have liked each other, I think.

So they came to the funeral and met their cousin through our memories and the family memorabilia we found in his house. And then they went home bearing the watch, understandably leaving us to work out the rest.

Whenever someone dies, someone else inherits a full-time job. In our time, that full-time job is located at the intersection of security, privacy - and secrecy, the latter a complication rarely discussed. In the eight years since I was last close to the process of closing out someone's life, very much more of the official world has moved online. This is both help and hindrance. I was impressed with the credit card company whose death department looked online for obits to verify what I was saying instead of demanding an original death certificate (New York state charges $15 per copy). I was also impressed with - although a little creeped out by - the credit card company that said, "Oh, yes, we already know." (It had been three weeks, two of them Christmas and New Year's.)

But those, like the watch, were easy, accounts with physical embodiments - that is, paper statements. It's the web that's hard. All those privacy and security settings that we advocate for the living fall apart when someone dies without disclosing their passwords. We found eight laptops, the most recent an actively hostile mid-2015 MacBook Pro. Sure, reset the password, but doing so won't grant access to any other stored passwords. If FileVault is turned on, a beneficent fairy - or a frustrated friend trying to honor your stated wishes, which you never had witnessed or notarized - is screwed. I'd suggest an "owner deceased" mode, but how do you protect *that* for a human rights worker or a journalist in a war zone holding details of at-risk contacts? Or when criminals arrive knowing how to unlock it? Privacy and security are essential, but when someone dies they turn into secrecy that - I seem to recall predicting in 1997 - means your intended beneficiaries *don't* inherit because they can't unlock your accounts.

It's a genuinely hard problem, not least because most people don't want to plan for their own death. Personal computers operate in binary mode: protect everything, or nothing, and protect it all the same way even though exposing a secret not-so-bad shame is a different threat model from securing a bank account. But most people do not think, "After I'm dead, what do I care?" Instead, they think, "I want people to remember me the way I want and this thing I'm ashamed of they must never, ever know, or they'll think less of me." It takes a long time in life to arrive at, "People think of me the way they think of me, and I can't control that. They're still here in my life, and that must count for something." And some people never realize that they might feel more secure in their relationships if they hid less.

So, the human right to privacy bequeaths a problem: how do you find your friend's long-lost step-sibling, who is now their next of kin, when you only know their first name and your friend's address book is encrypted on a hard drive and not written, however crabbily, in a nice, easily viewed paper notebook?

If there's going to be an answer, I imagine it lies in moving away from binary mode. It's imaginable that a computer operating system could have a "personal rescue mode" that would unlock some aspects of the computer and not others, an extension of the existing facilities for multiple accounts and permissions, though these are geared to share resources, not personal files. The owner of such a system would have to take some care which information went in which bucket, but with a system like that they could give a prospective executor a password that would open the more important parts.

No such thing exists, of course, and some people wouldn't use it even if it did. Instead, the key turned out to be the modest-sized-town people network, which was and is amazing. It was through human connections that we finally understood the invoices we found for a storage unit. Without ever mentioning it, my friend had, for years, at considerable expense, been storing a mirror room from an amusement park funhouse. His love of amusement parks was no surprise. But if we'd known, the mirror room would now be someone's beloved possession instead of broken up in a scrapyard because a few months before he died my friend had stopped paying his bills - also without telling anyone.

Illustrations: The Lost City Fun House (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 21, 2018

Behind you!

640px-Aladdin_pantomime_Nottingham_Playhouse_2008.jpgFor one reason or another - increasing surveillance powers, increasing awareness of the extent to which online activities are tracked by myriad data hogs, Edward Snowden - crypto parties have come somewhat back into vogue over the last few years after a 20-plus-year hiatus. The idea behind crypto parties is that you get a bunch of people together and they all sign each other's keys. Fun! For some value of fun.

This is all part of the web of trust that is supposed to accrue when you use public key cryptography software like PGP or GPG: each new signature on a person's public key strengthens the trust you can have that the key truly belongs to that person. In practice, the web of trust - PGP's decentralized alternative to a centralized public key infrastructure - does not scale well, and the early 1990s excitement about at least the PGP version of the idea died relatively quickly.

A few weeks ago, ORG Norwich held such a meeting and I went along to help run a workshop on when and how you want to use crypto. Like any security mechanism, encrypting email has its limits. Accordingly, before installing PGP and declaring yourself "Secure now!", a little threat modeling is a fine thing. As bad as it can be to operate insecurely, it is much, much worse to operate under the false belief that you are more secure than you are because the measures you've taken don't fit the risks you face.

For one thing, PGP does nothing to obscure metadata - that is, the record of who sent email to whom. Newer versions offer the option to encrypt the subject line, but then the question arises: how do you get busy people to read the message?
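
To see why, consider this sketch, which uses Python's standard email library and the python-gnupg package; the addresses are invented, and it assumes GnuPG and the recipient's public key are already installed. Only the body becomes opaque, while the envelope headers stay readable to every server that relays the message:

    # Sketch only: encrypting a message body, then wrapping it in an email.
    # Assumes GnuPG is installed and alice@example.org's public key has been
    # imported; all addresses here are invented.
    from email.message import EmailMessage
    import gnupg

    gpg = gnupg.GPG()
    ciphertext = gpg.encrypt("Meet at the usual place, 8pm.",
                             recipients=["alice@example.org"])

    msg = EmailMessage()
    msg["From"] = "bob@example.net"      # still readable by every relay
    msg["To"] = "alice@example.org"      # still readable by every relay
    msg["Subject"] = "Tonight"           # readable too, unless separately protected
    msg.set_content(str(ciphertext))     # only this part is opaque

    print(msg)

Anyone watching the traffic still learns who wrote to whom, and when - which, for many threat models, is the most valuable information of all.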

For another thing, even if you meticulously encrypt your email, check that the recipient's public key is correctly signed, and make no other mistakes, you are still dependent on your correspondent to take appropriate care of their archive of messages and not copy your message into a new email and send it out in plain text. The same is true of any other encrypted messaging program such as Signal; you depend on your correspondents to keep their database encrypted and either password-protect their phone and other devices or keep them inaccessible. And then, too, even the most meticulous correspondent can be persuaded to disclose their password.

For that reason, in some situations it may in fact be safer not to use encryption and remain conscious that anything you send may be copied and read. I've never believed that teenagers are innately better at using technology than their elders, but in this particular case they may provide role models: research has found that they are quite adept at using codes only they understand. To their grown-ups, it just looks like idle Facebook chatter.

Those who want to improve their own and others' protection against privacy invasion therefore need to think through what exactly they're trying to achieve.

Some obvious questions, partly derived from Steve Bellovin's book Thinking Security, are listed below (a rough sketch of writing the answers down follows the list):

- Who might want to attack you?
- What do they want?
- Are you a random target, the specific target, or a stepping stone to mount attacks on others?
- What do you want to protect?
- From whom do you want to protect it?
- What opportunities do they have?
- When are you most vulnerable?
- What are their resources?
- What are *your* resources?
- Who else's security do you have to depend on whose decisions are out of your control?
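
Purely as an illustration of turning those questions into something you can fill in and revisit - the field names and example answers here are mine, not Bellovin's - a sketch might look like this:

    # A rough way to write the answers down before choosing tools.
    # Field names are mine, not Bellovin's; the example answers are invented.
    from dataclasses import dataclass, field

    @dataclass
    class ThreatModel:
        likely_attackers: list = field(default_factory=list)
        what_they_want: str = ""
        target_type: str = ""        # "random", "specific", or "stepping stone"
        assets_to_protect: list = field(default_factory=list)
        their_resources: str = ""
        my_resources: str = ""
        dependencies: list = field(default_factory=list)  # whose security I rely on

    freelance_journalist = ThreatModel(
        likely_attackers=["commodity phishing", "a hostile source"],
        what_they_want="contact list, draft stories",
        target_type="mostly random, occasionally specific",
        assets_to_protect=["email archive", "address book"],
        their_resources="low to moderate",
        my_resources="one laptop, limited time",
        dependencies=["email provider", "cloud backup service"],
    )
    print(freelance_journalist)

Writing the answers down, even this crudely, makes it harder to kid yourself that installing one tool has settled the matter.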

At first glance, the simple answer to the first of those is "anyone and everyone". This helpful threat pyramid shows the tradeoff between the complexity of the attack and the number of people who can execute it. If you are the target of a well-funded nation-state that wants to get you, just you, and nobody else but you, you're probably hosed. Unless you're a crack Andromedan hacker unit (Bellovin's favorite arch-attacker), the imbalance of available resources will probably be insurmountable. If that's your situation, you want expert help - for example, from Citizen Lab.

Most of us are not in that situation. Most of us are random targets; beyond a raw bigger-is-better principle, few criminals care whose bank account they raid or which database they copy credit card details from. Today's highly interconnected world means that even a small random target may bring down other, much larger entities when an attacker leverages a foothold on our insignificant network to access the much larger ones that trust us. Recognizing who else you put at risk is an important part of thinking this through.

Conversely, the point about risks that are out of your control is important. Forcing everyone to use strong, well-designed passwords will not matter if the site they're used on stores them with inadequate protections.
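
For what "adequate protections" looks like on the storage side, here is a minimal sketch using Python's standard library - a salted, stretched hash rather than the password itself; the iteration count is illustrative, not a recommendation for any particular site:

    # Sketch: store a salted, stretched hash, never the password itself.
    # The iteration count here is illustrative; tune it to your hardware.
    import hashlib, hmac, os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("correct horse battery staple")
    print(verify("correct horse battery staple", salt, digest))  # True
    print(verify("Tr0ub4dor&3", salt, digest))                   # False

Even done properly, this only limits the damage after a breach; users still have no way of checking which sites bother.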

The key point that most people forget: think about the individuals involved. Security is about practice, not just technology; as Bruce Schneier likes to say, it's a process, not a product. If the policy you implement makes life hard for other people, they will eventually adopt workarounds that make their lives more manageable. They won't tell you what they've done, and you won't have anyone to warn you where the risk is lurking.

Illustrations: Aladdin pantomime at Nottingham Playhouse, 2008 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 14, 2018

Entirely preventable

cropped-Spies_and_secrets_banner_GCHQ_Bude_dishes.jpgThis week, the US House of Representatives Committee on Oversight and Government Reform used this phrase to describe the massive 2017 Equifax data breach: "Entirely preventable." It's not clear that the ensuing recommendations, while all sensible and valuable stuff - improve consumers' ability to check their records, reduce the use of Social Security numbers as unique identifiers, improve oversight of credit reporting agencies, increase transparency and accountability, hold federal contractors liable, and modernize IT security - will really prevent another similar breach from taking place. A key element was a bit of unpatched software that left open a vulnerability used by the attackers to gain a foothold - in part, the report says, because the legacy IT systems made patching difficult. Making it easier to do the right thing is part of the point of the recommendation to modernize the IT estate.

How closely is it feasible to micromanage companies the size and complexity of Equifax? What protection against fraud will we have otherwise?

The massive frustration is that none of this is new information or radical advice. On the consumer rights side, the committee is merely recommending practices that have been mandated in the EU for more than 20 years in data protection law. Privacy advocates have been saying for more than *30* years that the SSN is the textbook example of how a unique identifier should *not* be used. Patching software is so basic that you can pick any random top ten security tips and find it in the top three. We sort of make excuses for small businesses because their limited resources mean they don't have dedicated security personnel, but what excuse can there possibly be for a company the size of Equifax that holds the financial frailty of hundreds of millions of people in its grasp?

The company can correctly say this: we are not its customers. It is not its job to care about us. Its actual customers - banks, financial services, employers, governments - are all well served. What's our problem? Zeynep Tufecki summed it up correctly on Twitter when she commented that we are not Equifax's customers but its victims. Until there are proportionate consequences for neglect and underinvestment in security, she said later, the companies and their departing-with-bonuses CEOs will continue scrimping on security - even though the smallest infraction leaves a consumer struggling for years to reclaim their credit rating.

If Facebook and Google should be regulated as public utilities, the same is even more true for the three largest credit agencies, Equifax, Experian, and TransUnion, who all hold much more power over us, and who are much less accountable. We have no opt-out to exercise.

But even the punish-the-bastards approach merely smooths over and repaints the outside of a very ugly tangle of amyloid plaques. Real change would mean, as Mydex CEO David Alexander is fond of arguing, adopting a completely different approach that puts each of us in charge of our own data and avoids creating these giant attacker-magnet databases in the first place. See also data brokers, which are invisible to most people.

Meanwhile, in contrast to the committee, other parts of the Five Eyes governments seem set on undermining whatever improvements to our privacy and security we can muster. Last week the Australian parliament voted to require companies to back-door their encryption when presented with a warrant. While the bill stops short of requiring technology companies to build in such backdoors as a permanent fixture - it says the government cannot require companies to introduce a "systemic weakness" or "systemic vulnerability" - the reality is that being able to break encryption on demand *is* a systemic weakness. Math is like that: either you can prove a theorem or you can't. New information can overturn existing knowledge in other sciences, but math is built on proven bedrock. The potential for a hole is still a hole, with no way to ensure that only "good guys" can use it - even if you can agree who the good guys are.

In the UK, GCHQ has notified the intelligence and security committee that it will expand its use of "bulk equipment interference". In other words, having been granted the power to hack the world's computers - everything from phones and desktops to routers, cars, toys, and thermostats - when the 2016 Investigatory Powers Act was being debated, GCHQ now intends to break its promise to use that power sparingly.

As I wrote in a submission to the consultation, bulk hacking is truly dangerous. The best hackers make mistakes, and it's all too easy to imagine a hacking error becoming the cause of a 100-car pile-up. As smart meters roll out, albeit delayed, and the smart grid takes shape, these, too, will be "computers" GCHQ has the power to hack. You, too, can torture someone in their own home just by controlling their thermostat. Fun! And important for national security. So let's do more of it.

In a time when attacks on IT infrastructure are growing in sophistication, scale, and complexity, the most knowledgeable people in government, whose job it is to protect us, are deliberately advocating weakening it. The consequences that are doubtless going to follow the inevitable abuse of these powers - because humans are humans and the mindset inside law enforcement is to assume the worst of all of us - will be entirely preventable.


Illustrations: GCHQ listening post at dawn (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


November 2, 2018

The Brother proliferation

Security_Monitoring_Centre-wikimedia.jpgThere's this about having one or two big threats: they distract attention from the copycat threats forming behind them. Unnoticed by most of us - the notable exception being Jeff Chester and his Center for Digital Democracy - the landscape of data brokers is both consolidating and expanding in new and alarming ways. Facebook and Google remain the biggest data hogs, but lining up behind them are scores of others embracing the business model of surveillance capitalism. For many, it's an attempt to refresh their aging business models; no one wants to become an unexciting solid business.

The most obvious group is the telephone companies - we could call them "legacy creepy". We've previously noted their moves into TV. For today's purposes, Exhibit A is Verizon's 2015 acquisition of AOL, which Fortune magazine attributed to AOL's collection of advertising platforms, particularly in video, as well as its more visible publishing sites (which include the Huffington Post, Engadget, and TechCrunch). Verizon's 2016 acquisition of Yahoo! and its 3 billion user accounts and long history also drew notice, most of it negative. Yahoo!, the reasoning went, was old and dying, plus: data breaches that were eventually found to have affected all 3 billion Yahoo! accounts. Oath, Verizon's name for the division that owns AOL and Yahoo!, also owns MapQuest and Tumblr. For our purposes, though, the notable factor is that with these content sites Verizon gets a huge historical pile of their users' data that it can combine with what it knows about its subscribers in truly disturbing ways. This is a company that only two years ago was fined $1.35 million for secretly tracking its customers.

Exhibit B is AT&T, which was barely finished swallowing Time-Warner (and presumably its customer database along with it) when it announced it would acquire the adtech company AppNexus, a deal Forrester's Joanna O'Connell calls a material alternative to Facebook and Google. Should you feel insufficiently disturbed by that prospect, in 2016 AT&T was caught profiting from handing off data to federal and local drug officials without a warrant. In 2015, the company also came up with the bright idea of charging its subscribers not to spy on them via deep packet inspection. For what it's worth, AT&T is also the longest-serving campaigner against network neutrality.

In 2017, Verizon and AT&T were among the biggest lobbyists seeking to up-end the Federal Communications Commission's privacy protections.

The move into data mining appears likely to be copied by legacy telcos internationally. As evidence, we can offer Exhibit C, Telenor, which in 2016 announced its entry into the data mining business by buying the marketing technology company Tapad.

Category number two - which we can call "you-thought-they-had-a-different-business-model creepy" - is a surprise, at least to me. Here, Exhibit A is Oracle, which is reinventing itself from enterprise software company to cloud and advertising platform supplier. Oracle's list of recent acquisitions is striking: the consumer spending tracker Datalogix, the "predictive intelligence" company DataFox, the cross-channel marketing company Responsys, the data management platform BlueKai, the cross-channel machine learning company Crosswise, and audience tracker AddThis. As a result, Oracle claims it can link consumers' activities across devices, online and offline, something just about everyone finds creepy except, apparently, the people who run the companies that do it. It may surprise you to find Adobe is also in this category.

Category number three - "newtech creepy" - includes data brokers like Acxiom, perhaps the best-known of the companies that have everyone's data but that no one's ever heard of. It, too, has been scooping up competitors and complementary companies, for example LiveRamp, which it acquired from fellow profiling company RapLeaf, and which is intended to help it link online and offline identities. The French company Criteo uses probabilistic matching to send ads following you around the web and into your email inbox. My favorite in this category is Quantcast, whose advertising and targeting activities include "consent management". In other words, they collect your consent or lack thereof to cookies and tracking at one website and then follow you around the web with it. Um...you have to opt into tracking to opt out?

Meanwhile, the older credit bureaus Experian and Equifax - "traditional creepy" - have been buying enhanced capabilities and expanded geographical reach and partnering with telcos. One of Equifax's acquisitions, TALX, gave the company employment and payroll information on 54 million Americans.

The details amount to this: big companies with large resources are moving into the business of identifying us across devices, linking our offline purchases to our online histories, and packaging us into audience segments to sell to advertisers. They're all competing for the same zircon ring: our attention and our money. Doesn't that make you feel like a valued member of society?

At the 2000 Computers, Freedom, and Privacy conference, the science fiction writer Neal Stephenson presciently warned that focusing solely on the threat of Big Brother was leaving us open to invasion by dozens of Little Brothers. It was good advice. Now, Very Large Brothers are proliferating all around us. GDPR is supposed to redress this imbalance of power, but it only works when you know who's watching you so you can mount a challenge.


Illustrations: "Security Monitoring Centre" (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 27, 2018

We know where you should live

PatCadigan-Worldcon75.jpgIn the memorable panel "We Know Where You Will Live" at the 1996 Computers, Freedom, and Privacy conference, the science fiction writer Pat Cadigan startled everyone, including fellow panelists Vernor Vinge, Tom Maddox, and Bruce Sterling, by suggesting that some time in the future insurance companies would levy premiums for "risk purchases" - beer, junk foods - in supermarkets in real time.

Cadigan may have been proved right sooner than she expected. Last week, John Hancock, a 156-year-old US insurance company, announced it would discontinue underwriting traditional life insurance policies. Instead, in future all its policies will be "interactive"; that is, they will come with the "Vitality" program, under which customers supply data collected by their wearable fitness trackers or smartphones. John Hancock promotes the program, which it says is already used by 8 million customers in 18 countries, as providing discounts - in the company's characterization, a sort of second reward for "living healthy". In its depiction, everyone wins: you get lower premiums and a healthier life, and John Hancock gets your data, enabling it to make more accurate risk assessments and increase its efficiency.

Even then, Cadigan was not the only one with the idea that insurance companies would exploit the Internet and the greater availability of data. A couple of years later, a smart and prescient friend suggested that we might soon be seeing insurance companies offer discounts for mounting a camera on the hood of your car so they could mine the footage to determine blame when accidents occurred. This was long before smartphones and GoPros, but the idea of small, portable cameras logging everything goes back at least to 1945, when Vannevar Bush wrote As We May Think, an essay that imagined something a lot like the web, if you make allowances for storing the whole thing on microfilm.

This "interactive" initiative is clearly a close relative of all these ideas, and is very much the kind of thing University of Maryland professor Frank Pasquale had in mind when writing his book The Black Box Society. John Hancock may argue that customers know what data they're providing, so it's not all that black a box, but the reality is that you only know what you upload. Just like when you download your data from Facebook, you do not know what other data the company matches it with, what else is (wrongly or rightly) in your profile, or how long the company will keep penalizing you for the month you went bonkers and ate four pounds of candy corn. Surely it's only a short step to scanning your shopping cart or your restaurant meal with your smartphone to get back an assessment of how your planned consumption will be reflected in your insurance premium. And from there, to automated warnings, and...look, if I wanted my mother lecturing me in my ear I wouldn't have left home at 17.

There has been some confusion about how much choice John Hancock's customers have about providing their data. The company's announcement is vague about this. However, it does make some specific claims: Vitality policy holders so far have been found to live 13-21 years longer than the rest of the insured population; generate 30% lower hospitalization costs; take nearly twice as many steps as the average American; and "engage with" the program 576 times a year.

John Hancock doesn't mention it, but there are some obvious caveats about these figures. First of all, the program began in 2015. How does the company have data showing its users live so much longer? Doesn't that suggest that these users were living longer *before* they adopted the program? Which leads to the second point: the segment of the population that has wearable fitness trackers and smartphones tends to be more affluent (which tends to favor better health already) and more focused on their health to begin with (ditto). I can see why an insurance company would like me to "engage with" its program twice a day, but I can't see why I would want to. Insurance companies are not my *friends*.
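
To make that selection-bias point concrete, here is a toy simulation - every number in it is invented for illustration, not drawn from John Hancock or Vitality - showing how a longevity gap between program members and everyone else can appear even if the program itself changes nothing:

# A toy simulation (invented numbers): self-selection alone can produce the
# kind of longevity gap an insurer might advertise.
import random

random.seed(1)

def simulate_person():
    """Return (life_expectancy, joins_program) for one insured person."""
    already_healthy = random.random() < 0.3      # assumed: 30% affluent and health-focused
    base = 84 if already_healthy else 76         # assumed: 8-year underlying gap
    life_expectancy = random.gauss(base, 6)
    # Assumed: the already-healthy are far more likely to own a tracker and opt in.
    joins_program = random.random() < (0.6 if already_healthy else 0.05)
    return life_expectancy, joins_program

people = [simulate_person() for _ in range(100_000)]
members = [le for le, joined in people if joined]
others = [le for le, joined in people if not joined]
print(f"program members: {sum(members) / len(members):.1f} years")
print(f"everyone else:   {sum(others) / len(others):.1f} years")
# Members "live longer" even though the program did nothing: the difference
# lies in who signs up, not in what the program does.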

At the 2017 Computers, Privacy, and Data Protection conference, one of the better panels discussed the future for the insurance industry in the big data era. For the insurance industry to make sense, it requires an element of uncertainty: insurance is about pooling risk. For individuals, it's a way of managing the financial cost of catastrophes. Continuously feeding our data into insurance companies so they can more precisely quantify the risk we pose to their bottom line will eventually mean a simple equation: being able to get insurance at a reasonable rate is a pretty good indicator you're unlikely to need it. The result, taken far enough, will be to undermine the whole idea of insurance: if everything is known, there is no risk, so what's the point? Betting on a sure thing is cheating in insurance just as surely as it is in gambling. In the panel, both Katja De Vries and Mireille Hildebrandt noted the sinister side of insurance companies acting as "nudgers" to improve our behavior for their benefit.

So, less "We know where you will live" and more "We know where and how you *should* live."


Illustrations: Pat Cadigan (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 14, 2018

Hide by default

Beeban-Kidron-Dubai-2016.jpgLast week, defenddigitalme, a group that campaigns for children's data privacy and other digital rights, and Sonia Livingstone's group at the London School of Economics assembled a discussion of the Information Commissioner's Office's consultation on age-appropriate design for information society services, which is open for submissions until September 19. The eventual code will be used by the Information Commissioner when she considers regulatory action, may be used as evidence in court, and is intended to guide website design. It must take into account both the child-related provisions of the General Data Protection Regulation and the United Nations Convention on the Rights of the Child.

There are some baseline principles: data minimization, comprehensible terms and conditions and privacy policies. The last is a design question: since most adults either can't understand or can't bear to read terms and conditions and privacy policies, what hope is there of making them comprehensible to children? The summer's crop of GDPR notices is not a good sign.

There are other practical questions: when is a child not a child any more? Do age bands make sense when the capabilities of one eight-year-old may be very different from those of another? Capacity might be a better approach - but would we want Instagram making these assessments? Also, while we talk most about the data aggregated by commercial companies, government and schools collect much more, including biometrics.

Most important, what is the threat model? What you implement and how is very different if you're trying to protect children's spaces from ingress by abusers than if you're trying to protect children from commercial data aggregation or content deemed harmful. Lacking a threat model, "freedom", "privacy", and "security" are abstract concepts with no practical meaning.

There is no formal threat model - as the Yes, Minister episode The Challenge (series 3, episode 2) would predict, a formal model would come too close to setting "failure standards". The lack is particularly dangerous here, because "protecting children" means such different things to different people.

The other significant gap is research. We've commented here before on the stratification of social media demographics: you can practically carbon-date someone by the medium they prefer. This poses a particular problem for academics, in that research from just five years ago is barely relevant. What children know about data collection has markedly changed, and the services du jour have different affordances. Against that, new devices have greater spying capabilities, and, the Norwegian Consumer Council finds (PDF), Silicon Valley pays top-class psychologists to deceive us with dark patterns.

Seeking to fill the research gap are Sonia Livingstone and Mariya Stoilova. In their preliminary work, they are finding that children generally care deeply about their privacy and the data they share, but often have little agency and think primarily in interpersonal terms. The Cambridge Analytica scandal has helped inform them about the corporate aggregation that's taking place, but they may, through familiarity, come to trust people such as their favorite YouTubers and constantly available things like Alexa in ways their adults dislike. The focus on Internet safety has left many thinking that's what privacy means. In real-world safety, younger children are typically more at risk than older ones; online, the situation is often reversed because older children are less supervised, explore further, and take more risks.

The breath of passionate fresh air in all this is Beeban Kidron, an independent - that is, appointed - member of the House of Lords who first came to my attention by saying intelligent and measured things during the post-referendum debate on Brexit. She refuses to accept the idea that oh, well, that's the Internet, there's nothing we can do. However, she *also* genuinely seems to want to find solutions that preserve the Internet's benefits and incorporate the often-overlooked child's right to develop and make mistakes. But she wants services to incorporate the idea of childhood: if all users are equal, then children are treated as adults, a "category error". Why should children have to be resilient against systemic abuse and indifference?

Kidron, who is a filmmaker, began by doing her native form of research: in 2013 she made the full-length documentary InRealLife, which studied a number of teens using the Internet. While the film concludes on a positive note, many of the stories depressingly confirm some parents' worst fears. Even so it's a fine piece of work because it's clear she was able to gain the trust of even the most alienated of the young people she profiles.

Kidron's 5Rights framework proposes five essential rights children should have: remove, know, safety and support, informed and conscious use, digital literacy. To implement these, she proposes that the industry should reverse its current pattern of defaults which, as is widely known, 95% of users never change (while 98% never read terms and conditions). Companies know this, and keep resetting the defaults in their favor. Why shouldn't it be "hide by default"?

This approach sparked ideas. A light that tells a child they're being tracked or recorded so they can check who's doing it? Collective redress is essential: what 12-year-old can bring their own court case?

The industry will almost certainly resist. Giving children the transparency and tools with which to protect themselves, resetting the defaults to "hide"...aren't these things adults want, too?


Illustrations: Beeban Kidron (via Wikimedia)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 24, 2018

Cinema surveillant

Dragonfly-Eyes_poster_3-web-460.jpgThe image is so low-resolution that it could be old animation. The walking near-cartoon figure has dark, shoulder-length hair and a shape that suggests: young woman. She? stares at a dark oblong in one hand while wandering ever-closer to a dark area. A swimming pool? A concrete river edge? She wavers away, and briefly it looks like all will be well. Then another change of direction, and in she falls, with a splash.

This scene opens Dragonfly Eyes, which played this week at London's Institute of Contemporary Arts. All I knew going in was that the movie had been assembled from fragments of imagery gathered from Chinese surveillance cameras. The scene described above wasn't *quite* the beginning - first, the filmmaker, Chinese artist Xu Bing, provides a preamble explaining that he originally got the idea of telling a story through surveillance camera footage in 2013, but it was only in 2015, when the cameras began streaming live to the cloud, that it became a realistic possibility. There was also, if I remember correctly, a series of random images and noise that in retrospect seem like an orchestra tuning up before launching into the main event, but at the time were rather alarming. Alarming as in, "They're not going to do this for an hour and a half, are they?"

They were not. It was when the cacophony briefly paused to watch a bare-midriffed young woman wriggle suggestively on a chair, pushing down on the top of her jeans (I think) that I first thought, "Hey, did these guys get these people's permission?" A few minutes later, watching the phone?-absorbed woman ambling along the poolside seemed less disturbing, as her back was turned to the camera. Until: after she fell the splashing became fainter and fainter, and after a little while she did not reappear and the water calmed. Did we just watch the recording of a live drowning?

Apparently so. At various times during the rest of the movie we return to a police control room where officers puzzle over that same footage much the way we in the audience were puzzling over Xu's film. Was it suicide? the police ponder while replaying the footage.

Following the plot was sufficiently confusing that I'm grateful that Variety explains it. Ke Fan, an agricultural technician, meets a former Buddhist-in-training, Qing Ting, while they are both working at a dairy farm and follows her when she moves to a new city. There, she gets fired from her job at a dry cleaner's for failing to be sufficiently servile to an unpleasant but wealthy and valuable customer. Angered by the situation, Ke Fan repeatedly rams the unpleasant customer's car; this footage is taken from inside the car being rammed, so he appears to be attacking you directly. Three years later, when he gets out of prison, he finds (or possibly just believes he finds) that Qing Ting has had plastic surgery and under a new name is now a singing webcam celebrity who makes her living by soliciting gifts and compliments from her viewers, who turn nasty when she insults a more popular rival...

The characters and narration are voiced by Chinese actors, but the pictures, as one sees from the long list of camera locations and GPS coordinates included in the credits, are taken from 10,000 hours of real-world found imagery, which Xu and his assistants edited down to 81 minutes. Given this patchwork, it's understandably hard to reliably follow the characters through the storyline; the cues we usually rely on - actors and locations that become familiar - simply aren't clear. Some sequences are tagged with the results of image recognition and numbering; very Person of Interest. About a third of the way through, however, the closer analogue that occurred to me is Woody Allen's 1966 movie What's Up, Tiger Lily?, which Allen constructed by marrying the footage from a Japanese spy film to his own unrelated dialogue. It was funny, in 1966.

While Variety calls the storyline "run-of-the-mill melodramatic", in reality the plot is supererogatory. Much more to the point - and indicated in the director's preamble - is that all this real-life surveillance footage can be edited into any "reality" you want. We sort of knew this from reality TV, but the casts of those shows signed up to perform, even if they didn't quite expect the extent to which they'd be exploited. The people captured in Xu's extracts from China's estimated 200 million surveillance cameras are...just living. The sense of that dissonance never leaves you at any time during the movie.

I can't spoil the movie's ending by telling you whether Ke Fan finds Qing Ting because it matters so little that I don't remember. The important spoiler is this: the filmmaker has managed to obtain permission from 90% of the people who appear in the fragments of footage that make up the film (how he found them would be a fascinating story in itself), and advertises a contact address for the rest to seek him out. In one sense, whew! But then: this is the opt-out, "ask forgiveness, not permission" approach we're so fed up with from Silicon Valley. The fact that Chinese culture is different and the camera streams were accessible via the Internet doesn't make it less disturbing. Yes, that is the point.


Illustrations: Dragonfly Eyes poster.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


August 17, 2018

Redefinition

Robber-barons2-bosses-senate.pngOnce upon a nearly-forgotten time, the UK charged for all phone calls via a metered system that added up frighteningly fast when you started dialing up to access the Internet. The upshot was that early Internet services like the now-defunct Demon Internet could charge a modest amount (£10) per month, secure in the knowledge that the consciousness of escalating phone bills would drive subscribers to keep their sessions short. The success of Demon's business model, therefore, depended on the rapaciousness of strangers.

I was reminded of this sort of tradeoff by a discussion in the LA Times (proxied for EU visitors) of cord-cutters. Weary of paying upwards of $100 a month for large bundles of TV channels they never watch, Americans are increasingly dumping them in favor of cheaper streaming subscriptions. As a result, ISPs that depend on TV package revenues are raising their broadband prices to compensate, claiming that the money is needed to pay for infrastructure upgrades. In the absence of network neutrality requirements, those raised prices could well be complemented by throttling competitors' services.

They can do this, of course, because so many areas of the US are lucky if they have two choices of Internet supplier. That minimalist approach to competition means that Americans pay more to access the Internet than people in many other countries - for slower speeds. It's easy to raise prices when your customers have no choice.

The LA Times holds out hope that technology will save them; that is, the introduction of 5G, which promises better speeds and easier build-out, will enable additional competition from AT&T, Verizon, and Sprint - or, writer David Lazarus adds, Google, Facebook, and Amazon. In the sense of increasing competition, this may be the good news Lazarus thinks it is, even though he highlights AT&T's and Verizon's past broken promises. I'm less sure: physics dictates that despite its greater convenience the fastest wireless will never be as fast as the fastest wireline.

5G has been an unformed mirage on the horizon for years now, but apparently no longer: CNBC says Verizon's 5G service will begin late this year in Houston, Indianapolis, Los Angeles, and Sacramento and give subscribers TV content in the form of an Apple TV and a YouTube TV subscription. A wireless modem will obviate the need for cabling.

The potential, though, is to entirely reshape competition in both broadband and TV content, a redefinition that began with corporate mergers such as Verizon's acquisition of AOL and Yahoo (now gathered into its subsidiary, "Oath") and AT&T's whole-body swallowing of Time Warner, which includes HBO. Since last year's withdrawal of privacy protections passed during the Obama administration, ISPs have greater latitude to collect and exploit their customers' online data trails. Their expansion into online content makes AT&T and Verizon look more like competitors to the online behemoths. For consumers, greater choice in bandwidth provider is likely to be outweighed by the would-you-like-spam-with-that complete lack of choice about data harvesting. If the competition 5G opens up is provided solely by avid data miners who all impose the same terms and conditions...well, which robber baron would you like to pay?

There's a twist. The key element that's enabled Amazon and, especially, Netflix to succeed in content development is being able to mine the data they collect about their subscribers. Their business models differ - for Amazon, TV content is a loss-leader to sell subscriptions to its premium delivery service; for Netflix, TV production is a bulwark against dependence on third-party content creators and their licensing fees - but both rely on knowing what their customers actually watch. Their ambitions, too, are changing. Amazon has canceled much of its niche programming to chase HBO-style blockbusters, while Netflix is building local content around the world. Meanwhile, AT&T wants HBO to expand worldwide and focus less on its pursuit of prestige; Apple is beginning TV production; and Disney is pulling its content from Netflix to set up its own streaming service.

The idea that many of these companies will be directly competing in all these areas is intriguing, and its impact will be felt outside the US. It hardly matters to someone in London or Siberia how much Internet users in Indianapolis pay for their broadband service or how good it is. But this reconfiguration may well end the last decade's golden age of US TV production, particularly but not solely for drama. All the new streaming services began by mining the back catalogue to build and understand an audience and then using creative freedom to attract talent frustrated by the legacy TV networks' micromanagement of every last detail, a process the veteran screenwriter Ken Levine has compared to being eaten to death by moths.

However, one last factor could provide an impediment to the formation of this landscape: on June 28, California adopted the Consumer Privacy Act, which will come into force in 2020. As Nick Confessore recounts in the New York Times Magazine, this "overnight success" required years of work. Many companies opposed the bill: Amazon, Google, Microsoft, Uber, Comcast, AT&T, Cox, Verizon, and several advertising lobbying groups; Facebook withdrew its initial opposition. EFF calls it "well-intentioned but flawed", and is proposing changes. ISPs and technology companies also want (somewhat different) changes. EPIC's Mark Rotenberg called the bill's passage a "milestone moment". It could well be.


Illustrations: Robber barons overseeing the US Congress (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 22, 2018

Humans

virginmary-devil.jpgOne of the problems in writing about privacy over the last nearly 30 years is that it's easy for many people to see it as a trivial concern when you look at what's going on in the world: terrorist attacks, economic crashes, and the rise of extremism. To many, the case for increasing surveillance "for your safety" is a reasonable one.

I've never believed the claim that people - young or old - don't care about their privacy. People do care about their privacy, but, as previously noted, it's complicated. The biggest area of agreement is money: hardly anyone publishes the details of their finances unless forced. But beyond that, people have different values about what is private, and who should know it. For some women, saying openly they've had abortions is an essential political statement to normalize a procedure and a choice that is under threat. For others, it's too personal to disclose.

The factors involved vary: personality, past experience, how we've been treated, circumstances. It is easy for those of us who were born into economic prosperity and have lived in sectors of society where the governments in our lifetimes have treated us benignly to underestimate the network externalities of the decisions we make.

In February 2016, when the UK's Investigatory Powers Act (2016) was still a mere bill under discussion, I wrote this:

This column has long argued that whenever we consider granting the State increased surveillance powers we should imagine life down the road if those powers are available to a government less benign than the present one. Now, two US 2016 presidential primaries in, we can say it thusly: what if the man wielding the Investigatory Powers Bill is Donald Trump?

Much of the rest of that net.wars focused on the UK bill and some aspects of the data protection laws. However, it also included this:

Finally, Privacy International found "thematic warrants" hiding in paragraph 212 of the explanatory notes and referenced in clauses 13(2) and 83 of the draft bill. PI calls this a Home Office attempt to disguise these as "targeted surveillance". They're so vaguely defined - people or equipment "who share a common purpose who carry on, or may carry on, a particular activity" - that they could include my tennis club. PI notes that such provisions contravene a long tradition of UK law that has prohibited general warrants, and directly conflict with recent rulings by the European Court of Human Rights.

It's hard to guess who Trump would turn this against first: Muslims, Mexicans, or Clintons.

The events of the last year and a half - parents and children torn apart at the border; the Border Patrol operating an 11-hour stop-and-demand-citizenship checkpoint on I-95 in Maine, legal under the 1953 rule that the "border" is a 100-mile swath in which the Fourth Amendment is suspended; and, well, you read the news - suggest the question was entirely fair.

Now, you could argue that universal and better identification could stop this sort of thing by providing the facility to establish quickly and unambiguously who has rights. You could even argue that up-ending the innocent-until-proven-guilty principle (being required to show papers on demand presumes that you have no right to be where you are until you prove you do) is worth it (although you'd still have to fight an angry hive of constitutional lawyers). I believe you'd be wrong on both counts. Identification is never universal; there are always those who lack the necessary resources to acquire it. The groups that wind up being disenfranchised by such rules are the most vulnerable members of the groups that are suffering now. It won't even deter those who profit from spreading hate - and yes, I am looking at the Daily Mail - from continuing to do so; they will merely target another group. The American experience already shows this. Despite being a nation of immigrants, Americans are taught that their own rights matter more than other people's; and as Hua Hsu writes in a New Yorker review of Nancy Isenberg's recent book, White Trash, that same view is turned daily on the "lower" parts of the US's classist and racist hierarchy.

I have come to believe that there is a causative link between violating people's human rights and the anti-privacy values of surveillance and control. The more horribly we treat people and the less we offer them trust, the more reason we have to think that they and their successors will want revenge - guilt and the expectation of punishment operating on a nation-state scale. The logic would then dictate that they must be watched even more closely. The last 20 years of increasing inequality have caused suspicion to burst the banks of "the usual suspects". "Privacy" is an inadequate word to convey all this, but it's the one we have.

A few weeks ago, I reminded a friend of the long-running mantra that if you have nothing to hide you have nothing to fear. "I don't see it that way at all," he said. "I see it as, I have nothing to hide, so why are you looking at me?"


Illustrations: 'Holy Mary full of grace, punch that devil in the face', book of hours ('The De Brailes Hours'), Oxford ca. 1240 BL, Add 49999, fol. 40V (via Discarding Images).


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 8, 2018

Block that metaphor

oldest-robot-athens-2015-smaller.jpgMy favourite new term from this year's Privacy Law Scholars conference is "dishonest anthropomorphism". The term appeared in a draft paper written by Brenda Leung and Evan Selinger as part of a proposal for its opposite, "honest anthropomorphism". The authors' goal was to suggest a taxonomy that could be incorporated into privacy by design theory and practice, so that as household robots are developed and deployed they are less likely to do us harm. Not necessarily individual "harm" as in Isaac Asimov's Laws of Robotics, which tended to see robots as autonomous rather than as projections of their manufacturers into our personal space, thereby glossing over this more intentional and diffuse kind of deception. Pause to imagine that Facebook goes into making robots and you can see what we're talking about here.

"Dishonest anthropomorphism" derives from an earlier paper, Averting Robot Eyes by Margo Kaminski, Matthew Rueben, Bill Smart, and Cindy Grimm, which proposes "honest anthropomorphism" as a desirable principle in trying to protect people from the privacy problems inherent in admitting a robot, even something as limited as a Roomba, into your home. (At least three of these authors are regular attendees at We Robot since its inception in 2012.) That paper categorizes three types of privacy issues that robots bring: data privacy, boundary management, and social/relational.

The data privacy issues are substantial. A mobile phone or smart speaker may listen to or film you, but it has to stay where you put it (as Smart has memorably put it, "My iPad can't stab me in my bed"). Add movement and processing, and you have a roving spy that can collect myriad kinds of data to assemble an intimate picture of your home and its occupants. "Boundary management" refers to capabilities humans may not realize their robots have and therefore don't know to protect themselves against - thermal sensors that can see through walls, for example, or eyes that observe us even when the robot is apparently looking elsewhere (hence the title).

"Social/relational" refers to the our social and cultural expectations of the beings around us. In the authors' examples, unscrupulous designers can take advantage of our inclination to apply our expectations of other humans to entice us into disclosing more than we would if we truly understood the situation. A robot that mimics human expressions that we understand through our own muscle memory may be highly deceptive, inadvertently or intentionally. Robots may also be given the capability of identifying micro-reactions we can't control but that we're used to assuming go unnoticed.

A different session - discussing research by Marijn Sax, Natalie Helberger, and Nadine Bol - provided a worked example, albeit one without the full robot component: mobile health apps. Most of these are obviously aimed at encouraging behavioral change - walk 10,000 steps, lose weight, do yoga. What the authors argue is that they are more aimed at effecting economic change than at encouraging health, an aspect often obscured from users. Quite apart from the wrongness of using an app marketed to improve your health as a vector for potentially unrelated commercial interests, the health framing itself may be questionable. For example, the famed 10,000 steps some apps push you to take daily has no evidence basis in medicine: the number was likely picked as a Japanese marketing term in the 1960s. These apps may also be quite rigid; in one case that came up during the discussion, an injured nurse found she couldn't adapt the app to help her follow her doctor's orders to stay off her feet. In other words, they optimize one thing, which may or may not have anything to do with health or even health's vaguer cousin, "wellness".

Returning to dishonest anthropomorphism, one suggestion was to focus on abuse rather than dishonesty; there are already laws that bar unfair practices and deception. After all, the entire discipline of user design is aimed at nudging users into certain behaviors and discouraging others. With more complex systems, even if the aim is to make the user feel good it's not simple: the same user will react differently to the same choice at different times. Deciding which points to single out in order to calculate benefit is as difficult as trying to decide where to begin and end a movie story, which the screenwriter William Goldman has likened to deciding where to cut a piece of string. The use of metaphor was harmless when we were talking desktops and filing cabinets; much less so when we're talking about a robot cat that closely emulates a biological cat and leads us into the false sense that we can understand it in the same way.

Deception is becoming the theme of the year, perhaps partly inspired by Facebook and Cambridge Analytica. It should be a good thing. It's already clear that neither the European data protection approach nor the US consumer protection approach will be sufficient in itself to protect privacy against the incoming waves of the Internet of Things, big data, smart infrastructure, robots, and AI. As the threats to privacy expand, the field itself must grow in new directions. What made these discussions interesting is that they're trying to figure out which ones.

Illustrations: Recreation of oldest known robot design (from the Ancient Greek Technology exhibition)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 1, 2018

The three IPs

1891_Telegraph_Lines.jpgAgainst last Friday's date history will record two major European events. The first, as previously noted, is the arrival into force of the General Data Protection Regulation, which is currently inspiring a number of US news sites to block Europeans. The second is the amazing Irish landslide vote to repeal the 8th amendment to the country's constitution, which barred legislators from legalizing abortion. The vote led the MEP Luke Ming Flanagan to comment that, "I always knew voters were not conservative - they're just a bit complicated."

"A bit complicated" sums up nicely most people's views on privacy; it captures perfectly the cognitive dissonance of someone posting on Facebook that they're worried about their privacy. As Merlin Erroll commented, terrorist incidents help governments claim that giving them enough information will protect you. Countries whose short-term memories include human rights abuses set their balance point differently.

The occasion for these reflections was the 20th birthday of the Foundation for Information Policy Research. FIPR head Ross Anderson noted on Tuesday that FIPR isn't a campaigning organization, "But we provide the ammunition for those who are."

Led by the late Caspar Bowden, FIPR was most visibly activist in the late 1990s lead-up to the passage of the now-replaced Regulation of Investigatory Powers Act (2000). FIPR in general and Bowden in particular were instrumental in making the final legislation less dangerous than it could have been. Since then, FIPR helped spawn the 15-year-old European Digital Rights and UK health data privacy advocate medConfidential.

Many speakers noted how little the debates have changed, particularly regarding encryption and surveillance. In the case of encryption, this is partly because mathematical proofs are eternal, and partly because, as Yes, Minister co-writer Antony Jay said in 2015, large organizations such as governments always seek to impose control. "They don't see it as anything other than good government, but actually it's control government, which is what they want." The only change, as Anderson pointed out, is that because today's end-to-end connections are encrypted, the push for access has moved to people's phones.

Other perennials include secondary uses of medical data, which Anderson debated in 1996 with the British Medical Association. Among significant new challenges, Anderson, like many others, noted the problems of safety and sustainability. The need to patch devices that can kill you changes our ideas about the consequences of hacking. How do you patch a car over 20 years? he asked. One might add: how do you stop a botnet of pancreatic implants without killing the patients?

We've noted here before that built infrastructure tends to attract more of the same. Today, said Duncan Campbell, 25% of global internet traffic transits the UK; Bude, Cornwall remains the critical node for US-EU data links, as in the days of the telegraph. As Campbell said, the UK's traditional position makes it perfectly placed to conduct global surveillance.

One of the most notable changes in 20 years: there were no fewer than two speakers whose open presence would have been unthinkable: Ian Levy, the technical director of the National Cyber Security Centre, the defensive arm of GCHQ, and Anthony Finkelstein, the government's chief scientific advisor for national security. You wouldn't have seen them even ten years ago, when GCHQ was deploying its Mastering the Internet plan, known to us courtesy of Edward Snowden. Levy made a plea to get away from the angels versus demons school of debate.

"The three horsemen, all with the initials 'IP' - intellectual property, Internet Protocol, and investigatory powers - bind us in a crystal lattice," said Bill Thompson. The essential difficulty he was getting at is that it's not that organizations like Google DeepMind and others have done bad things, but that we can't be sure they haven't. Being trustworthy, said medConfidential's Sam Smith, doesn't mean you never have to check the infrastructure but that people *can* check it if they want to.

What happens next is the hard question. Onora O'Neill suggested that our shiny, new GDPR won't work, because it's premised on the no-longer-valid idea that personal and non-personal data are distinguishable. Within a decade, she said, new approaches will be needed. Today, consent is already largely a façade; true consent requires understanding and agreement.

She is absolutely right. Even today's "smart" speakers pose a challenge: where should my Alexa-enabled host post the privacy policy? Is crossing their threshold consent? What does consent even mean in a world where sensors are everywhere and how the data will be used and by whom may be murky? Many of the laws built up over the last 20 years will have to be rethought, particularly as connected medical devices pose new challenges.

One of the other significant changes will be the influx of new and numerous stakeholders whose ideas about what the internet is are very different from those of the parties who have shaped it to date. The mobile world, for example, vastly outnumbers us; the Internet of Things is being developed by Asian manufacturers from a very different culture.

It will get much harder from here, I concluded. O'Neill was not content with that. It's not enough, she said, to point out problems. We must propose at least the bare bones of solutions.


Illustrations: 1891 map of telegraph lines (via Wikimedia)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


May 18, 2018

Fool me once

new-22portobelloroad.jpgMost of the "us" who might read this rarely stop to marvel at the wonder that is our daily trust in the society that surrounds us. One of the worst aspects of London Underground's incessant loud reminders to report anything suspicious - aside from the slogan, which is dumber than a bag of dead mice - is that it interrupts the flow of trust. It adds social friction. I hear it, because I don't habitually block out the world with headphones.

Friction is, of course, the thing that so many technologies are intended to eliminate. And they might, if only we could trust them.

Then you read things like this news, that Philip Morris wants to harvest data from its iQOS e-cigarette. If regulators allow, Philip Morris will turn on functions in the device's internal chips that capture data on its user's smoking habits, not unlike ebook readers' fine-grained data collection. One can imagine the data will be useful for testing strategies for getting people to e-smoke longer.

This example did not arrive in time for this week's Nuances of Trust event, hosted by the Alliance for Internet of Things Innovation (AIOTI) and aimed at producing intelligent recommendations for how to introduce trust into the Internet of Things. But, so often, it's the company behind the devices you can't trust. For another example: Volkswagen.

Partway through the problem-solving session, we realized we had regenerated Lawrence Lessig's four modalities of constraining behavior: technology/architecture, law, market, and social norms. The first changes device design to bar shipping loads of data about us to parts unknown; law pushes manufacturers into that sort of design, even if it costs more; market would mean people refused to buy privacy-invasive devices; and social norms used to be known as "peer pressure". Right now, technology is changing faster than we can create new norms. If a friend has an Amazon Echo at home, does entering their house constitute signing Amazon's privacy policy? Should they show me the privacy policy before I enter? Is it reasonable to ask them to turn it off while I'm there? We could have asked questions like "Are you surreptitiously recording me?" at any time since portable tape recorders were invented, but absent a red, blinking light we felt safe in assuming no. Now, suddenly, trusting my friend requires also trusting a servant belonging to a remote third party. If I don't, it's a social cost - to me, and maybe to my friend, but not to Amagoople.

On Tuesday, Big Brother Watch provided a far more alarming example when director Silkie Carlo launched BBW's report on automated facial recognition (PDF). Now, I know the technically minded will point out grumpily that all facial recognition is "automated" because it's a machine what does it, but what BBW means is a system in which CCTV and other cameras automatically feed everything they gather into a facial recognition system that sprinkles AI fairy dust and pops out Persons of Interest (I blame TV). Various UK police forces have deployed these AFR systems at concerts and football and rugby games; at the 2016 and 2017 Notting Hill Carnivals; on Remembrance Sunday 2017 to restrict "fixated individuals"; and at peaceful demonstrations. On average, fewer than 9% of matches were accurate; but that's little consolation when police pick you out of the hordes arriving by train for an event and insist on escorting you under watch. The system London's Met Police used had a false positive rate of over 98%! How does a system like that even get out of the lab?
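
The arithmetic behind a figure like that is worth spelling out, because it follows from the base rate rather than from any exotic failure. Here is a rough sketch in Python - the crowd size, watchlist size, and error rates are assumptions chosen for illustration, not the Met's actual figures:

# Illustrative base-rate arithmetic (assumed numbers, not the Met's):
# a matcher can be fairly accurate per face yet produce overwhelmingly false alerts.
crowd = 100_000          # faces scanned at an event (assumption)
on_watchlist = 20        # people in the crowd genuinely on the list (assumption)
hit_rate = 0.90          # chance a watchlisted face is correctly flagged (assumption)
false_alarm_rate = 0.01  # chance an innocent face is wrongly flagged (assumption)

true_alerts = on_watchlist * hit_rate
false_alerts = (crowd - on_watchlist) * false_alarm_rate
total_alerts = true_alerts + false_alerts
print(f"alerts: {total_alerts:.0f}, of which false: {false_alerts:.0f}")
print(f"share of alerts pointing at innocents: {false_alerts / total_alerts:.1%}")
# Roughly 1,018 alerts, about 1,000 of them wrong: around 98% of "matches"
# point at innocents, simply because almost nobody in the crowd is on the list.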

Neither the police nor the Home Office seem to think that bringing in this technology requires any public discussion; when asked they play the Yes, Minister game of pass the policy. Within the culture of the police, it may in fact be a social norm that invasive technologies whose vendors promise magical preventative results should be installed as quickly as possible before anyone can stop them. Within the wider culture...not so much.

This is the larger problem with what AIOTI is trying to do. It's not just that the devices themselves are insecure, their risks capricious, and the motives of their makers suspect. It's that long after you've installed and stopped thinking about a system incorporating these devices someone else can come along to subvert the whole thing. How do you ensure that the promise you make today cannot be broken by yourself or others in future? The problem is near-identical to the one we face with databases: each may be harmless on its own, but mash them together and you have a GDPR fine-to-the-max dataset of reidentification.
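
A minimal sketch of that database-mashing problem, using entirely fabricated records: two datasets that each look harmless can re-identify a person once they are joined on ordinary quasi-identifiers such as postcode, birth year, and sex.

# Entirely fabricated records: neither dataset names the walker on its own,
# but joining them on shared quasi-identifiers does.
fitness_app = [  # "anonymous" activity export: no names
    {"postcode": "TW9 1AA", "birth_year": 1972, "sex": "F", "steps": 2100},
    {"postcode": "SW1A 2AA", "birth_year": 1985, "sex": "M", "steps": 14200},
]
electoral_roll = [  # public-ish register: names but no activity data
    {"name": "Jane Example", "postcode": "TW9 1AA", "birth_year": 1972, "sex": "F"},
    {"name": "John Sample", "postcode": "SW1A 2AA", "birth_year": 1985, "sex": "M"},
]

def link(records_a, records_b, keys=("postcode", "birth_year", "sex")):
    """Join two datasets on shared quasi-identifiers."""
    return [{**a, **b} for a in records_a for b in records_b
            if all(a[k] == b[k] for k in keys)]

for person in link(fitness_app, electoral_roll):
    print(f'{person["name"]} walked {person["steps"]} steps that day')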

Somewhere in the middle of this an AIOTI participant suggested that the IoT rests on four pillars: people, processes, things, data. Trust has pillars, too, that take a long time to build but that can be destroyed in an instant: choice, control, transparency, and, the one we talk about least, but perhaps the most important, familiarity. The more something looks familiar, the more we trust it, even when we shouldn't. Both the devices AIOTI is fretting about and the police systems BBW deplores have this in common: they center on familiar things whose underpinnings have changed without our knowledge - yet their owners want us to trust them. We wish we could.


Illustrations: Orwell's house at 22 Portobello Road, London.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 3, 2018

Data protection panic

gdpr-countdown.jpgWherever you go at the moment someone is asking panicked questions about the General Data Protection Regulation, which comes into effect on May 25, 2018. The countdown above appeared at a privacy engineering workshop on April 27, and looked ominous enough for Buffy to want to take a whack at it.

Every day new emails arrive asking me to confirm I want to stay on various mailing lists and announcing new privacy policies. Most seem to have grasped the idea that positive consent is required, but some arrive saying you need do nothing to stay on their list. I am not a lawyer, but I know that's backwards. The new regime is opt-in, not opt-out. You cannot extract consent from silence.

At the local computer repair place (hard drive failure, don't ask), where my desktop was being punished with diagnostics, the owner asks, "Is encryption necessary? A customer is asking." We agree, from our own reading, that encryption is not *required*, but that liability is less if the data is encrypted and therefore can't be read, and as a consequence sold, reidentified, sprayed across the internet, or used for blackmail. And you don't have to report it as a data breach or notify customers. I explain this to my tennis club and another small organization. Then I remember: crypto is ridiculously hard to implement.

The UK's Information Commissioner's Office has a helpful 12-step guide to assessing what you have to do. My reading, for example, is that a small community interest organization does not have to register or appoint a data protection officer, though it does need to agree who will answer any data protection complaints it gets. The organization's web host, however, has sent a contract written in data-protectionese, a particularly arcane subset of lawyerese. Asked to look at it, I blanched and started trying to think which of my privacy lawyer friends might be most approachable. Then I realized: tear up that contract and write a new one in English that says who's responsible for what. Someone probably found a model contract somewhere that was written for businesses with in-house lawyers who understood it.

So much is about questioning your assumptions. You think the organization you're involved with has acquired all its data one record at a time when people have signed up to become members. Well, is that true? Have you ever used anyone else's mailing list to trawl for new members? Have you ever shared yours with another organization because you were jointly running a conference? How many copies of the data exist and where are they stored, and how? These are audits few ever stop to do. The threat of the loss of 4% of global revenues is very effective in making them happen.

The computer repair store owner began to see the point. The shop asks new customers to fill out a form, and then adds their information to its database, which means that the next time you bring your machine in they have its whole service history. We mulled over this form for a bit. "I should add a line at the bottom," he said. Yes: a line that asks for permission to include the person on the shop's mailing list for offers and discounts and that says the data won't be shared.

Then I asked him, "How much benefit does the shop get from emailing these offers?" Um, well...none, really. People sometimes come in and ask about them, but they don't buy. So why do them? Good point. The line shrank to something on the order of: "We do not share your data with any third parties".

This is in fact the effect GDPR is intended to have: make people rethink their practices. Some people don't need to keep all the data they have - one organization I'm involved with has a few thousand long-lapsed members in its database with no clear way to find and delete them. For others, the marketing they do isn't really worth the customer irritation. Getting organizations to clean up just those two things seems worth the trouble.

But then he asked, "Who is going to enforce this?" And the reality is there is probably no one until there's a complaint. In the UK, the ICO's budget (PDF) is widely held to be inadequate, and it's not increasing. Elsewhere, it took the tenacity of Max Schrems to get regulators to take the actions that eventually brought down Safe Harbor. A small shop would be hugely unlucky to be a target of regulatory action unless customers were complaining and possibly not even then. Except in rare cases these aren't the people we want targeted; we want the regulators to focus first on egregious harms, repeat offenders with great power, such as Google, and incessant offenders, such as Facebook, whose list of apologies and missteps includes multiple entries for every year of its existence. No wonder the WhatsApp CEO quit (though there's little else he can do, since he sold his company).

Nonetheless, it's the smallest companies and charities who are in the greatest panic about this. Possibly for good reason: there is mounting concern that GDPR will be the lever via which the big data-driven companies lock out small competitors and start-ups. Undesirable unintended consequences, if that's the outcome.


Illustrations: GDPR countdown clock on April 27.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 20, 2018

Deception

werobot-pepper-head_zpsrvlmgvgl.jpg"Why are robots different?" 2018 co-chair Mark Lemley asked repeatedly at this year's We Robot. We used to ask this in the late 1990s when trying to decide whether a new internet development was worth covering. "Would this be a story if it were about telephones?" Tom Standage and Ben Rooney frequently asked at the Daily Telegraph.

The obvious answer is physical risk and our perception of danger. The idea that autonomously moving objects may be dangerous is deeply biologically hard-wired. A plant can't kill you if you don't go near it. Or, as Bill Smart put it at the first We Robot in 2012, "My iPad can't stab me in my bed." Autonomous movement fools us into thinking things are smarter than they are.

It is probably not much consolation to the driver of the crashed autopiloting Tesla or his bereaved family that his predicament was predicted two years ago at We Robot 2016. In a paper, Madeleine Elish called humans in these partnerships "moral crumple zones" because, she argued, in a human-machine partnership the human would take all the pressure, like the crumple zone in a car.

Today, Tesla is fulfilling her prophecy by blaming the driver for not getting his hands onto the steering wheel fast enough when commanded. (Other prior art on this: Dexter Palmer's brilliant 2016 book Version Control.)

As Ian Kerr pointed out, the user's instructions are self-contradictory. The marketing brochure uses the metaphors "autopilot" and "autosteer" to seduce buyers into envisioning a ride of relaxed luxury while the car does all the work. But the legal documents and user manual supplied with the car tell you that you can't rely on the car to change lanes, and you must keep your hands on the wheel at all times. A computer ingesting this would start smoking.

Granted, no marketer wants to say, "This car will drive itself in a limited fashion, as long as you watch the road and keep your hands on the steering wheel." The average consumer reading that says, "Um...you mean I have to drive it?"

The human as moral crumple zone also appears in analyses of the Arizona Uber crash. Even-handedly, Brad Templeton points plenty of blame at Uber and its decisions: the car's LIDAR should have spotted the pedestrian crossing the road in time to stop safely. He then writes, "Clearly there is a problem with the safety driver. She is not doing her job. She may face legal problems. She will certainly be fired." And yet humans are notoriously bad at the job required of her: monitor a machine. Safety drivers are typically deployed in pairs to split the work - but also to keep each other attentive.

The larger We Robot discussion was in part about public perception of risk, based on a paper (PDF) by Aaron Mannes that discussed how easy it is to derail public trust in a company or new technology when statistically less-significant incidents spark emotional public outrage. Self-driving cars may in fact be safer overall than human drivers despite the fatal crash in Arizona; among the other examples Mannes mentioned were Three Mile Island, which made the public much more wary of nuclear power, and the Ford Pinto, which spent the 1970s occasionally catching fire.

Mannes suggested that if you have that trust relationship you may be able to survive your crisis. Without it, you're trying to win the public over on "Frankenfoods".

So much was funnier and more light-hearted seven years ago, as a long-time attendee pointed out; the discussions have darkened steadily year by year as theory has become practice and we can no longer think the problems are as far away as the Singularity.

In San Francisco, delivery robots cause sidewalk congestion and make some homeless people feel surveilled; in Chicago and Durham we risk embedding automated unfairness into criminal justice; the egregious extent of internet surveillance has become clear; and the world has seen its first self-driving car road deaths. The last several years have been full of fear about the loss of jobs; now the more imminent dragons are becoming clearer. Do you feel comfortable in public spaces when there's something like a mobile security unit pointing some of its nine cameras at you?

Karen Levy finds that truckers are less upset about losing their jobs than about automation invading their cabs, ostensibly for their safety. Sensors, cameras, and wearables that monitor them for wakefulness, heart health, and other parameters are painful and enraging to this group, who chose their job for its autonomy.

Today's drivers have the skills to step in; tomorrow's won't. Today's doctors are used to doing their own diagnostics; tomorrow's may not be. A paper by Michael Froomkin, Ian Kerr, and Joëlle Pineau (PDF) argues that automation may mean not only deskilling humans (doctors) but also a frozen knowledge base. Many hope that mining historical patient data will expose patterns that enable more accurate diagnostics and treatments. If the machines take over, where will the new approaches come from?

Worse, behind all that is sophisticated data manipulation for which today's internet is providing the prototype. When, as Woody Hartzog suggested, Rocco, your Alexa-equipped Roomba, rolls up to you, fakes a bum wheel, and says, "Daddy, buy me an upgrade or I'll die", will you have the heartlessness to say no?

Illustrations: Pepper and handler at We Robot 2016.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


March 23, 2018

Aspirational intelligence

2001-hal.png"All commandments are ideals," he said. He - Steven Croft, the Bishop of Oxford - had just finished reading out to the attendees of Westminster Forum's seminar (PDF) his proposed ten commandments for artificial intelligence. He's been thinking about this on our behalf: Croft malware writers not to adopt AI enhancements. Hence the reply.

The first problem is: what counts as AI? Anders Sandberg has quipped that it's only called AI until it starts working, and then it's called automation. Right now, though, to many people "AI" seems to mean "any technology I don't understand".

Croft's commandment number nine seems particularly ironic: this week saw the first pedestrian killed by a self-driving car. Early guesses are that the likely weakest links were the underemployed human backup driver and the vehicle's faulty LIDAR interpretation of a person walking a bicycle. Whatever the jaywalking laws are in Arizona, most of us instinctively believe that in a cage match between a two-ton automobile and an unprotected pedestrian the car is always the one at fault.

Thinking locally, self-driving cars ought to be the most ethics-dominated use of AI, if only because people don't like being killed by machines. Globally, however, you could argue that AI might be better turned to finding the best ways to phase out cars entirely.

We may have better luck persuading criminal justice systems either to require transparency, fairness, and accountability in the machine learning systems that predict recidivism and decide who can be helped, or to drop those systems entirely.

The less-tractable issues with AI are on display in the still-developing Facebook and Cambridge Analytica scandals. You may argue that Facebook is not AI, but the platform certainly uses AI to detect fraud, determine what we see, and decide which pieces of our data to use on behalf of advertisers. All on its own, Facebook is a perfect exemplar of all the problems the Australian privacy advocate Roger Clarke foresaw in 2004 after examining the first social networks. In 2012, Clarke wrote, "From its beginnings and onward throughout its life, Facebook and its founder have demonstrated privacy-insensitivity and downright privacy-hostility." The same could be said of other actors throughout the tech industry.

Yonatan Zunger is undoubtedly right when he argues in the Boston Globe that computer science has an ethics crisis. However, just fixing computer scientists isn't enough if we don't fix the business and regulatory environment built on "ask forgiveness, not permission". Matt Stoller writes in the Atlantic about the decline since the 1970s of American political interest in supporting small, independent players and limiting monopoly power. The tech giants have widely exported this approach; now, the only other government big enough to counter it is the EU.

The meetings I've attended of academic researchers considering ethics issues with respect to big data have demonstrated all the careful thoughtfulness you could wish for. The November 2017 meeting of the Research Institute in Science of Cyber Security provided numerous worked examples in talks from Kat Hadjimatheou at the University of Warwick, C Marc Taylor from the UK Research Integrity Office, and Paul Iganski of the Centre for Research and Evidence on Security Threats (CREST). Their explanations of the decisions they've had to make about the practical applications and cases that have come their way are particularly valuable.

On the industry side, the problem is not just that Facebook has piles of data on all of us but that the feedback loop from us to the company is indirect. Since the Cambridge Analytica scandal broke, some commenters have indicated that being able to do without Facebook is a luxury many can't afford and that in some countries Facebook *is* the internet. That in itself is a global problem.

Croft's is one of at least a dozen efforts to come up with an ethics code for AI. The Open Data Institute has its Data Ethics Canvas framework to help people working with open data identify ethical issues. The IEEE has published some proposed standards (PDF) that focus on various aspects of inclusion - language, cultures, non-Western principles. Before all that, in 2011, Danah Boyd and Kate Crawford penned Six Provocations for Big Data, which included a discussion of the need for transparency, accountability, and consent. The World Economic Forum published its top ten ethical issues in AI in 2016. Also in 2016, a Stanford University Group published a report trying to fend off regulation by saying it was impossible.

If the industry proves to be right and regulation really is impossible, it won't be because of the technology itself but because of the ecosystem that nourishes amoral owners. "Ethics of AI", as badly as we need it, will be meaningless if the necessary large piles of data to train it are all owned by just a few very large organizations and well-financed criminals; it's equivalent to talking about "ethics of agriculture" when all the seeds and land are owned by a child's handful of global players. The pre-emptive antitrust move of 2018 would be to find a way to separate ownership of data from ownership of the AI, algorithms, and machine learning systems that work on them.


Illustrations: HAL.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 16, 2018

Homeland insecurity

"To the young people," a security practitioner said at a recent meeting, speaking of a group he'd been working with, "it's life lived on their phone."

He was referring to the tendency for adults to talk to kids about fake news, or sexting, or sexual abuse and recruitment, and so on as "online" dangers the adults want to protect them from. But, as this practitioner was trying to explain (and we have said here before), "online" isn't separate to them. Instead, all these issues are part of the context of pressures, relationships, economics, and competition that makes up their lives. This will become increasingly true as widely deployed sensors and hybrid cyber-physical systems and tracking become the norm.

This is a real generation gap. Older adults have taken on board each of these phenomena as we've added it into our existing understanding of the world. Watching each arrive singly over time allows the luxury of consideration and the mental space in which to plot a strategy. If you're 12, all of these things are arriving at once as pieces that are coalescing into your picture of the world. Even if you only just finally got your parents to let you have your own phone you've been watching videos on YouTube, FaceTiming your friends, and playing online games all your life.

An important part of "life lived on the phone" is at stake in the UK's data protection bill, which implements the General Data Protection Regulation and is now going through Parliament. The bill carves out some very broad exemptions. Most notably, and opposed by the Open Rights Group and the3million, the bill would remove a person's rights as a data subject in the interests of "effective immigration control". In other words, under this exemption the Home Office could make decisions about where and whether you were allowed to live but never have to tell you the basis for its decisions. Having just had *another* long argument with a different company about whether or not I've ever lived in Iowa, I understand the problem of being unable to authenticate yourself because of poor-quality data.

It's easy for people to overlook laws that "only" affect immigrants, but as Gracie Mae Bradley, an advocacy and policy officer, made clear at this week's The State of Data 2018 event, hosted by Jen Persson, one of the consequences is to move the border from Britain's ports into its hospitals, schools, and banks, which are now supposed to check once a quarter that their 70 million account holders are legitimate. NHS Digital is turning over confidential patient information to help the Home Office locate and deport undocumented individuals. Britain's schools are being pushed to collect nationality. And, as Persson noted, remarkably few parents even know the National Pupil Database exists, and yet it catalogues highly detailed records of every schoolchild.

"It's obviously not limited to immigrants," Bradley said of the GDPR exemption. "There is no limit on the processes that might apply this exemption". It used to be clear when you were approaching a national border; under these circumstances the border is effectively gummed to your shoe.

The data protection bill also has the usual broad exemptions for law enforcement and national security.

Both this discussion (implicitly) and the security conversation we began with (explicitly) converged on security as a felt, emotional state. Even a British citizen living in their native country in conditions of relative safety - a rich country with good health care, stable governance, relatively little violence, mostly reasonable weather - may feel insecure if they're constantly being required to prove the legitimacy of their existence. Conversely, people may live in objectively more dangerous conditions and yet feel more secure because they know the local government is not eying them suspiciously with a view to telling them to repatriate post-haste.

Put all these things together with other trends, and you have the potential for a very high level of social insecurity that extends far outwards from the enemy class du jour, "illegal immigrants". This in itself is a damaging outcome.

And the potential for social control is enormous. Transport for London is progressively eliminating both cash and its Oyster payment cards in favor of direct payment via credit or debit card. What happens to people who fail the bank's inspection one quarter? How do they pay the bus or tube fare to get to work?

Like gender, immigration status is not the straightforward state many people think. My mother, brought to the US when she was four, often talked about the horror of discovering in her 20s that she was stateless: marrying my American father hadn't, as she imagined, automatically made her an American, and Switzerland had revoked her citizenship because she had married a foreigner. In the 1930s, she was naturalized without question. Now...?

Trying to balance conflicting securities is not new. The data protection bill-in-progress offers the opportunity to redress a serious imbalance, which Persson called, rightly, a "disconnect between policy, legislation, technological change, and people". It is, as she and others said, crucial that the balance of power that data protection represents not be determined by a relatively small, relatively homogeneous group.


Illustrations: 2008 map of nationalities of UK residents (via Wikipedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 16, 2018

Data envy

While we're all fretting about Facebook, Google, and the ecosystem of advertisers that track our every online move, many other methods for tracking each of us are on the rise, sprawling out across the cyber-physical continuum. You can see the world's retailers, transport authorities, and governments muttering, "Why should *they* have all the data?" CCTV was the first step, and it's a terrible role model. Consent is never requested; instead, where CCTV's presence is acknowledged it comes with "for your safety" propaganda.

People like the Center for Digital Democracy's Jeff Chester or security and privacy researcher Chris Soghoian have often exposed the many hidden companies studying us in detail online. At a workshop in 2011, they predicted much of 2016's political interference and manipulation. They didn't predict that Russians would seek to interfere with Western democracies; but they did correctly foresee the possibility of individual political manipulation via data brokers and profiling. Was this, that workshop asked, one of the last moments at which privacy incursions could be reined in?

A listener then would have been introduced to companies like Acxiom and Xaxis, behind-the-scenes swappers of our data trails. As with Equifax, we have no direct relationship with these companies, and as people said on Twitter during the Equifax breach, "We are their victims, not their customers".

At Freedom to Tinker, in September Steven Englehardt exposed the extent to which email has become a tracking device. Because most people use just one email address, it provides an easy link. HTML email is filled with third-party trackers that send requests to myriad third parties, which can then match the email address against other information they hold. Many mailing lists add to this by routing clicks on links through their servers to collect information about what you view, just like social media sites. There are ways around these things - ban your email client from loading remote content, view email as plain text, and copy the links rather than clicking on them. Google is about to make all this much worse by enabling programs to run within email messages. It is, as they say at TechCrunch, a terrible idea for everyone except Google: it means more ads, more trackers, and more security risks.
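
For the technically curious, here is a rough sketch in Python of how a mailing list might personalize an HTML message so that opening it or clicking its links identifies the recipient. The domains, parameter names, and identifiers are invented for illustration; real trackers differ in detail, but the shape is the same, which is why blocking remote content and reading email as plain text helps.

    # Illustrative only: invented domains and parameters, not any real vendor's code.
    def build_tracked_email(recipient_id: str, article_url: str) -> str:
        # A 1x1 "pixel" that only loads if the mail client fetches remote content,
        # telling the sender the message was opened, when, and by which recipient.
        pixel = (f'<img src="https://tracker.example.com/open.gif?r={recipient_id}" '
                 'width="1" height="1" alt="">')
        # The visible link is routed through the list's own server, which logs
        # the click (and the recipient) before redirecting to the real page.
        link = (f'<a href="https://lists.example.com/redirect?r={recipient_id}'
                f'&to={article_url}">Read the full story</a>')
        return f"<html><body><p>Hello!</p><p>{link}</p>{pixel}</body></html>"

    print(build_tracked_email("subscriber-8472", "https://example.com/story"))

Viewed as plain text, or with remote images blocked, the pixel never fires; the redirect link, though, still carries the identifier, which is why it pays to look at where a link actually points.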

In December, also at Freedom to Tinker, Gunes Acar explained that a long-known vulnerability in browsers' built-in password managers helps third parties track us. The browser memorizes your login details the first time you land on a website and enter them. Then, as you browse on the site to a non-login page, the third party plants a script with an invisible login form that your browser helpfully autofills. The script reads and hashes the email address, and sends it off to the mother ship, where it can be swapped and matched to other profiles with the same email address hash. Again, since people use the same one for everything and rarely change it, email addresses are exceptionally good connectors between browsing profiles, mobile apps, and devices. Ad blockers help protect against this; browser vendors and publishers could also help.
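
The reason a hashed address is so useful is that hashing the same address always produces the same string, so two trackers that never exchange the raw email can still line up their profiles. A toy illustration in Python, with invented field names and data:

    import hashlib

    def email_hash(address: str) -> str:
        # The same normalization and hash on both sides yields the same key.
        return hashlib.md5(address.strip().lower().encode()).hexdigest()

    # What a script harvesting autofilled login forms might hold.
    browsing_profiles = {
        email_hash("alice@example.org"): {"sites": ["news", "shopping"]},
    }

    # What a mobile app's analytics might hold about the same person.
    app_profiles = {
        email_hash("alice@example.org"): {"device": "phone", "apps": ["fitness"]},
    }

    # Neither side ever shared "alice@example.org", yet the profiles merge cleanly.
    for key, web_profile in browsing_profiles.items():
        if key in app_profiles:
            print(key[:12], {**web_profile, **app_profiles[key]})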

But these are merely extensions of the tracking we already have. Amazon Go's new retail stores rely on tracking customers throughout, noting not only what they buy but how long they stand in front of a shelf and what they pick up and put back. This should be no surprise: Recode predicted as much in 2015. Other retailers will copy this: why should online retailers have all the data?

Meanwhile, police in Wales have boasted about using facial recognition to arrest people, matching images of people of interest against both the force's database of 500,000 custody images and live CCTV feeds, while the New York Times warns that the technology's error rate spikes when the subjects being matched are not white and male. In the US, EFF reports that, according to researchers at Georgetown Law School, an estimated 117 million Americans are already in law enforcement facial recognition systems with little oversight.

We already knew that phones are tracked by their attempts to connect to passing wifi SSIDs; at last month's CPDP, the panel on physical tracking introduced targeted tracking using MAC addresses extracted via wifi connections. In many airports, said Future of Privacy Forum's Jules Polonetsky, sensors courtesy of Blip Systems help with logistical issues such as traffic flow and queue management. In Cincinnati, says the company's website, these sensors help the Transportation Security Agency better allocate resources and provide smoother "passenger processing" (should you care to emerge flat and orange like American cheese).

Visitors to office buildings used to sign in with name, company, and destination; now, tablets demand far more detailed information with no apparent justification. Every system, as Informatica's Monica McDonnell explained at CPDP, is made up of dozens of subsystems, some of which may date to the 1960s, all running slightly different technologies that may or may not be able to link together the many pockets of information generated for each person.

These systems are growing much faster than most of us realize, and this is even before autonomous vehicles and the linkage of systems into smart cities. If the present state of physical tracking is approximately where the web was in 2000...the time to set the limits is now.


Illustrations: George Orwell's house at 22 Portobello Road, London.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 2, 2018

Schrödinger's citizen

One of the more intriguing panels at this year's Computers, Privacy, and Data Protection (obEgo: I moderated) began with a question from Peter Swire: Can the nationality of the target ever be a justified basis for different surveillance rules?

France, the Netherlands, Sweden, Germany, and the UK, explained Mario Oetheimer, an expert on data protection and international human rights with the European Union Agency for Fundamental Rights, do apply a lower level of safeguards for international surveillance as compared to domestic surveillance. He believes Germany is the only EU country whose surveillance legislation includes nationality criteria.

The UK's Investigatory Powers Act (2016), parts of which were struck down this week in the European Court of Justice, was an example. Oetheimer, whose agency has a report on fundamental rights in surveillance, said introducing nationality-based differences will "trickle down" into an area where safeguards are already relatively underdeveloped and hinder developing further protections.

In his draft paper, Swire favors allowing greater surveillance of non-citizens than citizens. While some countries - he cited the US and Germany - provide greater protection from surveillance to their own citizens than to foreigners, there is little discussion about why that's justified. In the US, he traces the distinction to Watergate, when Nixon's henchmen were caught unacceptably snooping on the opposition political party. "We should have very strong protections in a democracy against surveilling the political opposition and against surveilling the free press." But granting everyone else the same protection, he said, is unsustainable politically and incorrect as a matter of law and philosophy.

This is, of course, a very American view, as the late Caspar Bowden impatiently explained to me in 2013. Elsewhere, human rights - including privacy - are meant to be universal. Still, there is a highly practical reason for governments and politicians to prefer their own citizens: foreigners can't vote them out of office. For this reason (besides being American), I struggle to believe in the durability of any rights granted to non-citizens. The difference seems to me the whole point of having citizens in the first place. At the very least, citizens have the unquestioned right to live and enter the country, which non-citizens do not have. But, as Bowden might have said, there is a difference between *fewer* rights and *no* rights. Before that conversation, I did not really understand about American exceptionalism.

Like so many other things, citizenship and nationality are multi-dimensional rather than binary. Swire argues that it's partly a matter of jurisdiction: governments have greater ability and authority to ask for information about their own citizens. Here is my reference to Schrödinger's cat: one may be a dual citizen, simultaneously both foreign and not-foreign and regarded suspiciously by all.

Joseph Cannataci disagreed, saying that nationality does not matter: "If a person is a threat, I don't care if he has three European passports...The threat assessment should reign supreme."

German privacy advocate Thorsten Wetzling outlined Germany's surveillance law, recently reformulated in response to the Snowden revelations. Germany applies three categories to data collection: domestic, domestic-foreign (or "international"), and foreign. "International" means that one end of the communication is in Germany; "foreign" means that both ends are outside the country. The new law specifically limits data collected on those outside Germany and subjects non-targeted foreign data collection to new judicial oversight.

Wetzling believes we might find benefits in extending greater protection to foreigners than accrues to domestic citizens. Extending human rights protection would mean "the global practice of intelligence remains within limits", and would give a country the standing to suggest to other countries that they reciprocate. This had some resonance for me: I remember hearing the computer scientist George Danezis observe that, since each of us holds only a few nationalities, at any given time we can be surveilled by a couple of hundred other countries. We can have a race to the bottom...or to the top.

One of Swire's points was that one reason to allow greater surveillance of foreigners is that it's harder to conduct. Given that technology is washing away that added difficulty, Amie Stepanovich asked, shouldn't we recognize that? Like Wetzling, she suggested that privacy is a public good; the greater the number of people who have it the more we may benefit.

As abstruse as these legal points may sound, ultimately the US's refusal to grant human rights to foreigners is part of what's at stake in determining whether the US's privacy regime is strong enough for the EU-US Privacy Shield to pass its legal challenges. As the internet continues to raise jurisdictional disputes, Swire's question will take its place alongside others, such as how much location should matter when law enforcement wants access to data (Microsoft v. United States, due to be heard in the US Supreme Court on February 27) and countries follow the UK's lead in claiming extraterritorial jurisdiction over data and the right to bulk-hack computers around the world.

But, said Cannataci in disputing Swire's arguments, the US Constitution says, "All men are created equal". Yes, it does. But in "men" the Founding Fathers did not include women, black people, slaves, people who didn't own property.... "They didn't mean it," I summarized. Replied Cannataci: "But they *should* have." Indeed.


Illustrations: The panel, left to right: Cannataci, Swire, Stepanovich, Grossman, Wetzling, Oetheimer.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 27, 2017

The opposite of privilege

A couple of weeks ago, Cybersalon held an event to discuss modern trends in workplace surveillance. In the middle, I found myself reminding the audience, many of whom were too young to remember, that 20 or so years ago mobile phones were known locally as "poserphones" - because they had been expensive enough, recently enough, that they were still associated with rich businessmen who wanted to show off their importance.

The same poseurship today looks like this: "I'm so grand I don't carry a mobile phone." In a sort of rerun of the 1997 anti-internet backlash, which was kicked off by Clifford Stoll's Silicon Snake-Oil, all over the place right now we're seeing numerous articles and postings about how the techies of Silicon Valley are disconnecting themselves and removing technology from the local classrooms. Granted, this has been building for a while: in 2014 the New York Times reported that Steve Jobs didn't let his children use iPhones or iPads.

It's an extraordinary inversion in a very short time. However, the notable point is that the people profiled in these stories are people with the agency to make this decision and not suffer for it. In April, Congressman Jim Sensenbrenner (R-WI) claimed airily that "Nobody has to use the internet", a statement easily disputed. A similar argument can be made about related technology such as phones and tablets: it's perfectly reasonable to say you need downtime or that you want your kids to have a solid classical education with plenty of practice forming and developing long-form thinking. But the option to opt out depends on a lot of circumstances outside of most people's control. You can't, for example, disconnect your phone if your zero-hours contract specifies you will be dumped if you don't answer when they call, nor if you're in high-urgency occupations like law, medicine, or journalism; nor can you do it if you're the primary carer for anyone else. For a homeless person, their mobile phone may be their only hope of finding a job or a place to live.

Battery concerns being what they are, I've long had the habit of turning off wifi and GPS unless I'm actively using them. As Transport for London increasingly seeks to use passenger data to understand passenger flow through the network and within stations, people who do not carry data-generating devices are arguably anti-social because they are refusing to contribute to improving the quality of the service. This argument has been made in the past with reference to NHS data, suggesting that patients who declined to share their data didn't deserve care.

Today's employers, as Cybersalon highlighted and as speakers have previously pointed out at the annual Health Privacy Summit, may learn an unprecedented amount of intimate information about their employees via efforts like wellness programs and the data those capture from devices like Fitbits and smart watches. At Cornell, Karen Levy has written extensively about the because-safety black box monitoring coming to what historically has been the most independent of occupations, truck driving. At Middlesex, Phoebe Moore is studying the impact of workplace monitoring on white collar workers. How do you opt out of monitoring if doing so means "opting out" of employment?

Your voice may be captured by the waiting speech-driven device in your friend's car or home; ever tried asking someone to turn off Alexa-Siri-OKGoogle while you're there?

For these reasons, publicly highlighting your choice to opt out reads as, "Look how privileged I am", or some much more compact and much more offensive term. This will be even more true soon, when opting out will require vastly more effort than it does now and there will be vastly fewer opportunities to do it. Even today, someone walking around London has no choice about how many CCTV cameras capture them in motion. You can ride anonymously on the tube and buses as long as you are careful to buy, and thereafter always top up, your Oyster smart card with cash. But the latest in facial recognition can identify people in the backgrounds of photos, making it vastly harder to know which of the sidewalk-blockers around you snapping pictures of each other on their phones may capture and upload you as well, complete with time and location.

It's clear "normal" people are beginning to know this. This week, in a supermarket well outside of London, I was mocking a friend for paying for some groceries by tapping a credit card. "Cash," I said. "What's wrong with nice, anonymous cash?" "It took 20 seconds!" my friend said. The aging cashier regarded us benignly. "They can still track you by the mobile phones you're carrying," she said helpfully. Touché.

Illustrations: George Orwell's house at 22 Portobello Road; Cybersalon (Phoebe Moore, center).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 20, 2017

Risk profile

So here is this week's killer question: "Are you aware of any large-scale systems employing this protection?"

It's a killer question because this was the answer: "No."

Rewind. For as long as I can remember - and I first wrote about biometrics in 1999 - biometrics vendors have claimed that these systems are designed to be privacy-protecting. The reason, as I was told for a Guardian article on fingerprinting in schools in 2006, is that these systems don't store complete biometric images. Instead, when your biometric is captured - whether that's a fingerprint to pay for a school lunch or an iris scan for some other purpose - the system samples points in the resulting image and deploys some fancy mathematics to turn them into a "template", a numerical value that is what the system stores. The key claim: there is no way to reverse-engineer the template to derive the original image because the template doesn't contain enough information.
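
As a deliberately oversimplified sketch of that idea - nothing like the "fancy mathematics" a real vendor uses - here is what sampling points into a small numerical template and matching by distance might look like in Python. The sampling pattern, sizes, and scoring are invented for illustration only.

    import random

    def extract_template(image, n_points=16):
        # Sample a fixed pattern of points from the captured image and keep only
        # their normalized intensities: a short vector of numbers, not the image.
        rng = random.Random(42)
        h, w = len(image), len(image[0])
        points = [(rng.randrange(h), rng.randrange(w)) for _ in range(n_points)]
        return [image[y][x] / 255.0 for y, x in points]

    def match_score(a, b):
        # Mean squared difference: smaller means a closer match.
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

    # Toy 8x8 "images" standing in for an enrolled scan and a later probe.
    enrolled = [[(x * y) % 256 for x in range(8)] for y in range(8)]
    probe = [[(x * y + 3) % 256 for x in range(8)] for y in range(8)]

    stored = extract_template(enrolled)
    print("match score:", round(match_score(stored, extract_template(probe)), 4))

The stored vector is obviously much smaller than the image, which is where the "can't be reverse-engineered" claim comes from.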

The claim sounds plausible to anyone used to one-way cryptographic hashes, or who is used to thinking about compressed photographs and music files, where no amount of effort can restore Humpty-Dumpty's missing data. And yet.

Even at the time, some of the activists I interviewed were dubious about the claim. Even if it was true in 1999, or 2003, or 2006, they argued, it might not be true in the future. Plus, in the meantime these systems were teaching kids that it was OK to use these irreplaceable iris scans, fingerprints, and so on for essentially trivial purposes. What would the consequences be someday in the future when biometrics might become a crucial element of secure identification?

Well, here we are in 2017, and biometrics are more widely used, though not as widely deployed as vendors might have hoped in 1999. (There are good reasons for this, as James L. Wayman explained in a 2003 interview for New Scientist: deploying these systems is much harder than anyone ever thinks. The line that has always stuck in my mind: "No one ever has what you think they're going to have where you think they're going to have it." His example was the early fingerprint system he designed that was flummoxed on the first day by the completely unforeseen circumstance of a guy who had three thumbs.)

So-called "presentation attacks" - for example, using high-resolution photographs to devise a spoof dummy finger - have been widely discussed already. For this reason, such applications have a "liveness" test. But it turns out there are other attacks to be worried about.

This week, at a symposium on privacy, surveillance, and biometrics held by the European Association for Biometrics, I discovered that Andrew Clymer, who said in 2003 that, "Anybody who says it is secure and can't be compromised is silly", was precisely right. As Marta Gomez-Barrero explained, in 2013 she published a successful attack on these templates she called "hill climbing". Essentially, this is an iterative attack. Say you have a database of stored templates for an identification system; a newly-presented image is compared with the database looking for a match. In a hill-climbing attack, you generate synthetic templates and run them through the comparator, and then apply a modification scheme to the synthetic templates until you get a match. The reconstructions Gomez-Barrero showed aren't always perfect - the human eye may see distortions - but to the biometrics system it's the same face. You can fix the human problem by adding some noise to the image. The same is true of iris scans (PDF), hand shapes, and so on.
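
For illustration, here is roughly what that iterative loop looks like in Python against a toy comparator like the one sketched above. It assumes, as the attack described does, that the attacker can submit synthetic templates and see the resulting match scores; the threshold, step size, and template values here are all invented.

    import random

    def match_score(a, b):
        # Toy comparator: mean squared difference, smaller is better.
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

    def hill_climb(stored, threshold=0.001, step=0.05, iterations=20000):
        rng = random.Random(0)
        candidate = [rng.random() for _ in stored]      # random synthetic start
        best = match_score(candidate, stored)
        for _ in range(iterations):
            i = rng.randrange(len(candidate))
            trial = candidate[:]
            trial[i] = min(1.0, max(0.0, trial[i] + rng.uniform(-step, step)))
            score = match_score(trial, stored)
            if score < best:                            # keep only improvements
                candidate, best = trial, score
            if best <= threshold:                       # close enough to "match"
                break
        return candidate, best

    leaked = [0.2, 0.8, 0.5, 0.9, 0.1, 0.4, 0.7, 0.3]   # a template from a breached database
    reconstruction, score = hill_climb(leaked)
    print("final score:", round(score, 5))

In this toy version the attacker converges on something the comparator accepts as the enrolled template without ever seeing the original image - which is the point.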

Granted, someone wishing to conduct this attack has to have access to that database, but given the near-daily headlines about breaches, this is not a comforting thought.

Slightly better is the news that template protection techniques do exist; in fact, they've been known for ten to 15 years and are the subject of ISO standard 24745. Simply encrypting the data doesn't help as much as you might think, because every attempted match requires the template to be decrypted. Just like reused passwords, biometric templates are vulnerable to cross-matching that allows an attacker to extract more information. Second, if the data is available on the internet - this is especially applicable to face-based systems - an attacker can test for template matches.

It was at this point that someone asked the question we began with: are these protection schemes being used in large-scale systems? And...Gomez-Barrero said: no. Assuming she's right, this is - again - one of those situations where no matter how carefully we behave we are at the mercy of decisions outside our control that very few of us even know are out there waiting to cause trouble. It is market failure in its purest form, right up there with Equifax, which none of us chooses to use but still inflicted intimate exposure on hundreds of millions of people; and the port 7547 bug, which showed you can do everything right in buying network equipment and still get hammered.

It makes you wonder: when will people learn that you can't avoid problems by denying there's any risk? Biometric systems are typically intended to handle the data of millions of people in sensitive applications such as financial transactions and smartphone authentication. Wouldn't you think security would be on the list of necessary features?


Illustrations: A 1930s FBI examiner at work (via FBI); James Wayman; Marta Gomez-Barrero.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 6, 2017

Send lawyers, guns, and money

There are many reasons why, Bryan Schatz finds at Mother Jones, people around Las Vegas disagree with President Donald Trump's claim that now is not the time to talk about gun control. The National Rifle Association probably agrees; in the past, it's been criticized for saving its public statements for proposed legislation and staying out of the post-shooting - you should excuse the expression - crossfire.

Gun control doesn't usually fit into net.wars' run of computers, freedom, and privacy subjects. There are two reasons for making an exception now. First, the discovery of the Firearm Owners Protection Act, which prohibits the creation of *any* searchable registry of firearms in the US. Second, the rhetoric surrounding gun control debates.

To take the second first, in a civil conversation on the subject, it was striking that the arguments we typically use to protest knee-jerk demands for ramped-up surveillance legislation in response to atrocious incidents are the same ones used to oppose gun control legislation. Namely: don't pass bad laws out of fear that do not make us safer; tackle underlying causes such as mental illness and inequality; put more resources into law enforcement/intelligence. In the 1990s crypto wars, John Perry Barlow deliberately and consciously adapted the NRA's slogan to create "You can have my encryption algorithm...when you pry my cold, dead fingers from my private key".

Using the same rhetoric doesn't mean both are right or both are wrong: we must decide on evidence. Public debates over surveillance do typically feature evidence about the mathematical underpinnings of how encryption works, day-to-day realities of intelligence work, and so on. The problem with gun control debates in the US is that evidence from other countries is automatically written off as irrelevant, and, as with copyright reform, lobbying money hugely distorts the debate.

The other issue touches directly on privacy. Soon after the news of the Las Vegas shooting broke, a friend posted a link to the 2016 GQ article Inside the Federal Bureau of Way Too Many Guns. In it, writer and author Jeanne Marie Laskas pays a comprehensive visit to Martinsburg, West Virginia, where she finds a "low, flat, boring building" with a load of shipping containers kept out in the parking lot so the building's floors don't collapse under the weight of the millions of gun purchase records they contain. These are copies of federal form 4473, which is filled out at the time of gun purchases and retained by the retailer. If a retailer goes out of business, the forms it holds are shipped to the tracing center. When a law enforcement officer anywhere in the US finds a gun at a crime scene, this is where they call to trace it. The kicker: all those records are eventually photographed and stored on microfilm. Miles and miles of microfilm. Charlie Houser, the tracing center's head, has put enormous effort into making his human-paper-microfilm system as effective and efficient as possible; it's an amazing story of what humans can do.

Why microfilm? Gun control began in 1968, five years after the shooting of President John F. Kennedy. Even at that moment of national grief and outrage, the only way President Lyndon B. Johnson could get the Gun Control Act passed was to agree not to include a clause he wanted that would have set up a national gun registry to enable speedy tracing. In 1986, the NRA successfully lobbied for the Firearm Owners Protection Act, which prohibits the creation of *any* registry of firearms. What you register can be found and confiscated, the reasoning apparently goes. So, while all the rest of us engaged in every other activity - getting health care, buying homes, opening bank accounts, seeking employment - were being captured, collected, profiled, and targeted, the one group whose activities are made as difficult to trace as possible is...gun owners?

It is to boggle.

That said, the reasons why the American gun problem will likely never be solved include the already noted effect of lobbying money and, as E.J. Dionne Jr., Norman J. Ornstein and Thomas E. Mann discuss in the Washington Post, the non-majoritarian democracy the US has become. Even though majorities in both major parties favor universal background checks and most Americans want greater gun control, Congress "vastly overrepresents the interests of rural areas and small states". In the Senate that's by design to ensure nationwide balance: the smallest and most thinly populated states have the same number of senators - two - as the biggest, most populous states. In the House, the story is more about gerrymandering and redistricting. Our institutions, they conclude, are not adapting to rising urbanization: 63% in 1960, 84% in 2010.

Besides those reasons, the identification of guns with personal safety endures, chiefly in states where at one time it was true.

A month and a half ago, one of my many conversations around Nashville went like this, after an opening exchange of mundane pleasantries:

"I live in London."

"Oh, I wouldn't want to live there."

"Why?"

"Too much terrorism." (When you recount this in London, people laugh.)

"If you live there, it actually feels like a very safe city." Then, deliberately provocative, "For one thing, there are practically no guns."

"Oh, that would make me feel *un"safe."

Illustrations: Las Vegas strip, featuring the Mandalay Bay; an ATF inspector checks up on a gun retailer.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 15, 2017

Equifaction

The Equifax announcement this week is peculiarly terrible. It's not just that 143 million Americans and uncertain numbers of Canadians and Britons are made vulnerable to decades of identity fraud (social security numbers can't - yet - be replaced with new ones). Nor is it the unusually poor apology issued by the company or its ham-fisted technical follow-up (see also Argentina). No, the capper is that no one who is in Equifax's database has had any option about being in it in the first place. "We are its victims, not its customers," a number of people observed on Twitter this week.

Long before Google, Amazon, Facebook, and Apple became GAFA, Equifax and its fellow credit bureaus viewed consumers as the product. Citizens have no choice about this; our reward is access to financial services, which we *pay* for. Americans' credit reports are routinely checked on every application for credit, bank accounts, or even employment. The impact was already visibly profound enough in 1970, when Congress passed the Fair Credit Reporting Act. In granting Americans the right to inspect their credit reports and request corrections, it is the only US legislation offering rights similar to those granted to Europeans by the data protection laws. The only people who can avoid the tentacled reach of Equifax are those who buy their homes and cars with cash, operate no bank accounts or credit cards, pay cash for medical care and carry no insurance, and have no need for formal employment or government benefits.

Based on this breach and prior examples, investigative security journalist Brian Krebs calls the credit bureaus "terrible stewards of very sensitive data".

It was with this in the background that I attended a symposium on reforming Britain's Computer Misuse Act run by the Criminal Law Reform Now Network. In most hacking cases you don't want to blame the victim, but one might make an exception for Equifax. Since the discussion allowed for such flights of fancy, I queried whether a reformed act should include something like "contributory negligence" to capture such situations. "That's data protection laws," someone said (the between-presentation discussions were under the Chatham House Rule). True. Later, however, that thought merged with other comments about the fact that the public interest in secure devices is being met neither by legislators nor by the market, inspiring Duncan Campbell to suggest that perhaps what we need as a society is a "computer security act" that embraces the whole of society - individuals and companies - in need of protection. Companies like Equifax, with whom we have no direct connection but whose data management deeply affects our lives, he suggested, should arguably be subject to a duty of care. Another approach several of those at the meeting favored was introducing a public interest defense for computer misuse, much as the Defamation Act has for libel. Such a defense could reasonably include things like security research, journalism, and whistleblowing.

The law we have is of course nothing like this.

As of 2013, according to the answer to a Parliamentary question, there had been 339 prosecutions and 262 convictions under the CMA. A disproportionate number of those who are arrested under the act are young - average age, 17. There is ongoing work on identifying ways to steer young computer whizzes toward security work and societal benefit rather than cracking and computer crime. In the case of "Wannacry hero" Marcus Hutchins, arrested by the FBI after Defcon, Krebs did some digging and found that it appears likely he was connected to writing malware at one time but had tried to move toward more socially useful work. Putting smart young people with no prior criminal record in prison with criminals and ruining their employment prospects isn't a good deal for either them or us.

Yet it's not really surprising that this is who the CMA is capturing, since in 1990 that was the threat: young, obsessive, (predominantly) guys exploring the Net and cracking into things. Hardly any of them sought to profit financially from their exploits beyond getting free airtime so they could stay online longer - not even Kevin Mitnick, the New York Times's pick for "archetypal dark side hacker", now a security consultant and book author. In the US, the Secret Service's Operation Sundevil against this type of hacker spurred the formation of the Electronic Frontier Foundation. "I've begun to wonder if we wouldn't also regard spelunkers as desperate criminals if AT&T owned all the caves," John Perry Barlow wrote at the time.

Robert Schifreen and Stephen Gold, who were busted for hacking into Prince Philip's Prestel mailbox, established the need for a new law. The resulting CMA was not written for a world in which everyone is connected, street lights have their own network nodes, and Crime as a Service relies on a global marketplace of highly specialized subcontractors. Lawmakers try to encode principles, not specifics, but anticipating such profound change is hard. Plus, as a practical matter, it is feasible to capture a teenaged kid traceable to (predominantly) his parents' basement, but not the kingpin of a worldwide network who could be anywhere. And so CLRNN's question: what should a new law look like? To be continued...


Illustrations: Equifax CEO Rick Smith; Robert Schifreen.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 4, 2017

Imaginary creatures

I learned something new this week: I may not be a real person.

"Real people often prefer ease of use and a multitude of features to perfect, unbreakable security."

So spake the UK's Home Secretary Amber Rudd on August 1, and of course what she was really saying was, We need a back door in all encryption so we can read anything we deem necessary, and anyone who opposes this perfectly sensible idea is part of a highly vocal geek minority who can safely be ignored.

The way I know I'm not a real person is that around the time she was saying that I was emailing my accountant a strongly-worded request that they adopt some form of secured communications for emailing tax returns and accounts back and forth. To my astonishment, their IT people said they could do PGP. Oh, frabjous day. Is PGP-encrypted email more of a pain in the ass than ordinary email? You betcha. Conclusion: I am an imaginary number.

According to Cory Doctorow's potted history at BoingBoing of this sort of pronouncement, Rudd is at a typical first stage. At some point in the future, Doctorow predicts, she will admit that people want encryption but say they shouldn't have it, nonetheless.

I've been trying to think of analogies that make clear how absurd her claim is. Try food safety: "Real people often prefer ease of use and a multitude of features to perfect, healthy food." Well, that's actually true. People grab fast food, they buy pre-prepared meals, and we all know why: a lot of people lack the time, expertise, kitchen facilities, sometimes even basic access to good-quality ingredients to do their own cooking, which overall would save them money and probably keep them in better health (if they do it right). But they can choose this convenience in part because they know - or hope - that food safety regulations and inspections mean the convenient, feature-rich food they choose is safe to eat. A government could take the view that part of its role is to ensure that when companies promise their encryption is robust it actually is.

But the real issue is that it's an utterly false tradeoff. Why shouldn't "real people" want both? Why shouldn't we *have* both? Why should anyone have to justify why they want end-to-end encryption? "I'm sorry, officer. I had to lock my car because I was afraid someone might steal it." Does anyone query that logic on the basis that the policeman might want to search the car?

The second-phase argument (the first being in the 1990s) about planting back doors has been recurring for so long now that it's become like a chronic illness with erupting cycles. In response, so much good stuff has been written to point out the technical problems with that proposal that there isn't really much more to say about it. Go forth and read that link.

There is a much more interesting question we should be thinking about. The 1990s public debate about back doors in the form of key escrow ended with the passage in the UK of the Regulation of Investigatory Powers Act (2000) and in the US with the gradual loosening of the export controls. We all thought that common sense and ecommerce had prevailed. Instead, we now know, the security services ignored these public results and went their own way: they secretly spent a decade working to undermine security standards, installed vulnerabilities, and generally borked public trust in the infrastructure.

So: it seems reasonable to assume that the present we-must-have-back-doors noise is merely Plan A. What's Plan B? What other approaches would you be planning if you ran the NSA or GCHQ? I'm not enough of a technical expert to guess at what clever solutions they might find, but historically a lot of access has been gained by leveraging relationships with appropriate companies such as BT (in the UK) and AT&T (in the US). Today's global tech companies have so far seemed to be more resistant to this approach than a prior generation's national companies were.

This week's news that Apple began removing censorship-bypassing VPNs from its app store in China probably doesn't contradict this. The company says it complies with national laws; in the FBI case it fought an order in court. However, Britain's national laws unfortunately include the Investigatory Powers Act (2016), which makes it legal for security services to hack everyone's computers ("bulk equipment interference" by any other name...) and has many other powers that have barely been invoked publicly yet. A government that's rational on this sort of topic might point this out, and say, let's give these new powers a chance to bed down for a year or two and *then* see what additional access we might need.

Instead, we seem doomed to keep having this same conversation on an endless loop. Those of us wanting to argue for the importance of securing national infrastructure, particularly as many more billions of points of vulnerability are added to it, can't afford to exit the argument. But, like decoding a magician's trick, we should remember to look in all those other directions. That may be where the main action is, for those of us who aren't real enough to count.

Illustrations: The Virgin Mary punching the devil in the face (book of hours ('The De Brailes Hours'), Oxford ca. 1240 (BL, Add 49999, fol. 40v), via Discarding Images); Amber Rudd; Tim Cook (Valery Marchive).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 12, 2012

My identity, my self

Last week, the media were full of the story that the UK government was going to start accepting Facebook logons for authentication. This week, in several presentations at the RSA Conference, representatives of the Government Digital Service begged to differ: the list of companies that have applied to become identity providers (IDPs) will be published at the end of this month and until then they are not confirming the presence or absence of any particular company. According to several of the spokesfolks manning the stall and giving presentations, the press just assumed that when they saw social media companies among the categories of organization that might potentially want to offer identity authentication, that meant Facebook. We won't actually know for another few weeks who has actually applied.

So I can mercifully skip the rant that hooking a Facebook account to the authentication system you use for government services is a horrible idea in both directions. What they're actually saying is, what if you could choose among identification services offered by the Post Office, your bank, your mobile network operator (especially for the younger generation), your ISP, and personal data store services like Mydex or small, local businesses whose owners are known to you personally? All of these sounded possible based on this week's presentations.

The key, of course, is what standards the government chooses to create for IDPs and which organizations decide they can meet those criteria and offer a service. Those are the details the devil is in: during the 1990s battles about deploying strong cryptography, the government wanted copies of everyone's cryptography keys to be held in escrow by a Trusted Third Party. At the time, the frontrunners were banks: the government certainly trusted those, and imagined that we did, too. The strength of the disquiet over that proposal took them by surprise. Then came 2008. Those discussions are still relevant, however; someone with a long memory raised the specter of Part I of the Electronic Communications Act 2000, modified in 2005, as relevant here.

It was this historical memory that made some of us so dubious in 2010, when the US came out with proposals rather similar to the UK's present ones, the National Strategy for Trusted Identities in Cyberspace (NSTIC). Ross Anderson saw it as a sort of horror-movie sequel. On Wednesday, however, Jeremy Grant, the senior executive advisor for identity management at the US National Institute of Standards and Technology (NIST), the agency charged with overseeing the development of NSTIC, sounded a lot more reassuring.

Between then and now came both US and UK attempts to establish some form of national ID card. In the US, "Real ID" focused on the state authorities that issue driver's licenses. In the UK, it was the national ID card and accompanying database. In both countries the proposals got howled down. In the UK especially, the combination of an escalating budget, a poor record with large government IT projects, a change of government, and a desperate need to save money killed it in 2010.

Hence the new approach in both countries. From what the GDS representatives - David Rennie (head of proposition at the Cabinet Office), Steven Dunn (lead architect of the Identity Assurance Programme; Twitter: @cuica), Mike Pegman (security architect at the Department for Work and Pensions, expected to be the first user service; Twitter: @mikepegman), and others manning the GDS stall - said, the plan is much more like the structure that privacy advocates and cryptographers have been pushing for 20 years: systems that give users choice about who they trust to authenticate them for a given role and that share no more data than necessary. The notion that this might actually happen is shocking - but welcome.

None of which means we shouldn't be asking questions. We need to understand clearly the various envisioned levels of authentication. In practice, will those asking for identity assurance ask for the minimum they need or always go for the maximum they could get? For example, a bar only needs relatively low-level assurance that you are old enough to drink; but will bars prefer to ask for full identification? What will be the costs; who pays them and under what circumstances?

Especially, we need to know the detail of the standards organizations must meet to be accepted as IDPs - in particular, what kinds of organization those standards exclude. The GDS as presently constituted - composed, as William Heath commented last year, of all the smart, digitally experienced people you *would* hire to reinvent government services for the digital world if you had the choice - seems to have its heart in the right place. Their proposals as outlined - conforming, as Pegman explained happily, to Kim Cameron's seven laws of identity - pay considerable homage to the idea that no one party should have all the details of any given transaction. But the surveillance-happy type of government that legislates for data retention and CCDP might also at some point think, hey, shouldn't we be requiring IDPs to retain all data (requests for authentication, and so on) so we can inspect it should we deem it necessary? We certainly want to be very careful not to build a system that could support such intimate secret surveillance - the fundamental objection all along to key escrow.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series.


September 21, 2012

This is not (just) about Google

We had previously glossed over the news, in February, that Google had circumvented the tracking protections - the default blocking of third-party cookies - in Apple's Safari Web browser, used on both its desktop and mobile machines. For various reasons, tracking controls such as Do Not Track are themselves a divisive issue, pitting those who favour user control over privacy against those who ask exactly how people plan to pay for all that free content if not through advertising. But there was little disagreement about this: Google goofed badly in overriding users' clearly expressed preferences. Google promptly disabled the code, but the public damage was done - and probably made worse by the company's initial response.

In August, the US Federal Trade Commission fined Google $22.5 million for that little escapade. Pocket change, you might say, and compared to Google's $43.6 billion in 2011 revenues you'd be right. As the LSE's Edgar Whitley pointed out on Monday, a sufficiently large company can also view such a fine strategically: paying might be cheaper than fixing the problem. I'm less sure: fines have a way of going up a lot if national regulators believe a company is deliberately and repeatedly flouting their authority. And to any of the humans reviewing the fine - neither Page nor Brin grew up particularly wealthy, and I doubt Google pays its lawyers more than six figures - I'd bet $22.5 million still seems pretty much like real money.

On Monday, Simon Davies, the founder and former director of Privacy International, convened a meeting at the LSE to discuss this incident and its eventual impact. This was when it became clear that whatever you think about Google in particular, or online behavioral advertising in general, the questions it raises will apply widely to the increasing numbers of highly complex computer systems in all sectors. How does an organization manage complex code? What systems need to be in place to ensure that code does what it's supposed to do, no less - and no more? How do we make these systems accountable? And to whom?

The story in brief: Stanford PhD student Jonathan Mayer studies the intersection of technology and privacy, not by writing thoughtful papers studying the law but empirically, by studying what companies do and how they do it and to how many millions of people.

"This space can inherently be measured," he said on Monday. "There are wide-open policy questions that can be significantly informed by empirical measurements." So, for example, he'll look at things like what opt-out cookies actually do (not much of benefit to users, sadly), what kinds of tracking mechanisms are actually in use and by whom, and how information is being shared between various parties. As part of this, Mayer got interested in identifying the companies placing cookies in Safari; the research methodology involved buying ads that included codes enabling him to measure the cookies in place. It was this work that uncovered Google's bypassage of Safari's Do Not Track flag, which has been enabled by default since 2004. Mayer found cookies from four companies, two of which he puts down to copied and pasted circumvention code and two of which - Google and Vibrant - he were deliberate. He believes that the likely purpose of the bypass was to enable social synchronizing features (such as Google+'s "+1" button); fixing one bit of coded policy broke another.

This wasn't much consolation to Whitley, however: where are the quality controls? "It's scary when they don't really tell you that's exactly what they have chosen to do as explicitly corporate policy. Or you have a bunch of uncontrolled programmers running around in a large corporation providing software for millions of users. That's also scary."

And this is where, for me, the issue at hand jumped from the parochial to the global. In the early days of the personal computer or of the Internet, it didn't matter so much if there were software bugs and insecurities, because everything based on them was new and understood to be experimental enough that there were always backup systems. Now we're in the computing equivalent of the intermediate period in a pilot's career, which is said to be the more dangerous time: that between having flown enough to think you know it all, and having flown enough to know you never will. (John F. Kennedy, Jr, was in that window when he crashed.)

Programmers are rarely brought into these kinds of discussions, yet are the people at the coalface who must transpose human-language laws, regulations, and policies into the logical precision of computer code. As Danielle Citron explains in a long and important 2007 paper, Technological Due Process, that process inevitably generates many errors. Her paper focuses primarily on several large, automated benefits systems (two of them built by EDS) where the consequences of the errors may be denying the most needy and vulnerable members of society the benefits the law intends them to receive.

As the LSE's Chrisanthi Avgerou said, these issues apply across the board, in major corporations like Google, but also in government, financial services, and so on. "It's extremely important to be able to understand how they make these decisions." Just saying, "Trust us" - especially in an industry full of as many software holes as we've seen in the last 30 years - really isn't enough.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


September 14, 2012

What did you learn in school today?

One of the more astonishing bits of news this week came from Big Brother Watch: 207 schools across Britain have placed 825 CCTV cameras in toilets or changing rooms. The survey included more than 2,000 schools, so what this is basically saying is that a tenth of the schools surveyed apparently saw nothing wrong in spying on their pupils in these most intimate situations. Overall, the survey found that English, Welsh, and Scottish secondary schools and academies have a total of 106,710 cameras, an average camera-to-pupil ratio of 1:38. As a computer scientist would say, this is non-trivial.

Some added background: the mid 2000s saw the growth of fingerprinting systems for managing payments in school cafeterias, checking library books in and out, and registering attendance. In 2008, the Leave Them Kids Alone campaign, set up by a concerned parent, estimated that more than 2 million UK kids had been fingerprinted, often without the consent of their parents. The Protection of Freedoms Act 2012 finally requires schools and colleges to get parental consent before collecting children's biometrics. That doesn't stop the practice but at least it establishes that these are serious decisions whose consequences need to be considered.

Meanwhile, Ruth Cousteau, the editor of the Open Rights Group's ORGzine, one of the locations where you can find net.wars every week, sends the story that a Texas school district is requiring pupils to carry RFID-enabled cards at all times while on school grounds. The really interesting element is that the goal here is primarily and unashamedly financial, imposed on the school by its district: the school gets paid per pupil per day, and if a student isn't in homeroom when the teacher takes attendance, that's a little less money to finance the school in doing its job. The RFID cards enable the school to count the pupils who are present somewhere on the grounds but not in their seats, as if they were laptops in danger of being stolen. In the Wired write-up linked above, the school's principal seems not to see any privacy issues connected to the fact that the school can track kids anywhere on the campus. It's good for safety. And so on.

There is constant debate about what kids should be taught in schools with respect to computers. In these discussions, the focus tends to be on what kids should be directly taught. When I covered Young Rewired State in 2011, one of the things we asked the teams I followed was about the state of computer education in their schools. Their answers: dire. Schools, apparently under the impression that their job was to train the office workforce of the previous decade, were teaching kids how to use word processors, but nothing or very little about how computers work, how to program, or how to build things.

There are signs that this particular problem is beginning to be rectified. Things like the Raspberry Pi and the Arduino, coupled with open source software, are beginning to provide ways to recapture teaching in this area, essential if we are to have a next generation of computer scientists. This is all welcome stuff: teaching kids about computers by supplying them with fundamentally closed devices like iPads and Kindles is the equivalent of teaching kids sports by wheeling in a TV and playing a videotape of last Monday's US Open final between Andy Murray and Novak Djokovic.

But here's the most telling quote from that Wired article: "The kids are used to being monitored."

Yes, they are. And when they are adults, they will also be used to being monitored. I'm not quite paranoid enough to suggest that there's a large conspiracy to "soften up" the next generation (as Terri Dowty used to put it when she was running Action for the Rights of Children), but you can have the effect whether or not you have the intent. All these trends are happening in multiple locations: in the UK, for example, there were experiments in 2007 with school uniforms with embedded RFID chips (that wouldn't work in the US, where school uniforms are a rarity); in the trial, these not only tracked students' movements but pulled up data on academic performance.

These are the lessons we are teaching these kids indirectly. We tell them that putting naked photos on Facebook is a dumb idea and may come back to bite them in the future - but simultaneously we pretend to them that their electronic school records, down to the last, tiniest infraction, pose no similar risk. We tell them that plagiarism is bad and try to teach them about copyright and copying - but real life is meanwhile teaching them that a lot of news is scraped almost directly from press releases and that cheating goes on everywhere from financial markets and sports to scientific research. And although we try to tell them that security is important, we teach them by implication that it's OK to use sensitive personal data such as fingerprints and other biometrics for relatively trivial purposes, even knowing that these data's next outing may be to protect their bank accounts and validate their passports.

We should remember: what we do to them now they will do to us when we are old and feeble, and they're the ones in charge.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

August 10, 2012

Wiped out

There are so many awful things in the story of what happened this week to technology journalist Matt Honan that it's hard to know where to start. The fundamental part - that through not particularly clever social engineering an outsider was able in about 20 minutes to take over and delete his Google account, take over and defame his Twitter account, and then wipe all the data on his iPhone, iPad, and MacBook - would make a fine nightmare, or maybe a movie with some of the surrealistic quality of Martin Scorsese's After Hours (1985). And all, as Honan eventually learned, because the hacker fancied an outing with his three-character Twitter ID, a threat so unexpected that no one would have included it in their threat model.

Honan's first problem was the thing Suw Charman-Anderson put her finger on for an Infosecurity Magazine piece I did earlier this year: when every other part of your digital life - ecommerce accounts, financial accounts, social media accounts, password resets all over the Web - is locked to a single email address, giving an attacker access to that address puts you in for "a world of hurt". If you only have one email account you use for everything, an attacker who gains access to it can simply request password resets all over the place - and then he has access to your accounts and you don't. There are separate problems around the fact that the information required for resets is both the kind of stuff people disclose without thinking on social networks and commonly reused. None of this requires a fancy technology fix, just smarter, broader thinking.

There are simple solutions to the email problem: don't use one email account for everything and, in the case of Gmail, use two-factor authentication. If you don't operate your own server (and maybe even if you do) it may be too complicated to create a separate address for every site you use, but it's easy enough to have a public address you use for correspondence, a private one you use for most of your site accounts, and then maybe a separate, even less well-known one for a few selected sites that you want to protect as much as you can.
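
To make the compartmentalization concrete, here is a minimal sketch - my own illustration, not anything Honan or the providers describe - of deriving a distinct, hard-to-guess address for each site, assuming you control a domain that accepts mail for arbitrary local parts (the secret and domain below are placeholders):

    # A sketch of per-site email aliases: derive a stable, unguessable
    # address for each site from a secret you keep, so the address an
    # attacker learns at one service says nothing about the others.
    import hashlib
    import hmac

    SECRET = b"change-me-to-a-long-random-secret"  # hypothetical secret
    DOMAIN = "example.org"                          # hypothetical domain you control

    def alias_for(site: str) -> str:
        """Return the alias to register with a given site."""
        digest = hmac.new(SECRET, site.lower().encode(), hashlib.sha256).hexdigest()
        return f"{digest[:12]}@{DOMAIN}"

    for site in ("amazon.com", "twitter.com", "examplebank.co.uk"):
        print(site, "->", alias_for(site))

The particular recipe matters less than the principle: the address an attacker learns at one service should not be the key to all the others.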

Honan's second problem, however, is not so simple to fix unless an incident like this commands the attention of the companies concerned: the interaction of two companies' security practices that on their own probably seemed quite reasonable. The hacker needed just two small bits of information: Honan's address (sourced from the Whois record for his Internet domain name), and the last four digits of a credit card number. The hack to get the latter involved adding a credit card to Honan's Amazon.com account over the phone and then using that card number, in a second phone call, to add a new email address to the account. Finally, you do a password reset to the new email address, access the account, and find the last four digits of the cards on file - which Apple then accepted, along with the billing address, as sufficient evidence of identity to issue a temporary password for Honan's iCloud account.

This is where your eyes widen. Who knew Amazon or Apple did any of those things over the phone? I can see the point of being able to add an email address; what if you're permanently locked out of the old one? But I can't see why adding a credit card was ever useful; it's not as if Amazon did telephone ordering. And really, the two successive calls should have raised a flag.

The worst part is that even if you did know, you'd likely have no way to require any additional security to block off that route to impersonators; telephone, cable, and financial companies have been securing telephone accounts with passwords for years, but ecommerce sites have not thought of themselves as possible vectors for hacks into other services. Since the news broke, both Amazon and Apple have blocked off this phone access. But given the extraordinary number of sites we all depend on, the takeaway from this incident is that we ultimately have no clue how well any of them protect us against impersonation. How many other sites can be gamed in this way?

Ultimately, the most important thing, as Jack Schofield writes in his Guardian advice column, is not to rely on one service for everything. Honan's devastation was as complete as it was because all his devices were synched through iCloud and could be remotely wiped. Yet this is the service model that Apple has and that Microsoft and Google are driving towards. The cloud is seductive in its promises: your data is always available, on all your devices, anywhere in the world. And it's managed by professionals, who will do all the stuff you never get around to, like make backups.

But that's the point: as Honan discovered to his cost, the cloud is not a backup. If all your devices are hooked to it, it is your primary data pool, and, as Apple co-founder Steve Wozniak pointed out this week, it is out of your control. Keep your own backups, kids. Develop multiple personalities. Be careful out there.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


June 15, 2012

A license to print money

"It's only a draft," Julian Huppert, the Liberal Democrat MP for Cambridge, said repeatedly yesterday. He was talking about the Draft Communications Data Bill (PDF), which was published on Wednesday. Yesterday, in a room in a Parliamentary turret, Hupper convened a meeting to discuss the draft; in attendance were a variety of Parliamentarians plus experts from civil society groups such as Privacy International, the Open Rights Group, Liberty, and Big Brother Watch. Do we want to be a nation of suspects?

The Home Office characterizes the provisions in the draft bill as vital powers to help catch criminals, save lives, and protect children. Everyone else - the Guardian, ZDNet UK, and dozens more - is calling them the "Snooper's charter".

Huppert's point is important. As with the Defamation Bill before it, publishing a draft means there will be a select committee with 12 members, discussion, comments, evidence taken, a report (by November 30, 2012), and then a rewritten bill. This draft will not be voted on in Parliament. We don't have to convince 650 MPs that the bill is wrong; it's a lot easier to talk to 12 people. This bill, as is, would never pass either House in any case, he suggested.

This is the optimistic view. The cynic might suggest that since it's been clear for something like ten years that the British security services (or perhaps their civil servants) have a recurring wet dream in which their mountain of data is the envy of other governments, they're just trying to see what they can get away with. The comprehensive provisions in the first draft set the bar, softening us up to give away far more than we would have in future versions. Psychologists call this anchoring, and while probably few outside the security services would regard the wholesale surveillance and monitoring of innocent people as normal, the crucial bit is where you set the initial bar for comparison for future drafts of the legislation. However invasive the next proposals are, it will be easy for us to lose the bearings we came in with and feel that we've successfully beaten back at least some of the intrusiveness.

But Huppert is keeping his eye on the ball: maybe we can not only get the worst stuff out of this bill but make things actually better than they are now; it will amend RIPA. The Independent argues that private companies hold much more data on us overall but that article misses that this bill intends to grant government access to all of it, at any time, without notice.

The big disappointment in all this, as William Heath said yesterday, is that it marks a return to the old, bad, government IT ways of the past. We were just getting away from giant, failed public IT projects like the late, unlamented NHS National Programme for IT and the even more unlamented ID card towards agile, cheap public projects run by smart guys who know what they're doing. And now we're going to spend £1.8 billion of public money over ten years (draft bill, p92) building something no one much wants and that probably won't work? The draft bill claims - on what authority is unclear - that the expenditure will bring in £5 to £6 billion in revenues. From what? Are they planning to sell the data?

Or are they imagining the economic growth implied by the activity that will be necessary to build, install, maintain, and update the black boxes that will be needed by every ISP in order to comply with the law? The security consultant Alec Muffett has laid out the parameters for this SpookBox 5000: certified, tested, tamperproof, made by, say, three trusted British companies. Hundreds of them, legally required, with ongoing maintenance contracts. "A license to print money," he calls them. Nice work if you can get it, of course.

So we're talking - again - about spending huge sums of government money on a project that only a handful of people want and whose objectives could be better achieved by less intrusive means. Give police better training in computer forensics, for example, so they can retrieve the evidence they need from the devices they find when executing a search warrant.

Ultimately, the real enemy is the lack of detail in the draft bill. Using the excuse that the communications environment is changing rapidly and continuously, the notes argue that flexibility is absolutely necessary for Clause 1, the one that grants the government all the actual surveillance power, and so it's been drafted to include pretty much everything, like those contracts that claim copyright in perpetuity in all forms of media that exist now or may hereinafter be invented throughout the universe. This is dangerous because in recent years the use of statutory instruments to bypass Parliamentary debate has skyrocketed. No. Make the defenders of this bill prove every contention; make them show the evidence that makes every extra bit of intrusion necessary.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


May 25, 2012

Camera obscura

There was a smoke machine running in the corner when I arrived at today's Digital Shoreditch, an afternoon considering digital identity, part of a much larger, multi-week festival. Briefly, I wondered if the organizers were making a point about privacy. Apparently not; they shut it off when the talks started.

The range of speakers served as a useful reminder that the debates we have in what I think of as the Computers, Freedom, and Privacy sector are rather narrowly framed around what we can practically build into software and services to protect privacy (and why so few people seem to care). We wrangle over what people post on Facebook (and what they shouldn't), or how much Google (or the NHS) knows about us and shares with other organizations.

But we don't get into matters of what kinds of lies we tell to protect our public image. Lindsey Clay, the managing director of Thinkbox, the marketing body for UK commercial TV, who kicked off an array of people talking about brands and marketing (though some of them in good causes), did a good, if unconscious, job of showing what privacy activists are up against: the entire mainstream of business is going the other way.

People lie in focus groups, she explained, sounding like Dr Gregory House, and showed a slide comparing actual TV viewer data from Sky to what those people said about what they watched. They claim to fast-forward; really, they watch ads and think about them. They claim to time-shift almost everything; really, they watch live. They claim to watch very little TV; really, they need to sign up for the SPOGO program Richard Pearey explained a little while later. (A tsk-tsk to Pearey: Tim Berners-Lee is a fine and eminent scientist, but he did not invent the Internet. He invented the *Web*.) For me, Clay is confusing "identity" with "image". My image claims to read widely instead of watching TV shows; my identity buys DVDs from Amazon.

Of course I find Clay's view of the Net dismaying - "TV provides the content for us to broadcast on our public identity channels," she said. This is very much the view of the world the Open Rights Group campaigns to up-end: consumers are creators, too, and surely we (consumers) have a lot more to talk about than just what was on TV last night.

Tony Fish, author of My Digital Footprint, following up shortly afterwards, presented a much more cogent view and some sound practical advice. Instead of trying to unravel the enduring conundrum of trust, identity, and privacy - which he claims dates back to before Aristotle - start by working out your own personal attitude to how you'd like your data treated.

I had a plan to talk about something similar, but Fish summed up the problem of digital identity rather nicely. No one model of privacy fits all people or all cases. The models and expectations we have take various forms - which he displayed as a nice set of Venn diagrams. Underlying that is the real model, in which we have no rights. Today, privacy is a setting and trust is the challenger. The gap between our expectations and reality is the creepiness factor.

Combine that with reading a book of William Gibson's non-fiction, and you get the reflection that the future we're living in is not at all like the one we - for some value of "we" that begins with those guys who did the actual building instead of just writing commentary about it - thought we might be building 20 years ago. At the time, we imagined that the future of digital identity would look something like mathematics, where the widespread use of crypto meant that authentication would proceed by a series of discrete transactions tailored to each role we wanted to play. A library subscriber would disclose different data from a driver stopped by a policeman, who would show a different set to the border guard checking passports. We - or more precisely, Phil Zimmermann and Carl Ellison - imagined a Web of trust, a peer-to-peer world in which we could all authenticate the people we know to each other.

Instead, partly because all the privacy stuff is so hard to use, even though it didn't have to be, we have a world where at any one time there are a handful of gatekeepers who are fighting for control of consumers and their computers in whatever the current paradigm is. In 1992, it was the desktop: Microsoft, Lotus, and Borland. In 1997, it was portals: AOL, Yahoo!, and Microsoft. In 2002, it was search: Google, Microsoft, and, well, probably still Yahoo!. Today, it's social media and the cloud: Google, Apple, and Facebook. In 2017, it will be - I don't know, something in the mobile world, presumably.

Around the time I began to sound like an anti-Facebook obsessive, an audience questioner made the smartest comment of the day: "In ten years Facebook may not exist." That's true. But most likely someone will have the data, probably the third-party brokers behind the scenes. In the fantasy future of 1992, we were our own brokers. If William Heath succeeds with personal data stores, maybe we still can be.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


April 24, 2012

A really fancy hammer with a gun

Is a robot more like a hammer, a monkey, or the Harley-Davidson on which he rode into town? Or try this one: what if the police program your really cute, funny robot butler (Tony Danza? Scarlett Johansson?) to ask you a question whose answer will incriminate you (and which it then relays)? Is that a violation of the Fourth Amendment (protection against search and seizure) or the Fifth Amendment (you cannot be required to incriminate yourself)? Is it more like flipping a drug dealer or tampering with property? Forget science fiction, philosophy, and your inner biological supremacist; this is the sort of legal question that will be defined in the coming decade.

Making a start on this was the goal of last weekend's We Robot conference at the University of Miami Law School, organized by respected cyberlaw thinker Michael Froomkin. Robots are set to be a transformative technology, he argued to open proceedings, and cyberlaw began too late. Perhaps robotlaw is still a green enough field that we can get it right from the beginning. Engineers! Lawyers! Cross the streams!

What's the difference between a robot and a disembodied artificial intelligence? William Smart (Washington University, St Louis) summed it up nicely: "My iPad can't stab me in my bed." No: and as intimate as you may become with your iPad you're unlikely to feel the same anthropomorphic betrayal you likely would if the knife is being brandished by that robot butler above, which runs your life while behaving impeccably like it's your best friend. Smart sounds unsusceptible. "They're always going to be tools," he said. "Even if they are sophisticated and autonomous, they are always going to be toasters. I'm wary of thinking in any terms other than a really, really fancy hammer."

Traditionally, we think of machines as predictable because they respond the same way to the same input, time after time. But Smart, working with Neil Richards (Washington University, St Louis), points out that sensors are sensitive to distinctions analog humans can't make. A half-degree difference in temperature or a tiny change in lighting is a different condition to a robot. To us, their behaviour will just look capricious, helping to foster that anthropomorphic response, wrongly attributing to them the moral agency necessary for guilt under the law: the "Android Fallacy".

Smart and I may be outliers. The recent Big Bang Theory episode in which the can't-talk-to-women Rajesh, entranced with Siri, dates his iPhone is hilarious because in Raj's confusion we recognize our own ability to have "relationships" with almost anything by projecting human capacities such as cognition, intent, and emotions. You could call it a design flaw (if humans had a designer), and a powerful one: people send real wedding presents to TV characters, name Liquid Robotics' Wave Gliders, and characterize sending a six-legged land mine-defusing robot that's lost a leg or two to continue work as "cruel". (Kate Darling, MIT Media Lab).

What if our rampant affection for these really fancy hammers leads us to want to give them rights? Darling asked. Or, asked Sinziana Gutiu (University of Ottawa), will sex robots like Roxxxy teach us wrong expectations of humans? (When the discussion briefly compared sex robots to pets, a Twitterer quipped, "If robots are pets is sex with them bestiality?")

Few are likely to fall in love with the avatars in the automated immigration kiosks proposed at the University of Arizona (Kristen Thomasen, University of Ottawa) with two screens, one with a robointerrogator and the other flashing images and measuring responses. Automated law enforcement, already with us in nascent form, raises a different set of issues (Lisa Shay). Historically, enforcement has never been perfect; laws only have to be "good enough" to achieve their objective, whether that's slowing traffic or preventing murder. These systems pose the same problem as electronic voting: how do we audit their decisions? In military applications, disclosure may tip off the enemy, as Woodrow Hartzog (Samford University) noted. Yet here - and especially in medicine, where liability will be a huge issue - our traditional legal structures decide whom to punish by retracing the reasoning that led to the eventual decision. But even today's systems are already too complex.

When Hartzog asks if anyone really knows how Google or a smartphone tracks us, it reminds me of a recent conversation with Ross Anderson, the Cambridge University security engineer. In 50 years, he said, we have gone from a world whose machines could all be understood by a bright ten-year-old with access to a good library to a world with far greater access to information but full of machines whose inner workings are beyond a single person's understanding. And so: what does due process look like when only seven people understand algorithms that have consequences for the fates of millions of people? Bad enough to have the equivalent of a portable airport scanner looking for guns in New York City; what about house arrest because your butler caught you admiring Timothy Olyphant's gun on Justified?

"We got privacy wrong the last 15 years." Froomkin exclaimed, putting that together. "Without a strong 'home as a fortress right' we risk a privacy future with an interrogator-avatar-kiosk from hell in every home."

The problem with robots isn't robots. The problem is us. As usual, Pogo had it right.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


April 13, 2012

The people perimeter

People with jobs are used to a sharp division between their working lives and their private lives. Even in these times, when everyone carries a mobile phone and may be on call at any moment, they still tend to believe that what they say to their friends is no concern of their employer's. (Freelances tend not to have these divisions; to a much larger extent we have always been "in public" most of the time.)

These divisions were always less in small towns, where teachers or clergy had little latitude, and where even lesser folk would be well advised to leave town before doing anything they wouldn't want discussed in detail. Then came social media, which turns everywhere into a small town where, even if you behave impeccably, details about you and your employer may be exposed without your knowledge.

That's all a roundabout way of leading to yesterday's London Tea camp, where the subject of discussion was developing guidelines for social media use by civil servants.

Civil servants! The supposedly faceless functionaries who, certainly at the senior levels, are probably still primarily understood by most people through the fictional constructs of TV shows like Yes, Minister and The Thick of It. All of the 50 or 60 people from across government who attended yesterday have Twitter IDs; they're on Facebook and Foursquare, and probably a few dozen other things that would horrify Sir Humphrey. And that's as it should be: the people administering the nation's benefits, transport, education, and health absolutely should live like the people they're trying to serve. That's how you get services that work for us rather than against us.

The problem with social media is the same as their benefit: they're public in a new and different way. Even if you never identify your employer, Foursquare or the geotagging on Twitter or Facebook checks you in at a postcode that's indelibly identified with the very large government building where your department is the sole occupant. Or a passerby photographs you in front of it and Facebook helpfully tags your photograph with your real name, which then pops up in outside searches. Or you say something to someone you know who tells someone else who posts it online for yet another person to identify and finally the whole thing comes back and bites you in the ass. Even if your Tweets are clearly personal, and even if your page says, "These are just my personal opinions and do not reflect those of my employer", the fact of where you can be deduced to work risks turning anything connected to you into something a - let's call it - excitable journalist can make into a scandal. Context is king.

What's new about this is the uncontrollable exposure of this context. Any Old Net Curmudgeon will tell you that the simple fact of people being caught online doing things their employers don't like goes back to the dawn of online services. Even now I'm sure someone dedicated could find appalling behavior in the Usenet archives by someone who is, 25 years on, a highly respected member of society. But Usenet was a minority pastime; Facebook, Twitter et al are mainstream.

Lots has been written by and about employers in this situation: they may suffer reputational damage, legal liability, or a breach that endangers their commercial secrets. Not enough has been written about individuals struggling to cope with sudden, unwanted exposure. Don't we have the right to private lives? someone asked yesterday. What they are experiencing is the same loss of border control that security engineers are trying to cope with. They call it "deperimeterization", because security used to mean securing the perimeter of your network and now security means coping with its loss. Wireless, remote access for workers at home, personal devices such as mobile phones, and links to supplier and partner networks have all blown holes in it.

There is no clear perimeter any more for networks - or individuals, either. Trying to secure one by dictating behavior, whether by education, leadership by example, or written guidelines, is inevitably doomed. There is, however, a very valid reason to have these things: to create a general understanding between employer and employee. It should be clear to all sides what you can and cannot get fired for.

In 2003, Danny O'Brien nailed a lot of this when he wrote about the loss of what he called the "private-intermediate sphere". In that vanishing country, things were private without being secret. You could have a conversation in a pub with strangers walking by and be confident that it would reach only the audience present at the time and that it would not unexpectedly be replayed or published later (see also Dan Harmon and Chevy Chase's voicemail). Instead, he wrote, the Net is binary: secret or public, no middle ground.

What's at stake here is really not private life, but *social* life. It's the addition of the online component to our social lives that has torn holes in our personal perimeters.

"We'll learn a kind of tolerance for the private conversation that is not aimed at us, and that overreacting to that tone will be a sign of social naivete," O'Brien predicted. Maybe. For now, hard cases make bad law (and not much better guidelines) *First* cases are almost always hard cases.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


March 9, 2012

Private parts

In 1995, when the EU Data Protection Directive was passed, Facebook founder and CEO Mark Zuckerberg was 11 years old. Google was three years away from incorporation. Amazon.com was a year old and losing money fast enough to convince many onlookers that it would never be profitable; the first online banner ads were only months old. It was the year eBay and Yahoo! were founded and Netscape went public. This is how long ago it was: CompuServe was a major player in online services, AOL was just setting up its international services, and both of them were still funded by per-minute usage fees.

In other words: even when it was published there were no Internet companies whose business models depended on exploiting user data. During the years it was being drafted only posers and rich people owned mobile phones, selling fax machines was a good business, and women were still wearing leggings the *first* time around. It's impressive that the basic principles formulated then have held up well. Practice, however, has been another matter.

The discussions that led to the publication in January of a package of reforms to the data protection rules began in 2008. Discussions among data protection commissioners, Peter Hustinx, the European Data Protection Supervisor, said at Thursday's Westminster eForum on data protection and electronic privacy, produced a consensus that changes were needed, including making controllers more accountable, increasing "privacy by design", and making data protection a top-level issue for corporate governance.

These aren't necessarily the issues that first spring to mind for privacy advocates, particularly in the UK, where many have complained that the Information Commissioner's Office has failed. (It was, for example, out of step with the rest of the world with respect to Google's Street View.) Privacy International has a long history of complaints about the ICO's operation. But even the EU hasn't performed as well as citizens might hope under the present regime: PI also exposed the transfer of SWIFT financial data to the US, while Edward Hasbrouck has consistently and publicly opposed the transfer of passenger name record data from the EU to the US.

Hustinx has published a comprehensive opinion of the reform package. The details of both the package itself and the opinion require study. But the main points include an effort to implement a single regime, the right to erasure (aka the right to be forgotten), breach notification within 24 hours of discovery, and stronger, more accountable data protection authorities.

Of course, everyone has a complaint. The UK's deputy information commissioner, David Smith, complained that the package is too prescriptive of details and focuses on paperwork rather than privacy risk. Lord McNally, Minister of State at the Ministry of Justice, complained that the proposed fines of up to 2 percent of global corporate income are disproportionate and that 24 hours is too little time. Hustinx outlined his main difficulties: that the package has gaps, most notably surrounding the transfer of telephone data to law enforcement; that fines should be discretionary and proportionate rather than compulsory; and that there remain difficulties in dealing with national and EU laws.

We used to talk about the way the Internet enabled the US to export the First Amendment. You could, similarly, see the data protection laws as the EU's effort to export privacy rules; a key element is the prohibition on transferring data to countries without similar regimes - which is why the SWIFT and PNR cases were so problematic. In 1999, for a piece that's now behind Scientific American's paywall, PI's Simon Davies predicted that US companies might find themselves unable to trade in Europe because of data flows. Big questions, therefore, revolve around the binding corporate rules, which allow companies to transfer data to third countries without equivalent data protection as long as the data stays within their corporate boundaries.

The arguments over data protection law have a lot in common with the arguments over copyright. In both cases, the goal is to find a balance of power between competing interests that keeps individuals from being squashed. Also like copyright, data protection policy is such a dry and esoteric subject that it's hard to get non-specialists engaged with it. Hard, but not impossible: unlike privacy, copyright has never had a George Orwell to make the dangers up close and personal. Copyright law began, Lawrence Lessig argued in (I think it was) Free Culture, as a way to curb the power of publishers (although by now it has ended up greatly empowering them). Similarly, while most of us may think of data protection law as protecting against the abuse of personal data, a voice argued from the floor yesterday that the law was originally drafted to enable free data transfers within the single market.

There is another similarity. Rightsholders and government policymakers often talk as though the population-at-large are consumers, not creators in their own right. Similarly, yesterday, Mydex's David Alexander had this objection to make: "We seem to keep forgetting that humans are not just subjects, but participants in the management of their own personal data...Why can't we be participants?"


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


January 27, 2012

Principle failure

The right to access, correct, and delete personal information held about you and the right to bar data collected for one purpose from being reused for another are basic principles of the data protection laws that have been the norm in Europe since the EU adopted the Privacy Directive in 1995. This is the Privacy Directive that is currently being updated; the European Commission's proposals seem, inevitably, to please no one. Businesses are already complaining compliance will be unworkable or too expensive (hey, fines of up to 2 percent of global income!). I'm not sure consumers should be all that happy either; I'd rather have the right to be anonymous than to be forgotten (which I believe will prove technically unworkable), and the jurisdiction for legal disputes with a company to be set to my country rather than theirs. Much debate lies ahead.

In the meantime, the importance of the data protection laws has been enhanced by Google's announcement this week that it will revise and consolidate the more than 60 privacy policies covering its various services "to create one beautifully simple and intuitive experience across Google". It will, the press release continues, be "Tailored for you". Not the privacy policy, of course, which is a one-size-fits-all piece of corporate lawyer ass-covering, but the services you use, which, after the fragmented data Google holds about you has been pooled into one giant liquid metal Terminator, will be transformed into so-much-more personal helpfulness. Which would sound better if 2011 hadn't seen loud warnings about the danger that personalization will disappear stuff we really need to know: see Eli Pariser's filter bubble and Jeff Chester's worries about the future of democracy.

Google is right that streamlining and consolidating its myriad privacy policies is a user-friendly thing to do. Yes, let's have a single policy we can read once and understand. We hate reading even one privacy policy, let alone 60 of them.

But the furore isn't about that, it's about the single pool of data. People do not use Google Docs in order to improve their search results; they don't put up Google+ pages and join circles in order to improve the targeting of ads on YouTube. This is everything privacy advocates worried about when Gmail was launched.

Australian privacy campaigner Roger Clarke's discussion document sets out the principles that the decision violates: no consultation; retroactive application; no opt-out.

Are we evil yet?

In his 2011 book, In the Plex, Steven Levy traces the beginnings of a shift in Google's views on how and when it implements advertising to the company's controversial purchase of the DoubleClick advertising network, which relied on cookies and tracking to create targeted ads based on Net users' browsing history. This $3.1 billion purchase was huge enough to set off anti-trust alarms. Rightly so. Levy writes, "...sometime after the process began, people at the company realized that they were going to wind up with the Internet-tracking equivalent of the Hope Diamond: an omniscient cookie that no other company could match." Between DoubleClick's dominance in display advertising on large, commercial Web sites and Google AdSense's presence on millions of smaller sites, the company could track pretty much all Web users. "No law prevented it from combining all that information into one file," Levy writes, adding that Google imposed limits, in that it didn't use blog postings, email, or search behavior in building those cookies.

Levy notes that Google spends a lot of time thinking about privacy, but quotes founder Larry Page as saying that the particular issues the public chooses to get upset about seem randomly chosen, the reaction determined most often by the first published headline about a particular product. This could well be true - or it may also be a sign that Page and Brin, like Facebook's Mark Zuckerberg and some other Silicon Valley technology company leaders, are simply out of step with the public. Maybe the reactions only seem random because Page and Brin can't identify the underlying principles.

In blending its services, the issue isn't solely privacy, but also the long-simmering complaint that Google is increasingly favoring its own services in its search results - which would be a clear anti-trust violation. There, the traditional principle is that dominance in one market (search engines) should not be leveraged to achieve dominance in another (social networking, video watching, cloud services, email).

SearchEngineLand has a great analysis of why Google's Search Plus is such a departure for the company and what it could have done had it chosen to be consistent with its historical approach to search results. Building on the "Don't Be Evil" tool built by Twitter, Facebook, and MySpace, among others, SEL demonstrates the gaps that result from Google's choices here, and also how the company could have vastly improved its service to its search customers.

What really strikes me in all this is that the answer to both the EU issues and the Google problem may be the same: the personal data store that William Heath has been proposing for three years. Data portability and interoperability, check; user control, check. But that is as far from the Web 2.0 business model as file-sharing is from that of the entertainment industry.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


January 6, 2012

Only the paranoid

Yesterday's news that the Ramnit worm has harvested the login credentials of 45,000 British and French Facebook users seems to me a watershed moment for Facebook. If I were an investor, I'd wish I had already cashed out. Indications are, however, that founding CEO Mark Zuckerberg is in it for the long haul, in which case he's going to have to find a solution to a particularly intractable problem: how to protect a very large mass of users from identity fraud when his entire business is based on getting them to disclose as much information about themselves as possible.

I have long complained about Facebook's repeatedly changing privacy controls. This week, while working on a piece on identity fraud for Infosecurity, I've concluded that the fundamental problem with Facebook's privacy controls is not that they're complicated, confusing, and time-consuming to configure. The problem with Facebook's privacy controls is that they exist.

In May 2010, Zuckerberg enraged a lot of people, including me, by opining that privacy is no longer a social norm. As Judith Rauhofer has observed, the world's social norms don't change just because some rich geeks in California say so. But the 800 million people on Facebook would arguably be much safer if the service didn't promise privacy - like Twitter. Because then people wouldn't post all those intimate details about themselves: their kids' pictures, their drunken sex exploits, their incitements to protest, their porn star names, their birth dates... Or if they did, they'd know they were public.

Facebook's core privacy problem is a new twist on the problem Microsoft has: legacy users. Apple was willing to make earlier generations of its software non-functional in the shift to OS X. Microsoft's attention to supporting legacy users allows me to continue to run, on Windows 7, software that was last updated in 1997. Similarly, Facebook is trying to accommodate a wide variety of privacy expectations, from those of people who joined back when membership was limited to a few relatively constrained categories to those of people joining today, when the system is open to all.

Facebook can't reinvent itself wholesale: it is wholly and completely wrong to betray users who post information about themselves into what they are told is a semi-private space by making that space irredeemably public. The storm every time Facebook makes a privacy-related change makes that clear. What the company has done exceptionally well is to foster the illusion of a private space despite the fact that, as the Australian privacy advocate Roger Clarke observed in 2003, collecting and abusing user data is social networks' only business model.

Ramnit takes this game to a whole new level. Malware these days isn't aimed at doing cute, little things like making hard drive failure noises or sending all the letters on your screen tumbling into a heap at the bottom. No, it's aimed at draining your bank account and hijacking your identity for other types of financial exploitation.

To do this, it needs to find a way inside the circle of trust. On a computer network, that means looking for an unpatched hole in software to leverage. On the individual level, it means the malware equivalent of viral marketing: get one innocent bystander to mistakenly tell all their friends. We've watched this particular type of attack move through a string of vectors as humans move to get away from spam: from email to instant messaging to, now, social networks. The bigger Facebook gets, the bigger a target it becomes. The more information people post on Facebook - and the more their friends and friends of friends friend promiscuously - the greater the risk to each individual.

The whole situation is exacerbated by endemic, widespread, poor security practices. Asking people to provide the same few bits of information for back-up questions in case they need a password reset. Imposing password rules that practically guarantee people will use and reuse the same few choices on all their sites. Putting all the eggs in services that are free at point of use and that you pay for in unobtainable customer service (not to mention behavioral targeting and marketing) when something goes wrong. If everything is locked to one email account on a server you do not control, if your security questions could be answered by a quick glance at your Facebook Timeline and a Google search, if you bank online and use the same passwords throughout...you have a potential catastrophe in waiting.

I realize not everyone can run their own mail server. But you can use multiple, distinct email addresses and passwords, you can create unique answers on the reset forms, and you can limit your exposure by presuming that everything you post *is* public, whether the service admits it or not. Your goal should be to ensure that when - it's no longer safe to say "if" - some part of your online life is hacked the damage can be contained to that one, hopefully small, piece. Relying on the privacy consciousness of friends means you can't eliminate the risk; but you can limit the consequences.
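
For the reset questions in particular, the unique answers don't have to mean anything. Here is a throwaway sketch (mine, not something the column prescribes) of generating random answers to store in a password manager, rather than reusing your mother's actual maiden name everywhere:

    # Generate random, per-site answers for password-reset "security"
    # questions; keep them in a password manager rather than remembering them.
    import secrets

    WORDS = ("lilac", "copper", "meridian", "quartz", "bramble",
             "halcyon", "ember", "sonata", "fathom", "juniper")

    def reset_answer(num_words: int = 4) -> str:
        """Return a random passphrase-style answer."""
        return "-".join(secrets.choice(WORDS) for _ in range(num_words))

    for question in ("mother's maiden name", "first pet", "favourite teacher"):
        print(question, "->", reset_answer())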

Facebook is facing an entirely different risk: that people, alarmed at the thought of being mugged, will flee elsewhere. It's happened before.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

December 23, 2011

Duck amuck

Back in about 1998, a couple of guys looking for funding for their start-up were asked this: how could anyone compete with Yahoo! or AltaVista?

"Ten years ago, we thought we'd love Google forever," a friend said recently. Yes, we did, and now we don't.

It's a year and a bit since I began divorcing Google. Ducking the habit is harder than those "They have no lock-in" financial analysts thought when Google went public: as if habit and adaptation were small things. It is easy to switch CTRL-K in Firefox to DuckDuckGo, significantly harder to unlearn ten years of Google's "voice".

When I tell this to Gabriel Weinberg, the guy behind DDG - his recent round of funding lets him add a few people to experiment with different user interfaces and redo DDG's mobile application - he seems to understand. He started DDG, he told The Rise to the Top last year, because of the increasing amount of spam in Google's results. Frustration made him think: for many queries, wouldn't searching just del.icio.us and Wikipedia produce better results? Since his first weekend mashing that up, DuckDuckGo has evolved to include over 50 sources.

"When you type in a query there's generally a vertical search engine or data source out there that would best serve your query," he says, "and the hard problem is matching them up based on the limited words you type in." When DDG can make a good guess at identifying such a source - such as, say, the National Institutes of Health - it puts that result at the top. This is a significant hint: now, in DDG searches, I put the site name first, where on Google I put it last. Immediate improvement.

This approach gives Weinberg a new problem, a higher-order version of the Web's broken links: as companies reorganize, change, or go out of business, the APIs he relies on vanish.

Identifying the right source is harder than it sounds, because the long tail of queries requires DDG to make assumptions about what's wanted.

"The first 80 percent is easy to capture," Weinberg says. "But the long tail is pretty long."

As Ken Auletta tells it in Googled, the venture capitalist Ram Shriram advised Sergey Brin and Larry Page to sell their technology to Yahoo! or maybe Infoseek. But those companies were not interested: the thinking then was portals and keeping site visitors stuck as long as possible on the pages advertisers were paying for, while Brin and Page wanted to speed visitors away to their desired results. It was only when Shriram heard that, Auletta writes, that he realized that baby Google was disruptive technology. So I ask Weinberg: can he make a similar case for DDG?

"It's disruptive to take people more directly to the source that matters," he says. "We want to get rid of the traditional user interface for specific tasks, such as exploring topics. When you're just researching and wanting to find out about a topic there are some different approaches - kind of like clicking around Wikipedia."

Following one thing to another, without going back to a search engine...sounds like my first view of the Web in 1991. But it also sounds like some friends' notion of after-dinner entertainment, where they start with one word in the dictionary and let it lead them serendipitously from word to word and book to book. Can that strategy lead to new knowledge?

"In the last five to ten years," says Weinberg, "people have made these silos of really good information that didn't exist when the Web first started, so now there's an opportunity to take people through that information." If it's accessible, that is. "Getting access is a challenge," he admits.

There is also the frontier of unstructured data: Google searches the semi-structured Web by imposing a structure on it - its indexes. By contrast, Mike Lynch's Autonomy, which Hewlett-Packard has just bought for around $11 billion, uses Bayesian logic to search unstructured data, which is what most companies have.

"We do both," says Weinberg. "We like to use structured data when possible, but a lot of stuff we process is unstructured."

Google is, of course, a moving target. For me, its algorithms and interface are moving in two distinct directions, both frustrating. The first is Wal-Mart: stuff most people want. The second is the personalized filter bubble. I neither want nor trust either. I am more like the scientists Linguamatics serves: its analytic software scans hundreds of journals to find hidden links suggesting new avenues of research.

Anyone entering a category as thoroughly dominated by a single company as search is now is constantly asked: how can you possibly compete with the incumbent? Weinberg must be sick of being asked about competing with Google. And he'd be right, because it's the wrong question. The right question is: how can he build a sustainable business? He's had some sponsorship while his user numbers are relatively low (currently 7 million searches a month), and he has talked about eventually adding context-based advertising - while also promising little spam and privacy with no tracking. Now, that really would be disruptive.

So here's my bet. I bet that DuckDuckGo outlasts Groupon as a going concern. Merry Christmas.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


December 16, 2011

Location, location, location

In the late 1970s, I used to drive across the United States several times a year (I was a full-time folksinger), and although these were long, long days at the wheel, there were certain perks. One was the feeling that the entire country was my backyard. The other was the sense that no one in the world knew exactly where I was. It was a few days off from the pressure of other people.

I've written before that privacy is not sleeping alone under a tree but being able to do ordinary things without fear. Being alone on an interstate crossing Oklahoma wasn't to hide some nefarious activity (like learning the words to "There Ain't No Instant Replay in the Football Game of Life"). Turn off the radio and, aside from an occasional billboard, the world was quiet.

Of course, that was also a world in which making a phone call was a damned difficult thing to do, which is why professional drivers all had CB radios. Now, everyone has mobile phones, and although your nearest and dearest may not know where you are, your phone company most certainly does, and to a very fine degree of "granularity".

I imagine normal human denial is broad enough to encompass pretending you're in an unknown location while still receiving text messages. Which is why this year's A Fine Balance focused on location privacy.

The travel privacy campaigner Edward Hasbrouck has often noted that travel data is particularly sensitive and revealing in a way few realize. Travel data indicates your religion (special meals), medical problems, and lifestyle habits that affect your health (choosing a smoking room in a hotel). Travel data also shows who your friends are, and how close: who do you travel with? Who do you share a hotel room with, and how often?

Location data is travel data on a steady drip of steroids. As Richard Hollis, who serves on the ISACA Government and Regulatory Advocacy Subcommittee, pointed out, location data is in fact travel data - except that instead of being detailed logging of exceptional events it's ubiquitous logging of everything you do. Soon, he said, we will not be able to opt out - and instead of travel data being a small, sequestered, unusually revealing part of our lives, all our lives will be travel data.

Location data can reveal the entire pattern of your life. Do you visit a church every Monday evening that has an AA meeting going on in the basement? Were you visiting the offices of your employer's main competitor when you were supposed to have a doctor's appointment?

Research supports this view. Some of the earliest work I'm aware of is that of Alberto Escudero-Pascual. A month-long experiment tracking the mobile phones in his department enabled him to diagram all of the intra-departmental personal relationships. In a 2002 paper, he suggests how to anonymize location information (PDF). The problem: no business wants anonymization. As Hollis and others said, businesses want location data. Improved personalization depends on context, and location provides a lot of that.
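To make concrete how little analysis that kind of diagramming takes, here is a minimal sketch with invented data: the names, hours, and cell IDs are all made up for the example, and real operator logs differ mainly in being vastly longer.

```python
from collections import Counter
from itertools import combinations

# Invented location pings: (person, hour, cell_id).
pings = [
    ("ana", 9, "cell-17"), ("ben", 9, "cell-17"),
    ("ana", 10, "cell-03"), ("ben", 10, "cell-03"),
    ("cho", 10, "cell-42"), ("ana", 11, "cell-42"), ("cho", 11, "cell-42"),
]

# Group people by where they were, when.
together = {}
for person, hour, cell in pings:
    together.setdefault((hour, cell), set()).add(person)

# Count co-locations per pair: the edges of the inferred relationship graph.
edges = Counter()
for people in together.values():
    for pair in combinations(sorted(people), 2):
        edges[pair] += 1

print(edges.most_common())  # [(('ana', 'ben'), 2), (('ana', 'cho'), 1)]
```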

Patrick Walshe, the director of privacy for the GSM Association, compared the way people care about privacy to the way they care about their health: they opt for comfort and convenience and hope for the best. They - we - don't make changes until things go wrong. This explains why privacy considerations so often fail and privacy advocates despair: guarding your privacy is like eating your vegetables, and who except a cranky person plans their meals that way?

The result is likely to be the world that Microsoft UK's director of search, advertising, and online, Dave Coplin, outlined, arguing that privacy today is at the turning point that the Melissa virus represented for security when it hit more than a decade ago.

Calling it "the new battleground," he said, "This is what happens when everything is connected." Similarly, Blaine Price, a senior lecturer in computing at the Open University, had this cheering thought: as humans become part of the Internet of Things, data leakage will become almost impossible to avoid.

Network externalities mean that each additional person using a network increases its value for all the network's other users. What about privacy externalities? I haven't heard the phrase before, although I see it's not new (PDF). But I mean something different than those papers do: the fact that we talk about privacy as an individual choice when instead it's a collaborative effort. A single person who says, "I don't care about my privacy" can override the pro-privacy decisions of dozens of their friends, family, and contacts. "I'm having dinner with @wendyg," someone blasts, and their open attitude to geolocation gives away my location along with theirs.

In his research on tracking, Price has found that the more closely connected people are to those tracking them, the less control they feel they have over such decisions. I may worry that turning on a privacy block will upset my closest friend; I don't lie awake at night wondering, "Will the phone company think I'm mad at it?"

So: you want to know where I am right now? Pay no attention to the geolocated Twitterer who last night claimed to be sitting in her living room with "wendyg". That wasn't me.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 11, 2011

The sentiment of crowds

Context is king.

Say to a human, "I'll meet you at the place near the thing where we went that time," and they'll show up at the right place. That's from the 1987 movie Broadcast News: Aaron (Albert Brooks) says it; cut to Jane (Holly Hunter), awaiting him at a table.

But what if Jane were a computer and what she wanted to know from Aaron's statement was not where to meet but how Aaron felt about it? This is the challenge facing sentiment analysis.

At Wednesday's Sentiment Analysis Symposium, the key question of context came up over and over again as the biggest challenge to the industry of people who claim that they can turn Tweets, blog postings, news stories, and other mass data sources into intelligence.

So context: Jane can parse "the place", "the thing", and "that time" because she has expert knowledge of her past with Aaron. It's an extreme example, but all human writing makes assumptions about the knowledge and understanding of the reader. Humans even use those assumptions to implement privacy in a public setting: Stephen Fry could retweet Aaron's words and still only Jane would find the cafe. If Jane is a large organization seeking to understand what people are saying about it, and Aaron is 6 million people posting on Twitter, she can use sentiment analysis tools to get a numerical answer. And numbers always inspire confidence...

My first encounter with sentiment analysis was this summer during Young Rewired State, when a team wanted to create a mood map of the UK by comparing geolocated tweets to indices of multiple deprivation. This third annual symposium shows that this is a rapidly engorging industry, part PR, part image consultancy, and part artificial intelligence research project.

I was drawn to it out of curiosity, but also because it all sounds slightly sinister. What do sentiment analyzers understand when I say an airline lounge at Heathrow Terminal 4 "brings out my inner Sheldon"? What is at stake is not precise meaning - humans argue over the exact meaning of even the greatest communicators - but extracting good-enough meaning from high-volume data streams written by millions of not-monkeys.
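To show what the crudest form of that numerical answer looks like, here is a toy lexicon-based scorer. It is a minimal sketch: the word lists are invented for the example, and real systems add (or at least claim to add) handling for negation, sarcasm, and context.

```python
# Toy lexicon-based sentiment scorer: the "dial pointing at a number"
# reduced to its crudest form. Word lists are invented for illustration only.
POSITIVE = {"love", "great", "happy", "excellent", "thanks"}
NEGATIVE = {"hate", "awful", "delay", "broken", "angry"}

def sentiment_score(text):
    words = [w.strip(".,!?\"'").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total  # -1.0 to +1.0

print(sentiment_score("Great service, thanks!"))       # 1.0
print(sentiment_score("Two-hour delay. Awful."))        # -1.0
print(sentiment_score("I'll meet you at the place."))   # 0.0 - the context is invisible to it
```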

What could possibly go wrong? This was one of the day's most interesting questions, posed by the consultant Meta Brown to representatives of the Red Cross, the polling organization Harris Interactive, and Paypal. Failure to consider the data sources and the industry you're in, said the Red Cross's Banafsheh Ghassemi. Her example was the period just after Hurricane Irene, when analyzing social media sentiment would find it negative. "It took everyday disaster language as negative," she said. In addition, because the Red Cross's constituency is primarily older, social media are less indicative than emails and call center records. For many organizations, she added, social media tend to skew negative.

Earlier this year, Harris Interactive's Carol Haney, who has had to kill projects when they failed to produce sufficiently accurate results for the client, told a conference, "Sentiment analysis is the snake oil of 2011." Now, she said, "I believe it's still true to some extent. The customer has a commercial need for a dial pointing at a number - but that's not really what's being delivered. Over time you can see trends and significant change in sentiment, and when that happens I feel we're returning value to a customer because it's not something they received before and it's directionally accurate and giving information." But very small changes over short time scales are an unreliable basis for making decisions.

"The difficulty in social media analytics is you need a good idea of the questions you're asking to get good results," says Shlomo Argamon, whose research work seems to raise more questions than answers. Look at companies that claim to measure influence. "What is influence? How do you know you're measuring that or to what it correlates in the real world?" he asks. Even the notion that you can classify texts into positive and negative is a "huge simplifying assumption".

Argamon has been working on technology to discern from written text the gender and age - and perhaps other characteristics - of the author, a joint effort with his former PhD student Ken Bloom. When he says this, I immediately want to test him with obscure texts.

Is this stuff more or less creepy than online behavioral advertising? Han-Sheong Lai explained that Paypal uses sentiment analysis to try to glean the exact level of frustration of the company's biggest clients when they threaten to close their accounts. How serious are they? How much effort should the company put into dissuading them? Meanwhile Verint's job is to analyze those "This call may be recorded" calls. Verint's tools turn speech to text, and create color voiceprint maps showing the emotional high points. Click and hear the anger.

"Technology alone is not the solution," said Philip Resnik, summing up the state of the art. But, "It supports human insight in ways that were not previously possible." His talk made me ask: if humans obfuscate their data - for example, by turning off geolocation - will this industry respond by finding ways to put it all back again so the data will be more useful?

"It will be an arms race," he agrees. "Like spam."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 4, 2011

The identity layer

This week, the UK government announced a scheme - Midata - under which consumers will be able to reclaim their personal information. The same day, the Centre for the Study of Financial Innovation assembled a group of experts to ask what the business model for online identification should be. And: whatever that model is, what the government's role should be. (For background, here's the previous such discussion.)

My eventual thought was that the government's role should be to set standards; it might or might not also be an identity services provider. The government's inclination now is to push this job to the private sector. That leaves the question of how to serve those who are not commercially interesting; at the CSFI meeting the Post Office seemed the obvious contender for both pragmatic and historical reasons.

As Mike Bracken writes in the Government Digital Service blog posting linked above, the notion of private identity providers is not new. But what he seems to assume is that what's needed is federated identity - that is, in Wikipedia's definition, a means for linking a person's electronic identity and attributes across multiple distinct systems. What I mean is a system in which one may have many limited identities that are sufficiently interoperable that you can choose which to use at the point of entry to a given system. We already have something like this on many blogs, where commenters may be offered a choice of logging in via Google, OpenID, or simply posting a name and URL.

The government gateway circa Year 2000 offered a choice: getting an identity certificate required payment of £50 to, if I remember correctly, Experian or Equifax, or other companies whose interest in preserving personal privacy is hard to credit. The CSFI meeting also mentioned tScheme - an industry consortium to provide trust services. Outside of relatively small niches it's made little impact. Similarly, fifteen years ago, the government intended, as part of implementing key escrow for strong cryptography, to create a network of trusted third parties that it would license and, by implication, control. The intention was that the TTPs should be folks that everyone trusts - like banks. Hilarious, we said *then*. Moving on.

In between then and now, the government also mooted a completely centralized identity scheme - that is, the late, unlamented ID card. Meanwhile, we've seen the growth of a set of competing American/global businesses that would all like to be *the* consumer identity gateway and that managed to steal first-mover advantage from existing financial institutions. Facebook, Google, and Paypal are the three most obvious. Microsoft had hopes, perhaps too early, when in 1999 it created Passport (now Windows Live ID). More recently, it was the home for Kim Cameron's efforts to reshape online identity via the company's now-cancelled CardSpace, and Brendon Lynch's adoption of U-Prove, based on Stefan Brands' technology. U-Prove is now being piloted in various EU-wide projects. There are probably lots of other organizations that would like to get in on such a scheme, if only because of the data and linkages a federated system would grant them. Credit card companies, for example. Some combination of mobile phone manufacturers, mobile network operators, and telcos. Various medical outfits, perhaps.

An identity layer that gives fair and reasonable access to a variety of players who jointly provide competition and consumer choice seems like a reasonable goal. But it's not clear that this is what either the UK's distastefully spelled "Midata" or the US's NSTIC (which attracted similar concerns when first announced) has in mind. What "federated identity" sounds like is the convenience of "single sign-on", which is great if you're working in a company and need to use dozens of legacy systems. When you're talking about identity verification for every type of transaction you do in your entire life, however, a single gateway is a single point of failure and, as Stephan Engberg, founder of the Danish company Priway, has often said, a single point of control. It's the Facebook cross-all-the-streams approach, embedded everywhere. Engberg points to a discussion paper inspired by two workshops he facilitated for the Danish National IT and Telecom Agency (NITA) in late 2010 that covers many of these issues.

Engberg, who describes himself as a "purist" when it comes to individual sovereignty, says the only valid privacy-protecting approach is to ensure that each time you go online on each device you start a new session that is completely isolated from all previous sessions and then have the choice of sharing whatever information you want in the transaction at hand. The EU's LinkSmart project, which Engberg was part of, created middleware to do precisely that. As sensors and RFID chips spread along with IPv6, which can give each of them its own IP address, linkages across all parts of our lives will become easier and easier, he argues.

We've seen often enough that people will choose convenience over complexity. What we don't know is what kind of technology will emerge to help us in this case. The devil, as so often, will be in the details.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 28, 2011

Crypto: the revenge

I recently had occasion to try out Gnu Privacy Guard, the Free Software Foundation's version of PGP, Phil Zimmermann's legendary Pretty Good Privacy software. It was the first time I'd encrypted an email message since about 1995, and I was both pleasantly surprised and dismayed.

First, the good. Public key cryptography is now implemented exactly the way it should have been all along: once you've installed it and generated a keypair, encrypting a message is ticking a box or picking a menu item inside your email software. Even key management is handled by a comprehensible, well-designed graphical interface. Several generations of hard work have created this and also ensured that the various versions of PGP, OpenPGP, and GPG are interoperable, so you don't have to worry about who's using what. Installation was straightforward and the documentation is good.

Now, the bad. That's where the usability stops. There are so many details you can get wrong to mess the whole thing up that if this stuff were a form of contraception desperate parents would be giving babies away on street corners.

Item: the subject line doesn't get encrypted. There is nothing you can do about this except put a lot of thought into devising a subject line that will compel people to read the message but that simultaneously does not reveal anything of value to anyone monitoring your email. That's a neat trick.

Item: watch out for attachments, which are easily accidentally sent in the clear; you need to encrypt them separately before bundling them into the message.
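In practice that means a separate encryption pass before the file goes anywhere near the mail client - for instance by calling the gpg command-line tool from a script. A minimal sketch, assuming GnuPG is installed and the recipient's public key is already on your keyring; the address and filename are placeholders, not real values.

```python
import subprocess

# Encrypt the attachment itself before it touches the mail client.
# Assumes GnuPG is installed; "alice@example.org" and "report.pdf" are
# placeholders for a key already on the keyring and the file to protect.
subprocess.run(
    ["gpg", "--encrypt", "--armor", "--recipient", "alice@example.org", "report.pdf"],
    check=True,
)
# This writes report.pdf.asc (ASCII-armored ciphertext); attach that instead.
# Note that the filename, like the subject line, still travels in the clear.
```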

Item: while there is a nifty GPG plug-in for Thunderbird - Enigmail - Outlook, being commercial software, is less easily supported. GPG's GpgOL module works only with Outlook 2003 (SP2 and above) and 2007, and not on 64-bit Windows. The problem is that it's hard enough to get people to change *one* habit, let alone several.

Item: lacking appropriate browser plug-ins, you also have to tell them to stop using Webmail if the service they're used to won't support IMAP or POP3, because they won't be able to send encrypted mail or read what others send them over the Web.

Let's say you're running a field station in a hostile area. You can likely get users to persevere despite these points by telling them that this is their work system, for use in the field. Most people will put up with some inconvenience if they're being paid to do so and/or it's temporary and/or you scare them sufficiently. But that strategy violates one of the basic principles of crypto-culture, which is that everyone should be encrypting everything so that sensitive traffic doesn't stand out. They are of course completely right, just as they were in 1993, when the big political battles over crypto were being fought.

Item: when you connect to a public keyserver to check or download someone's key, that connection is in the clear, so anyone surveilling you can see who you intend to communicate with.

Item: you're still at risk with regard to traffic data. This is what RIPA and data retention are all about. What's more significant? Being able to read a message that says, "Can you buy milk?" or the information that the sender and receiver of that message correspond 20 times a day? Traffic data reveals the pattern of personal relationships; that's why law enforcement agencies want it. PGP/GPG won't hide that for you; instead, you'll need to set up a proxy or use Tor to mix up your traffic and also protect your Web browsing, instant messaging, and other online activities. As Tor's own people admit, it slows performance, although they're working on it (PDF).

All this says we're still a long way from a system that the mass market will use. And that's a damn shame, because we genuinely need secure communications. Like a lot of people in the mid-1990s, I'd have thought that by now encrypted communications would be the norm. And yet not only is SSL, which protects personal details in transit to ecommerce and financial services sites, the only really mass-market use, but it's in trouble. Partly, this is because of the technical issues raised in the linked article - too many certification authorities, too many points of failure - but it's also partly because hardly anyone understands how to check that a certificate is valid or knows what to do when warnings pop up that it's expired or issued for a different name. The underlying problem is that many of the people who like crypto see it as both a cool technology and a cause. For most of us, it's just more fussy software. The big advance since the mid 1990s is that at least now the *developers* will use it.

Maybe mobile phones will be the thing that makes crypto work the way it should. See, for example, Dave Birch's current thinking on the future of identity. We've been arguing about how to build an identity infrastructure for 20 years now. Crypto is clearly the mechanism. But we still haven't solved the how.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 29, 2011

Name check

How do you clean a database? The traditional way - which I still experience from time to time from journalist directories - is that some poor schnook sits in an office and calls everyone on the list, checking each detail. It's an immensely tedious job, I'm sure, but it's a living.

The new, much cheaper method is to motivate the people in the database to do it themselves. A government can pass a law and pay benefits. Amazon expects the desire to receive the goods people have paid for to be sufficient. For a social network it's a little harder, yet Facebook has managed to get 750 million users to upload varying amounts of information. Google hopes people will do the same with Google+.

The emotional connections people make on social networks obscure their basic nature as databases. When you think of them in that light, and you remember that Google's chief source of income is advertising, suddenly Google's culturally dysfunctional decision to require real names on Google+ makes some sense. For an advertising company, a fuller, cleaner database is more valuable and functional. Google's engineers most likely do not think in terms of improving the company's ability to serve tightly targeted ads - but I'd bet the company's accountants and strategists do. The justification - that online anonymity fosters bad behavior - is likely a relatively minor consideration.

Yet it's the one getting the attention, despite the fact that many people seem confused about the difference between pseudonymity, anonymity, and throwaway identity. In the reputation-based economy the Net thrives on, this difference matters.

The best-known form of pseudonymity is the stage name, essentially a form of branding for actors, musicians, writers, and artists, who may have any of a number of motives for keeping their professional lives separate from their personal lives: privacy for themselves, their work mates, or their families, or greater marketability. More subtly, if you have a part-time artistic career and a full-time day job you may not want the two to mix: will people take you seriously as an academic psychologist if they know you're also a folksinger? All of those reasons for choosing a pseudonym apply on the Net, where everything is a somewhat public performance. Given the harassment some female bloggers report, is it any wonder they might feel safer using a pseudonym?

The important characteristic of pseudonyms, which they share with "real names", is persistence. When you first encounter someone like GrrlScientist, you have no idea whether to trust her knowledge and expertise. But after more than ten years of blogging, that name is a known quantity. As GrrlScientist writes about Google's shutting down her account, it is her "real-enough" name by any reasonable standard. What's missing is the link to a portion of her identity - the name on her tax return, or the one her mother calls her. So what?

Anonymity has long been contentious on the Net; the EU has often considered whether and how to ban it. At the moment, the driving justification seems to be accountability, in the hope that we can stop people from behaving like malicious morons, the phenomenon I like to call the Benidorm syndrome.

There is no question that people write horrible things in blog and news site comments pages, conduct flame wars, and engage in cyber bullying and harassment. But that behaviour is not limited to venues where they communicate solely with strangers; every mailing list, even among workmates, has flame wars. Studies have shown that the cyber versions of bullying and harassment, like their offline counterparts, are most often perpetrated by people you know.

The more important downside of anonymity is that it enables people to hide, not their identity but their interests. Behind the shield, a company can trash its competitors and those whose work has been criticized can make their defense look more robust by pretending to be disinterested third parties.

Against that is the upside. Anonymity protects whistleblowers acting in the public interest, and protesters defying an authoritarian regime.

We have little data to balance these competing interests. One bit we do have comes from an experiment with anonymity conducted years ago on the WELL, which otherwise has insisted on verifying every subscriber throughout its history. The lesson they learned, its conferencing manager, Gail Williams, told me once, was that many people wanted anonymity for themselves - but opposed it for others. I suspect this principle has very wide applicability, and it's why the US might, say, oppose anonymity for Bradley Manning but welcome it for Egyptian protesters.

Google is already modifying the terms of what is after all still a trial service. But the underlying concern will not go away. Google has long had a way to link Gmail addresses to behavioral data collected from those using its search engine, docs, and other services. It has always had some ability to perform traffic analysis on Gmail users' communications; now it can see explicit links between those pools of data and, increasingly, tie them to offline identities. This is potentially far more powerful than anything Facebook can currently offer. And unlike government databases, it's nice and clean, and cheap to maintain.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 22, 2011

Face to face

When, six weeks or so back, Facebook implemented facial recognition without asking anyone much in advance, Tim O'Reilly expressed the opinion that it is impossible to turn back the clock and pretend that facial recognition doesn't exist or can be stopped. We need, he said, to stop trying to control the existence of these technologies and instead concentrate on controlling the uses to which collected data might be put.

Unless we're prepared to ban face recognition technology outright, having it available in consumer-facing services is a good way to get society to face up to the way we live now. Then the real work begins, to ask what new social norms we need to establish for the world as it is, rather than as it used to be.

This reminds me of the argument that we should be teaching creationism in schools in order to teach kids critical thinking: it's not the only, or even best, way to achieve the object. If the goal is public debate about technology and privacy, Facebook isn't a good choice to conduct it.

The problem with facial recognition, unlike a lot of other technologies, is that it's retroactive, like a compromised private cryptography key. Once the key is known, you haven't just unlocked the few messages you're interested in but everything ever encrypted with that key. Accurate facial recognition, suddenly deployed, means that the passers-by in holiday photographs, CCTV images, and old TV footage of demonstrations are all much more easily matched to today's tagged, identified social media sources. It's a step change, and it's happening very quickly after a long period of doesn't-work-as-hyped. So what was a low-to-moderate privacy risk five years ago is suddenly much higher risk - and one that can't be withdrawn with any confidence by deleting your account.

There's a second analogy here between what's happening with personal data and what's happening to small businesses with respect to hacking and financial crime. "That's where the money is," the bank robber Willie Sutton explained when asked why he robbed banks. But banks are well defended by large security departments. Much simpler to target weaker links, the small businesses whose money is actually being stolen. These folks do not have security departments and have not yet assimilated Benjamin Woolley's 1990s observation that cyberspace is where your money is. The democratization of financial crime has a more direct personal impact because the targets are closer to home: municipalities, local shops, churches, all more geared to protecting cash registers and collection plates than to securing computers, routers, and point-of-sale systems.

The analogy to personal data is that until relatively recently most discussions of privacy invasion similarly focused on celebrities. Today, most people can be studied as easily as famous, well-documented people if something happens to make them interesting: the democratization of celebrity. And there are real consequences. Canada, for example, is doing much more digging at the border, banning entry based on long-ago misdemeanors. We can warn today's teens that raiding a nearby school may someday limit their freedom to travel; but today's 40-somethings can't make an informed choice retroactively.

Changing this would require the US to decide at a national level to delete such data; we would have to trust them to do it; and other nations would have to agree to do the same. But the motivation is not there. Judith Rauhofer, at the online behavioral advertising workshop she organised a couple of weeks ago, addressed exactly this point when she noted that increasingly the mantra of governments bent on surveillance is, "This data exists. It would be silly not to use it."

The corollary, and the reason O'Reilly is not entirely wrong, is that governments will also say, "This *technology* exists. It would be silly not to use it." We can ban social networks from deploying new technologies, but we will still be stuck with them when it comes to governments and law enforcement. In this, government and business interests align perfectly.

So what, then? Do we stop posting anything online on the basis of the old spy motto "Never volunteer information", thereby ending our social participation? Do we ban the technology (which does nothing to stop the collection of the data)? Do we ban collecting the data (which does nothing to stop the technology)? Do we ban both and hope that all the actors are honest brokers rather than shifty folks trading our data behind our backs? What happens if thieves figure out how to use online photographs to break into systems protected by facial recognition?

One common suggestion is that social norms should change in the direction of greater tolerance. That may happen in some aspects, although Anders Sandberg has an interesting argument that transparency may in fact make people more judgmental. But if the problem of making people perfect were so easily solved we wouldn't have spent thousands of years on it with very little progress.

I don't like the answer "It's here, deal with it." I'm sure we can do better than that. But these are genuinely tough questions. The start, I think, has to be building as much user control into technology design (and its defaults) as we can. That's going to require a lot of education, especially in Silicon Valley.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 15, 2011

Dirty digging

The late, great Molly Ivins warns (in Molly Ivins Can't Say That, Can She?) about the risk to journalists of becoming "power groupies" who identify more with the people they cover than with their readers. In the culture being exposed by the escalating phone hacking scandals the opposite happened: politicians and police became "publicity groupies" who feared tabloid wrath to such an extent that they identified with the interests of press barons more than those of the constituents they are sworn to protect. I put the apparent inconsistency between politicians' former acquiescence and their current baying for blood down to Stockholm syndrome: this is what happens when you hold people hostage through fear and intimidation for a few decades. When they can break free, oh, do they want revenge.

The consequences are many and varied, and won't be entirely clear for a decade or two. But surely one casualty must have been the balanced view of copyright frequently argued for in this column. Murdoch's media interests are broad-ranging. What kind of copyright regime do you suppose he'd like?

But the desire for revenge is a really bad way to plan the future, as I said (briefly) on Monday at the Westminster Skeptics.

For one thing, it's clearly wrong to focus on News International as if Rupert Murdoch and his hired help were the only bad apples. In the 2006 report What price privacy now?, the Information Commissioner listed 30 publications caught in the illegal trade in confidential information. News of the World was only fifth; number one, by a considerable way, was the Daily Mail (the Observer was number nine). The ICO wanted jail sentences for those convicted of trading in data illegally, and called on private investigators' professional bodies to revoke or refuse licenses to PIs who breach the rules. Five years later, these are still good proposals.

Changing the culture of the press is another matter.

When I first began visiting Britain in the late 1970s, I found the tabloid press absolutely staggering. I began asking the people I met how the papers could do it.

"That's because *we* have a free press," I was told in multiple locations around the country. "Unlike the US." This was only a few years after The Washington Post backed Bob Woodward and Carl Bernstein's investigation of Watergate, so it was doubly baffling.

Tom Stoppard's 1978 play Night and Day explained a lot. It dropped competing British journalists into an escalating conflict in a fictitious African country. Over the course of the play, Stoppard's characters both attack and defend the tabloid culture.

"Junk journalism is the evidence of a society that has got at least one thing right, that there should be nobody with power to dictate where responsible journalism begins," says the naïve and idealistic new journalist on the block.

"The populace and the popular press. What a grubby symbiosis it is," complains the play's only female character, whose second marriage - "sex, money, and a title, and the parrots didn't harm it, either" - had been tabloid fodder.

The standards of that time now seem almost quaint. In the movie Starsuckers, filmmaker Chris Atkins fed fabricated celebrity stories to a range of tabloids. All were published. That documentary also showed illegal methods of obtaining information in action - in 2009, right around the time the Press Complaints Commission was publishing a report concluding that "there is no evidence that the practice of phone message tapping is ongoing".

Someone on Monday asked why US newspapers are better behaved despite First Amendment protection and less constraint by onerous libel laws. My best guess is fear of lawsuits. Conversely, Time magazine argues that Britain's libel laws have encouraged illegal information gathering: publication requires indisputable evidence. I'm not completely convinced: the libel laws are not new, and economics and new media are forcing change on press culture.

A lot of dangers lurk in the calls for greater press regulation. Phone hacking is illegal. Breaking into other people's computers is illegal. Enforce those laws. Send those responsible to jail. That is likely to be a better deterrent than any regulator could manage.

It is extremely hard to devise press regulations that don't enable cover-ups. For example, on Wednesday's Newsnight, the MP Louise Mensch, a member of the DCMS committee conducting the hearings, called for a requirement that politicians disclose all meetings with the press. I get it: expose too-cosy relationships. But whistleblowers depend on confidentiality, and the last thing we want is for politicians to become as difficult to access as tennis stars and have their contact with the press limited to formal press conferences.

Two other lessons can be derived from the last couple of weeks. The first is that you cannot assume that confidential data can be protected simply by access rules. The second is the importance of alternatives to commercial, corporate journalism. Tom Watson has criticized the BBC for not taking the phone hacking allegations seriously. But it's no accident that the trust-owned Guardian was the organization willing to take on the tabloids. There's a lesson there for the US, as the FBI and others prepare to investigate Murdoch and News Corp: keep funding PBS.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 8, 2011

The grey hour

There is a fundamental conundrum that goes like this. Users want free information services on the Web. Advertisers will support those services if users will pay in personal data rather than money. Are privacy advocates spoiling a happy agreement or expressing a widely held concern that just hasn't found expression yet? Is it paternalistic and patronizing to say that the man on the Clapham omnibus doesn't understand the value of what he's giving up? Is it an expression of faith in human nature to say that on the contrary, people on the street are smart, and should be trusted to make informed choices in an area where even the experts aren't sure what the choices mean? Or does allowing advertisers free rein mean the Internet will become a highly distorted, discriminatory, immersive space where the most valuable people get the best offers in everything from health to politics?

None of those questions are straw men. The middle two are the extreme end of the industry point of view as presented at the Online Behavioral Advertising Workshop sponsored by the University of Edinburgh this week. That extreme shouldn't be ignored; Kimon Zorbas from the Internet Advertising Bureau, who voiced those views, also genuinely believes that regulating behavioral advertising is a threat to European industry. Can you prove him wrong? If you're a politician intent on reelection, hear that pitch, and can't document harm, do you dare to risk it?

At the other extreme end are the views of Jeff Chester, from the Center for Digital Democracy, who laid out his view of the future both here and at CFP a few weeks ago. If you read the reports the advertising industry produces for its prospective customers, they're full of neuroscience and eyeball tracking. Eventually, these practices will lead, he argues, to a highly discriminatory society: the most "valuable" people will get the best offers - not just in free tickets to sporting events but the best access to financial and health services. Online advertising contributed to the subprime loan crisis and the obesity crisis, he said. You want harm?

It's hard to assess the reality of Chester's argument. I trust his research into the documents in which advertising companies pitch their prospective customers. What isn't clear is whether the neuroscience these companies claim actually works. Certainly, one participant here says real neuroscientists heap scorn on the whole idea - and I am old enough to remember the mythology surrounding subliminal advertising.

Accordingly, the discussion here seems to me less of a single spectrum and more like a triangle, with the defenders of online behavioural advertising at one point, Chester and his neuroscience at another, and perhaps Judith Rauhofer, the workshop's organizer, at a third, with a lot of messy confusion in the middle. Upcoming laws, such as the revision of the EU ePrivacy Directive and various other regulatory efforts, will have to create some consensual order out of this triangular chaos.

The fourth episode of Joss Whedon's TV series Dollhouse, "The Gray Hour", locked that week's characters inside a vault, giving them an hour - the time it takes the security system to reboot - to pull off their theft. Is this online behavioral advertising's grey hour? Its opportunity to get ahead before we realize what's going on?

A persistent issue is definitely technology design.

One of Rauhofer's main points is that the latest mantra is, "This data exists, it would be silly not to take advantage of it." This is her answer to one of those middle positions, that we should regulate not the collection of data but only its use. Her view makes sense to me: no one can abuse data that has not been collected. And what does a privacy policy mean when the company that is actually collecting the data and compiling profiles is completely hidden?

One help would be teaching computer science students ethics and responsible data practices. The science fiction writer Charlie Stross noted the other day that the average age of entrepreneurs in the US is roughly ten years younger than in the EU. The reason: health insurance. Isn't it possible that starting up at a more mature age leads to a different approach to the social impact of what you're selling?

No one approach will solve this problem within the time we have to solve it. On the technology side, defaults matter. The "software choice architect", in researcher Chris Soghoian's phrase, is rarely the software developer; more usually it's the legal or marketing department. Of the biggest browser manufacturers, the three most funded by advertising not-so-mysteriously have the least privacy-friendly default settings. Advertising is becoming an arms race: first cookies, then Flash cookies, now online behavioral advertising, browser fingerprinting, geolocation, comprehensive profiling.

The law also matters. Peter Hustinx, lecturing last night, believes existing principles are right; they just need stronger enforcement and better application.

Consumer education would help - but for that to be effective we need far greater transparency from all these - largely American - companies.

What harm can you show has happened? Zorbas challenged. Rauhofer's reply: you do not have to prove harm when your house is bugged and constantly wiretapped. "That it's happening is the harm."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 1, 2011

Free speech, not data

Congress shall make no law...abridging the freedom of speech...

Is data mining speech? This week, in issuing its ruling in Sorrell v. IMS Health, the Supreme Court of the United States took the view that it can be. The majority (6-3) opinion struck down a Vermont law that prohibited drug companies from mining physicians' prescription data for marketing purposes. While the ruling of course has no legal effect outside the US, the primary issue in the case - the use of aggregated patient data - is being considered in many countries, including the UK, and the key technical debate is relevant everywhere.

IMS Health is a new species of medical organization: it collects aggregated medical data and mines it for client pharmaceutical companies, who use the results to determine their strategies for marketing to doctors. Vermont's goal was to save money by encouraging doctors to prescribe lower-cost generic medications. The pharmaceutical companies know, however, that marketing to doctors is effective. IMS Health accordingly sued to get the law struck down, claiming that it abrogated the company's free speech rights. NGOs from the digital - EFF and EPIC - to the not-so-digital - AARP - along with a host of medical organizations, filed amicus briefs arguing that patient information is confidential data that has never before been considered to fall within "free speech". The medical groups were concerned about the threat to trust between doctors and patients; EPIC and EFF added the more technical objection that the deidentification measures taken by IMS Health are inadequate.

At first glance, the SCOTUS ruling is pretty shocking. Why can't a state protect its population's privacy by limiting access to prescription data? How do marketers have free speech?

The court's objection - or rather, the majority opinion - was that the Vermont law is selective: it prohibits the particular use of this data for marketing but not other uses. That, to the six-judge majority, made the law censorship. The three remaining judges dissented, partly on privacy grounds, but mostly on the well-established basis that commercial speech typically enjoys a lower level of First Amendment protection than non-commercial speech.

When you are talking about traditional speech, censorship means selectively banning a type or source of content. Let's take Usenet in the early 1990s as an example. When spam became a problem, a group of community-minded volunteers devised cancellation practices that took note of this principle and defined spam according to the behavior involved in posting it. Deciding whether a particular posting was spam required no subjective judgments about who posted the message or whether it was a commercial ad. Instead, postings were scored against a set of published, objective criteria: x number of copies, posted to y number of newsgroups, over z amount of time; or off-topic for that particular newsgroup; or a binary file posted to a text-only newsgroup. In the Vermont case, if you can accept the argument that data mining is speech, as SCOTUS did, then the various uses of the data are content, and therefore a law that bans only one of many possible uses, or bans use by specified parties, is censorship.
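As an illustration of how mechanical those criteria were, here is a sketch of a behaviour-based score in the spirit of the old Usenet cancellation indexes; the weights and threshold are invented for the example, and the historical rules were negotiated in public and rather more detailed.

```python
import math

def spam_index(crosspost_counts):
    """Behaviour-based score in the spirit of the Usenet cancellation indexes:
    given the number of newsgroups each substantively identical copy was posted
    to, score how widely the posting was sprayed - with no judgment about who
    posted it or whether it was a commercial ad."""
    return sum(math.sqrt(n) for n in crosspost_counts)

# Invented threshold for illustration; the real indexes used published,
# community-agreed cutoffs over a fixed time window.
THRESHOLD = 20.0

copies = [9] * 10  # ten identical copies, each crossposted to nine newsgroups
score = spam_index(copies)
print(score, score >= THRESHOLD)  # 30.0 True
```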

The decision still seems intuitively wrong to me, as it apparently also did to the three remaining judges, who wrote a dissenting opinion that instead viewed the Vermont law as an attempt to regulate commercial activity, something that has never been covered by the First Amendment.

But note this: the concern for patient privacy that animated much of the interest in this case was only a bystander (which must surely have pleased the plaintiffs).

Obscured by this case, however, is the technical question that should be at the heart of such disputes (several other states have passed Vermont-style laws): how effectively can data be deidentified? If it can be easily reidentified and linked to specific patients, making it available for data mining ends medical privacy. If it can be effectively anonymized, then the objections go away.

At this year's Computers, Freedom, and Privacy there was some discussion of this issue; an IMS Health representative and several of the experts EPIC cited in its brief were present and disagreeing. Khaled El Emam, from the University of Ottawa, filed a brief (PDF) opposing EPIC's analysis; Latanya Sweeney, who did the seminal work in this area in the early 2000s, followed with a rebuttal. From these, my non-expert conclusion is that just as you cannot trust today's secure cryptographic systems to remain unbreakable in the future as computing power continues to increase in speed and decrease in price, you cannot trust today's deidentification to remain robust against the increasing masses of data available for matching against it.
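A minimal sketch of why that conclusion holds, with invented records: a "deidentified" table that keeps quasi-identifiers such as ZIP code, birth date, and sex can be re-linked to named individuals with nothing fancier than a join against a public list - essentially the linkage attack Sweeney's work demonstrated.

```python
# Invented data: a "deidentified" medical table that keeps quasi-identifiers,
# and a public list (think voter roll) carrying the same fields plus names.
deidentified = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "60614", "dob": "1980-02-02", "sex": "M", "diagnosis": "asthma"},
]
public_list = [
    {"name": "J. Doe", "zip": "02138", "dob": "1945-07-31", "sex": "F"},
    {"name": "R. Roe", "zip": "60614", "dob": "1980-02-02", "sex": "M"},
]

KEYS = ("zip", "dob", "sex")
index = {tuple(p[k] for k in KEYS): p["name"] for p in public_list}

for record in deidentified:
    name = index.get(tuple(record[k] for k in KEYS))
    if name:
        print(f"{name}: {record['diagnosis']}")  # re-identified by the join alone
```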

But it seems the technical and privacy issues raised by the Vermont case are yet to be decided. Vermont is free to try again to frame a law that has the effect the state wants but takes a different approach. As for the future of free speech, it seems clear that it will encompass many technological artefacts still being invented - and that it will be quite a fight to keep it protecting individuals instead of, increasingly, commercial enterprises.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 14, 2011

Untrusted systems

Why does no one trust patients?

On the TV series House, the eponymous sort-of-hero has a simple answer: "Everybody lies." Because he believes this, and because no one appears able to stop him, he sends his minions to search his patients' homes hoping they will find clues to the obscure ailments he's trying to diagnose.

Today's Health Privacy Summit in Washington, DC, the zeroth day of this year's Computers, Freedom, and Privacy conference, pulled together, in the best CFP tradition, speakers from all aspects of health care privacy. Yet many of them agreed on one thing: health data is complex, decisions about health data are complex, and it's demanding too much of patients to expect them to navigate these complex waters. And this is in the US, where to a much larger extent than in Europe the patient is the customer. In the UK, by contrast, the customer is really the GP, and the patient has far less direct control. (Just try looking up a specialist in the phone book.)

The reality is, however, as several speakers pointed out, that doctors are not going to surrender control of their data either. Both physicians and patients have an interest in medical records. Patients need to know about their care; doctors need records both for patient care and for billing and administrative purposes. But beyond these two parties are many other interests who would like access to the intimate information doctors and patients originate: insurers, researchers, marketers, governments, epidemiologists. Yet no one really trusts patients to agree to hand over their data; if they did, these decisions would be a lot simpler. But if patients can't trust their doctor's confidentiality, they will avoid seeking health care until they're in a crisis. In some situations - say, cancer - that can end their lives much sooner than is necessary.

The loss of trust, said lawyer Jim Pyles, could bring on an insurance crisis, since the cost of electronic privacy breaches could be infinite, unlike the ability of insurers to insure those breaches. "If you cannot get insurance for these systems you cannot use them."

If this all (except for the insurance concerns) sounds familiar to UK folk, it's not surprising. As Ross Anderson pointed out, greatly to the Americans' surprise, the UK is way ahead on this particular debate. Nationalized medicine meant that discussions began in the UK as long ago as 1992.

One of Anderson's repeated points is that the notion of the electronic patient record has little to do with the day-to-day reality of patient care. Clinicians, particularly in emergency situations, want to look at the patient. As you want them to do: they might have the wrong record, but you know they haven't got the wrong patient.

"The record is not the patient," said Westley Clarke, and he was so right that this statement was repeated by several subsequent speakers.

One thing that apparently hasn't helped much is the Health Insurance Portability and Accountability Act, which one of the breakout sessions considered scrapping. Is HIPAA a failure or, as long-time Canadian privacy activist Stephanie Perrin would prefer it, a first step? The distinction is important: if HIPAA is seen as an expensive failure it might be scrapped and not replaced. First steps can be succeeded by further, better steps.

Perhaps the first of those should be another of Perrin's suggestions: a map of where your data goes, much like the way Barbara Garson's book Money Makes the World Go Around followed her bank deposit as it was loaned out across the world. Most of us would like to believe that what we tell our doctors remains cosily tucked away in their files. These days, not so much.

For more detail see Andy Oram's blog.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 10, 2011

The creepiness factor

"Facebook is creepy," said the person next to me in the pub on Tuesday night.

The woman across from us nodded in agreement and launched into an account of her latest foray onto the service. She had, she said, uploaded a batch of 15 photographs of herself and a friend. The system immediately tagged all of the photographs of the friend correctly. It then grouped the images of her and demanded to know, "Who is this?"

What was interesting about this particular conversation was that these people were not privacy advocates or techies; they were ordinary people just discovering their discomfort level. The sad thing is that Facebook will likely continue to get away with this sort of thing: it will say it's sorry, modify some privacy settings, and people will gradually get used to the convenience of having the system save them the work of tagging photographs.

In launching its facial recognition system, Facebook has done what many would have thought impossible: it has rolled out technology that just a few weeks ago *Google* thought was too creepy for prime time.

Wired UK has a set of instructions for turning tagging off. But underneath, the system will, I imagine, still recognize you. What records are kept of this underlying data and what mining the company may be able to do on them is, of course, not something we're told about.

Facebook has had to rein in new elements of its service so many times now - the Beacon advertising platform, the many revamps to its privacy settings - that the company's behavior is beginning to seem like a marketing strategy rather than a series of bungling missteps. The company can't be entirely privacy-deaf; it numbers among its staff the open rights advocate and former MP Richard Allan. Is it listening to its own people?

If it's a strategy it's not without antecedents. Google, for example, built its entire business without TV or print ads. Instead, every so often it would launch something so cool everyone wanted to use it that would get it more free coverage than it could ever have afforded to pay for. Is Facebook inverting this strategy by releasing projects it knows will cause widely covered controversy and then reining them back in only as far as the boundary of user complaints? Because these are smart people, and normally smart people learn from their own mistakes. But Zuckerberg, whose comments on online privacy have approached arrogance, is apparently justified, in that no matter what mistakes the company has made, its user base continues to grow. As long as business success is your metric, until masses of people resign in protest, he's golden. Especially when the IPO moment arrives, expected to be before April 2012.

The creepiness factor has so far done nothing to hurt its IPO prospects - which, in the absence of an actual IPO, seem to be rubbing off on the other social media companies going public. Pandora (net loss last quarter: $6.8 million) has even increased the number of shares on offer.

One thing that seems to be getting lost in the rush to buy shares - LinkedIn popped to over $100 on its first day, and has now settled back to $72 and change (for a Price/Earnings ratio of 1,076) - is that buying first-day shares isn't what it used to be. Even during the millennial technology bubble, buying shares at the launch of an IPO was approximately like joining a queue at midnight to buy the new Apple whizmo on the first day, even though you know you'll be able to get it cheaper and debugged in a couple of months. Anyone could have gotten much better prices on Amazon shares for some months after that first-day bonanza, for example (and either way, in the long term, you'd have profited handsomely).

Since then, however, a new game has arrived in town: private exchanges, where people who meet a few basic criteria for being able to afford to take risks, trade pre-IPO shares. The upshot is that even more of the best deals have already gone by the time a company goes public.

In no case is this clearer than the Groupon IPO, about which hardly anyone has anything good to say. Investors buying in would be the greater fools; a co-founder's past raises questions, and its business model is not sustainable.

Years ago, Roger Clarke predicted that the then brand-new concept of social networks would inevitably become data abusers simply because they had no other viable business model. As powerful as the temptation to do this has been while these companies have been growing, it seems clear the temptation can only become greater when they have public markets and shareholders to answer to. New technologies are going to exacerbate this: performing accurate facial recognition on user-uploaded photographs wasn't possible when the first pictures were being uploaded. What capabilities will these networks be able to deploy in the future to mine and match our data? And how much will they need to do it to keep their profits coming?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


June 3, 2011

A forgotten man and a bowl of Japanese goldfish

"I'm the forgotten man," Godfrey (William Powell) explains in the 1936 film My Man Godfrey.

Godfrey was speaking during the Great Depression, when prosperity was just around the corner ("Yes, it's been there a long time," says one of Godfrey's fellow city dump dwellers) but the reality for many people was unemployment, poverty, and a general sense that they had ceased to exist except, perhaps, as curiosities to be collected by the rich in a scavenger hunt. Today the rich in question would record their visit to the city dump in an increasingly drunken stream of Tweets and Facebook postings, and people in Nepal would be viewing photographs and video clips even if Godfrey didn't use a library computer to create his own Facebook page.

The EU's push for a right to be forgotten is a logical outgrowth of today's data protection principles, which revolve around the idea that you have rights over your data even when someone else has paid to collect it. EU law grants the right to inspect and correct the data held about us and to prevent its use in unwanted marketing. The idea that we should also have the right to delete data we ourselves have posted seems simple and fair, especially given the widely reported difficulty of leaving social networks.

But reality is complicated. Godfrey was fictional; take a real case, from Pennsylvania. A radiology trainee, unsure whether the radiologist she was shadowing was behaving inappropriately, sought a reality check from her sister, also a health care worker, before reporting the incident. The sister told a co-worker about the call, who told others, and someone in that widening ripple posted the story on Facebook, from where it was reported back to the student's program director. Result: the not-on-Facebook trainee was expelled on the grounds that she had discussed a confidential issue on a cell phone. Lawsuit.

So many things had to go wrong for that story to rebound and hit that trainee in the ass. No one - except presumably the radiologist under scrutiny - did anything actually wrong, though the incident illustrates the point that information travels farther and faster than people think. Preventing this kind of thing is hard. No contract can bar unrelated, third-hand gossipers from posting information that comes their way. There's nothing here for libel law to bite on. The worst you can say is that the sister was indiscreet and that the program administrator misunderstood and overreacted. But the key point for our purposes here is: which data belongs to whom?

Lilian Edwards has a nice analysis of the conflict between privacy and freedom of expression that is raised by the right to forget. The comments and photographs I post seem to me to belong to me, though they may be about a dozen other people. But on a social network the friends in your circle are also stakeholders in what you post; you become part of their library. Howard Rheingold, writing in his 1993 book The Virtual Community, noted the ripped and gaping fabric of conversations on The Well when early member Blair Newman deleted all his messages. Photographs and today's far more pervasive, faster-paced technology make such holes deeper and multi-dimensional. How far do we need to go in granting deletion rights?

The short history of the Net suggests that complete withdrawal is roughly impossible. In the 1980s, Usenet was thought of as an ephemeral medium. People posted in the - they thought - safe assumption that anything they wrote would expire off the world's servers in a couple of weeks. And as long as everyone read live online that was probably true. But along came offline readers and people with large hard disks and Deja News, and Usenet messages written in 1981 with no thought of any future context are a few search terms away.

"It's a mistake to only have this conversation about absolutes," said Google's Alma Whitten at the Big Tent event two weeks ago, arguing that it's impossible to delete every scrap about anyone. Whitten favors a "reasonable effort" approach and a user dashboard to enable that so users can see and control the data that's being held. But we all know the problem with market forces: it is unlikely that any of the large corporations will come up with really effective tools unless forced. For one thing, there is a cultural clash here between the EU and the US, the home of many of these companies. But more important, it's just not in their interests to enable deletion: mining that data is how those companies make a living and in return we get free stuff.

Finding the right balance between freedom of expression (my right to post about my own life) and privacy, including the right to delete, will require a mix of answers as complex as the questions: technology (such as William Heath's Mydex), community standards, and, yes, law, applied carefully. We don't want to replace Britain's chilling libel laws with a DMCA-like deletion law.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

May 20, 2011

The world we thought we lived in

If one thing is more annoying than another, it's the fantasy technology on display in so many TV shows. "Enhance that for me!" barks an investigator. And, obediently, his subordinate geek/squint/nerd pushes a button or few, a line washes over the blurry image on screen, and now he can read the maker's mark on a pill in the hand of the target subject that was captured by a distant CCTV camera. The show 24 ended for me 15 minutes into season one, episode one, when Kiefer Sutherland's Jack Bauer, trying to find his missing daughter, thrust a piece of paper at an underling and shouted, "Get me all the Internet passwords associated with that telephone number!" Um...

But time has moved on, and screenwriters are more likely to have spent their formative years online and playing computer games, and so we have arrived at The Good Wife, which gloriously wrapped up its second season on Tuesday night (in the US; in the UK the season is still winding to a close on Channel 4). The show is a lot of things: a character study of an archetypal humiliated politician's wife (Alicia Florrick, played by Julianna Margulies) who rebuilds her life after her husband's betrayal and corruption scandal; a legal drama full of moral murk and quirky judges (Carob chip?); a political drama; and, not least, a romantic comedy. The show is full of interesting, layered men and great, great women - some of them mature, powerful, sexy, brilliant women. It is also the smartest show on television when it comes to life in the time of rapid technological change.

When it was good, in its first season, Gossip Girl cleverly combined high school mean girls with the citizen reportage of TMZ to produce a world in which everyone spied on everyone else by sending tips, photos, and rumors to a Web site, which picked the most damaging moment to publish them and blast them to everyone's mobile phones.

The Good Wife goes further to exploit the fact that most of us, especially those old enough to remember life before CCTV, go about our lives forgetting that we leave a trail everywhere. Some are, of course, old staples of investigative dramas: phone records, voice messages, ballistics, and the results of a good, old-fashioned break-in-and-search. But some are myth-busting.

One case (S2e15, "Silver Bullet") hinges on the difference between the compressed, digitized video copy and the original analog video footage: dropped frames change everything. A much earlier case (S1e06, "Conjugal") hinges on eyewitness testimony; despite a slightly too-pat resolution (I suspect now, with more confidence, it might have been handled differently), the show does a textbook job of demonstrating the flaws in human memory and their application to police line-ups. In a third case (S1e17, "Heart"), a man faces the loss of his medical insurance because of a single photograph posted to Facebook showing him smoking a cigarette. And the disgraced husband's (Peter Florrick, played by Chris Noth) attempt to clear his own name comes down to a fancy bit of investigative work capped by camera footage from an ATM in the Cayman Islands that the litigator is barely technically able to display in court. As entertaining demonstrations and dramatizations of the stuff net.wars talks about every week and the way technology can be both good and bad - Alicia finds romance in a phone tap! - these could hardly be better. The stuffed lion speaker phone (S2e19, "Wrongful Termination") is just a very satisfying cherry topping of technically clever hilarity.

But there's yet another layer, surrounding the season two campaign mounted to get Florrick elected back into office as State's Attorney: the ways that technology undermines as well as assists today's candidates.

"Do you know what a tracker is?" Peter's campaign manager (Eli Gold, played by Alan Cumming) asks Alicia (S2e01, "Taking Control"). Answer: in this time of cellphones and YouTube, unpaid political operatives follow opposing candidates' family and friends to provoke and then publish anything that might hurt or embarrass the opponent. So now: Peter's daughter (Makenzie Vega) is captured praising his opponent and ham-fistedly trying to defend her father's transgressions ("One prostitute!"). His professor brother-in-law's (Dallas Roberts) in-class joke that the candidate hates gays is live-streamed over the Internet. Peter's son (Graham Phillips) and a manipulative girlfriend (Dreama Walker), unknown to Eli, create embarrassing, fake Facebook pages in the name of the opponent's son. Peter's biggest fan decides to (he thinks) help by posting lame YouTube videos apparently designed to alienate the very voters Eli's polls tell him to attract. (He's going to post one a week; isn't Eli lucky?) Polling is old hat, as are rumors leaked to newspaper reporters; but today's news cycle is 20 minutes and can we have a quote from the candidate? No wonder Eli spends so much time choking and throwing stuff.

All of this fits together because the underlying theme of all parts of the show is control: control of the campaign, the message, the case, the technology, the image, your life. At the beginning of season one, Alicia has lost all control over the life she had; by the end of season two, she's in charge of her new one. Was a camera watching in that elevator? I guess we'll find out next year.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

May 13, 2011

Lay down the cookie

British Web developers will be spending the next couple of weeks scrambling to meet the May 26 deadline after which new legislation requires users to consent before a cookie can be placed on their computers. The Information Commissioner's guidelines allow a narrow exception for cookies that are "strictly necessary for a service requested by the user"; the example given is a cookie used to remember an item the user has chosen to buy so it's there when they go to check out. Won't this be fun?

Normally, net.wars comes down on the side of privacy even when it's inconvenient for companies, but in this case we're prepared to make at least a partial exception. It's always been a little difficult to understand the hatred and fear with which some people regard the cookie. Not the chocolate chip cookie, which of course we know is everything that is good, but the small bits of data that reside on your computer to give Web pages the equivalent of memory. Cookies allow a server to assemble a page that remembers what you've looked at, where you've been, and which gewgaw you've put into your shopping basket. At least some of this can be done in other ways, such as using a registration scheme. But it's arguably a greater invasion of privacy to require users to form a relationship with a Web site they may only use once.

The single-site use of cookies is, or ought to be, largely uncontroversial. The more contentious usage is third-party cookies, used by advertising agencies to track users from site to site with the goal of serving up targeted, rather than generic, ads. It's this aspect of cookies that has most exercised privacy advocates, and most browsers provide the ability to block cookies - all, third-party, or none, with a provision to make exceptions.
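For anyone who has never looked under the hood, here is a minimal sketch of the mechanism, using nothing but Python's standard library; the cookie name and value are invented for illustration. The server attaches a Set-Cookie header to its response, the browser stores the value and returns it with every later request to that site - which is all the "memory" amounts to. A third-party tracking cookie is the same trick, except that the cookie belongs to an advertiser's domain embedded in many different sites.

    # Minimal sketch of how a cookie works; names and values are invented.
    from http.cookies import SimpleCookie

    # 1. The server attaches a cookie to its response...
    response_cookie = SimpleCookie()
    response_cookie["basket"] = "teapot-42"
    response_cookie["basket"]["path"] = "/"
    print(response_cookie.output())   # Set-Cookie: basket=teapot-42; Path=/

    # 2. ...the browser stores it and sends it back on the next request,
    #    letting the server reassemble the shopper's state.
    returned = SimpleCookie("basket=teapot-42")
    print(returned["basket"].value)   # teapot-42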

The new rules, however, seem overly broad.

In the EU, the anti-cookie effort began in 2001 (the second-ever net.wars), seemed to go quiet, and then revived in 2009, when I called the legislation "masterfully stupid". That piece goes into some detail about the objections to the anti-cookie legislation, so we won't review that here. At the time, reader email suggested that perhaps making life unpleasant for advertisers would force browser manufacturers to design better privacy controls. 'Tis a consummation devoutly to be wished, but so far it hasn't happened, and in the meantime that legislation has become an EU directive and now UK law.

The chief difference is moving from opt-out to opt-in: users must give consent for cookies to be placed on their machines; the chief flaw is banning a technology instead of regulating undesirable actions and effects. Besides the guidelines above, the ICO refers people to All About Cookies for further information.

Pete Jordan, a Hull-based Web developer, notes that when you focus legislation on a particular technology, "People will find ways around it if they're ingenious enough, and if you ban cookies or make it awkward to use them, then other mechanisms will arise." Besides, he says, "A lot of day-to-day usage is to make users' experience of Web sites easier, more friendly, and more seamless. It's not life-threatening or vital, but from the user's perception it makes a difference if it disappears." Cookies, for example, are what provide the trail of "breadcrumbs" at the top of a Web page to show you the path by which you arrived at that page so you can easily go back to where you were.

"In theory, it should affect everything we do," he says of the legislation. A possible workaround may be to embed tokens in URLs, a strategy he says is difficult to manage and raises the technical barrier for Web developers.

The US, where competing anti-tracking bills are under consideration in both houses of Congress, seems to be taking a somewhat different tack in requiring Web sites to honor the choice if consumers set a "Do Not Track" flag. Expect much more public debate about the US bills than there has been in the EU or UK. See, for example, the strong insistence by What Would Google Do? author Jeff Jarvis that media sites in particular have a right to impose any terms they want in the interests of their own survival. He predicts paywalls everywhere and the collapse of media economics. I think he's wrong.

The thing is, it's not a fair contest between users and Web site owners. It's more or less impossible to browse the Web with all cookies turned off: the complaining pop-ups are just too frequent. But targeting the cookie is not the right approach. There are many other tracking technologies that are invisible to consumers which may have both good and bad effects - even Web bugs are used helpfully some of the time. (The irony is, of course, regulating the cookie but allowing increases in both offline and online surveillance by police and government agencies.)

Requiring companies to behave honestly and transparently toward their customers would have been a better approach for the EU; one hopes it will work better in the US.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 22, 2011

Applesauce

Modern life is full of so many moments when you see an apparently perfectly normal person doing something that not so long ago was the clear sign of a crazy person. They're walking down the street talking to themselves? They're *on the phone*. They think the inanimate objects in their lives are spying on them? They may be *right*.

Last week's net.wars ("The open zone") talked about the difficulty of finding the balance between usability, on the one hand, and giving users choice, flexibility, and control, on the other. And then, as if to prove this point, along comes Apple and the news that the iPhone has been storing users' location data, perhaps permanently.

The story emerged this week when two researchers at O'Reilly's Where 2.0 conference presented an open-source utility they'd written to allow users to get a look at the data the iPhone was saving. But it really begins last year, when Alex Levinson discovered the stored location data as part of his research on Apple forensics. Based on his months of studying the matter, Levinson contends that it's incorrect to say that Apple is gathering this data: rather, the device is gathering the data, storing it, and backing it up when you sync your phone. Of course, if you sync your phone to Apple's servers, then the data is transferred to your account - and it is also migrated when you purchase a new iPhone or iPad.

So the news is not quite as bad as it first sounded: your device is spying on you, but it's not telling anybody. However: the data is held in unencrypted form and appears never to expire, and this raises a whole new set of risks about the devices that no one had really focused on until now.
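At bottom, the researchers' utility does nothing exotic: it reads a SQLite database out of the phone's backup and lists (or maps) the timestamped coordinates. A sketch of that idea follows; the file, table, and column names are assumptions for illustration, not a description of any particular iOS version.

    # Sketch of reading timestamped location fixes from an unencrypted backup.
    # The database path, table, and column names are assumptions; the real
    # layout may differ between iOS versions.
    import sqlite3

    def dump_location_history(db_path):
        conn = sqlite3.connect(db_path)
        try:
            rows = conn.execute(
                "SELECT timestamp, latitude, longitude FROM cell_locations "
                "ORDER BY timestamp"
            )
            for ts, lat, lon in rows:
                print(ts, lat, lon)
        finally:
            conn.close()

    # dump_location_history("/path/to/backup/locations.db")  # hypothetical path

The point is less the code than the fact that anyone with access to the backup - a spouse, a thief, a forensic examiner - can do the same.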

A few minutes after the story broke, someone posted on Twitter that they wondered how many lawyers handling divorce cases were suddenly drafting subpoenas for copies of this file from their soon-to-be-exes' iPhones. Good question (although I'd have phrased it instead as how many script ideas the wonderful, tech-savvy writers of The Good Wife are pitching involving forensically recovered location data). That is definitely one sort of risk; another, ZDNet's Adrian Kingsley-Hughes points out, is that the geolocation may be wildly inaccurate, creating a false picture that may still be very difficult to explain, either to a spouse or to law enforcement, who, as Declan McCullagh writes, know about and are increasingly interested in accessing this data.

There are a bunch of other obvious privacy things to say about this, and Privacy International has helpfully said them in an open letter to Steve Jobs.

"Companies need openness and procedures," PI's executive director, Simon Davies, said yesterday, comparing Apple's position today to Google's a couple of months before the WiFi data-sniffing scandal.

The reason, I suspect, that so many iPhone users feel so shocked and betrayed is that Apple's attention to the details of glossy industrial design and easy-to-understand user interfaces leads consumers to cuddle up to Apple in a way they don't to Microsoft or Google. I doubt Google will get nearly as much anger directed at it for the news that Android phones also collect location data (Android saves only the last 50 mobile masts and 200 WiFi networks). In either case, the key is transparency: when you post information on Twitter or Facebook about your location or turn on geo-tagging you know you're doing it. In this case, the choice is not clear enough for users to understand what they've agreed to.

The question is: how best can consumers be enabled to make informed decisions? Apple's current method - putting a note saying "Beware of the leopard" at the end of a 15,200-word set of terms and conditions (which are in any case drafted by the company's lawyers to protect the company, not to serve consumers) that users agree to when they sign up for iTunes - is clearly inadequate. It's been shown over and over again that consumers hate reading privacy policies, and you have only to look at Facebook's fumbling attempts to embed these choices in a comprehensible interface to realize that the task is genuinely difficult. This is especially true because, unlike the issue of user-unfriendly systems in the early 1990s, it's not particularly in any of these companies' interests to solve this intransigent and therefore expensive problem. Make it easy for consumers to opt out and they will, hardly an appetizing proposition for companies supported in whole or in part by advertising.

The answer to the question, therefore, is going to involve a number of prongs: user interface design, regulation, contract law, and industry standards, both technical and practical. The key notion, however, is that it should be feasible - even easy - for consumers to tell what information gathering they're consenting to. The most transparent way of handling that is to make opting out the default, so that consumers must take a positive action to turn these things on.

You can say - as many have - that this particular scandal is overblown. But we're going to keep seeing dust-ups like this until industry practice changes to reflect our expectations. Apple, so sensitive to the details of industrial design that will compel people to yearn to buy its products, will have to develop equal sensitivity for privacy by design.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 11, 2011

The ten-year count

My census form arrived the other day - 32 lavender and white pages of questions about who will have been staying overnight in my house on March 27, their religions, and whether they will be cosseted with central heating and their own bedroom.

I seem to be out of step on this one, but I've always rather liked the census. It's a little like finding your name in an old phone book: I was here. Reportedly, this, Britain's 21st national census, may be the last. Cabinet Office minister Francis Maude has complained that it is inaccurate and out of date by the time it's finished, and that, at £482 million, it is expensive.

Until I read the Guardian article cited above, I had never connected the census to Thomas Malthus' 1798 prediction that the planet would run out of the resources necessary to support an ever-increasing human population. I blame the practice of separating science, history, and politics: Malthus is taught in science class, so you don't realize he was contemporaneous with the inclusion of the census in the US Constitution, which you learn about in civics class.

The census seems to be the one moment when attention really gets focused on the amount and types of data the government collects about all of us. There are complaints from all political sides that it's intrusive and that the government already has plenty of other sources.

I have - both here and elsewhere - written a great deal about privacy and the dangers of thoughtlessly surrendering information but I'm inclined to defend the census. And here's why: it's transparent. Of all the data-gathering exercises to which our lives are subject it's the only one that is. When you fill out the form you know exactly what information you are divulging, when, and to whom. Although the form threatens you with legal sanctions for not replying, it's not enforced.

And I can understand the purpose of the questions: asking the size and disposition of homes, the amount of time spent working and at what, racial and ethnic background, religious affiliation, what passports people hold and what languages they speak. These all make sense to me in the interests of creating a snapshot of modern Britain that is accurate enough for the decisions the government must make. How many teachers and doctors do we need in which areas who speak which languages? How many people still have coal fires? These are valid questions for a government to consider.

But most important, anyone can look up census data and develop some understanding of the demographics government decisions are based on.

What are the alternatives? There are certainly many collections of data for various purposes. There are the electoral rolls, which collect the names and nationalities of everyone at each address in every district. There are the council tax registers, which collect the householder's name and the number of residents at each address. Other public sector sources include the DVLA's vehicle and driver licensing data, school records, and the NHS's patient data. And of course there are many private sector sources, too: phone records, credit card records, and so on.

Here's the catch: every one of those is incomplete. Everyone does not have a phone or credit card; some people are so healthy they get dropped from their doctors' registers because they haven't visited in many years; some people don't have an address; some people have five phones, some none. Most of those people are caught by the census, since it relies on counting everyone wherever they're staying on a single particular night.

Here's another catch: the generation of national statistics to determine the allocation of national resources is not among the stated purposes for which those data are gathered. That is of course fixable. But doing so might logically lead government to mandate that these agencies collect more data from us than they do now - and with more immediate penalties for not complying. Would you feel better about telling the DVLA or your local council your profession and how many hours you work? No one is punished for leaving a question blank on the census, but suppose leaving your religious affiliation blank on your passport application means not getting a passport until you've answered it?

Which leads to the final, biggest catch. Most of the data that is collected from us is in private hands or is confidential for one reason or another. Councils are pathologically averse to sharing data with the public; commercial organizations argue that their records are commercially sensitive; doctors are rightly concerned about protecting patient data. Despite the data protection laws we often do not know what data has been collected, how it's being used, or where it's being held. And although we have the right to examine and correct our own records we won't find it easy to determine the basis for government decisions: open season for lobbyists.

The census, by contrast, is transparent and accountable. We know what information we have divulged, we know who is responsible for it, and we can even examine the decisions it is used to support. Debate ways to make it less intrusive by all means, but do you really want to replace it with a black box?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

February 18, 2011

What is hyperbole?

This seems to have been a week for over-excitement. IBM gets an onslaught of wonderful publicity because it built a very large computer that won at the archetypal American TV game show, Jeopardy. And Eben Moglen proposes the Freedom box, a more-or-less pocket ("wall wart") computer you can plug in and that will come up, configure itself, and be your Web server/blog host/social network/whatever and will put you and your data beyond the reach of, well, everyone. "You get no spying for free!" he said in his talk outlining the idea for the New York Internet Society.

Now I don't mean to suggest that these are not both exciting ideas and that making them work is/would be an impressive and fine achievement. But seriously? Is "Jeopardy champion" what you thought artificial intelligence would look like? Is a small "wall wart" box what you thought freedom would look like?

To begin with Watson and its artificial buzzer thumb. The reactions display everything that makes us human. The New York Times seems to think AI is solved, although its editors focus on our ability to anthropomorphize an electronic screen with a smooth, synthesized voice and a swirling logo. (Like HAL, R2D2, and Eliza Doolittle, its status is defined by the reactions of the surrounding humans.)

The Atlantic and Forbes come across as defensive. The LA Times asks: how scared should we be? The San Francisco Chronicle congratulates IBM for suddenly becoming a cool place for the kids to work.

If, that is, they're not busy hacking up Freedom boxes. You could, if you wanted, see the past twenty years of net.wars as a recurring struggle between centralization and distribution. The Long Tail finds value in selling obscure products to meet the eccentric needs of previously ignored niche markets; eBay's value is in aggregating all those buyers and sellers so they can find each other. The Web's usefulness depends on the diversity of its sources and content; search engines aggregate it and us so we can be matched to the stuff we actually want. Web boards distributed us according to niche topics; social networks aggregated us. And so on. As Moglen correctly says, we pay for those aggregators - and for the convenience of closed, mobile gadgets - by allowing them to spy on us.

An early, largely forgotten net.skirmish came around 1991 over the asymmetric broadband design that today is everywhere: a paved highway going to people's homes and a dirt track coming back out. The objection that this design assumed that consumers would not also be creators and producers was largely overcome by the advent of Web hosting farms. But imagine instead that symmetric connections were the norm and everyone hosted their sites and email on their own machines with complete control over who saw what.

This is Moglen's proposal: to recreate the Internet as a decentralized peer-to-peer system. And I thought immediately how much it sounded like...Usenet.

For those who missed the 1990s: invented and implemented in 1979 by three students, Tom Truscott, Jim Ellis, and Steve Bellovin, the whole point of Usenet was that it was a low-cost, decentralized way of distributing news. Once the Internet was established, it became the medium of transmission, but in the beginning computers phoned each other and transferred news files. In the early 1990s, it was the biggest game in town: it was where Linus Torvalds and Tim Berners-Lee announced their inventions of Linux and the World Wide Web.

It always seemed to me that if "they" - whoever they were going to be - seized control of the Internet we could always start over by rebuilding Usenet as a town square. And this is to some extent what Moglen is proposing: to rebuild the Net as a decentralized network of equal peers. Not really Usenet; instead a decentralized Web like the one we gave up when we all (or almost all) put our Web sites on hosting farms whose owners could be DMCA'd into taking our sites down or subpoena'd into turning over their logs. Freedom boxes are Moglen's response to "free spying with everything".

I don't think there's much doubt that the box he has in mind can be built. The Pogoplug, which offers a personal cloud and a sort of hardware social network, is most of the way there already. And Moglen's argument has merit: if you control your Web server and the nexus of your social network, law enforcement can't just make a secret phone call; they'll need a search warrant to search your home if they want to inspect your data. (On the other hand, seizing your data is as simple as impounding or smashing your wall wart.)

I can see Freedom boxes being a good solution for some situations, but like many things before it they won't scale well to the mass market because they will (like Usenet) attract abuse. In cleaning out old papers this week, I found a 1994 copy of Esther Dyson's Release 1.0 in which she demands a return to the "paradise" of the "accountable Net"; 'twill be ever thus. The problem Watson is up against is similar: it will function well, even engagingly, within the domain it was designed for. Getting it to scale will be a whole 'nother, much more complex problem.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


January 21, 2011

Fogged

The Reform Club, I read on its Web site, was founded as a counterweight to the Carlton Club, where conservatives liked to meet and plot away from public scrutiny. To most of us, it's the club where Phileas Fogg made and won his bet that he could travel around the world in 80 days, no small feat in 1872.

On Wednesday, the club played host to a load of people who don't usually talk to each other much because they come at issues of privacy from such different angles. Cityforum, the event's organizer, pulled together representatives from many parts of civil society, government security, and corporate and government researchers.

The key question: what trade-offs are people willing to make between security and privacy? Or between security and civil liberties? Or is "trade-off" the right paradigm? It was good to hear multiple people saying that the "zero-sum" attitude is losing ground to "proportionate". That is, the debate is moving on from viewing privacy and civil liberties as things we must trade away if we want to be secure to weighing the size of the threat against the size of the intrusion. It's clear to all, for example, that one thing that's disproportionate is local councils' usage of the anti-terrorism aspects of the Regulation of Investigatory Powers Act to check whether householders are putting out their garbage for collection on the wrong day.

It was when the topic of the social value of privacy was raised that it occurred to me that probably the closest model to what people really want lay in the magnificent building all around us. The gentleman's club offered a social network restricted to "the right kind of people" - that is, people enough like you that they would welcome your fellow membership and treat you as you would wish to be treated. Within the confines of the club, a member like Fogg, who spent all day every day there, would have had, I imagine, little privacy from the other members or, especially, from the club staff, whose job it was to know what his favorite drink was and where and when he liked it served. But the club afforded members considerable protection from the outside world. Pause to imagine what Facebook would be like if the interface required each would-be addition to your friends list to be proposed and seconded and incomers could be black-balled by the people already on your list.

This sort of web of trust is the structure the cryptography software PGP relies on for authentication: when you generate your public key, you are supposed to have it signed by as many people as you can. Whenever someone wants to verify the key, they can look through the list of signers for someone they themselves know and trust. The big question with such a structure is how you make managing it scale to a large population. Things are a lot easier when it's just a small, relatively homogeneous group you have to deal with. And, I suppose, when you have staff to support the entire enterprise.
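A toy model of the idea - an illustration only, not how PGP or GnuPG actually computes trust - treats "X signed Y's key" as an edge in a graph and asks whether a chain of signatures connects someone you already trust to the key you want to verify:

    # Toy web-of-trust model: an edge means "signer vouched for this key".
    # Illustration only; real PGP trust calculations are more involved.
    from collections import deque

    signatures = {
        "alice": ["bob", "carol"],   # Alice has signed Bob's and Carol's keys
        "bob": ["dave"],
        "carol": ["dave", "erin"],
    }

    def trust_path(trusted_key, target_key, sigs):
        """Return a chain of signatures from a trusted key to the target, if any."""
        queue = deque([[trusted_key]])
        seen = {trusted_key}
        while queue:
            path = queue.popleft()
            if path[-1] == target_key:
                return path
            for signed in sigs.get(path[-1], []):
                if signed not in seen:
                    seen.add(signed)
                    queue.append(path + [signed])
        return None

    print(trust_path("alice", "erin", signatures))   # ['alice', 'carol', 'erin']

The scaling problem is visible even in the toy: the longer the chains get, the less each hop is worth, and someone has to maintain the graph.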

We talk a lot about the risks of posting too much information to things like Facebook, but that may not be its biggest issue. Just as traffic data can be more revealing than the content of messages, complex social linkages make it impossible to anonymize databases: who your friends are may be more revealing than your interactions with them. As governments and corporations talk more and more about making "anonymized" data available for research use, this will be an increasingly large issue. An example: a little-known incident in 2005, when the database of a month's worth of UK telephone calls was exported to the US with individuals' phone numbers hashed to "anonymize" them. An interesting technological fix comes from Microsoft in the notion of differential privacy, a system for protecting databases both against current re-identification and against attacks with external data in the future. The catch, if it is one, is that you must assign to your database a sort of query budget in advance - and when it's used up you must burn the database because it can no longer be protected.
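Roughly, the idea is to add calibrated random noise to each query answer and to charge every query against a fixed privacy budget; once the budget is spent, no more answers. Here is a minimal sketch using the textbook Laplace mechanism - a generic illustration of the concept, not Microsoft's implementation:

    # Minimal sketch of the Laplace mechanism with a privacy budget.
    # Textbook differential privacy; not any vendor's actual system.
    import random

    class PrivateCounter:
        def __init__(self, data, total_budget):
            self.data = data
            self.budget = total_budget      # total epsilon available

        def noisy_count(self, predicate, epsilon):
            if epsilon > self.budget:
                raise RuntimeError("privacy budget exhausted; retire the database")
            self.budget -= epsilon
            true_count = sum(1 for row in self.data if predicate(row))
            # A count has sensitivity 1, so Laplace noise of scale 1/epsilon
            # suffices; the difference of two exponentials is Laplace-distributed.
            noise = random.expovariate(epsilon) - random.expovariate(epsilon)
            return true_count + noise

    db = PrivateCounter([{"smoker": True}, {"smoker": False}, {"smoker": True}],
                        total_budget=1.0)
    print(db.noisy_count(lambda r: r["smoker"], epsilon=0.5))

The smaller the epsilon you spend per query, the noisier the answer and the longer the budget lasts - which is exactly the trade-off behind "burn the database when it's used up".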

We do know one helpful thing: what price club members are willing to pay for the services their club provides. Public opinion polls are a crude tool for measuring what privacy intrusions people will actually put up with in their daily lives. A study by Rand Europe released late last year attempted to examine such things by framing them in economic terms. The good news is they found that you'd have to pay people £19 to get them to agree to provide a DNA sample to include in their passport. The weird news is that people would pay £7 to include their fingerprints. You have to ask: what pitch could Rand possibly have made that would make this seem worth even one penny to anyone?

Hm. Fingerprints in my passport or a walk across a beautiful, mosaic floor to a fine meal in a room with Corinthian columns, 25-foot walls of books, and a staff member who politely fails to notice that I have not quite conformed to the dress code? I know which is worth paying for if you can afford it.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

January 7, 2011

Scanning the TSA

There are, Bruce Schneier said yesterday at the Electronic Privacy Information Center mini-conference on the TSA (video should be up soon), four reasons why airport security deserves special attention, even though it directly affects a minority of the population. First: planes are a favorite terrorist target. Second: they have unique failure characteristics - that is, the plane crashes and everybody dies. Third: airlines are national symbols. Fourth: planes fly to countries where terrorists are.

There's a fifth he didn't mention but that Georgetown lawyer Pablo Molina and We Won't Fly founder James Babb did: TSAism is spreading. Random bag searches on the DC Metro and the New York subways. The TSA talking about expanding its reach to shopping malls and hotels. And something I found truly offensive, giant LED signs posted along the Maryland highways announcing that if you see anything suspicious you should call the (toll-free) number below. Do I feel safer now? No, and not just because at least one of the incendiary devices sent to Maryland state offices yesterday apparently contained a note complaining about those very signs.

Without the sign, if you saw someone heaving stones at the cars you'd call the police. With it, you peer nervously at the truck in front of you. Does that driver look trustworthy? This is, Schneier said, counter-productive because what people report under that sort of instruction is "different, not suspicious".

But the bigger flaw is cover-your-ass backward thinking. If someone tries to bomb a plane with explosives in a printer cartridge, missing a later attempt using the exact same method will get you roasted for your stupidity. And so we have a ban on flying with printer cartridges over 500g and, during December, restrictions on postal mail, something probably few people in the US even knew about.

Jim Harper, a policy scholar with the Cato Institute and a member of the Department of Homeland Security's Data Privacy and Integrity Advisory Committee, outlined even more TSA expansion. There are efforts to create mobile lie detectors that measure physiological factors like eye movements and blood pressure.

Technology, Lillie Coney observed, has become "like butter - few things are not improved if you add it."

If you're someone charged with blocking terrorist attacks you can see the appeal: no one wants to be the failure who lets a bomb onto a plane. Far, far better if it's the technology that fails. And so expensive scanners roll through the nation's airports despite the expert assessment - on this occasion, from Schneier and Ed Luttwak, a senior associate with the Center for Strategic and International Studies - that the scanners are ineffective, invasive, and dangerous. As Luttwak said, the machines pull people's attention, eyes, and brains away from the most essential part of security: watching and understanding the passengers' behavior.

"[The machine] occupies center stage, inevitably," he said, "and becomes the focus of an activity - not aviation security, but the operation of a scanner."

Equally offensive in a democracy, many speakers argued, is the TSA's secrecy and lack of accountability. Even Meera Shankar, the Indian ambassador, could not get much of a response to her complaint from the TSA, Luttwak said. "God even answered Job." The agency sent no representative to this meeting, which included Congressmen, security experts, policy scholars, lawyers, and activists.

"It's the violation of the entire basis of human rights," said the Stanford and Oxford lawyer Chip Pitts around the time that the 112th Congress was opening up with a bipartisan reading of the US Constitution. "If you are treated like cattle, you lose the ability to be an autonomous agent."

As Libertarian National Committee executive director Wes Benedict said, "When libertarians and Ralph Nader agree that a program is bad, it's time for our government to listen up."

So then, what are the alternatives to spending - so far, in the history of the Department of Homeland Security, since 2001 - $360 billion, not including the lost productivity and opportunity costs to the US's 100 million flyers?

Well, first of all, stop being weenies. The number of speakers who reminded us that the US was founded by risk-takers was remarkable. More people, Schneier noted, are killed in cars every month than died on 9/11. Nothing, Ralph Nader said, is spent on the 58,000 Americans who die in workplace accidents every year or the many thousands more who are killed by pollution or medical malpractice.

"We need a comprehensive valuation of how to deploy resources in a rational manner that will be effective, minimally invasive, efficient, and obey the Constitution and federal law," Nader said

So: dogs are better at detecting explosives than scanners. Intelligent profiling can whittle down the mass of suspects to a more manageable group than "everyone" in a giant game of airport werewolf. Instead, at the moment we have magical thinking, always protecting ourselves from the last attack.

"We're constantly preparing for the rematch," said Lillie Coney. "There is no rematch, only tomorrow and the next day." She was talking as much about Katrina and New Orleans as 9/11: there will always, she said, be some disaster, and the best help in those situations is going to come from individuals and the people around them. Be prepared: life is risky.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

December 31, 2010

Good, bad, ugly...the 2010 that was

Every year deserves its look back, and 2010 is no exception. On the good side, the younger generation beginning to enter politics is bringing with it a little more technical sense than we've had in government before. On the bad side, the year's many privacy scandals reminded us all how big a risk we take in posting as much information online as we do. The ugly...we'd have to say the scary new trends in malware. Happy New Year.

By the numbers:

$5.3 billion: the Google purchase offer that Groupon turned down. Smart? Stupid? Shopping and social networks ought to mix combustibly (and could hit local newspapers and their deal flyers), but it's a labor-intensive business. The publicity didn't hurt: Groupon has now managed to raise half a billion dollars on its own. They aren't selling anything we want to buy, but that doesn't seem to hurt Wal-Mart or McDonalds.

$497 million: the amount Harvard scientists Tyler Moore and Benjamin Edelman estimate that Google is earning from "typosquatting". Pocket change, really: Google's 2009 revenues were $23 billion. But still.

15 million (estimated): number of iPads sold since its launch in May. It took three decades of commercial failures for someone to finally launch a successful tablet computer. In its short life the iPad has been hailed and failed as the savior of print publications, and halved Best Buy's laptop sales. We still don't want one - but we're keyboard addicts, hardly its target market.

250,000: diplomatic cables channeled to Wikileaks. We mention this solely to enter The Economist's take on Bruce Sterling's take into the discussion. Wikileaks isn't at all the crypto-anarchy that physicist Timothy C. May wrote about in 1992. May's essay imagined the dark uses of encrypted secrecy; Wikileaks is, if anything, the opposite of it.

500: airport scanners deployed so far in the US, at an estimated cost of $80 million. For 2011, Obama has asked for another $88 million for the next round of installations. We'd like fewer scanners and the money instead spent on...well, almost anything else, really. Intelligence, perhaps?

65: Percentage of Americans that Pew Internet says have paid for Internet content. Yeah, yeah, including porn. We think it's at least partly good news.

58: Number of investigations (countries and US states) launched into Google's having sniffed approximately 600Gb of data from open WiFi connections, which the company admitted in May. The progress of each investigation is helpfully tallied by SearchEngineLand. Note that the UK's ICO's reaction was sufficiently weak that MPs are complaining.

24: Hours of Skype outage. Why are people writing about this as though it were the end of Skype? It was a lot more shocking when it happened to AT&T in 1990 - in those days, people only had one phone number!

5: number of years I've wished Google would eliminate useless shopping aggregator sites from its search results listings. Or at least label them and kick them to the curb.

2: Facebook privacy scandals that seem to have ebbed, leaving less behavioral change in their wake than we'd like. In January, Facebook founder and CEO Mark Zuckerberg opined that privacy is no longer a social norm; in May the service revamped its privacy settings, to an uproar in response (and not for the first time). Still, the service had 400 million users at the beginning of 2010 and has more than 500 million now. Resistance requires considerable anti-social effort, though the cool people have, of course, long fled.

1: Stuxnet worm. The first serious infrastructure virus. You knew it had to happen.

In memoriam:

- Kodachrome. The Atlantic reports that December 30, 2010 saw the last-ever delivery of Kodak's famous photographic film. As they note, the specific hues and light-handling of Kodachrome defined the look of many decades of the 20th century. Pause to admire The Atlantic's selection of the 75 best pictures they could find: digital has many wonderful qualities, but these seem to have a three-dimensional roundness you don't see much any more. Or maybe we just forget to look.

- The 3.5in floppy disk. In April, Sony announced it would stop making the 1.44MB floppy disk that defined the childhoods of today's 20-somethings. The first video clip I ever downloaded, of the exploding whale in Oregon (made famous by a Web site and a Dave Barry column), required 11 floppy disks to hold it. You can see why it's gone.

- Altavista: A leaked internal memo puts Altavista on Yahoo!'s list of services due for closure. Before Google, Altavista was the best search engine by a long way, and if it had focused on continuing to improve its search algorithms instead of cluttering up its front page in line with the 1995 fad for portals it might be still. Google's overwhelming success had as much to do with its clean, fast-loading design as it did with its superior ability to find stuff. Altavista also pioneered online translation with its Babelfish (and don't you have to love a search engine that quotes Douglas Adams?).

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

December 24, 2010

Random acts of security

When I was in my 20s in the 1970s, I spent a lot of time criss-crossing the US by car. One of the great things about it, as I said to a friend last week, was the feeling of ownership that gave me wherever I was: waking up under the giant blue sky in Albuquerque, following the Red River from Fargo to Grand Forks, or heading down the last, dull hour of New York State Thruway to my actual home, Ithaca, NY, it was all part of my personal backyard. This, I thought many times, is my country!

This year's movie (and last year's novel) Up in the Air highlighted the fact that the world's most frequent flyers feel the same way about airports. When you've traversed the same airports so many times that you've developed a routine it's hard not to feel as smug as George Clooney's character when some disorganized person forgets to take off her watch before going through the metal detector. You, practiced and expert, slide through smoothly without missing a beat. The check-in desk staff and airline club personnel ask how you've been. You sit in your familiar seat on the plane. You even know the exact moment in the staff routine to wander back to the galley and ask for a mid-flight cup of tea.

Your enemy in this comfortable world is airport security, which introduces each flight by putting you back in your place as an interloper.

Our equivalent back then was the Canadian border, which we crossed in quite isolated places sometimes. The border highlighted a basic fact of human life: people get bored. At the border crossing between Grand Forks, ND and Winnipeg, Manitoba, for example, the guards would keep you talking until the next car hove into view. Sometimes that was one minute, sometimes 15.

We - other professional travelers and I - had a few other observations. If you give people a shiny, new toy they will use it, just for the novelty. One day when I drove through Lewiston-Queenston they had drug-sniffing dogs on hand to run through and around the cars stopped for secondary screening. Fun! I was coming back from a folk festival in a pickup truck with a camper on the back, so of course I was pulled over. Duh: what professional traveler who crosses the border 12 times a year risks having drugs in their car?

Cut to about a week ago, at Memphis airport. It was 10am on a Saturday, and the traffic approaching the security checkpoint was very thin. The whole-body image scanners - expensive, new, the latest in cover-your-ass-ness - are in theory only for secondary screening: you go through them if you alarm the metal detectors or are randomly selected.

How does that work? When there's little traffic everyone goes through the scanner. For the record, I opted out and was given an absolutely professional and courteous pat-down, in contrast to the groping reports in the media for the last month. Yes: felt around under my waistband and hairline. No: groping. You've got to love the Net's many charming inhabitants: when I posted this report to a frequent flyer forum a poster hazarded that I was probably old and ugly.

My own theory is simply that it was early in the day, and everyone was rested and fresh and hadn't been sworn at a whole lot yet. So no one was feeling stressed out or put-upon by a load of uppity, obnoxious passengers.

It seems clear, however, that if you wanted to navigate security successfully carrying items that are typically unwanted on a flight, your strategy for reducing the odds of attracting extra scrutiny would be fairly simple, although the exact opposite of what experienced (professional) travelers are in the habit of doing:

- Choose a time when it's extremely crowded. Scanners are slower than metal detectors, so the more people there are the smaller the percentage going through them. (Or study the latest in scanner-defeating explosives fashions.)

- Be average and nondescript, someone people don't notice particularly or feel disposed to harass when they're in a bad mood. Don't be a cute, hot young woman; don't be a big, fat, hulking guy; don't wear clothes that draw the eye: expensive designer fashions, underwear, Speedos, a nun's habit (who knows what that could hide? and anyway isn't prurient curiosity about what could be under there a thing?).

- Don't look rich, powerful, special, or attitudinous. The TSA is like a giant replication of Stanley Milgram's experiment. Who's the most fun to roll over? The business mogul or the guy just like you who works in a call center? The guy with the video crew spoiling for a fight, or the guy who treats you like a servant? The sexy young woman who spurned you in high school or the crabby older woman like your mean second-grade teacher? Or the wheelchair-bound or medically challenged who just plain make you uncomfortable?

- When you get in line, make sure you're behind one or more of the above eye-catching passengers.

Note to TSA: you think the terrorists can't figure this stuff out, too? The terrorist will be the last guy your agents will pick for closer scrutiny.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 26, 2010

Like, unlike

Some years back, the essayist and former software engineer Ellen Ullman wrote about the tendency of computer systems to infect their owners. The particular infection she covered in Close to the Machine: Technophilia and Its Discontents was databases. Time after time, she saw good, well-meaning people commission a database to help staff or clients, and then begin to use it to monitor those they originally intended to help. Why? Well, because they *can*.

I thought - and think - that Ullman was onto something important there, but that this facet of human nature is not limited to computers and databases. Stanley Milgram's 1961 experiments showed that humans under the influence of apparent authority will obey instructions to administer treatment that outside of such a framework they would consider abhorrent. This seems to me sufficient answer to Roger Ebert's comment that no TSA agent has yet refused to perform the "enhanced pat-down", even on a child.

It would almost be better if the people running the NHS Choices Web site had been infected with the surveillance bug because they would be simply wrong. Instead, the NHS is more complicatedly wrong: it has taken the weird decision that what we all want is to share with our Facebook friends the news that we have just looked at the page on gonorrhea. Or, given the well-documented privacy issues with Facebook's rapid colonization of the Web via the "Like" button, allow Facebook to track our every move whether we're logged in or not.

I can only think of two possibilities for the reasoning behind this. One is that NHS managers have little concept of the difference between their site, intended to provide patient information and guidance, and that of a media organization needing advertising to stay afloat. It's one of the truisms of new technologies that they infiltrate the workplace through the medium of people who already use them: email, instant messaging, latterly social networks. So maybe they think that because they love Facebook the rest of us must, too. My other thought is that NHS managers think this is what we want because their grandkids have insisted they get onto Facebook, where they now occupy their off-hours hitting the "like" button and poking each other and think this means they're modern.

There's the issue Tim Berners-Lee has raised, that Facebook and other walled gardens are dividing the Net up into incompatible silos. The much worse problem, at least for public services and those of us who must use them, is the insidiously spreading assumption that if a new technology is popular it must be used no matter what the context. The effect is about as compelling as a TSA agent offering you a lollipop after your pat-down.

Most likely, the decision to deploy the "Like" button started with the simple, human desire for feedback. At some point everyone who runs a Web site wonders what parts of the site get read the most...and then by whom...and then what else they read. It's obviously the right approach if you're a media organization trying to serve your readers better. It's a ludicrously mismatched approach if you're the NHS because your raison d'être is not to be popular but to provide the public with the services they need at the most vulnerable times in their lives. Your page on rare lymphomas is not less valuable or important just because it's accessed by fewer people than the pages on STDs, nor are you actually going to derive particularly useful medical research data from finding that people who read about lymphoma also often read pages on osteoporosis. But it's easy, quick, and free to install Google Analytics or Facebook Like, and so people do it without thought.

Both of these incidents have also exposed once and for all the limited value of privacy policies. For one thing, a patient in distress is not going to take time out from bleeding to read the fine print ("when you visit pages on our site that display a Facebook Like button, Facebook will collect information about your visit") or check for open, logged-in browser windows. The NHS wants its sites to be trusted; but that means more than simply being medically accurate; it requires implementing confidentiality as well. The NHS's privacy policy is meaningless if you need to be a technical expert to exercise any choice. Similarly, who cares what the TSA's privacy policy says if the simple desire to spend Christmas with your family requires you to submit to whatever level of intimate inspection the agent on the ground that day feels like dishing out? What privacy policy makes up for being covered in urine spilled from your roughly handled urostomy bag? Milgram moments, both.

It's at this point that we need our politicians to act in our interests, because the thinking has to change at the top level.

Meantime, if you're traveling in the US this Christmas, the ACLU and Edward Hasbrouck have handy guides to your rights. But pragmatically, if you do get patted down and really want to make your flight, it seems like your best policy is to lie back and think of the country of your choice.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 19, 2010

Power to the people

We talk often about the fact that ten years of effort - lawsuits, legislation, technology - on the part of the copyright industries has made barely a dent in the amount of material available online as unauthorized copies. We talk less about the similar situation that applies to privacy despite years of best efforts by Privacy International, Electronic Privacy Information Center, Center for Democracy and Technology, Electronic Frontier Foundation, Open Rights Group, No2ID, and newcomer Big Brother Watch. The last ten years have built Google, and Facebook, and every organization now craves large data stores of personal information that can be mined. Meanwhile, governments are complaisant, possibly because they have subpoena power. It's been a long decade.

"Information is the oil of the 1980s," wrote Thomas McPhail and Brenda McPhail in 1987 in an article discussing the politics of the International Telecommunications Union, and everyone seems to take this encomium seriously.

William Heath spent his early career founding and running Kable, a consultancy specializing in government IT, where the question he kept returning to was how to create the ideal government for the digital era. He has been saying for many months now that there's a gathering wave of change. His idea is that the *new* new thing is technologies to give us back control and up-end the current situation in which everyone behaves as if they own all the information we give them. But it's their data only in exactly the same way that taxpayers' money belongs to the government. They call it customer relationship management; Heath calls the data we give them volunteered personal information and proposes instead vendor relationship management.

Always one to put his effort where his mouth is (he helped found the Open Rights Group, the Foundation for Information Policy Research, and the Dextrous Web, as well as Kable), Heath has set up not one but two companies. The first, Ctrl-Shift, is a research and advisory business to help organizations adjust and adapt to the power shift. The second, Mydex, is a platform now being prototyped in partnership with the Department of Work and Pensions and several UK councils (PDF). Set up as a community interest company, Mydex is asset-locked, to ensure that the company can't suddenly reverse course and betray its customers and their data.

The key element of Mydex is the personal data store, which is kept under each individual's own control. When you want to do something - renew a parking permit, change your address with a government agency, rent a car - you interact with the remote council, agency, or company via your PDS. Independent third parties verify the data you present. To rent a car, for example, you might present a token from the vehicle licensing bureau that authenticates your age and right to drive and another from your bank or credit card company verifying that you can pay for the rental. The rental company only sees the data you choose to give it.
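
For the technically curious, the pattern is easy to sketch. What follows is a minimal illustration - not Mydex's actual design, and every name in it is invented - of the idea that issuers sign claims, the individual's data store holds them, and a relying party sees only what the individual chooses to disclose:

    # A minimal sketch (not Mydex's actual design) of the personal-data-store
    # flow described above: independent issuers sign attribute tokens, the
    # individual stores them, and a relying party sees only what is disclosed.
    # All class and field names here are hypothetical illustrations.

    from dataclasses import dataclass, field
    import hmac, hashlib

    @dataclass
    class Token:
        issuer: str          # e.g. the vehicle licensing bureau or a bank
        claim: str           # e.g. "licensed_to_drive" or "can_pay"
        signature: str       # issuer's signature over the claim

    def sign(secret: bytes, claim: str) -> str:
        """Stand-in for a real digital signature."""
        return hmac.new(secret, claim.encode(), hashlib.sha256).hexdigest()

    @dataclass
    class PersonalDataStore:
        tokens: dict = field(default_factory=dict)   # claim -> Token
        def add(self, token: Token):
            self.tokens[token.claim] = token
        def present(self, wanted: list[str]) -> list[Token]:
            """Disclose only the claims the relying party asks for."""
            return [self.tokens[c] for c in wanted if c in self.tokens]

    # The licensing bureau and the bank each issue a token to the individual.
    dvla_secret, bank_secret = b"dvla-key", b"bank-key"
    pds = PersonalDataStore()
    pds.add(Token("licensing bureau", "licensed_to_drive",
                  sign(dvla_secret, "licensed_to_drive")))
    pds.add(Token("bank", "can_pay", sign(bank_secret, "can_pay")))

    # The rental company asks for, and receives, only those two claims;
    # it never sees the underlying records held by the issuers.
    disclosed = pds.present(["licensed_to_drive", "can_pay"])
    print([(t.issuer, t.claim) for t in disclosed])

The point of the sketch is the shape of the flow: the rental company never touches the issuers' records, and the issuers never learn where the tokens were presented.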

It's Heath's argument that such a setup would preserve individual privacy and increase transparency while simultaneously saving companies and governments enormous sums of money.

"At the moment there is a huge cost of trying to clean up personal data," he says. "There are 60 to 200 organisations all trying to keep a file on you and spending money on getting it right. If you chose, you could help them." The biggest cost, however, he says, is the lack of trust on both sides. People vanish off the electoral rolls or refuse to fill out the census forms rather than hand over information to government; governments treat us all as if we were suspected criminals when all we're trying to do is claim benefits we're entitled to.

You can certainly see the potential. Ten years ago, when they were talking about "joined-up government", MPs dealing with constituent complaints favored the notion of making it possible to change your address (for example) once and have the new information propagate automatically throughout the relevant agencies. Their idea, however, was a huge, central data store; the problem for individuals (and privacy advocates) was that centralized data stores tend to be difficult to keep accurate.

"There is an oft-repeated fallacy that existing large organizations meant to serve some different purpose would also be the ideal guardians of people's personal data," Heath says. "I think a purpose-created vehicle is a better way." Give everyone a PDS, and they can have the dream of changing their address only once - but maintain control over where it propagates.

There are, as always, key questions that can't be answered at the prototype stage. First and foremost is the question of whether and how the system can be subverted. Heath's intention is that we should be able to set our own terms and conditions for their use of our data - up-ending the present situation again. We can hope - but it's not clear that companies will see it as good business to differentiate themselves on the basis of how much data they demand from us when they don't now. At the same time, governments who feel deprived of "their" data can simply pass a law and require us to submit it.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 29, 2010

Wanted: less Sir Humphrey, more shark


Seventeen MPs showed up for Thursday's Backbenchers' Committee debate on privacy and the Internet, requested by Robert Halfon (Con-Harlow). They tell me this is a sell-out crowd. The upshot: Google and every other Internet company may come to rue the day that Google sent its Street View cars around Britain. It crossed a line.

That line is this: "Either your home is your castle or it's not," said Halfon, talking about StreetView and an email he had received from a vastly upset woman in Cornwall whose home had been captured and posted on the Web. It's easy for Americans to forget how deep the "An Englishman's home is his castle" thing goes.

Halfon's central question: are we sleepwalking into a privatized surveillance society, and can we stop it? "If no one has any right to privacy, we will live in a Big Brother society run by private companies." StreetView, he said, "is brilliant - but they did it without permission." Of equal importance to Halfon is the curious incident of the silent Information Commissioner (unlike, apparently, his equivalents everywhere else in the world) and Google's sniffed wi-fi data. The recent announcement that the sniffed data includes contents of email messages, secure Web pages, and passwords has prompted the ICO to take another look.

The response of the ICO, Halfon said, "has been more like Sir Humphrey than a shark with teeth, which is what it should be."

Google is only one offender; Julian Huppert (LibDem-Cambridge) listed some of the other troubles, including this week's release of Firesheep, a Firefox add-on designed to demonstrate Facebook's security failings. Several speakers raised the issue of the secret BT/Phorm trials. A key issue: while half the UK's population choose to be Facebook users (!), and many more voluntarily use Google daily, no one chose to be included in StreetView; we did not ask to be its customers.

So Halfon wants two things. He wants an independent commission of inquiry convened that would include MPs with "expertise in civil liberties, the Internet, and commerce" to suggest a new legal framework that would provide a means of redress, perhaps through an Internet bill of rights. What he envisions is something that polices the behavior of Internet companies the way the British Medical Association or the Law Society provides voluntary self-regulation for their fields. In cases of infringement, fines, perhaps.

In the ensuing discussion many other issues were raised. Huppert mentioned "chilling" (Labour) government surveillance, and hoped that portions of the Digital Economy Act might be repealed. Huppert has also been asking Parliamentary Questions about the is-it-still-dead? Interception Modernization Programme; he is still checking on the careful language of the replies. (Asked about it this week, the Home Office told me they can't speculate in advance about the details, which will be provided "in due course"; that what is envisioned is a "program of work on our communications abilities"; that it will be communications service providers, probably as defined in RIPA Section 2(1), storing data, not a government database; and that the legislation to safeguard against misuse will probably, but not certainly, be a statutory instrument.)

David Davis (Con-Haltemprice and Howden) wasn't too happy even with the notion of decentralized data held by CSPs, saying these would become a "target for fraudsters, hackers and terrorists". Damien Hinds (Con-East Hampshire) dissected Google's business model (including £5.5 million of taxpayers' money the UK government spent on pay-per-click advertising in 2009).

Perhaps the most significant thing about this debate is the huge rise in the level of knowledge. Many took pains to say how much they value the Internet and love Google's services. This group know - and care - about the Internet because they use it, unlike 1995, when an MP was about as likely to read his own email as he was to shoot his own dog.

Not that I agreed with all of them. Don Foster (LibDem-Bath) and Mike Weatherley (Con-Hove) were exercised about illegal file-sharing (Foster and Huppert agreed to disagree about the DEA, and Damian Collins (Con-Folkestone and Hythe) complained that Google makes money from free access to unauthorized copies). Nadine Dorries (Con-Mid Bedfordshire) wanted regulation to protect young people against suicide sites.

But still. Until recently, Parliament's definition of privacy was celebrities' need for protection from intrusive journalists. This discussion of the privacy of individuals is an extraordinary change. Pressure groups like PI, Open Rights Group, and No2ID helped, but there's also a groundswell of constituents' complaints. Mark Lancaster (Con-Milton Keynes North) noted that a women's refuge at a secret location could not get Google to respond to its request for removal and that the town of Broughton formed a human chain to block the StreetView car. Even the attending opposition MP, Ian Lucas (Lab-Wrexham), favored the commission idea, though he still had hopes for self-regulation.

As for next steps, Ed Vaizey (Con-Wantage and Didcot), the Minister for Communication, Culture, and the Creative Industries, said he planned to convene a meeting with Google and other Internet companies. People should have a means of redress and somewhere to turn for mediation. For Halfon that's still not enough. People should have a choice in the first place.

To be continued...

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 23, 2010

An affair to remember

Politicians change; policies remain the same. Or if they don't, they return like the monsters in horror movies that end with the epigraph, "It's still out there..."

Cut to 1994, my first outing to the Computers, Freedom, and Privacy conference. I saw passionate discussions about the right to strong cryptography. The counterargument from government and law enforcement and security service types was that yes, strong cryptography was a fine and excellent thing at protecting communications from prying eyes, and for that very reason we needed key escrow to ensure that bad people couldn't say evil things to each other in perfect secrecy. The listing of organized crime, terrorists, drug dealers, and pedophiles as the reasons why it was vital to ensure access to cleartext became so routine that physicist Timothy May dubbed them "The Four Horsemen of the Infocalypse". Cypherpunks opposed restrictions on the use and distribution of strong crypto; government types wanted at the very least a requirement that copies of secret cryptographic keys be provided and held in escrow against the need to decrypt in case of an investigation. The US government went so far as to propose a technology of its own, complete with back door, called the Clipper chip.

Eventually, the Clipper chip was cracked by Matt Blaze, the needs of electronic commerce won out over the paranoia of the military, and restrictions on the use and export of strong crypto were removed.

Cut to 2000 and the run-up to the passage of the UK's Regulation of Investigatory Powers Act. Same Four Horsemen, same arguments. Eventually RIPA passed with the requirement that individuals disclose their cryptographic keys - but without key escrow. Note that it's just in the last couple of months that someone - a teenager - has gone to jail in the UK for the first time for refusing to disclose their key.

It is not just hype by security services seeking to evade government budget cuts to say that we now have organized cybercrime. Stuxnet rightly has scared a lot of people into recognizing the vulnerabilities of our infrastructure. And clearly we've had terrorist attacks. What we haven't had is a clear demonstration by law enforcement that encrypted communications have impeded the investigation.

A second and related strand of argument holds that communications data - that is traffic data such as email headers and Web addresses - must be retained and stored for some lengthy period of time, again to assist law enforcement in case an investigation is needed. As the Foundation for Information Policy Research and Privacy International have consistently argued for more than ten years, such traffic data is extremely revealing. Yes, that's why law enforcement wants it; but it's also why the American Library Association has consistently opposed handing over library records. Traffic data doesn't just reveal who we talk to and care about; it also reveals what we think about. And because such information is of necessity stored without context, it can also be misleading. If you already think I'm a suspicious person, the fact that I've been reading proof-of-concept papers about future malware attacks sounds like I might be a danger to cybersociety. If you know I'm a journalist specializing in technology matters, that doesn't sound like so much of a threat.

And so to this week. The former head of the Department of Homeland Security, Michael Chertoff, speaking at the RSA Security Conference, compared today's threat of cyberattack to nuclear proliferation. The US's Secure Flight program is coming into effect, requiring airline passengers to provide personal data for the US to check 72 hours in advance (where possible). Both the US and UK security services are proposing the installation of deep packet inspection equipment at ISPs. And language in the UK government's Strategic Defence and Security Review (PDF) has led many to believe that what's planned is the revival of the we-thought-it-was-dead Interception Modernisation Programme.

Over at Light Blue Touchpaper, Ross Anderson links many of these trends and asks if we will see a resumption of the crypto wars of the mid-1990s. I hope not; I've listened to enough quivering passion over mathematics to last an Internet lifetime.

But as he says it's hard to see one without the other. On the face of it, because the data "they" want to retain is traffic data and not content, encryption might seem irrelevant. But a number of trends are pushing people toward greater use of encryption. First and foremost is the risk of interception; many people prefer (rightly) to use secured https, SSH, or VPN connections when they're working over public wi-fi networks. Others secure their connections precisely to keep their ISP from being able to analyze their traffic. If data retention and deep packet inspection become commonplace, so will encrypted connections.

And at that point, as Anderson points out, the focus will return to long-defeated ideas like key escrow and restrictions on the use of encryption. The thought of such a revival is depressing; implementing any of them would be such a regressive step. If we're going to spend billions of pounds on the Internet infrastructure - in the UK, in the US, anywhere else - it should be spent on enhancing robustness, reliability, security, and speed, not building the technological infrastructure to enable secret, warrantless wiretapping.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 15, 2010

The elected dictatorship

I wish I had a nickel for every time I had the following conversation with some British interlocutor in the 1970s and 1980s:

BI: You should never have gotten rid of Nixon.

wg: He was a crook.

BI: They're all crooks. He was the best foreign policy president you ever had.

As if it were somehow touchingly naïve to expect that politicians should be held to standards of behaviour in office. (Look, I don't care if they have extramarital affairs; I care if they break the law.)

It is, however, arguable that the key element of my BIs' disapproval was that Americans had the poor judgment and bad taste to broadcast the Watergate hearings live on television. (Kids, this was 1972. There was no C-Span then.) If Watergate had happened in the UK, it's highly likely no one would ever have heard about it until 50 or however many years later the Public Records Office opened the archives.

Around the time I founded The Skeptic, I became aware of the significant cultural difference in how people behave in the UK versus the US when they are unhappy about something. Britons write to their MP. Americans...make trouble. They may write letters, but they are equally likely to found an organization and create a campaign. This do-it-yourself ethic is completely logical in a relatively young country where democracy is still taking shape.

Britain, as an older - let's be polite and call it mature - country, operates instead on a sort of "gentlemen's agreement" ethos (vestiges of which survive in the US Constitution, to be sure). You can get a surprising amount done - if you know the right people. That system works perfectly for the in-group, and so to effect change you either have to become one of them (which dissipates your original desire for change) or gate-crash the party. Sometimes, it takes an American...

This was Heather Brooke's introduction to English society. The daughter of British parents and the wife of a British citizen, burned out from years of investigative reporting on murders and other types of mayhem in the American South, she took up residence in Bethnal Green with her husband. And became bewildered when repeated complaints to the council and police about local crime produced no response. Stonewalled, she turned to writing her book Your Right to Know, which led her to make her first inquiries about viewing MPs' expenses. The rest is much-aired scandal.

In her latest book, The Silent State, Brooke examines the many ways that British institutions are structured to lock out the public. The most startling revelation: things are getting worse, particularly in the courts, where the newer buildings squeeze public and press into cramped, uncomfortable spaces in a way the older buildings never did. Certainly, the airport-style security that's now required for entry into Parliament buildings sends the message that the public are both unwelcome and not to be trusted (getting into Thursday's apComms meeting required standing outside in the chill and damp for 15 minutes while staff inspected and photographed one person at a time).

Brooke scrutinizes government, judiciary, police, and data-producing agencies such as the Ordnance Survey, and each time finds the same pattern: responsibility for actions cloaked by anonymity; limited access to information (either because the information isn't available or because it's too expensive to obtain); arrogant disregard for citizens' rights. And all aided by feel-good, ass-covering PR and the loss of independent local press to challenge it. In a democracy, she argues, it should be taken for granted that citizens have the right to get an answer when they ask how many violent attacks are taking place on their local streets, to take notes during court proceedings or Parliamentary sessions, or to access and use data whose collection they paid for. That many MPs seem to think of themselves as members of a private club rather than public servants was clearly shown by the five years of stonewalling Brooke negotiated in trying to get a look at their expenses.

In reading the book, I had a sudden sense of why electronic voting appeals to these people. It is yet another mechanism for turning what was an open system that anyone could view and audit - it doesn't take an advanced degree to be able to count pieces of paper - into one whose inner workings can effectively be kept secret. That its inner workings are also not understandable to MPs themselves is apparently a price they're willing to pay in return for removing much of the public's ability to challenge counts and demand answers. Secrecy is a habit of mind that spreads like fungus.

We talk a lot about rolling back newer initiatives like the many databases of Blair's and Brown's government, data retention, or the proliferation of CCTV cameras. But while we're trying to keep citizens from being run down by the surveillance state we should also be examining the way government organizes its operations and block the build-out of further secrecy. This is a harder and more subtle thing to do, but it could make the lives of the next generation of campaigners easier.

At least one thing has changed in the last 30 years, though: people's attitudes. In 2009, when the scandal over MPs' expenses broke, you didn't hear much about how other qualities meant we should forgive MPs. Britain wanted *blood*.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 24, 2010

Lost in a Haystack

In the late 1990s you could always tell when a newspaper had just gotten online because it would run a story about the Good Times virus.

Pause for historical detail: the Good Times virus (and its many variants) was an email hoax. An email message with the subject heading "Good Times" or, later, "Join the Crew", or "Penpal Greetings", warned recipients that opening email messages with that header would damage their computers or delete the contents of their hard drives. Some versions cited Microsoft, the FCC, or some other authority. The messages also advised recipients to forward the message to all their friends. The mass forwarding and subsequent complaints were the payload.

The point, in any case, is that the Good Times virus was the first example of mass social engineering that spread by exploiting not particularly clever psychology and a specific kind of technical ignorance. The newspaper staffers of the day were very much ordinary new users in this regard, and they would run the story thinking they were serving their readers. To their own embarrassment, of course. You'd usually see a retraction a week or two later.

Austin Heap, the progenitor of Haystack, software he claimed was devised to protect the online civil liberties of Iranian dissidents, seems more likely to have failed to understand what he was doing than to have been conducting an elaborate hoax. Either way, Haystack represents a significant leap upward in successfully taking mainstream, highly respected publications for a technical ride. Evgeny Morozov's detailed media critique underestimates the impact of the recession and staff cuts on an already endangered industry. We will likely see many more mess-equals-technology-plus-journalism stories because so few technology specialists remain in the post-recession mainstream media.

I first heard Danny O'Brien's doubts about Haystack in June, and his chief concern was simple and easily understood: no one was able to get a copy of the software to test it for flaws. For anyone who knows anything about cryptography or security, that ought to have been damning right out of the gate. The lack of such detail is why experienced technology journalists, including Bruce Schneier, generally avoided commenting on it. There is a simple principle at work here: the *only* reason to trust technology that claims to protect its users' privacy and/or security is that it has been thoroughly peer-reviewed - banged on relentlessly by the brightest and best and they have failed to find holes.

As a counter-example, let's take Phil Zimmermann's PGP, email encryption software that really has protected the lives and identities of far-flung dissidents. In 1991, when PGP first escaped onto the Net, interest in cryptography was still limited to a relatively small, though very passionate, group of people. The very first thing Zimmermann wrote in the documentation was this: why should you trust this product? Just in case readers didn't understand the importance of that question, Zimmermann elaborated, explaining how fiendishly difficult it is to write encryption software that can withstand prolonged and deliberate attacks. He was very careful not to claim that his software offered perfect security, saying only that he had chosen the best algorithms he could from the open literature. He also distributed the source code freely for review by all and sundry (who have to this day failed to find substantive weaknesses). He concludes: "Anyone who thinks they have devised an unbreakable encryption scheme either is an incredibly rare genius or is naive and inexperienced." Even the software's name played down its capabilities: Pretty Good Privacy.

When I wrote about PGP in 1993, PGP was already changing the world by up-ending international cryptography regulations, blocking mooted US legislation that would have banned the domestic use of strong cryptography, and defying patent claims. But no one, not even the most passionate cypherpunks, claimed the two-year-old software was the perfect, the only, or even the best answer to the problem of protecting privacy in the digital world. Instead, PGP was part of a wider argument taking shape in many countries over the risks and rewards of allowing civilians to have secure communications.

Now to the claims made for Haystack in its FAQ:

However, even if our methods were compromised, our users' communications would be secure. We use state-of-the-art elliptic curve cryptography to ensure that these communications cannot be read. This cryptography is strong enough that the NSA trusts it to secure top-secret data, and we consider our users' privacy to be just as important. Cryptographers refer to this property as perfect forward secrecy.

Without proper and open testing of the entire system - peer review - they could not possibly know this. The strongest cryptographic algorithm is only as good as its implementation. And even then, as Clive Robertson writes in Financial Cryptography, technology is unlikely to be a complete solution.
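
A brief aside on that last borrowed phrase, since it is doing a lot of work in the FAQ: forward secrecy is a property of protocol design - throwaway keys generated per session and then discarded - not a consequence of choosing a strong curve. Here is a minimal sketch of the idea using the widely available Python cryptography package; it is purely illustrative and, obviously, tells us nothing about Haystack's unpublished internals:

    # A minimal sketch of what "perfect forward secrecy" actually denotes:
    # fresh ephemeral keys per session, so that compromise of long-term keys
    # does not expose past traffic. This is a property of the protocol design,
    # not of the strength of the underlying curve. The session framing here
    # is illustrative only.

    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    def new_session_key() -> bytes:
        """Each session generates throwaway key pairs and derives a shared key."""
        alice_eph = X25519PrivateKey.generate()   # discarded after the session
        bob_eph = X25519PrivateKey.generate()
        shared = alice_eph.exchange(bob_eph.public_key())
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"session").derive(shared)

    # Two sessions yield unrelated keys; once the ephemeral private keys are
    # thrown away, neither session's traffic can be recovered later.
    assert new_session_key() != new_session_key()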

What a difference a sexy news hook makes. In 1993, the Clinton Administration's response to PGP was an FBI investigation that dogged Zimmermann for two years; in 2010, Hillary Clinton's State Department fast-tracked Haystack through the licensing requirements. Why such a happy embrace of Haystack rather than existing privacy technologies such as Freenet, Tor, or other anonymous remailers and proxies remains a question for the reader.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 3, 2010

Beyond the zipline

When Aaron Sorkin (The West Wing, Sports Night) was signed to write the screenplay for a movie about Facebook, I think the general reaction was one of more or less bafflement. Sorkin has a great track record, sure, but how do you make a movie about a Web site, even if it's a social network? What are you going to show? People typing to each other?

Now that the movie is closer to coming out (October 1 in the US), we're beginning to see sneak peek trailers, and we can tell a lot more from the draft screenplay that's been floating around the Net. The copy I found is dated March 2009, and you can immediately tell it's the real thing: quality dialogue and construction, and the feel of real screenwriting expertise. Turns out, the way you write a screenplay about Facebook is to read the books, primarily the novelistic, not-so-admired Accidental Billionaires by Ben Mezrich, along with other published material, and look for the most dramatic bit of the story: the lawsuits eventually launched by the characters you're portraying. Through those, as a framing device, you can tell the story of the little social network that exploded. Or rather, Sorkin can. The script is a compelling read. (It's not clear to me that it can be improved by actually filming it.)

Judging from other commentaries, everyone seems to agree it's genuine, though there's no telling where in the production process that script was, how many later drafts there were, or how much it changed in filming and post-production. There's also no telling who leaked it or why: if it was intentional it was a brilliant marketing move, since you could hardly ask for more word-of-mouth buzz.

If anyone wanted to design a moral lesson for the guy who keeps saying privacy is dead, it might be this: turn out his deepest secrets to portray him as a jerk who steals other people's ideas and codes them into the basis for a billion-dollar company, all because he wants to stand out at Harvard and, most important, win the admiration of the girl who dumped him. Think the lonely pathos of the socially ostracized, often overlooked Jenny Humphrey in Gossip Girl crossed with the arrogant, obsessive intelligence of Sheldon Cooper in The Big Bang Theory. (Two characters I actually like, but they shouldn't breed.)

Neither the book nor the script is that: they're about as factual as 1978's The Buddy Holly Story or any other Hollywood biopic. Mezrich, who likes to write books about young guys who get rich fast (you can see why; he's gotten several bestsellers out of this approach), had no help from Facebook founder and CEO Mark Zuckerberg. What dialogue there is has been "re-created", and sources other than disaffected co-founder Eduardo Saverin are anonymous. Lacking sourcing (although of course the court testimony is public information), it's unclear how fictional the dramatization is. I'd have no problem with that if the characters weren't real people identified by their real names.

Places, too. Probably the real-life person/place/thing that comes off worst is Harvard, which in the book especially is practically a caricature of the way popular culture likes to depict it: filled with the rich, the dysfunctional, and the terminally arrogant who vie to join secretive, elite clubs that force them to take part in unsavoury hazing rituals. So much so that it was almost a surprise to read in Wikipedia that Mezrich actually went to Harvard.

Journalists and privacy advocates have written extensively about the consequences for today's teens of having their adolescent stupidities recorded permanently on Facebook or elsewhere, but Zuckerberg is already living with having his frat-boy early days of 2004 documented and endlessly repeated. Of course one way to avoid having stupid teenaged shenanigans reported is not to engage in them, but let's face it: how many of us don't have something in our pasts we'd just as soon keep out of the public eye? And if you're that rich that young, you have more opportunities than most people to be a jerk.

But if the only stories people can come up with about Zuckerberg date from before he turned 21, two thoughts occur. First, that Zuckerberg has as much right as anybody to grow up into a mature human being whose early bad judgement should be forgiven. To cite two examples: the tennis player Andre Agassi was an obnoxious little snert at 18 and a statesman of the game at 30; at 30 Bill Gates was criticized for not doing enough for charity but now at 54 is one of the world's most generous philanthropists. It is, therefore, somewhat hypocritical to demand that Zuckerberg protect today's teens from their own online idiocy while constantly republishing his follies.

Second, that outsized, hyperspeed business success might actually have forced him to grow up rather quickly. Let's face it, it's hard to make an interesting movie out of the hard work of coding and building a company.

And a third: by joining the 500 million and counting who are using Facebook we are collectively giving Zuckerberg enough money not to care either way.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

August 27, 2010

Trust the data, not the database

"We're advising people to opt out," said the GP, speaking of the Summary Care Records that are beginning to be uploaded to what is supposed to be eventually a nationwide database used by the NHS. Her reasoning goes this way. If you don't upload your data now you can always upload it later. If you do upload it now - or sit passively by while the National Health Service gets going on your particular area - and live to regret it you won't be able to get the data back out again.

You can find the form here, along with a veiled hint that you'll be missing out on something if you do opt out - like all those great offers of products and services companies always tell you you'll get if you sign up for their advertising. The Big Opt-Out Web site has other ideas.

The newish UK government's abrupt dismissal of the darling databases of last year has not dented the NHS's slightly confusing plans to put summary care records on a national system that will move control over patient data from your GP, who you probably trust to some degree, to...well, there's the big question.

In briefings for Parliamentarians conducted by the Open Rights Group in 2009, Emma Byrne, a researcher at University College London who has studied various aspects of healthcare technology policy, commented that the SCR was not designed with any particular use case in mind. Basic questions that an ordinary person asks before every technology purchase - who needs it? for what? under what circumstances? to solve what problem? - do not have clear answers.

"Any clinician understands the benefits of being able to search a database rather than piles of paper records, but we have to do it in the right way," Fleur Fisher, the former head of ethics, science, and information for the British Medical Association said at those same briefings. Columbia University researcher Steve Bellovin, among others, has been trying to figure out what that right way might look like.

As comforting as it sounds to say that the emergency care team looking after you will be able to look up your SCR and find out that, for example, you are allergic to penicillin and peanuts, in practice that's not how stuff happens - and isn't even how stuff *should* happen. Emergency care staff look at the patient. If you're in a coma, you want the staff to run the complete set of tests, not look up in a database, see you're a diabetic and assume it's a blood sugar problem. In an emergency, you want people to do what the data tells them, not what the database tells them.

Databases have errors, we know this. (Just last week, a database helpfully moved the town I live in from Surrey to Middlesex, for reasons best known to itself. To fix it, I must write them a letter and provide documentation.) Typing and cross-matching blood drawn by you from the patient in front of you is much more likely to have you transfusing the right type of blood into the right patient.

But if the SCR isn't likely to be so much used by the emergency staff we're all told would? might? find it helpful, it still opens up much broader possibilities of abuse. It's this part of the system that the GP above was complaining about: you cannot tell who will have access or under what circumstances.

GPs do, in a sense, have a horse in this race, in that if patient data moves out of their control they have lost an important element of their function as gatekeepers. But given everything we know about how and why large government IT projects fail, surely the best approach is small, local projects that can be scaled up once they're shown to be functional and valuable. And GPs are the people at the front lines who will be the first to feel the effects of a loss of patient trust.

A similar concern has kept me from joining a study whose goals I support, intended to determine if there is a link between mobile phone use and brain cancer. The study is conducted by an ultra-respectable London university; they got my name and address from my mobile network operator. But their letter notes that participation means giving them unlimited access to my medical records for the next 25 years. I'm 56, about the age of the earliest databases, and I don't know who I'll be in 25 years. Technology is changing faster than I am. What does this decision mean?

There's no telling. Had they said I was giving them permission for five years and then would be asked to renew, I'd feel differently about it. Similarly, I'd be more likely to agree had they said that under certain conditions (being diagnosed with cancer, dying, developing brain disease) my GP would seek permission to release my records to them. But I don't like writing people blank checks, especially with so many unknowns over such a long period of time. The SCR is a blank check.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

August 20, 2010

Naming conventions

Eric Schmidt, the CEO of Google, is not a stupid person, although sometimes he plays one for media consumption. At least, that's how it seemed this week, when the Wall Street Journal reported that he had predicted, apparently in all seriousness, that the accumulation of data online may result in the general right for young people to change their names on reaching adulthood in order to escape the embarrassments of their earlier lives.

As Danah Boyd commented in response, it is to laugh.

For one thing, every trend in national and international law is going toward greater, permanent trackability. I know the UK is dumping the ID card and many US states are stalling on Real ID, but try opening a new bank account in the US or Europe, especially if you're a newly arrived foreigner. It's true that it's not so long ago - 20 years, perhaps - that people, especially in California, did change their names at the drop of an acid tablet. I'm fairly sure, for example, that the woman I once knew as Dancingtree Moonwater was not named that by her parents. But those days are gone with the anti-money laundering regulations, the anti-terrorist laws, and airport security.

Then there's timing: when is he imagining the adulthood moment to take place? When they're 17 and applying to college and need to cite their past records of good works, community involvement, and academic excellence? When they're 21 and graduating from college and applying for jobs and need to cite their past records of academic excellence, good works, and community involvement? I don't know about you, but I suspect that an admissions officer/prospective employer would be deeply suspicious of a kid coming of age today who had, apparently, no online history at all. Even if that child is a Mormon.

For another, changing your name doesn't change your identity (even if the change is because you got married). Investigators who track down people who've dropped out of their lives and fled to distant parts to start new ones often do so by, among other things, following their hobbies. You can leave your spouse, abandon your children, change jobs, and move to a distant location - but it isn't so easy to shake a passion for fly-fishing or 1957 Chevys. The right to reinvent yourself, as Action on Rights for Children's Terri Dowty pointed out during the campaign against the child-tracking database ContactPoint, is an important one. But that means letting minor infractions and youthful indiscretions fade into the mists of time, not to be pulled out and laughed at until, say, 30 years hence, rather than being recorded in a database that thinks it "knows" you.

I think Schmidt knows all this perfectly well. And I think if such an infrastructure - turn 16, create a new identity - were ever to be implemented the first and most significant beneficiary would be...Google. I would expect most people's search engine use to provide as individual a fingerprint as, well, fingerprints. (This is probably less true for journalists, who research something different every week and therefore display the database equivalent of multiple personality disorder.)

Clearly, if the solution to young people posting silly stuff online where posterity can bite them on the ass is a change of name, the only way to do it is to assign kids online-only personas at birth that can be retired when they reach an age of reason. But in such a scenario, some kids would wind up wanting to adopt their online personas as their real ones because their online reputation has become too important in their lives. In the knowledge economy, as plenty of others have pointed out, reputation is everything.

This is, of course, not a new problem. As usual. When, in 1995, DejaNews (bought by Google some years back to form the basis of the Google Groups archive) was created, it turned what had been ephemeral Usenet postings into a permanent archive. If you think people post stupid stuff on Facebook now, when they know their friends and families are watching, you should have seen the dumb stuff they posted on Usenet when they thought they were in the online equivalent of Benidorm, where no one knew them and there were no consequences. Many of those Usenet posters were students. But I also recall the newly appointed CEO of a public company who went around the WELL deleting all his old messages. Didn't mean there weren't copies...or memories.

There is a genuine issue here, though, and one that a very smart friend with a 12-year-old daughter worries about regularly: how do you, as a parent, guide your child safely through the complexities of the online world and ensure that your child has the best possible options for her future while still allowing her to function socially with her peers? Keeping her offline is not an answer. Neither are facile statements from self-interested CEOs who, insulated by great wealth and technological leadership, prefer to pretend to themselves that these issues have already been decided in their favor.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 23, 2010

Information Commissioner, where is thy sting?

Does anyone really know what their computers are doing? Lauren Weinstein asked recently in a different context.

I certainly don't. Mostly, I know what they're not doing, and then only when it inconveniences me. Don't most of us have an elaborate set of workarounds for things that are just broken enough not to work but not so broken that we have to fix them?

But companies - particularly companies who have made their fortunes by being clever with technology - are supposed to do better than that. And so we come to the outbreak of legal actions against Google for collecting wifi data - not only wireless network names (SSIDs) and information identifying individual computer devices (MAC addresses) while it was out photographing every house for StreetView, but also payload data. The company says this sniffing was accidental. Privacy International's Simon Davies says that no engineer he's spoken to buys this: either the company collected it deliberately or the company's internal management systems are completely broken.

This was the topic of Tuesday's Big Brother Watch event. We actually had a Googler, Sarah Hunter, head of UK public policy, on the premises taking notes (as far as I could discern she did not have a camera mounted on her head, which seems like a missed opportunity), but the court actions in progress against the company meant that she was under strict orders from legal not to say anything much.

You can't really blame her. The list of government authorities investigating Google over the wifi data now includes: 38 US states and the District of Columbia, led by Connecticut; Germany; France; and Australia. Britain? Not so much.

"I find it amazing that Google did it without permission and seemed to get away with it without anyone causing a fuss," said Rob Halfon MP, who took time between votes on Tuesday to deliver a call to action. "There has to be a limit to what these companies do," he said, calling Street View "a privatized version of Big Brother." Halfon has tabled an early day motion on surveillance and the Internet.

There are two separate issues here. The first is Street View itself, which many countries have been unhappy about.

I was sympathetic when Google first launched Street View in the US and ran into privacy issues. It was, I thought and think, an innocently geeky kind of mistake to make: a "Look! This is so COOL!" kind of moment. In the flush of excitement, I reasoned, it was probably easy to lose sight of the fact that people might object to having their living room windows peered into in a drive-by shoot and the resulting images posted online. Who would stop to ask the opinions of that object of typical geek contempt, the inept, confused user - "my mother"?

By the time Street View arrived in Europe, however, there was no excuse. That the product has sparked public anger with every launch, along with other controversial actions (think Google Books), suggests that the company's standard MO is that of the teenager who deliberately avoids asking her parents' permission because she knows it will be denied.

It is, I think, reasonable to argue, as Google does, that the company is taking pictures of public areas, something that is not illegal in the US although it has various restrictions in other places. The keys, I think, are first of all the scale of the operation, and second the public display part of the equation, an element that is restricted in some European countries. As Halfon said, "Only big companies have the financial muscle to do this kind of mapping."

The second issue, the wifi data, is much more clear-cut. It seems unquestionable that, accidental or not - and in fact we would not know the company had sniffed this data if it hadn't told us itself - laws have been broken in a number of countries. In the UK, it seems likely that the action was illegal under the Regulation of Investigatory Powers Act (2000), and that the Computer Misuse Act would also apply. Google's founders and CEO, Sergey Brin, Larry Page, and Eric Schmidt, seem to take the view that it's no harm, no foul.

But that's not the point, which is why Privacy International, having been told the Information Commissioner was not interested in investigating, went to the Metropolitan Police.

"There has to be a point where Google is brought to account because of its systemic failure," he said. "If all the criminal investigation does is to sensitise Google, then internally there may be some evolution."

The key, however, for the UK, is the unwillingness of the Information Commissioner to get involved. First, the ICO declined to restrict Street View. Then it refused to investigate the wifi issue and wanted the data destroyed, an action PI argued would mean destroying the evidence needed for a forensic investigation.

It was this failure that Davies and Alex Deane, director of Big Brother Watch, picked on.

"I find it peculiar that the British ICO was so reluctant to investigate Google when so many other ICOs were willing," Deane said. "The ICO was asleep on the job."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 25, 2010

New money

It seems that the Glastonbury Festival, which I tend to sniffily dismiss as a Woodstock wannabe, is to get rid of cash. I can understand their thinking: cash is expensive for the festival to transport, store, and guard and creates security problems for individual festival-goers, too. Mr Cashless himself, James Allan, will be pleased. Although, given his squirming reaction to being offered cash at a conference a few months ago, it's hard to believe he'd regard an outdoor festival as sufficiently hygienic to attend.

But here is the key bit:

As well as convenience and security issues, Barclaycard's Mr Mathieson said that information gathered from transactions could be valuable for future marketing. "For example if the system knows what time you went and bought a beer and at which bar, it can make a guess which band you were about to see," he said. "Then the organizers could send you information about upcoming tours. The opportunities are exciting."

Talk about creepy! Your £5 notes do not climb out of your wallet to chirp eagerly about what they'd like to be spent on.

One of the things we talked about in the history of cypherpunks session at CFP last week (the video recording is online) was whatever happened to digital cash, something often discussed in the early 1990s, when cryptography was the revolution. Proposed by David Chaum, and popularized in his influential 1992 Scientific American article, it was meant to be genuinely the equivalent of anonymous cash.
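The anonymity in Chaum's design came from blind signatures: the bank signs a withdrawal token without ever seeing it, so it can later verify the coin without being able to link it to the customer who withdrew it. A minimal sketch of the RSA blinding arithmetic in Python (toy key sizes, illustrative only - not the full ecash protocol):

```python
# Toy RSA blind signature - the primitive underlying Chaum's digital cash.
# Illustrative only: tiny key, no padding, no double-spending detection.

p, q = 61, 53                        # bank's secret primes
n = p * q                            # public modulus
e = 17                               # public verification exponent
d = pow(e, -1, (p - 1) * (q - 1))    # bank's private signing exponent

m = 99                               # the coin's serial number (customer's secret)
r = 19                               # random blinding factor, coprime to n

# Customer blinds the coin and sends it to the bank.
blinded = (m * pow(r, e, n)) % n

# Bank signs blindly - it never sees m - and debits the customer's account.
signed_blinded = pow(blinded, d, n)

# Customer unblinds; the result is a valid signature on m.
signature = (signed_blinded * pow(r, -1, n)) % n

# Anyone can check the coin against the bank's public key (e, n),
# but the bank cannot connect it to the withdrawal it signed.
assert pow(signature, e, n) == m
```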

Chaum's scheme was typically brilliant but, typically, faced a hard road to acceptance (he has since come up with a clever cryptographic scheme to secure electronic voting). Getting it widely deployed required two things: the cooperation of banks and the willingness of consumers to transfer what they see as "real money" into an unfamiliar currency with uncertain backing. Consumers have generally balked at this kind of thing; the early days of the Net saw a number of attempts at new forms of payment, and the only ones that have succeeded are those that, like Paypal, build on existing and familiar currencies and structures. You could argue that frequent flyer miles are a currency - and they are - but they generally come free with purchases; when people do buy them with what they perceive as "real" money it's to acquire a tangible near-term benefit such as a cheap ticket, elite status for their next flight, or a free upgrade.

Chaum understood correctly, however, that the future would hold some form of digital cash, and the anonymous version he was proposing was a deliberately chosen alternative to the future he saw unfolding as computerized transactions took hold.

"If the trend toward identifier-based smart cards continues, personal privacy will be increasingly eroded," he wrote in 1992. And so it has proved: credit cards, debit cards, mobile phone and online payments are all designed to make every transaction traceable.

"The banking industry has a vested interest in not providing anonymous payment mechanisms," said Lance Cottrell at CFP, "because they really like to know as much information as they can about you." Combine that with money-laundering laws and increased government surveillance, and anonymous digital cash seems pretty well dead. The one US bank that tried offering DigiCash, the St Louis, Missouri-based Mark Twain bank, dropped the offering in September 1998 because of low take-up; shortly afterwards DigiCash went into liquidation.

Before heading out to CFP, my bedtime reading was Dave Birch's Digital Money Reader 2010, a compilation of all his digital money blog postings, with attached comments, from the past year. Birch is seriously at war with physical cash, which he seems to perceive as the equivalent of an unfair tax on people like him, who would rather do everything electronically. Because the costs of cash aren't visible to consumers at point of use, he argues, people are taught to think of it as free, where electronic transactions have clearly delineated costs. If people were charged the true cost of paying with cash, surely the percentage of cash payments - still around 80 percent in Europe - would begin to drop precipitously.

But it seems clear that the hidden cost of electronic payments as they are presently constituted is handing over tracking data. A truly anonymous Oyster card costs nothing extra in financial terms, but you pay in convenience: you must put down a £5 deposit for a prepaid card at a tube station, and you must always remember to top it up with notes at station machines. Similarly, you can have an anonymous Paypal account in the sense that you can receive funds via a throwaway email address and use them only to buy digital goods that do not require a delivery address. But after the first $500 or so you'll have to set up another account or provide Paypal with verifiable banking information. Because we have so far not come up with a good way to estimate the value of such personal data, we have no way to calculate the true cost of trackable electronic payments.

Still, it occurs to me writing this that if cash ever does die under the ministrations of Birch and his friends, the event will open up new possibilities for struggling post offices everywhere. Stamps, permanently redeemable for at least their face value, could become the new cash.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 18, 2010

Things I learned at this year's CFP

- There is a bill in front of Congress to outlaw the sale of anonymous prepaid SIMs. The goal seems to be some kind of fraud and crime prevention. But, as Ed Hasbrouck points out, the principal people who are likely to be affected are foreign tourists and the Web sites that sell prepaid SIMs to them.

- Robots are getting near enough in researchers' minds for them to be spending significant amounts of time considering the legal and ethical consequences in real life - not in Asimov's fictional world, where you could program in three safety laws and your job was done. Ryan Calo points us at the work of Stanford student Victoria Groom on human-robot interaction. Her dissertation research, not yet on the site, found that humans allocate responsibility for success and failure proportionately according to how anthropomorphic the robot is.

- More than 24 percent of tweets - and rising sharply - are sent by automated accounts, according to Miranda Mowbray at HP labs. Her survey found all sorts of strange bots: things that constantly update the time, send stock quotes, tell jokes, the tea bot that retweets every mention of tea...

- Google's Kent Walker, the 1997 CFP chair, believes that censorship is as big a threat to democracy as terrorism, and says that open architectures and free expression are good for democracy - and coincidentally also good for Google's business.

- Microsoft's chief privacy strategist, Peter Cullen, says companies must lead in privacy to lead in cloud computing. Not coincidentally, others at the conference noted that US companies are losing business to Europeans in cloud computing because EU law prohibits the export of personal data to the US, where data protection is insufficient.

- It is in fact possible to provide wireless that works at a technical conference. And good food!

- The Facebook Effect is changing the attitude of other companies about user privacy. Lauren Gelman, who helps new companies with privacy issues, noted that because start-ups all see Facebook's success and want to be the next 400 million-user environment, there was a strong temptation to emulate Facebook's behavior. Now, with the angry cries mounting from consumers, she has to spend less effort convincing them of the pushback they will get from consumers if they change their policies and defy their expectations. Even so, it's important to ensure that start-ups include privacy in their budgets so that it does not become an afterthought. In this respect, she makes me realize, privacy in 2010 is at the stage that usability was in the early 1990s.

- All new program launches come through the office of the director of Yahoo!'s business and human rights program, Ebele Okabi-Harris. "It's very easy for the press to focus on China and particular countries - for example, Australia last year, with national filtering," she said, "but for us as a company it's important to have a structure around this because it's not specific to any one region." It is, she added later, a "global problem".

- We should continue to be very worried about the database state because the ID cards repeal act continues the trend toward data sharing among government departments and agencies, according to Christina Zaba from No2ID.

- Information brokers and aggregators, operating behind the scenes, are amassing incredible amounts of detail about Americans, and it can require a great deal of work to remove one's information from these systems. The main customers of these systems are private investigators, debt collectors, media, law firms, and law enforcement. The Privacy Rights Clearinghouse sees many disturbing cases, as Beth Givens outlined, as does Pam Dixon's World Privacy Forum.

- I always knew - or thought I knew - that the word "robot" was not coined by Asimov but by Karel Capek for his play R.U.R. ("Rossum's Universal Robots"; coincidentally, I also know that playing a robot in that play was Michael Caine's first acting job). But Twitterers tell me that this isn't quite right. The word is derived from the Czech word "robota", "compulsory work for a feudal landlord". And it was actually coined by Capek's older brother, Josef.

- There will be new privacy threats emerging from automated vehicles, other robots, and voicemail transcription services, sooner rather than later.

- Studying the inner workings of an organization like the International Civil Aviation Organization is truly difficult because the time scales - ten years to get from technical proposals to mandated standard, which is when the public becomes aware of them - are a profound mismatch for the attention span of media and those who fund NGOs. Anyone who feels like funding an observer to represent civil society at ICAO should get in touch with Edward Hasbrouck.

- A lot of our cybersecurity problems could be solved by better technology.

- Lillie Coney has a great description of deceptive voting practices designed to disenfranchise the opposition: "It's game theory run amok!"

- We should not confuse insecure networks (as in vulnerable computers and flawed software) with unsecured networks (as in open wi-fi).

- Next year's conference chairs are EPIC's Lillie Coney and Jules Polonetsky. It will be in Washington, DC, probably the second or third week in June. Be there!

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 11, 2010

Bonfire of the last government's vanities

"We have no hesitation in making the national identity card scheme an unfortunate footnote in history. There it should remain - a reminder of a less happy time when the Government allowed hubris to trump civil liberties," the Home Secretary, Theresa May, told the House of Commons at the second reading of the Identity Documents Bill 2010, which will erase the 2006 act introducing ID cards and the National Identity Register. "This will not be a literal bonfire of the last Government's vanities, but it will none the less be deeply satisfying." Estimated saving: £86 million over the next four years.

But not so fast...

An "unfortunate footnote" sounds like the perfect scrapheap on which to drop the National Identity Register and its physical manifestation, ID cards, but if there's one thing we know about ID cards it's that, like the monster in horror movies, they're always "still out there".

In 2005, Lilian Edwards, then at the Centre for Research in Intellectual Property and Law at the University of Edinburgh, invited me to give a talk, Identifying Risks, on the history of ID cards, an idea inspired by a comment from Ross Anderson. The gist: after the wartime ID card was finally scrapped in 1952, attempts to bring it back were made, on average, about every two or three years. (Former cabinet minister Peter Lilley, speaking at Privacy International's 2002 conference, noted that every new IT minister put the same set of ID card proposals before the Cabinet.)

The most interesting thing about that history is that the justification for bringing in ID cards varied so much; typically, it drew on the latest horrifying public event. So, in 1974 it was the IRA bombings in Guildford and Birmingham. In 1988, football hooliganism and crime. In 1989, social security fraud. In 1993, illegal immigration, fraud, and terrorism.

Even within the run of just the 2006 card, the justification varied. The stated goals began with blocking benefit fraud, then moved on to include preventing terrorism and serious crime, stopping illegal immigration, and needing to comply with international standards that require biometric features in passports. It is this chameleon-like adaptation to the troubles of the day that makes ID cards so suspect as the solution to anything.

Immediately after the 9/11 attacks, Tony Blair rejected the idea of ID cards (which he had actively opposed in 1995, when John Major's government issued a green paper). But by mid-2002 a consultation paper had been published and by 2004 Blair was claiming that the civil liberties objections had vanished.

Once the 2006 ID card was introduced as a serious set of proposals in 2002, events unfolded much as Simon Davies predicted they would at that 2002 meeting. The government first clothed the ID card in user-friendly obfuscation: an entitlement card. The card's popularity in the polls, at first favourable (except, said David Blunkett, among a highly organised minority), slid inexorably as the gory details of its implementation and costs became public. Yet the (dear, departed) Labour government clung to the proposals despite admitting, from time to time, their utter irrelevance for preventing terrorism.

Part of the card's sliding popularity has been due to people's increased understanding of the costs and annoyance it would impose. Their apparent support for the card was for the goals of the card, not the card itself. Plus, since 2002 the climate has changed: the Iraq war is even less popular and even the 2005 "7/7" London attacks did not keep acceptance of the "we are at war" justification for increased surveillance from declining. And the economic climate since 2008 makes large expenditure on bureaucracy untenable.

Given the frequency with which the ID card has resurfaced in the past, it seems safe to say that the idea will reappear at some point, though likely not during this coalition government. The LibDems always opposed it; the Conservatives have been more inconsistent, but currently oppose large-scale public IT projects.

Depending how you look at it, ID cards either took 54 years to resurface (from their withdrawal in 1952 to the 2006 Identity Cards Act), or the much shorter time to the first proposals to reinstate them. Australia might be a better guide. In 1985, Bob Hawke made the "Australia card" a central plank of his government. He admitted defeat in 1987, after widespread opposition fueled by civil liberties groups. ID card proposals resurfaced in Australia in 2006, to be withdrawn again at the end of 2007. That's about 21 years - or a generation.

In 2010 Britain, it's just as important that much of the rest of the Labour government's IT edifice - such as the ContactPoint database, intended to track children throughout their school years - is also being scrapped. Left in place, it might have taught today's generation of children to perceive state tracking as normal. The other good news is that many of today's tireless campaigners against the 2006 ID card will continue to fight the encroachment of the database state. In 20 years - or sooner, if (God forbid) some catastrophe makes it politically acceptable - when or if an ID card comes back, they will still be young enough to fight it. And they will remember how.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series.

May 28, 2010

Privacy theater

On Wednesday, in response to widespread criticism and protest, Facebook finally changed its privacy settings to be genuinely more user-friendly - and for once, the settings actually are. It is now reasonably possible to tell at a glance which elements of the information you have on the system are visible and to what class of people. To be sure, the classes available - friends, friends of friends, and everyone - are still broad, but it is a definite improvement. It would be helpful if Facebook provided a button so you could see what your profile looks like to someone who is not on your friends list (although of course you can see this by logging out of Facebook and then searching for your profile). If you're curious just how much of your information is showing, you might want to try out Outbook.

Those changes, however, only tackle one element of a four-part problem.

1: User interface. Fine-grained controls are, as the company itself has said, difficult to present in a simple way. This is what the company changed this week and, as already noted, the new design is a big improvement. It can still be improved, and it's up to users and governments to keep pressure on the company to do so.

2: Business model. Underlying all of this, however, is the problem that Facebook still has to make money. To some extent this is our own fault: if we don't want to pay money to use the service - and it's pretty clear we don't - then it has to be paid for some other way. The only marketable asset Facebook has is its user data. Hence Andrew Brown's comment that users are Facebook's product; advertisers are its customers. As others have commented, traditional media companies also sell their audience to their advertisers; but there's a qualitative difference in that traditional media companies also create their own content, which gives them other revenue streams.

3: Changing the defaults. As this site's graphic representation makes clear, since 2005 the changes in Facebook's default privacy settings have all gone one way: towards greater openness. We know from decades of experience that defaults matter because so many computer users never change them. It's why Microsoft has had to defend itself against antitrust actions regarding bundling Internet Explorer and Windows Media Player into its operating system. On Facebook, users should have to make an explicit decision to make their information public - opt in, rather than opt out. That would also be more in line with the EU's Data Protection Directive.

4: Getting users to understand what they're disclosing. Back in the early 1990s, AT&T ran a series of TV ads in the US targeting a competitor's practice of asking its customers for the names of their friends and family for marketing purposes. "I don't want to give those out," the people in the ads were heard to say. Yet they freely disclose on Facebook every day exactly that sort of information. Caspar Bowden, as director of the Foundation for Information Policy Research, argued persuasively that traffic analysis - seeing who is talking to whom and with what frequency - is far more revealing than the actual contents of messages.
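To make the point concrete, here is a minimal illustration of the idea in Python (hypothetical data): traffic analysis needs no message content at all, only sender and recipient metadata, to show who talks to whom and how often.

```python
# Traffic analysis sketch: relationships emerge from metadata alone.
from collections import Counter

# Hypothetical metadata records: (sender, recipient) pairs, no content.
message_log = [
    ("alice", "bob"), ("alice", "bob"), ("bob", "alice"),
    ("alice", "carol"), ("dave", "bob"),
]

# Count how often each pair communicates, regardless of direction.
link_frequency = Counter(tuple(sorted(pair)) for pair in message_log)

for (a, b), count in link_frequency.most_common():
    print(f"{a} <-> {b}: {count} messages")
# The strongest tie (alice <-> bob) stands out without reading a single message.
```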

What makes today's social networks different from other messaging systems (besides their scale) is that typically those - bulletin boards, conferencing systems, CompuServe, AOL, Usenet, today's Web message boards - were and are organized around topics of interest: libel law reform, tennis, whatever. Even blogs, whose earliest audiences are usually friends, become more broadly successful because of the topics they cover and the quality of that coverage. In the early days, that structure was due to the fact that most people online were strangers meeting for the first time. These days, it allows those with minority interests to find each other. But in social media the organizing principle is the social connections of individual people whose tenure on the service begins, by and large, by knowing each other. This vastly simplifies traffic analysis.

A number of factors contributed to the success of Facebook. One was the privacy promises the company made (and has since revised). But another was certainly elements of dissatisfaction with the wider Net. I've heard Facebook described as an effort to reinvent the Net, and there's some truth to that in that it presents itself as a safer space. That image is why people feel comfortable posting pictures of their kids. But a key element in Facebook's success has, I think, also been the brokenness of email and, to a lesser degree, instant messaging. As these became overrun with spam, rather than grapple with that and other unwanted junk or the uncertainty of knowing which friend was using which incompatible IM service, many people gravitated to social networks as a way of keeping their inboxes as personal space.

Facebook is undoubtedly telling the truth when it says that the privacy complaints have, so far, made little difference to the size and engagement of its user base. It's extreme to say that Facebook victimizes its users, but it is true that the expectations of its active core of long-term users have been progressively betrayed. Facebook's users have no transparency about or control over what data Facebook shares with its advertisers. Making that visible would go a long way toward restoring users' trust.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

May 14, 2010

Bait and switch

If there's one subject Facebook's PR people probably wish its young founder and CEO, Mark Zuckerberg, had never discussed in public it's privacy, which he dismissed in January as no longer a social norm.

What made Zuckerberg's statement sound hypocritical - on top of arrogant, blinkered, self-interested, and callous - is the fact that he himself protects information he posts on Facebook. If he doesn't want his own family photographs searchable on Google, why does he assume that other people do?

What's equally revealing, though, is the comment he went on to make (quoted in that same piece) that he views it as really important "to keep a beginner's mind" in deciding what the company should do next. In other words, they ask themselves what decision they would make if they were starting Facebook now - and then they do that.

You can't get there from here.

Zuckerberg is almost certainly right that if he were setting up the company now he'd make everything public as a default setting - as Twitter, founded two years later, does. Of course he'd do things differently: he'd be operating post-Facebook. Most important, he'd be running a tiny company instead of a huge one. Size matters: you cannot make the same decisions that you would if you were a start-up when you have 400 million users, are the Web's largest host of photographs, and the biggest publisher of display ads. Facebook is discovering what Microsoft and Google also have: it isn't easy being big.

Being wholly open would, I'm sure, be a simpler situation both legally and in terms of user expectations, and I imagine it would be easier to program and develop. The difficulty is that he isn't starting the company now, and just as the seventh year of a marriage isn't the same as the first year of a marriage, he can't behave as if he is. Because, as in a marriage, Facebook has made promises to its users throughout the last six years, and you cannot single-handedly rewrite the contract without betraying them.

On Sky TV last night, I called Facebook's attitude to privacy a case of classic bait-and-switch. While I have no way of knowing if that was Zuckerberg's conscious intention when he first created Facebook in his Harvard dorm room at 19, that is nonetheless an accurate description of the situation. Facebook users - and the further you go back in the company's history the more true this is - shared their information because the company promised them privacy. Had the network been open from the start, people would likely have made different choices. Both a group of US senators and the EU's data protection working party understand this perfectly. It would be a mistake for Facebook's management to dismiss these complaints as the outdated concerns of a bunch of guys who aren't down with the modern world.

Part of Facebook's difficulty with privacy issues is, I'm sure, the kind of interface design problem computer companies have struggled with for decades. In published comments, the company has referred to the conflict between granularity and simplicity: people want detailed choices, but providing those makes the interface complex; simplifying the interface removes choice. I don't think this is an unsolvable problem, though it does require a new approach.

One thing I'd like Facebook to provide is a way of expiring data (which would solve a number of privacy issues) so that you could specify that anything posted on the site will be deleted after a certain amount of time has passed. Such a setup would also allow users to delete data posted before the beginning date of a new privacy regime. I'd also like to be able to export all my data in a format suitable for searching and archiving on my own system.
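To be clear about what I mean by expiring data, here is a rough sketch in Python (entirely hypothetical - not anything Facebook actually offers): each item carries an optional expiry timestamp, and a purge job drops anything past its date.

```python
# Hypothetical sketch of per-post expiry; not an actual Facebook feature or API.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class Post:
    author: str
    body: str
    posted_at: datetime
    expires_at: Optional[datetime] = None   # None means "keep forever"

def purge_expired(posts: List[Post], now: Optional[datetime] = None) -> List[Post]:
    """Keep only the posts that have not yet passed their expiry date."""
    now = now or datetime.now()
    return [p for p in posts if p.expires_at is None or p.expires_at > now]

# A user who wants everything gone after 90 days would post like this:
example = Post("example_user", "hello", datetime.now(),
               expires_at=datetime.now() + timedelta(days=90))
```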

Zuckerberg was a little bit right, in that people are disclosing information to anybody who's interested in a way they didn't - couldn't - before. That doesn't, however, mean they're not interested in privacy; it means many think they are in private, talking to their friends, without understanding who else may be watching. It was doubtless that sort of feeling that led Paul Chambers into trouble: a few days ago he was (in my opinion outrageously) fined £1,000 for sending a menacing message over a public telecommunications network.

I suppose Facebook can argue that the fact that 400 million people use their site means their approach can't be wholly unpopular. The number of people that have deleted their accounts since the latest opening-up announcements seems to be fairly small. But many more are there because they have to be: they have friends who won't communicate in any other way, or there are work commitments that require it. Facebook should remember that this situation came about because the company made promises about privacy. Reneging on those promises and thumbing your nose at people for being so stupid as to believe you invites a backlash.

Where Zuckerberg is wrong is to think that the errors people make in a new and unfamiliar medium where the social norms and community standards are still being defined means there's been a profound change in the world's social values. If it looks like that to rich geeks in California, it may be time for them to get out of Dodge.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series.

April 16, 2010

Data-mining the data miners

The case of murdered Colombian student Anna Maria Chávez Niño, presented at this week's Privacy Open Space, encompasses both extremes of the privacy conundrum posed by a world in which 400 million people post intimate details about themselves and their friends onto a single, corporately owned platform. The gist: Chávez met her murderers on Facebook; her brother tracked them down, also on Facebook.

Speaking via video link to Cédric Laurant, a Brussels-based independent privacy consultant, Juan Camilo Chávez noted that his sister might well have made the same mistake - inviting dangerous strangers into her home - by other means. But without Facebook he might not have been able to identify the killers. Criminals, it turns out, are just as clueless about what they post online as anyone else. Armed with the CCTV images, Chávez trawled Facebook for similar photos. He found the murderers selling off his sister's jacket and guitar. As they say, busted.

This week's PrivacyOS was the fourth in a series of EU-sponsored conferences to collaborate on solutions to that persistent, growing, and increasingly complex problem: how to protect privacy in a digital world. This week's focused on the cloud.

"I don't agree that privacy is disappearing as a social value," said Ian Brown, one of the event's organizers, disputing Mark privacy-is-no-longer-a-social-norm Zuckerberg's claim. The world's social values don't disappear, he added, just because some California teenagers don't care about them.

Do we protect users through regulation? Require subject releases for YouTube or Qik? Require all browsers to ship with cookies turned off? As Lilian Edwards observed, the latter would simply make many users think the Internet is broken. My notion: require social networks to add a field to photo uploads requiring users to enter an expiration date, after which the photo will be deleted.

But, "This is meant to be a free world," Humberto Morán, managing director of Friendly Technologies, protested. Free as in speech, free as in beer, or free as in the bargain we make with our data so we can use Facebook or Google? We have no control over those privacy policy contracts.

"Nothing is for free," observed NEC's Amardeo Sarma. "You pay for it, but you don't know how you pay for it." The key issue.

What frequent flyers know is that they can get free flights once in a while in return for their data. What even the brightest, most diligent, and most paranoid expert cannot tell them is what the consequences of that trade will be 20 years from now, though the Privacy Value Networks project is attempting to quantify this. It's hard: any photographer will tell you that a picture's value is usually highest when it's new, but sometimes suddenly skyrockets decades later when its subject shoots unexpectedly to prominence. Similarly, the value of data, said David Houghton, changes with time and context.

It would be more right to say that it is difficult for users to understand the trade-offs they're making and there are no incentives for government or commerce to make it easy. And, as the recent "You have 0 Friends" episode of South Park neatly captures, the choice for users is often not between being careful and being careless but between being a hermit and participating in modern life.

Better tools ought to be a partial solution. And yet: the market for privacy-enhancing technologies is littered with market failures. Even the W3C's own Platform for Privacy Preferences (P3P), for example, is not deployed in the current generation of browsers - and when it was provided in Internet Explorer users didn't take advantage of it. The projects outlined at PrivacyOS - PICOS and PrimeLife - are frustratingly slow to move from concept to prototype. The ideas seem right: providing a way to limit disclosures and authenticate identity to minimize data trails. But, Lilian Edwards asked: is partial consent or partial disclosure really possible? It's not clear that it is, partly because your friends are also now posting information about you. The idea of a decentralized social network, workshopped at one session, is interesting, but might be as likely to expand the problem as to mitigate it.

And, as it has throughout the 25 years since the first online communities were founded, the problem keeps growing exponentially in size and complexity. The next frontier, said Thomas Roessler: the sensor Web that incorporates location data and input from all sorts of devices throughout our lives. What does it mean to design a privacy-friendly bathroom scale that tweets your current and goal weights? What happens when the data it sends gets mashed up with the site you use to monitor the calories you consume and burn and your online health account? Did you really understand when you gave your initial consent to the site what kind of data it would hold and what the secondary uses might be?

So privacy is hard: to define, to value, to implement. As Seda Gürses, studying how to incorporate privacy into social networks, said, privacy is a process, not an event. "You can't do x and say, Now I have protected privacy."


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. This blog eats non-spam comments for reasons surpassing understanding.

March 19, 2010

Digital exclusion: the bill

The workings of British politics are about as clear to foreigners as cricket; and unlike in the US, there's no user manual. (Although we can recommend Anthony Trollope's Palliser novels and the TV series Yes, Minister as good sources of enlightenment on the subject.) But what it all boils down to in the case of the Digital Economy Bill is that the rights of an entire nation of Internet users are about to get squeezed between a rock and an election unless something dramatic happens.

The deal is this: the bill has completed all the stages in the House of Lords, and is awaiting its second reading in the House of Commons. Best guesses are that this will happen on or about March 29 or 30. Everyone expects the election to be called around April 8, at which point Parliament disbands and everyone goes home to spend three weeks intensively disrupting the lives of their constituency's voters when they're just sitting down to dinner. Just before Parliament dissolves there's a mad dash to wind up whatever unfinished business there is, universally known as the "wash-up". The Digital Economy Bill is one of those pieces of unfinished business. The fun part: anyone who's actually standing for election is of course in a hurry to get home and start canvassing. So the people actually in the chamber during the wash-up, while the front benches are hastily agreeing to pass things through on the nod, are likely to be retiring MPs and others who don't have urgent election business.

"What we need," I was told last night, "is a huge, angry crowd." The Open Rights Group is trying to organize exactly that for this Wednesday, March 24.

The bill would enshrine three strikes and disconnection into law. Since the Lords' involvement, it also provides for Web censorship. It arguably up-ends at least 15 years of government policy promoting the Internet as an engine of economic growth, all to benefit one single economic sector. How would the disconnected vote, pay taxes, or engage in community politics? What happened to digital inclusion? More haste, less sense.

Last night's occasion was the 20th anniversary of Privacy International (Twitter: @privacyint), where most people were polite to speakers David Blunkett and Nick Clegg. Blunkett, who was such a front-runner for a second Lifetime Menace Big Brother Award that PI renamed the award after him, was an awfully good sport when razzed; you could tell that having his personal life hauled through the tabloid press in some detail has changed many of his views about privacy. Though the conversion is not quite complete: he's willing to dump the ID card, but only because it makes so much more sense just to make passports mandatory for everyone over 16.

But Blunkett's nearly deranged passion for the ID card was at least his own. The Digital Economy Bill, on the other hand, seems to be the result of expert lobbying by the entertainment industry, most especially the British Phonographic Industry. There's a new bit of that lobbying out this week in the form of the Building a Digital Economy report, which threatens the loss of 250,000 jobs in the UK alone (1.2 million in the EU, enough to scare any politician right before an election). Techdirt has a nice debunking summary.

A perennial problem, of course, is that bills are notoriously difficult to read. Anyone who's tried knows these days they're largely made up of amendments to previous bills, and therefore cannot be read on their own; and while they can be marked up in hypertext for intelligent Internet perusal this is not a service Parliament provides. You would almost think they don't really want us to read these things.

Speaking at the PI event, Clegg deplored the database state that has been built up over the last ten to 15 years, the resulting change in the relationship between citizen and state, and especially his observation that, "No one ever asked people to vote on giant databases." Such a profound infrastructure change, he argued, should have been a matter for public debate and consideration - and wasn't. Even Blunkett, who attributed some of his change in views to his involvement in the movie Erasing David (opening on UK cinema screens April 29), while still mostly defending the DNA database, said that "We have to operate in a democratic framework and not believe we can do whatever we want."

And here we are again with the Digital Economy Bill. There is plenty of back and forth among industry representatives. ISPs estimate the cost of the DEB's Web censorship provisions at up to £500 million. The BPI disagrees. But where is the public discussion?

But the kind of thoughtful debate that's needed cannot take place in the present circumstances with everyone gunning their car engines hoping for a quick getaway. So if you think the DEB is just about Internet freedoms, think again; the way it's been handled is an abrogation of much older, much broader freedoms. Are you angry yet?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 5, 2010

The surveillance chronicles

There is a touching moment at the end of the new documentary Erasing David, which had an early screening last night for some privacy specialists. In it, Katie, the wife of the film's protagonist, filmmaker David Bond, muses on the contrast between the England she grew up in and the "ugly" one being built around her. Of course, many people become nostalgic for a kinder past when they reach a certain age, but Katie Bond is probably barely 30, and what she is talking about is the engorging Database State (PDF).

Anyone watching this week's House of Lords debate on the Digital Economy Bill probably knows how she feels. (The Open Rights Group has advice on appropriate responses.)

At the beginning, however, Katie's biggest concern is that her husband is proposing to "disappear" for a month leaving her alone with their toddler daughter and her late-stage pregnancy.

"You haven't asked," she points out firmly. "You're leaving me with all the child care." Plus, what if the baby comes? They agree in that case he'd better un-disappear pretty quickly.

And so David heads out on the road with a Blackberry, a rucksack, and an increasingly paranoid state of mind. Is he safe being video-recorded interviewing privacy advocates in Brussels? Did "they" plant a bug in his gear? Is someone about to pounce while he's sleeping under a desolate Welsh tree?

There are real trackers: Cerberus detectives Duncan Mee and Cameron Gowlett, who took up the challenge to find him given only his (rather common) name. They try an array of approaches, both high- and low-tech. Having found the Brussels video online, they head to St Pancras to check out arriving Eurostar trains. They set up a Web site to show where they think he is and send the URL to his Blackberry to see if they can trace him when he clicks on the link.

In the post-screening discussion, Mee added some new detail. When they found out, for example, that David was deleting his Facebook page (which he announced on the site and of which they'd already made a copy), they set up a dummy "secret replacement" and attempted to friend his entire list of friends. About a third of Bond's friends accepted the invitation. The detectives took up several party invitations thinking he might show.

"The Stasi would have had to have a roomful of informants," said Mee. Instead, Facebook let them penetrate Bond's social circle quickly on a tiny budget. Even so, and despite all that information out on the Internet, much of the detectives' work was far more social engineering than database manipulation, although there was plenty of that, too. David himself finds the material they compile frighteningly comprehensive.

In between pieces of the chase, the filmmakers include interviews with an impressive array of surveillance victims, politicians (David Blunkett, David Davis), and privacy advocates including No2ID's Phil Booth and Action on Rights for Children's Terri Dowty. (Surprisingly, no one from Privacy International, I gather because of scheduling issues.)

One section deals with the corruption of databases, the kind of thing that can make innocent people unemployable or, in the case of Operation Ore, destroy lives such as that of Simon Bunce. As Bunce explains in the movie, 98.2 percent of the Operation Ore credit card transactions were fraudulent.

Perhaps the most you-have-got-to-be-kidding moment is when former minister David Blunkett says that collecting all this information is "explosive" and that "Government needs to be much more careful" and not just assume that the public will assent. Where was all this people-must-agree stuff when he was relentlessly championing the ID card? Did he - my god! - learn something from having his private life exposed in the press?

As part of his preparations, Bond investigates: what exactly do all these organizations know about him? He sends out more than 80 subject access requests to government agencies, private companies, and so on. Amazon.com sends him a pile of paper the size of a phone book. Transport for London tells him that even though his car is exempt his movements in and out of the charging zone are still recorded and kept. This is a very English moment: after bashing his head on his desk in frustration over the length of his wait on hold, when a woman eventually starts to say, "Sorry for keeping you..." he replies, "No problem".

Some of these companies know things about him he doesn't or has forgotten: the time he "seemed angry" on the phone to a customer service representative. "What was I angry about on November 21, 2006?" he wonders.

But probably the most interesting journey, after all, is Katie's. She starts with some exasperation: her husband won't sign this required form giving the very good nursery they've found the right to do anything it wants with their daughter's data. "She has no data," she pleads.

But she will have. And in the Britain she's growing up in, that could be dangerous. Because privacy isn't isolation and it isn't not being found. Privacy means being able to eat sand without fear.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


February 19, 2010

Death doth make hackers of us all

"I didn't like to ask him what his passwords were just as he was going in for surgery," said my abruptly widowed friend.

Now, of course, she wishes she had.

Death exposes one of the most significant mismatches between security experts' ideas of how things should be done and the reality for home users. Every piece of advice they give is exactly the opposite of what you'd tell someone trying to create a disaster recovery plan to cover themselves in the event of the death of the family computer expert, finance manager, and media archivist. If this were a business, we'd be talking about losing the CTO, CIO, CSO, and COO in the same plane crash.

Fortunately, while he was alive, and unfortunately, now, my friend was a systems programmer of many decades of expertise. He was acutely aware of the importance of good security. And so he gave his Windows desktop, financial files, and email software fine passwords. Too fine: the desktop one is completely resistant to educated guesses based on our detailed knowledge of his entire life and partial knowledge of some of his other PINs and passwords.

All is not locked away. We think we have the password to the financial files, so getting access to those is a mere matter of putting the hard drive in another machine, finding the files, copying them, installing the financial software on a different machine, and loading them up. But it would be nice to have direct as-him access to his archive of back (and new) email, the iTunes library he painstakingly built and digitized, his Web site accounts, and so on. Because he did so much himself, and because his illness was an 11-day chase to the finish, our knowledge of how he did things is incomplete. Everyone thought there was time.

With backups secured and the financial files copied, we set to the task of trying to gain desktop access.

Attempt 1: ophcrack. This is a fine piece of software that's easy to use as long as you don't look at any of the detail. Put it on a CD, boot from said CD, run it on automatic, and you're fine. The manual instructions I'm sure are fine, too, for anyone who has studied Windows SAM files.

Ophcrack took a happy 4 minutes and 39 seconds to disclose that the computer has three accounts: administrator, my friend's user account, and guest. Administrator and guest have empty passwords; my friend's is "not found". But that's OK, said the security expert I consulted, because you can log in as administrator using the empty password and reset the user account's password. Here is a helpful command. Sure. No problem.
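My guess - and it is only a guess - is that the helpful command was Windows' standard net user, which resets a local account's password from an elevated command prompt. A sketch with placeholder names, wrapped in Python purely for illustration:

```python
# Presumed recovery step: reset the locked account's password with `net user`.
# Placeholder account name and password; must be run from an administrator prompt
# on the machine in question.
import subprocess

ACCOUNT = "frienduser"        # placeholder for the locked account's name
NEW_PASSWORD = "temp-pass-1"  # placeholder for the replacement password

subprocess.run(["net", "user", ACCOUNT, NEW_PASSWORD], check=True)
```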

Except, of course, that this is Vista, and Vista hides the administrator account to make sure that no brainless idiot accidentally gets into the administrator account and runs around the system creating havoc and corrupting files. By "brainless idiot" I mean: the user-owner of the computer. Naturally, my friend had left it hidden.

In order to unhide the administrator account so you can run the commands to reset my friend's password, you have to run the command prompt in administrator mode. Which we can't do because, of course, there are only two administrator-level accounts, and one is hidden and the other is the one we want the password for. Next.

Attempt 2: Password Changer. Now, this is a really nifty thing: you download the software, use it to create a bootable CD, and boot the computer. Which would be fine, except that the computer doesn't like it because apparently command.com is missing...

We will draw a veil over the rest. But my point is that no one would advise a business to operate in this way - and now that computers are in (almost) every home, homes are businesses, too. No one likes to think they're going to die, still less without notice. But if you run your family on your computer you need a disaster recovery plan covering fire, flood, earthquake, theft, computer failure, stroke, and, yes, unexpected death:

- Have each family member write down their passwords. Privately, if you want, in sealed envelopes to be stored in a safe deposit box at the bank. Include: Windows desktop password, administrator password, automated bill-paying and financial record passwords, and the list of key Web sites you use and their passwords. Also the passwords you may have used to secure phone records and other accounts. Credit and debit card PINs. Etc.

- Document your directory structure so people know where the important data - family photos, financial records, Web accounts, email address books - is stored. Yes, they can figure it out, but you can make it a lot easier for them.

- Set up your printer so it works from other computers on the home network even if yours is turned off. (We can't print anything, either.)

- Provide an emergency access route. Unhide the administrator account.

- Consider your threat model.

Meanwhile, I think my friend knew all this. I think this is his way of taking revenge on me for never letting him touch *my* computer.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

January 1, 2010

Privacy victims

Frightened people often don't make very good decisions. If I were in charge of aviation security, I'd have been pretty freaked out by the Christmas Day crotch bomber - failure or no failure. Even so, like all of us Boxing Day quarterbacks, I'd like to believe I'd have had more sense than to demand that airline passengers stay seated and unmoving for an hour, laps empty.

But the locking-the-barn elements of the TSA's post-crotch-bomber rules are too significant to ignore: the hastily implemented rules were very specifically drafted to block exactly the attack that had just been attempted. Which, I suppose, makes sense if your threat model is a series of planned identical, coordinated attacks and copycats. But as a method of improving airport security it's so ineffective and irrelevant that even the normally rather staid Economist accused the TSA of going insane and Bruce Schneier called the new rules magical thinking.

Consider what actually happened on Christmas Day:

- Intelligence failed. Umar Farouk Abdulmutallab was on the watch list (though not, apparently, the no-fly list), and his own father had warned the US embassy.

- Airport screening failed. He got through with his chunk of explosive attached to his underpants and the stuff he needed to set it off. (As the flyer boards have noted, anyone flying this week should be damned grateful he didn't stuff it in a condom and stick it up his ass.)

- And yet, the plan failed. He did not blow up the plane; there were practically no injuries, and no fatalities.

That, of course, was because a heroic passenger was paying attention instead of snoozing and leaped over seats to block the attempt.

The logical response, therefore, ought to be to ask passengers to be vigilant and to encourage them to disrupt dangerous activities, not to make us sit like naughty schoolchildren being disciplined. We didn't do anything wrong. Why are we the ones who are being punished?

I have no doubt that being on the plane while the incident was taking place was terrifying. But the answer isn't to embark upon an arms race with the terrorists. Just as there are well-funded research labs churning out new computer viruses and probing new software for vulnerabilities, there are doubtless research facilities where terrorist organizations test what scanners can detect and in what quantity.

Matt Blaze has a nice analysis of why this approach won't work to deter terrorists: success (plane blown up) and failure (terrorist caught) are, he argues, equally good outcomes for the terrorist, whose goal is to sow terror and disruption. All unpredictable screening does is drive passengers nuts and, in some cases, put their health at risk. Passengers work to the rules. If there are no blankets, we wear warmer clothes; if there is no bathroom access, we drink less; if there is no in-flight entertainment, we rearrange the hours we sleep.

As Blaze says, what's needed is a correct understanding of the threat model - and as Schneier has often said, the most effective changes since 9/11 have been reinforcing the cockpit doors and the fact that passengers now know to resist hijackers.

Since the incident, much of the talk has been about whole-body scanners - "nudie scanners", Dutch privacy advocates have dubbed them - as if these will secure airplanes once and for all. I think if people think that whole-body scanners are the answer they have misunderstood the problem.

Or problems, because there is more than one. First: how can we make air travel secure from terrorists? Second: how can we make air travelers feel secure? Third: how can we accomplish those things while still allowing travelers to be comfortable, a specification which includes respecting their rights to privacy and civil liberties? If your reaction to that last is to say that you don't care whose rights are violated, that all that matters is perfect security, I'm going to guess that: 1) you fly very infrequently; 2) you would be happy to do so chained to your seat naked with a light coating of Saran wrap; and 3) your image of the people who are threats is almost completely unlike your own.

It is particularly infuriating to read that we are privacy victims: that the opposition of privacy advocates to invasive practices such as whole-body scanners is the reason this clown got as close as he did. Such comments are as wrong-headed as Jack Straw claiming after 9/11 that opponents of key escrow were naïve.

The most rational response, it seems to me, is for TSA and airlines alike to solicit volunteers among their most loyal and committed passengers. Elite flyers know the rhythms of flights; they know when something is amiss. Train us to help in emergencies and to spot and deter mishaps.

Because the thing we should have learned from this incident is that we are never going to have perfect security: terrorists are a moving target. We need fallbacks, for when our best efforts fail.

The more airport security becomes intrusive, annoying, and visibly stupid, the more motive passengers will have to find workarounds and the less respect they will have for these authorities. That process is already visible. Do you feel safer now?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series. Readers are welcome to post here, at net.wars home, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

December 19, 2009

Little black Facebook

Back in 2004, the Australian privacy advocate and consultant Roger Clarke warned about the growth of social networks. In his paper Very Black 'Little Black Books' he warned of the privacy implications inherent in posting large amounts of personal data to these sites. The primary service Clarke talks about in that paper is Plaxo, though he also mentions Google's then newly created Orkut, as well as Tribe.net, various dating sites, and, on the business side, LinkedIn.

The gist: posting all that personal data (especially in the case of Plaxo, to which users upload their entire address books) is a huge privacy risk because the business models for such sites are still unknown.

"The only logical business model is the value of consumers' data," he told me for a piece I wrote on social networks in 2004. "Networking is about viral marketing, and that's one of the applications of social networking. It's social networks in order to achieve economic networks."

In the same interview, Clarke predicted the future for such networks and their business models: "My expectation would be that if they were rational accumulators of data about individuals they wouldn't be caught out abusing until they had a very nice large collection of that data. It doesn't worry me if they haven't abused yet; they will abuse."

Cut to this week, when Facebook - which wouldn't even exist until two years after that interview - suddenly changed its privacy defaults to turn the service inside out. Gawker calls the change a great betrayal, and says, "The company has, in short, turned evil."

The change in a nutshell: Facebook changed the default settings on its privacy controls, so that information that was formerly hidden by default is now visible by default - and not just to people on Facebook but to the Internet at large. The first time I logged on after the change, I got a confusing screen asking me to choose among the privacy options for each of a number of different types of data - open, or "old settings". I stared at it: what were the old settings?

Less than a week after the changes were announced, ten privacy organizations, led by the Electronic Privacy Information Center and including the American Library Association, the Privacy Rights Now Coalition, and the Bill of Rights Foundation, filed a complaint with the Federal Trade Commission (PDF) asking the FTC to enjoin Facebook's "unfair and deceptive business practices" and compel the company to restore its earlier privacy settings and allow complete opt-out, as well as give users more effective control over their data.

The "walled garden" approach to the Net is typically loathed when it's applied to, say, general access to the Internet. But the situation is different when it's applied to personal information; Facebook's entire appeal to its users is based on the notion that it's a convenient way to share stuff with their friends that they don't want to open up to the entire Internet. If they didn't care, they'd put it all on blogs, or family Web sites.

"I like it," one friend told me not long ago, "because I can share pictures of my kids with my family and know no one else can see them."

My guess is that Facebook's owners have been confused by the success of Twitter. On Twitter, almost everything is public: what you post, who you follow, who follows you, and the replies you send to others' messages. All of that is easily searchable by Google, and Tweets show up with regularity in public search results.

But Twitter users know that everything is public, and (one hopes) moderate their behavior accordingly. Facebook users have populated the service with personal chatter and photos of each other at private moments precisely because they expected that material to remain private. (Although: Joseph Bonneau at the University of Cambridge noticed last May that even deleted photos didn't always remain private.) You can understand Facebook's being insecure about Twitter. Twitter is the fastest-growing social network and the one scooping all the media attention (because if ever there were a service designed for the butterfly mentality of journalists, this is it). The fact that Tweets are the same length as Facebook status updates may have led Facebook founding CEO Mark Zuckerberg et al to think that competing with Twitter means implementing the same features that make Twitter so appealing.

Of course, Facebook has done this in a typically Facebookish sort of way: the interface is clunky and unpleasant (the British journalist Andrew Brown once commented that the Facebook user interface could drive one to suicide). Hence the need for a guide to reprivatizing your account.

But adding features like mobile phone connections is one thing; upending users' expectations of your service is another. There is a name for selling a product based on one description and supplying something different and less desirable: bait and switch.

It is as Roger Clarke said five years ago: sooner or later, these companies have to make money. Social networks have only two real assets: their users' desire to keep using their service, and the mass of data users keep giving them. They're not charging users. What does that leave as a business strategy?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

December 4, 2009

Which lie did I tell?


"And what's your mother's maiden name?"

A lot of attention has been paid over the years to the quality of passwords: how many letters, whether there's a sufficient mix of numbers and "special characters", whether they're obviously and easily guessable by anyone who knows you (pet's name, spouse's name, birthday, etc.), whether you've reset them sufficiently recently. But, as someone noted this week on UKCrypto, hardly anyone pays attention to the quality of the answers to the "password hint" questions sites ask so they can identify you when you eventually forget your password. By analogy, it's as though we spent all our time beefing up the weight, impenetrability, and lock quality on our front doors while leaving the back of the house accessible via two or three poorly fitted screen doors.

On most sites it probably doesn't matter much. But the question came up after the BBC broadcast an interview with the journalist Angela Epstein, the loopily eager first registrant for the ID card, in which she apparently mentioned having been asked to provide the answers to five rather ordinary security questions "like what is your favorite food". Epstein's column gives more detail: "name of first pet, favourite song and best subject at school". Even Epstein calls this list "slightly bonkers". This, the UKCrypto poster asked, is going to protect us from terrorists?

Dave Birch had some logic to contribute: "Why are we spending billions on a biometric database and taking fingerprints if they're going to use the questions instead? It doesn't make any sense." It doesn't: she gave a photograph and two fingerprints.

But let's pretend it does. The UKCrypto discussion headed into technicalities: has anyone studied challenge questions?

It turns out someone has: Mike Just, described to me as "the world expert on challenge questions". Just, who's delivered two papers on the subject this year, at the Trust (PDF) and SOUPS (PDF) conferences, has studied both the usability and the security of challenge questions. There are problems from both sides.

First of all, people are more complicated and less standardized than those setting these questions seem to think. Some never had pets; some have never owned cars; some can't remember whether they wrote "NYC", "New York", "New York City", or "Manhattan". And people and their tastes change. This year's favorite food might be sushi; last year's, chocolate chip cookies. Are you sure you remember accurately what you answered? With all the right capitalization and everything? Government services are supposedly thinking long-term. You can always start another Amazon.com account; but ten years from now, when you've lost your ID card, will these answers be valid?

This sort of thing is reminiscent of what biometrics expert James Wayman has often said about designing biometric systems to cope with the infinite variety of human life: "People never have what you expect them to have where you expect them to have it." (Note that Epstein nearly failed the ID card registration because of a burn on her finger.)

Plus, people forget. Even stuff you'd think they'd remember and even people who, like the students he tested, are young.

From the security standpoint, there are even more concerns. Many details about even the most obscure person's life are now public knowledge. What if you went to the same school for 14 years? And what if that fact is thoroughly documented online because you joined its Facebook group?

A lot depends on your threat model: your parents, hackers with scripted dictionary attacks, friends and family, marketers, snooping government officials? Just accordingly came up with three types of security attacks for the answers to such questions: blind guess, focused guess, and observation guess. Apply these to the often-used "mother's maiden name": the surname might be two letters long; it is likely one of the only 150,000 unique surnames appearing more than 100 times in the US census; it may be eminently guessable by anyone who knows you - or about you. In the Facebook era, even without a Wikipedia entry or a history of Usenet postings many people's personal details are scattered all over the online landscape. And, as Just also points out, the answers to challenge questions are themselves a source of new data for the questioning companies to mine.
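To make the difference between a blind guess and a focused guess concrete, here is a small, purely illustrative Python sketch (mine, not Just's; the probability figures are invented placeholders, not census data). The point is simply that an attacker who can narrow the candidate list using publicly available facts does vastly better than one guessing from the whole surname distribution.

```python
# Purely illustrative: how much a "focused" attacker gains over a "blind" one
# when guessing an answer such as a mother's maiden name. The probability
# figures below are invented placeholders, not census data.

def success_probability(candidate_probs, guesses_allowed):
    """Chance of hitting the right answer if the attacker tries the most
    likely candidates first."""
    ranked = sorted(candidate_probs, reverse=True)
    return sum(ranked[:guesses_allowed])

# Blind guess: the attacker knows only that the answer is one of roughly
# 150,000 plausible surnames, treated here (unrealistically) as equally likely.
blind = [1 / 150_000] * 150_000

# Focused guess: public profiles, genealogy sites, and Facebook groups have
# narrowed the field to a handful of likely family names.
focused = [0.4, 0.25, 0.15, 0.1, 0.05, 0.05]

for label, dist in (("blind", blind), ("focused", focused)):
    print(f"{label}: {success_probability(dist, 3):.6f}")
# The blind attacker's chance with three tries is negligible (0.000020);
# the focused attacker's is 0.800000.
```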

My experience from The Skeptic suggests that over the long term trying to protect your personal details by not disclosing them isn't going to work very well. People do not remember what they tell psychics over the course of 15 minutes or an hour. They have even less idea what they've told their friends or, via the Internet, millions of strangers over a period of decades, or how their disparate nuggets of information might match together. It requires effort to lie - even by omission - and even more effort to sustain a lie over time. It therefore seems to me a simpler job to construct lies only for the few occasions when you genuinely need the security, and then protect that small group of lies. The trouble then is documentation.

Even so, says Birch, "In any circumstance, those questions are not really security. You should probably be prosecuted for calling them 'security'."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

November 13, 2009

Cookie cutters

Sometimes laws sneak up on you while you're looking the other way. One of the best examples was the American Telecommunications Act of 1996: we were so busy obsessing about the freedom of speech-suppressing Communications Decency Act amendment that we failed to pay attention to the implications of the bill itself, which allowed the regional Baby Bells to enter the long distance market and changed a number of other rules regarding competition.

We now have a shiny, new example: we have spent so much time and electrons over the nasty three-strikes-and-you're-offline provisions that we, along with almost everyone else, utterly failed to notice that the package contains a cookie-killing provision last seen menacing online advertisers in 2001 (our very second net.wars).

The gist: Web sites cannot place cookies on users' computers unless said users have agreed to receive them, with an exception for cookies that are strictly necessary - as, for example, when you select something to buy and then head for the shopping cart to check out.
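For illustration, here is a minimal sketch of what such a consent gate might look like on the server side, written as a hypothetical Flask app; the cookie names, routes, and consent mechanism are my own invention, not anything the directive actually specifies.

```python
# Hypothetical sketch: strictly necessary cookies (the shopping cart session)
# are set unconditionally; everything else waits for an explicit opt-in
# recorded in a consent cookie. Names and routes are illustrative only.
from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/shop")
def shop():
    resp = make_response("catalogue page")
    # Strictly necessary: the checkout cannot work without a session id.
    if "session_id" not in request.cookies:
        resp.set_cookie("session_id", "abc123", httponly=True)
    # Non-essential (analytics, advertising): only after explicit opt-in.
    if request.cookies.get("cookie_consent") == "yes":
        resp.set_cookie("analytics_id", "xyz789", max_age=60 * 60 * 24 * 365)
    return resp

@app.route("/consent", methods=["POST"])
def give_consent():
    resp = make_response("consent recorded")
    resp.set_cookie("cookie_consent", "yes", max_age=60 * 60 * 24 * 365)
    return resp
```

Even a gate this trivial hints at the practical problem: every non-essential cookie now hangs off a consent interaction someone has to design, build, and present to the visitor.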

As the Out-Law blog points out, this proposal - now to become law unless the whole package is thrown out - is absurd. We said it was in 2001 - and made the stupid assumption that because nothing more had been heard about it, the idea had been nixed by an outbreak of sanity at the EU level.

Apparently not. Apparently MEPs and others at EU level spend no more time on the Web than they did eight years ago. Apparently none of them have any idea what such a proposal would mean. Well, I've turned off cookies in my browser, and I know: without cookies, browsing the Web is as non-functional as a psychic being tested by James Randi.

But it's worse than that. Imagine browsing with every site asking you to opt in every - pop-up - time - pop-up - it - pop-up - wants - pop-up - to - pop-up - send - pop-up - you - a - cookie - pop-up. Now imagine the same thing, only you're blind and using the screen reader JAWS.

This soon-to-be-law is not just absurd, it's evil.

Here are some of the likely consequences.

As already noted, it will make Web use nearly impossible for the blind and visually impaired.

It will, because such is the human response to barriers, direct ever more traffic toward those sites - aggregators, ecommerce, Web bulletin boards, and social networks - that, like Facebook, can write a single privacy policy covering the entire service, to which users consent when they join (and again at scattered intervals when the policy changes), and which includes consent to accepting cookies.

According to Out-Law, the law will trap everyone who uses Google Analytics, visitor counters, and the like. I assume it will also kill AdSense at a stroke: how many small DIY Web site owners would have any idea how to implement an opt-in form? Both econsultancy.com and BigMouthMedia think affiliate networks generally will bear the brunt of this legislation. BigMouthMedia goes on to note a couple of efforts - HTTP ETags and Flash cookies - intended to give affiliate networks more reliable tracking that may also fall afoul of the legislation. These, as those sources note, are difficult or impossible for users to delete.

It will presumably also disproportionately catch EU businesses compared to non-EU sites. Most users probably won't understand why particular sites are so annoying; they will simply shift to sites that aren't annoying. The net effect will be to divert Web browsing to sites outside the EU - surely the exact opposite of what MEPs would like to see happen.

And, I suppose, inevitably, someone will write plug-ins for the popular browsers that can be set to respond automatically to cookie opt-in requests and that include provisions for users to include or exclude specific sites. Whether that will offer sites a safe harbour remains to be seen.

The people it will hurt most, of course, are the sites - like newspapers and other publications - that depend on online advertising to stay afloat. It's hard to understand how the publishers missed it; but one presumes they, too, were distracted by the need to defend music and video from evil pirates.

The sad thing is that the goal behind this masterfully stupid piece of legislation is a reasonably noble one: to protect Internet users from monitoring and behavioural targeting to which they have not consented. But regulating cookies is precisely the wrong way to go about achieving this goal, not just because it disables Web browsing but because technology is continuing to evolve. The EU would do better to regulate by specifying allowable actions and consequences rather than specifying technologies. Cookies are not inherently evil; what matters is how they're used.

Eight years ago, when the cookie proposals first surfaced, they, logically enough, formed part of a consumer privacy bill. That they're now part of the telecoms package suggests they've been banging around inside Parliament looking for something to attach themselves to ever since.

I probably exaggerate slightly, since Out-Law also notes that in fact the EU did pass a law regarding cookies that required sites to offer visitors a way to opt out. This law is little-known, largely ignored, and unenforced. At this point the Net's best hope looks to be that the new version is treated the same way.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

August 28, 2009

Develop in haste, lose the election at leisure

Well, this is a first: returning to last week's topic because events have already overtaken it.

Last week, the UK government was conducting a consultation on how to reduce illegal file-sharing by 70 percent within a year. We didn't exactly love the proposals, but we did at least respect the absence of what's known as "three strikes" - as in, your ISP gets three complaints about your file-sharing habit and kicks you offline. The government's oh-so-English euphemism for this is "technical measures". Activists opposed to "technical measures" often call them HADOPI, after the similar French law that was passed in May (and whose three strikes portions were struck down in June); HADOPI is the digital rights agency that law created.

This week, the government - or more precisely, the Department for Business, Innovation, and Skills - suddenly changed its collective mind and issued an addendum to the consultation (PDF) that - wha-hey! - brings back three strikes. Its thinking has "developed", BIS says. Is it so cynical to presume that what has "developed" in the last couple of months is pressure from rights holders? Three strikes is a policy the entertainment industry has been shopping around from country to country like an unwanted refugee. Get it passed in one place and use that country as a lever to make all the others harmonize.

What the UK government has done here is entirely inappropriate. At the behest of one business sector, much of it headquartered outside Britain, it has hijacked its own consultation halfway through. It has issued its new-old proposals a few days before the last holiday weekend of the summer. The only justification it's offered: that its "new ideas" (they aren't new; they were considered and rejected earlier this year, in the Digital Britain report (PDF)) couldn't be implemented fast enough to meet its target of reducing illicit file-sharing by 70 percent by 2012 if they aren't included in this consultation. There's plenty of protest about the proposals, but even more about the government's violating its own rules for fair consultations.

Why does time matter? No one believes that the Labour government will survive the next election, due by 2010. The entertainment industries don't want to have to start the dance all over again - fine; but why should the rest of us care?

As for "three strikes" itself, let's try some equivalents.

Someone is caught speeding three times in the effort to get away from crimes they've committed, perhaps a robbery. That person gets points on their license and, if they're going fast enough, might be prohibited from driving for a length of time. That system is administered by on-the-road police but the punishment is determined by the courts. Separately, they are prosecuted for the robberies, and may serve jail time - again, with guilt and punishment determined by the courts.

Someone is caught three times using their home telephone to commit fraud. They would be prosecuted for the fraud, but they would not be banned from using the telephone. Again, the punishment would be determined by the courts after a prosecution requiring the police to produce corroborating evidence.

Someone is caught three times gaming their home electrical meter so that they are able to defraud the electrical company and get free electricity. (It's not so long since in parts of the UK you could achieve this fairly simply just by breaking into the electrical meter and stealing back the coins you fed it with. You would, of course, be caught at the next reading.) I'm not exactly sure what happens in these cases, but if Wikipedia is to be believed, when caught such a customer would be switched to a higher tariff.

It seems unlikely that any court would sentence such a fraudster to live without an electricity supply, especially if they shared their home, as most people do, with other family members. The same goes for the telephone example. And in the first case, such a person might be banned from driving - but not from riding in a car, even the getaway car, while someone else drove it, or from living in a house where a car was present.

Final analogy: millions of people smoke marijuana, which remains illegal. Marijuana has beneficial uses (relieving the nausea from chemotherapy, remediating glaucoma) as well as recreational ones. We prosecute the drug dealers, not the users.

So let's look again at these recycled-reused proposals. Kicking someone offline after three (or however many) complaints from rights holders:

1- Affects everyone in their household. Kids have to go to the library to do homework; spouses/parents can't work at home or socialize online. An entire household is dropped down the wrong side of the Digital Divide. As government functions such as filing taxes, providing information about public services, and accepting responses to consultations all move online, this household is now also effectively disenfranchised.

2- May in fact make both the alleged infringer and their spouse unemployable.

3- Puts this profound control over people's lives - private and public, personal and financial - into the hands of ISPs, rights holders, and Ofcom, with no information about how or whether the judicial process would be involved. Not that Britain's court system really has the capacity to try the 10 percent of the population that's estimated to engage in file-sharing. (Licit, illicit, who can tell?)

All of these effects are profoundly anti-democratic. Whose government is it, anyway?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

July 24, 2009

Security for the rest of us


Many governments, faced with the question of how to improve national security, would do the obvious thing: round up the usual suspects. These would be, of course, the experts - that is, the security services and law enforcement. This exercise would be a lot like asking the record companies and film studios to advise on how to improve copyright: what you'd get is more of the same.

This is why it was so interesting to discover that the US National Academy of Sciences was convening a workshop to consult on what research topics to consider funding, and began by appointing a committee that included privacy advocates and usability experts, folks like Microsoft researcher Butler Lampson, Susan Landau, co-author of books on privacy and wiretapping, and Donald Norman, author of the classic book The Design of Everyday Things. Choosing these people suggests that we might be approaching a watershed like that of the late 1990s, when the UK and the US governments were both forced to understand that encryption was not just for the military any more. The peace-time uses of cryptography to secure Internet transactions and protect mobile phone calls from casual eavesdropping are much broader than crypto's war-time use to secure military communications.

Similarly, security is now everyone's problem, both individually and collectively. The vulnerability of each individual computer is a negative network externality, as NYU economist Nicholas Economides pointed out. But, as many asked, how do you get people to understand remote risks? How do you make the case for added inconvenience? Each company we deal with makes the assumption that we can afford the time to "just click to unsubscribe" or remember one more password, without really understanding the growing aggregate burden on us. Norman commented that door locks are a trade-off, too: we accept a little bit of inconvenience in return for improved security. But locks don't scale; they're acceptable as long as we only have to manage a small number of them.

In his 2006 book, Revolutionary Wealth, Alvin Toffler comments that most of us, without realizing it, have a hidden third, increasingly onerous job, "prosumer". Companies, he explained, are increasingly saving money by having us do their work for them. We retrieve and print out our own bills, burn our own CDs, provide unpaid technical support for ourselves and our families. One of Lorrie Cranor's students did the math to calculate the cost in lost time and opportunities if everyone in the US read, once a year, the privacy policy of every Web site they visit at least once a month. Most of these policies require college-level reading skills; figure 244 hours per year per person, $3,544 each...$781 billion nationally. Weren't computers supposed to free us of that kind of drudgery? As everything moves online, aren't we looking at a full-time job just managing our personal security?
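The arithmetic behind those headline figures is simple enough to reproduce. In the back-of-the-envelope Python sketch below, the hourly value of time and the size of the online population are my own assumptions, chosen only to show how 244 hours a year per person scales to a national figure in the hundreds of billions; they land close to, though not exactly on, the numbers quoted above.

```python
# Back-of-the-envelope version of the cost-of-reading-privacy-policies sum.
# 244 hours/year comes from the study cited above; the hourly value of time
# and the online population are illustrative assumptions, not its figures.
hours_per_person_per_year = 244
value_of_hour = 14.50            # assumed average value of an hour, in dollars
online_population = 220_000_000  # assumed number of US Internet users

per_person_cost = hours_per_person_per_year * value_of_hour
national_cost = per_person_cost * online_population

print(f"per person: ${per_person_cost:,.0f} per year")    # $3,538 per year
print(f"nationally: ${national_cost / 1e9:,.0f} billion")  # $778 billion
```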

That, in fact, is one characteristic that many implementations of security share with welfare offices - and that is becoming pervasive: an utter lack of respect for the least renewable resource, people's time. There's a simple reason for that: the users of most security systems are deemed to be the people who impose them, not the people - us - who have to run the gauntlet.

There might be a useful comparison to information overload, a topic we used to see a lot about ten years back. When I wrote about that for ComputerActive in 1999, I discovered that everyone I knew had a particular strategy for coping with "technostress" (the editor's term). One dealt with it by never seeking out information and never phoning anyone. His sister refused to have an answering machine. One simply went to bed every day at 9pm to escape. Some refused to use mobile phones, others to have computers at home.

But back then, you could make that choice. How much longer will we be able to draw boundaries around ourselves by, for example, refusing to use online banking, file tax returns online, or participate in social networks? How much security will we be able to opt out of in future? How much do security issues add to technostress?

We've been wandering in this particular wilderness a long time. Angela Sasse, whose 1999 paper Users Are Not the Enemy talked about the problems with passwords at British Telecom, said frankly, "I'm very frustrated, because I feel nothing has changed. Users still feel security is just an obstacle there to annoy them."

In practice, the workshop was like the TV game show Jeopardy: the point was to generate research questions that will go into a report, which will be reviewed and redrafted before its eventual release. Hopefully, eventually, it will all lead to a series of requests for proposals and some really good research. It is a glimmer of hope.

Unless, that is, the gloominess of the beginning presentations wins out. If you listened to Lampson, Cranor, and to Economides, you got the distinct impression that the best thing that could happen for security is that we rip out the Internet (built to be open, not secure), trash all the computers (all of whose operating systems were designed in the pre-Internet era), and start over from scratch. Or, like the old joke about the driver who's lost and asking for directions, "Well, I wouldn't start from here".

So, here's my question: how can we make security scale so that the burden stays manageable?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

July 17, 2009

Human factors

For the last several weeks I've been mulling over the phrase security fatigue. It started with a paper (PDF) co-authored by Angela Sasse, in which she examined the burden that complying with security policies imposes upon corporate employees. Her suggestion: that companies think in terms of a "compliance budget" that, like any other budget (money, space on a newspaper page), has to be managed and used carefully. And, she said, security burdens weigh differently on different people and at different times, and a compliance budget needs to comprehend that, too.

Some examples (mine, not hers). Logging onto six different machines with six different user IDs and passwords (each of which has to be changed once a month) is annoying but probably tolerable if you do it once every morning when you get to work and once in the afternoon when you get back from lunch. But if the machines all log you out every time you take your hands off the keyboard for two minutes, by the end of the day they will be lucky to survive your baseball bat. Similarly, while airport security is never fun, the burden of it is a lot less to a passenger traveling solo after a good night's sleep who reaches the checkpoints when they're empty than it is to the single parent with three bored and overtired kids under ten who arrives at the checkpoint after an overnight flight and has to wait in line for an hour. Context also matters: a couple of weeks ago I turned down a ticket to Court 1 at Wimbledon on men's semi-finals day because I couldn't face the effort it would take to comply with their security rules and screening. I grudgingly accept airport security as the trade-off for getting somewhere, but to go through the same thing for a supposedly fun day out?

It's relatively easy to see how the compliance budget concept could be worked out in practice in a controlled environment like a company. It's very difficult to see how it can be worked out for the public at large, not least because none of the many companies each of us deals with sees it as beneficial to cooperate with the others. You can't, for example, say to your online broker that you just can't cope with making another support phone call, can't they find some other way to unlock your account? Or tell Facebook that 61 privacy settings is too many because you're a member of six other social networks and Life is Too Short to spend a whole day configuring them all.

Bruce Schneier recently highlighted that last-referenced paper, from Joseph Bonneau and Soeren Preibusch at Cambridge's computer lab, alongside another by Leslie John, Alessandro Acquisti, and George Loewenstein from Carnegie-Mellon, to note a counterintuitive discovery: the more explicit you make privacy concerns the less people will tell you. "Privacy salience" (as Schneier calls it) makes people more cautious.

In a way, this is a good thing and goes to show what privacy advocates have been saying all along: people do care about privacy if you give them the chance. But if you're the owner of Facebook, a frequent flyer program, or Google, it means that it is not in your business interest to spell out too clearly to users what they should be concerned about. All of these businesses rely on collecting more and more data about more and more people. Fortunately for them, as we know from research conducted by Lorrie Cranor (also at Carnegie-Mellon), people hate reading privacy policies. I don't think this is because people aren't interested in their privacy. I think this goes back to what Sasse was saying: it's security fatigue. For most people, security and privacy concerns are just barriers blocking the thing they came to do.

But choice is a good thing, right? Doesn't everyone want control? Not always. Go back a few years and you may remember some widely publicized research that pointed out that too many choices stall decision-making and make people feel...tired. A multiplicity of choices adds weight and complexity to the decision you're making: shouldn't you investigate all the choices, particularly if you're talking about which of 56 mutual funds to add to your 401(k)?

It seems obvious, therefore, that the more complex the privacy controls offered by social networks and other services the less likely people are to use them: too many choices, too little time, too much security fatigue. In minor cases in real life, we handle this by making a decision once and sticking to it as a kind of rule until we're forced to change: which brand of toothpaste, what time to leave for work, never buy any piece of clothing that doesn't have pockets. In areas where rules don't work, the best strategy is usually to constrain the choices until what you have left is a reasonable number to investigate and work with. Ecommerce sites notoriously get this backwards: they force you to explore group by group instead of allowing you to exclude choices you'll never use.

How do we implement security and privacy so that they're usable? This is one of the great unsolved, under-researched questions in security. I'm hoping to know more next week.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

July 10, 2009

The public interest

It's not new for journalists to behave badly. Go back to 1930s movies like The Front Page (1931) or Mr Smith Goes to Washington (1939), and you'll find behavior (thankfully, fictional) as bad as this week's Guardian story that the News of the World paid out £1 million to settle legal cases that would have revealed that its staff journalists were in the habit of hiring private investigators to hack into people's phone records and voice mailboxes.

The story's roots go back to 2006, when the paper's Royal editor, Clive Goodman, was jailed for illegally intercepting phone calls. The paper's then editor, Andy Coulson, resigned and the Press Complaints Commission concluded the paper's executives did not know what Goodman was doing. Five months later, Coulson became the chief of communications for the Tory party.

There are so many cultural failures here that you almost don't know where to start counting. The first and most obvious is the failure of a newsroom to obey the dictates of common sense, decency, and the law. That particular failure is the one garnering the most criticism, and yet it seems to me the least surprising, especially for one of Britain's most notorious tabloids. Journalists have competed for stories big enough to sell papers since the newspaper business was founded; the biggest rewards generally go to the ones who expose the stories their subjects least wanted exposed. It's pretty sad if any newspaper's journalists think the public interest argument is as strong for listening to Gwyneth Paltrow's voice mail as it was for exposing MPs' expenses, but that leads to the second failure: celebrity culture.

This one is more general: none of this would happen if people didn't flock to buy stories about intimate celebrity details. And newspapers are desperate for sales.

The third failure is specific to politicians: under the rubric of "giving people a second chance" Tory leader David Cameron continues to defend Coulson, who continues to claim he didn't know what was going on. Either Coulson did know, in which case he was condoning it, or he didn't, in which case he had only the shakiest grasp of his newsroom. The latter is the same kind of failure that at other papers and magazines has bred journalistic fraud: surely any editor now ought to be paying attention to sourcing. Either way, Coulson does not come off well and neither does Cameron. It would be more tolerable if Cameron would simply say outright that he doesn't care whether Coulson is honorable or not because he's effective at the job Cameron is paying him for.

The fourth failure is of course the police, the Press Complaints Commission, and the Information Commissioner, all of whom seem to have given up rather easily in 2007.

The final failure is also general: the problem that more and more intimate information about each of us is held in databases whose owners may have incentives (legal, regulatory, commercial) for keeping them secured but which are of necessity accessible by minions whose risks and rewards are different. The weakest link in security is always the human factor, and the problem of insiders who can be bribed or conned into giving up confidential information they shouldn't is as old as the hills, whether it's a telephone company employee, a hotel chambermaid, or a former Royal nanny. Seemingly we have learned little or nothing since Kevin Mitnick pioneered the term "social engineering" some 20 years ago or since Squidgygate, when various Royals' private phone conversations were published. At least some ire should be directed at the phone companies involved, whose staff apparently find it easy to refuse to help legitimate account holders by citing the Data Protection Act but difficult to resist illegitimate blandishments.

This problem is exacerbated by what University College London security researcher Angela Sasse calls "security fatigue". Gaining access to targets' voice mail was probably easier than you think if you figure that many people never change the default PIN on their phones. Either your private investigator turned phone hacker tries the default PIN or, as Sophos senior fellow Graham Cluley suggests, convinces the phone company to reset the PIN to the default. Yes, it's stupid not to change the default password on your phone. But with so many passwords and PINs to manage and only so much tolerance for dealing with security, it's an easy oversight. Sasse's paper (PDF) fleshing out this idea proposes that companies should think in terms of a "compliance budget" for employees. But this will be difficult to apply to consumers, since no one company we interact with will know the size of the compliance burden each of us is carrying.

Get the Press Complaints Commission to do its job properly by all means. And stop defending the guy who was in charge of the newsroom while all this snooping was going on. Change a culture that thinks that "the public interest" somehow expands to include illegal snooping just because someone is famous.

But bear in mind that, as Privacy International has warned all along, this kind of thing is going to become endemic as Britain's surveillance state continues to develop. The more our personal information is concentrated into large targets guarded by low-paid staff, the more openings there will be for those trying to perpetrate identity fraud or blackmail, snoop on commercial competitors, sell stories about celebrities and politicians, and pry into the lives of political activists.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or email netwars@skeptic.demon.co.uk.

May 23, 2009

InPhormed consent

This week's announcement that the UK is to begin hooking up its network of CCTV cameras to automatic number plate recognition software is a perfect example of a lot of things. Function creep, which privacy advocates always talk about: CCTV was sold to the public on the basis that it would make local streets safer; ANPR was sold to the public on the basis that it would decrease London's traffic congestion. You can question either or both of those propositions, but nowhere in them was the suggestion that marrying the two technologies together would give the police a network enabling them to track people's movements around the country. In fact, as I understand it, there will probably be two such networks, one for police and the other for enabling road pricing.

It's also a perfect example of why with today's developing technology it's nearly impossible for people to give informed consent. Do I want to post personal photographs where only my friends and family can see them? Sure. Do I want those photos to persist online even after I think I've deleted them and be viewable by outsiders via content delivery networks and other caches? No, or not necessarily.

And it's a perfect example of why opt-in is an important principle. Will I trade access to slightly better treatment and the occasional free ticket for my travel data (in the form of frequent flyer programs)? Apparently so. Does that mean that every casual flyer should perforce be signed up with a frequent flyer number and told to opt out if they don't want their data sold for marketing purposes? Obviously not.

Developing technologies are an area where experts have trouble predicting the outcome. Most people will not or cannot find the time to try to understand the implications, even if those were available. How is anyone supposed to give intelligent and informed consent? Making a system opt-in means that only those who have taken at least some trouble to understand it make the trade-offs. With CCTV and ANPR, most of us have little choice: we may vote for or against politicians based on their policies, but we don't have a fine-grained way of voting for this policy and against that one.

Even if we did, however, we'd still have the problem that technology is developing faster than anyone can say "small-scale pilot". This is why it's difficult for anyone to give intelligent and informed consent when a new idea like Phorm comes along to argue that their service is so wonderful and compelling that everyone should be automatically joined to it and those few who are too short-sighted to see the benefits should opt out.

When Phorm first came along and everyone got very hysterical very fast, I took a more cautious, hang-on-let's-see-what-this-is-about view that was criticized by some expert friends and called "a breath of sanity" by one of the Phorm folks I met. Richard Clayton did a careful technical analysis (PDF). Then it emerged that BT had been conducting trials of Phorm's packet inspection technology without getting the consent of its customers. (What do we pay for, eh?). This was clearly arrogant and wrong, a stand with which the EU concurs in the form of a lawsuit despite the Home Office's expressed belief last year that Phorm operates within UK law.

For a lot of us, if we don't quite understand the technology and can't guess the implications, we play the man instead of the ball. Who are the people who want us to use this stuff? And do they behave honourably? The BT trial is a clear "no" answer to the latter. As for the former, that's where the Stop Phoul Play Web site is so helpful in characterizing its opponents as privacy pirates. I am not listed, but I note that many of those who are serve with me on the Open Rights Group advisory council and/or on that of the Foundation for Information Policy Research, an organization whose aims I also support. But the whole Stop Phoul Play Web site is written in precisely the tone of the fake news pieces that appear in C. S. Lewis's novel That Hideous Strength, deliberately written as outright lies and propaganda by a weak character under the influence of the novel's forces of evil.

If Phorm had sat down to calculate carefully what its best strategy would be for alienating as many people as possible, it would have created exactly this Web site. I might disagree with but respect an organization that set out its claims and reasoning for public debate. An organization that claims it's being smeared while smearing its opponents (calling The Register a "media mouthpiece" is particularly hilarious) is either stupid or dishonest, and in neither case can we trust its claims about what its technology does and does not do.

Though we can wonder: did the Home Office support Phorm's proposals because they thought that having a third party build a deep packet inspection system might be something they could use later at low cost? I'm not normally paranoid, but...


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at the other blog, follow on Twitter, or send email to netwars@skeptic.demon.co.uk (but please turn off HTML).

April 11, 2009

Statebook of the art

The bad thing about the Open Rights Group's new site, Statebook, is that it looks so perfectly simple to use that the government may decide it's actually a good idea to implement something very like it. And, unfortunately, that same simplicity may also create the illusion in the minds of the untutored who still populate the ranks of civil servants and politicians that the technology works and is perfectly accurate.

For those who shun social networks and all who sail in her: Statebook's interface is an almost identical copy of that of Facebook. True, on Facebook the applications you click on to add are much more clearly pointless wastes of time, like making lists of movies you've liked to share with your friends or playing Lexulous (the reinvention of the game formerly known as Scrabulous until Hasbro got all huffy and had it shut down).

Politicians need to resist the temptation to believe it's as easy as it looks. The interfaces of both the fictional Statebook and the real Facebook look deceptively simple. In fact, although friends tell me how much they like the convenience of being able to share photos with their friends in a convenient single location, and others tell me how much they prefer Facebook's private messaging to email, Facebook is unwieldy and clunky to use, requiring a lot of wait time for pages to load even over a fast broadband connection. Even if it weren't, though, one of the difficulties with systems attempting to put EZ-2-ewes front ends on large and complicated databases is that they deceive users into thinking the underlying tasks are also simple.

A good example would be airline reservations systems. The fact is that underneath the simple searching offered by Expedia or Travelocity lies some extremely complex software; it prices every itinerary rather precisely depending on a host of variables. These include not just the obvious things like the class of cabin, but the time of day, the day of the week, the time of year, the category of flyer, the routing, how far in advance the ticket is being purchased, and the number of available seats left. Only some of this is made explicit; frequent flyers trying to maximize their miles per dollar despair while trying to dig out arcane details like the class of fare.

In his 1988 book The Design of Everyday Things, Donald Norman wrote about the need to avoid confusing the simplicity or complexity of an interface with the characteristics of the underlying tasks. He also wrote about the mental models people create as they attempt to understand the controls that operate a given device. His example is a refrigerator with two compartments and two thermostatic controls. An uninformed user naturally assumes each thermostat controls one compartment, but in his example, one control sets the thermostat and the other directs the proportion of cold air that's sent to each compartment. The user's mental model is wrong and, as a consequence, attempts that user makes to set the temperature will also, most likely, be wrong.

In focusing on the increasing quantity and breadth of data the government is collecting on all of us, we've neglected to think about how this data will be presented to its eventual users. We have warned about the errors that build up in very large databases that are compiled from multiple sources. We have expressed concern about surveillance and about its chilling impact on spontaneous behaviour. And we have pointed out that data is not knowledge; it is very easy to take even accurate data and build a completely false picture of a person's life. Perhaps instead we should be focusing on ensuring that the software used to query these giant databases-in-progress teaches users not to expect too much.

As an everyday example of what I mean, take the automatic line-calling system used in tennis since 2005, Hawkeye. Hawkeye is not perfectly accurate. Its judgements are based on reconstructions that put together the video images and timing data from four or more high-speed video cameras. The system uses the data to calculate the three-dimensional flight of the ball; it incorporates its knowledge of the laws of physics, its model of the tennis court, and its database of the rules of the game in order to judge whether the ball is in or out. Its official margin for error is 3.6mm.

A study by two researchers at Cardiff University disputed that number. But more relevant here, they pointed out that the animated graphics used to show the reconstructed flight of the ball and the circle indicating where it landed on the court surface are misleading because they look to viewers as though they are authoritative. The two researchers, Harry Collins and Robert Evans, proposed that in the interests of public education the graphic should be redesigned to display the margin for error and the level of confidence.

This would be a good approach for database matches, too, especially since the number of false matches and errors will grow with the size of the databases. A real-life Statebook that doesn't reflect the uncertainty factor of each search, each match, and each interpretation next to every hit would indeed be truly dangerous.
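As a toy illustration of what "reflecting the uncertainty factor next to every hit" could mean in an interface, here is a small hypothetical Python sketch; the scores, error rates, and record identifiers are invented for the example.

```python
# Hypothetical sketch: a database match is never shown as a bare hit, but
# always alongside its estimated false-match rate, so the person reading the
# screen is reminded how much (or how little) confidence the match deserves.
from dataclasses import dataclass

@dataclass
class Match:
    record_id: str
    score: float              # similarity score from the matching engine, 0..1
    false_match_rate: float   # estimated chance this hit is the wrong person

    def display(self) -> str:
        confidence = (1 - self.false_match_rate) * 100
        return (f"record {self.record_id}: score {self.score:.2f} "
                f"(confidence {confidence:.0f}%, "
                f"false-match rate {self.false_match_rate:.1%})")

for hit in [Match("A-10293", 0.91, 0.02), Match("B-55812", 0.74, 0.18)]:
    print(hit.display())
```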

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 27, 2009

The view

"Am I in it?"

That seems to be the first question people ask about Street View. Most people I know actually want to see themselves caught unawares; the ones who weren't captured are actively disappointed, while the ones who were are excited.

At least as many - mostly people I don't know - are angry and unhappy and feel their privacy has been invaded just by having the cars drive down their street taking photographs. Hundreds have complained and had pictures taken down. The Register called the cars Orwellian spycars and snoopmobiles, and charted their inexorable progress across the UK on a mash-up.

I can, I think, understand the emotions on both sides. Most of the take-down requests are understandable. Of course, there are some that seem ridiculous. Number 10 Downing Street? The Blairs' house? Will they claim copyright in their homes and sue, like Barbra Streisand in 2003?

What I can't understand is the relative size of the fuss over Street View compared to the pervasive general apathy about CCTV. Street View is one collection of images that will gradually age. CCTV is always with us.

Privacy International - which, to be fair, has persistently and publicly criticized CCTV - has filed a formal complaint with the Information Commissioner and asked the ICO to order the service offline while investigating.

Google, of course, has absolutely no excuse this time. When, two years ago, Street View originally launched in the US, it seemed as though Google had (yet again) failed at privacy - but that it had failed in a very geeky way. You could easily imagine the engineers at Google who started up Street View going, "This is so *cool*! You can see into people's windows!" You can also see them never thinking of applying to each local council for permission and having to wait for a public inquiry and local vote because that would take too long, and we have this idea today!

Google should have learned from the outcry that followed the launch that many people do not react casually to discovering that their images have been captured and put online. The town of North Oaks, Minnesota kicked them out entirely. Two years and scores of complaints weren't enough to teach the company to proceed with a little more humility and caution? Is it so difficult to imagine, when you assign people to drive around the streets taking pictures, that they might capture the strange and the embarrassing?

This isn't like Flickr, where users post millions of images of which the company has no prior knowledge and no control and where there is no organized way to search through them. The Google employees who drive the Street View cars and operate the cameras could, oh, I don't know, actually look at their surroundings while they're doing it. Of course there are plenty of things that look innocent but aren't - the person walking into the newsagent's who's supposed to be at work at a wholly different location, say, or the couple making out on the park bench who are married but to other people. But how hard is it to stop and think that maybe the guy urinating in public - or vomiting, or falling off a bicycle - might prefer not to have that moment immortalized on the Web? This is especially true because the Googlers themselves objected to being photographed.

It's also true that simply blurring car license plates and people's faces isn't enough to erase all chance that they'll be identified. If you wear a lime green coat, own the only 23-year-old Nissan Prairie in London, or routinely play tennis wearing a James Randi Educational Foundation hat you're going to be easily identifiable. (Though it's arguable that if you do those things you probably don't object to standing out from the crowd.)

For all those reasons, Privacy International is right to throw the book at the company (which came bottom of the heap in PI's report on the privacy practices of major Web companies).

And yet. Google's Street View is one very large set of images captured once, and there are all sorts of valid uses for it. You can get a look at the route you're going to navigate through so you don't get lost. You can look at the neighborhoods surrounding the prospective homes you're looking at in the property listings. And there will doubtless be dozens or hundreds of other genuinely useful things you can do with it once we've had time to think. The privacy debate over it, therefore, has similar characteristics to the debate over file-sharing: it, too, is a dual-use technology.

CCTV is not. It has been sold to the public as a crime-prevention technology, and perhaps it seems private because we only see the images when a crime has been committed. CCTV cameras do not - as far as we know - provide anything like the quality or resolution of the Street View photographs. Yet. What Street View really exposes is not the personal moments causing all the fuss but the power we are giving the state by allowing CCTV to spread everywhere.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

February 14, 2009

The Gattaca in Gossip Girl

Spotted: net.wars obsessing over Gossip Girl instead of diligently reading up on the state of the data retention directive's UK implementation.

It's the cell phones. The central conceit of the show and the books that inspired it is this: an unseen, single-person Greek chorus of unknown identity (voiced by Kristen Bell in a sort of cross between her character on Veronica Mars and Christina Ricci's cynical, manipulative trouble-maker in The Opposite of Sex) publishes - to the Web and by blast to subscribers' cell phones - tips and rumors about "the scandalous lives of Manhattan's elite".

The Upper East Siders she (?) reports on are, of course, the private high school teens whose centrally planned destiny is to inherit their parents' wealth, power, social circles, and Ivy League educations. These are teens under acute pressure to perform as expected, and in between obsessing about whether they can get into Yale (played on-screen by Columbia), they blow off steam by throwing insanely expensive parties, drinking, sexing, and scheming. All, of course, in expensive designer clothes and bearing the most character- and product-placement-driven selection of phones ever seen on screen.

Most of the plots are, of course, nonsense. The New Yorker more or less hated it on sight. Also my first reaction: I went, not to the school the books' author, Cecily von Ziegesar, did, but to one in the same class 25 years earlier, and then to an Ivy League school. One of my closest high school friends grew up in - and his parents still live at - the building inhabited in the series by teen queen Blair Waldorf. So I can assess the show's unreality firsthand. So can lots of other New Yorkers who are equally obsessed with the show: New York Magazine runs a hysterically funny reality index recap of each episode of "the Greatest Show of Our Time", followed by a recap of the many comments.

But we never had the phones! Pink and flip, slider and black, Blackberries, red, gold, and silver phones! Behind the trashy drama portraying the ultra rich as self-important, stressed-out, miserable, self-absorbed, and mean is a fictional exploration of what life is like under constant surveillance by your peers.

Over the year and a half of the show's run - SPOILER ALERT - all sorts of private secrets have been outed on Gossip Girl via importunate camera phone and text message. Serena is spotted buying a pregnancy test (causing panic in at least two households); four characters are revealed at a party full of agog subscribers to be linked by a half-sibling they didn't know they had until the blast went out; and of course everyone is photographed kissing (or worse) the wrong person at some point. Exposure via Gossip Girl is also handy for blackmail (Blair), pre-emption (Chuck), lovesick yearning (Dan), and outing his sister's gay boyfriend (Dan).

"If you're sending tips to Gossip Girl, you're in the game with the rest of us," Jenny tells Dan, who had assumed his own moral superiority.

A lot of privacy advocates express concern that today's "digital natives" don't care about privacy, or at least, don't understand the potential consequences to their future job and education prospects of the decisions they make when they post the intimate details of their lives online. In fact, when this generation grows up they'll all be in the same boat, exposure-wise. Both in reality and in this fiction, the case is as it's usually been: teens don't fear each other; they collude as allies to exclude their parents. That trope, too, is perfectly played on the show when Blair (again!) gets rid of a sociopathic interloper by going over the garden wall and calling her parents. This is not the world of David Brin's The Transparent Society, after all; the teens surveil each other but catch adults only by accident, though they take full advantage when they do.

"Gossip Girl...is how we communicate," Blair says, trying to make one of her many vendettas seem normal.

Privacy advocates also often stress that surveillance chills spontaneous behaviour. Not here, or at least not yet. Instead, the characters manipulate and expose, then anguish when it happens to them. A few become inured.

Says Serena, trying to comfort Rachel Carr, the first teacher to be so exposed: "I've been on Gossip Girl plenty of times and for the worst things...eventually everyone forgets. The best thing to do with these things is nothing at all."

Phones and Gossip Girl are not the only mechanisms by which the show's characters spy on and out each other. They use all the more traditional media, too - in-person interaction, mistaken identity (a masked ball!), rifling through each other's belongings, stolen phones, eavesdropping, accident, and, of course, the gossip pages of the New York press.

"It's anonymous, so no one really knows," Serena says, when asked who is behind the site. But she and all the others do know: the tips come from each other and from the nameless other students they ignore in the background. Gossip Girl merely forwards them, with commentary in her own style:

You know you love me.

XOXO,
Net.wars

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

January 16, 2009

Health watch

We'll have to wait some months to find out what Steve Jobs' health situation really is, just as investors will have to wait to find out how well Apple is prepared to handle his absence. But that doesn't stop rampant speculation about both things, or discussion about whether Jobs owes it to the public to disclose his health problems.

As an individual, of course not. We write - probably too often for some people's tastes - about privacy with respect to health matters. But Jobs isn't just a private individual, and he isn't an average CEO. Like Warren Buffett, who saw his company's share price decline noticeably some years back during a scare over his health, Jobs's presence as CEO is a noticeable percentage of Apple's share price. That means that shareholders - and therefore by extension the Securities and Exchange Commission - have some legitimate public interest in his state of health.

That doesn't mean that all the speculation going on is a good thing. If Jobs is smart, he doesn't read news stories about himself; in normal times no one needs their sense of self-importance inflated that much, and in a health crisis the last thing you need is to read dozens of people speculating that you're on the way out. The pruriently curious may like to know that there is some speculation that the weight loss is the result of the Whipple procedure Jobs reportedly had in 2004 to treat his islet cell neuroendocrine tumor (a less aggressive type of pancreatic cancer); or that it's a thyroid disorder. No one wants to just write a post that says simply, "I don't know."

It would not matter if Jobs and Apple did not so conspicuously embrace the cult of personality. The downside of having a celebrity CEO is that when that CEO is put out of action the company struggles to keep its market credibility. The more the CEO takes credit - and Jobs is indelibly associated with each of Apple's current products - the less confidence people have in the company he runs.

To a large extent, it's absurd. No one - not even Jobs - can run a tech company the size of Apple by himself. Jobs may insist on signing off on every design detail, but let's face it, he's not the one working evenings and weekends to write the software code and run bug testing and run a final polishing cloth over the shinies before they hit the stores. Apple definitely lost its way during the period he wasn't at the helm - that much is history. But Jobs helped recruit John Sculley, the CEO who ran Apple during those lost years. And Jobs's next company, NeXT, was a glossy, well-designed, technically sophisticated market failure whose biggest success came when Apple bought it (and Jobs) and incorporated some of the company's technology into its products. Jobs had far more success with Pixar, now part of Disney; but accounts of the company's early history suggest it was the company's founders who did the heavy lifting.

Unfortunately, if you're a public company you don't get to create public confidence by pointing out the obvious: that even with Jobs out of action there's a lot of company left for the managers he picked to run in the directions he's chosen. Apple, whose relations with the press seem to be a dictionary definition of "arrogant", has apparently never cared to create a public image for itself that suggests it's a strong company with or without Jobs.

Compare and contrast to Buffett, who has been a rock star CEO for far longer than Jobs has. Buffett is 78, and Berkshire Hathaway's success is universally associated almost solely with him; yet every year he reminds shareholders that he has three or four candidates to succeed him who are chosen and primed and known to his board of directors. His annual shareholder letters, too, are filled with praise for the managers and directors of the many subsidiaries Berkshire owns. Based on all that, it is clear that Buffett has an eye to ensuring that his company will retain its value and culture with or without him. That so many Berkshire Hathaway millionaires are his personal friends and neighbors, who staked money in the company decades ago at some personal risk, may have something to do with it.

Apple has not done anything like the same, which may have something to do with the personality of its CEO. Jobs's health troubles of 2004 should have been a wakeup call; if Buffett can understand that his age is a concern for shareholders, why can't Jobs understand that his health is, too? If he doesn't want people prying into his medical condition, that's understandable. But then the answer is to loosen his public identification with the company. As long as the perception is that Jobs is Apple and Apple is Jobs, the company's fortunes and share price will be inextricably linked to the fragility of his aging human body. Show that the company has a plan for succession, give its managers and product developers public credit, and identify others with its most visible products, and Jobs can go back to having some semblance of a private medical record.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

January 2, 2009

No rest for 2009

It's been a quiet week, as you'd expect. But 2009 is likely to be a big year in terms of digital rights.

Both the US and the UK are looking to track non-citizens more closely. The UK has begun issuing foreigners with biometric ID cards. The US, which began collecting fingerprints from visiting tourists two years ago, says it wants to do the same with green card holders. In other words, you can live in the US for decades, you can pay taxes, you can contribute to the US economy - but you're still not really one of us when you come home.

The ACLU's Barry Steinhardt has pointed out, however, that the original US-VISIT system actually isn't finished: there's supposed to be an exit portion that has yet to be built. The biometric system is therefore like a Roach Motel: people check in but they never leave.

That segues perfectly into the expansion of No2ID's "database state". The UK is proceeding with its plan for a giant shed to store all UK telecommunications traffic data. Building the data shed is a lot like saying we're having trouble finding a few needles in a bunch of haystacks, so the answer is to build a much bigger haystack.

Children in the UK can also look forward to ContactPoint (budget £22.4 million) going live at the end of January, only the first of several such databases. The Conservatives have apparently pledged to scrap ContactPoint in favor of a less expensive system that would track only children deemed to be at risk. If the Conservatives don't get their chance to scrap it - probably even if they do - the current generation may be the last that doesn't get to grow up taking for granted that their every move is being tracked. Get 'em young, as the Catholic church used to say, and they're yours for life.

The other half of that is, of course, the National Identity Register. Little has been heard of the ID card in recent months, although the Home Office says 1,000 people have actually requested one. Since these have begun rolling out to foreigners, it's probably best to keep an eye on them.

On January 19, look for the EU to vote on copyright term extension in sound recordings. They have now: 50 years. They want: 95 years. The problem: all the independent reviewers agree it's a bad idea economically. Why does this proposal keep dogging us? Especially given that even the UK government accepts that recording contracts mean that little of the royalties will go to the musicians the law is supposedly trying to help, why is the European Parliament even considering it? Write your MEP. Meanwhile, the economic downturn reaches Cliff Richard; his earliest recordings begin entering the public domain...oh, look - yesterday, January 1, 2009.

Those interested in defending file-sharing technology, the public domain, or any other public interest in intellectual property will find themselves on the receiving end of a pack of new laws and initiatives out to get them.

The RIAA recently announced it would cease suing its customers in the US. It plans to "work with ISPs". Anyone who's been around the UK and France in recent months should smell the three-strikes policy that the Open Rights Group has been fighting against. ORG's going to find it a tougher battle, now that the government is considering a stick and carrot approach: make ISPs liable for their users' copyright infringement, but give them a slice of the action for legal downloads. One has to hope that even the most cash-strapped ISPs have more sense.

Last year's scare over the US's bald statement that customs authorities have the right to search and impound computers and other electronic equipment carried by travellers across national borders will probably be followed by lengthy protest over the Anti-Counterfeiting Trade Agreement, a new set of rules being negotiated by the US, EU, Japan, and other countries. We don't know as much as we'd like about what the proposals actually are, though some information escaped last June. Negotiations are expected to continue in 2009.

The EU has said that it has no plans to search individual travellers, which is a relief; in fact, in most cases it would be impossible for a border guard to tell whether files on a computer were copyright violations. Nonetheless, it seems likely that this and other laws will make criminals of most of us; almost everyone who owns an MP3 player has music on it that technically infringes the copyright laws (particularly in the UK, where there is as yet no exemption for personal copying).

Meanwhile, Australia's new $44 million "great firewall" is going ahead despite known flaws in the technology. Nearer home, British Culture Secretary Andy Burnham would like to rate the Web, lest it frighten the children.

It's going to be a long year. But on the bright side, if you want to make some suggestions for the incoming Obama administration, head over to Change.org and add your voice to those assembling under "technology policy".

Happy new year!

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

December 5, 2008

Saving seeds

The 17 judges of the European Court of Human Rights ruled unanimously yesterday that the UK's DNA database, which contains more than 3 million DNA samples, violates Article 8 of the European Convention on Human Rights. The key factor: retaining, indefinitely, the DNA samples of people who have committed no crime.

It's not a complete win for objectors to the database, since the ruling doesn't say the database shouldn't exist, merely that DNA samples should be removed once their owners have been acquitted in court or the charges have been dropped. England, the court said, should copy Scotland, which operates such a policy.

The UK comes in for particular censure, in the form of the note that "any State claiming a pioneer role in the development of new technologies bears special responsibility for striking the right balance..." In other words, before you decide to be the first on your block to use a new technology and show the rest of the world how it's done, you should think about the consequences.

Because it's true: this is the kind of technology that makes surveillance and control-happy governments the envy of other governments. For example: lacking clues to lead them to a serial killer, the Los Angeles Police Department wants to copy Britain and use California's DNA database to search for genetic profiles similar enough to belong to a close relative. The French DNA database, FNAEG, was proposed in 1996, created in 1998 for sex offenders, implemented in 2001, and broadened to other criminal offenses after 9/11 and again in 2003: a perfect example of function creep. But the French DNA database is a fiftieth the size of the UK's, and Austria's, the next on the list, is even smaller.

There are some wonderful statistics about the UK database. DNA samples from more than 4 million people are included on it. Probably 850,000 of them are innocent of any crime. Some 40,000 are children between the ages of 10 and 17. The government (according to the Telegraph) has spent £182 million on it between April 1995 and March 2004. And there have been suggestions that it's too small. When privacy and human rights campaigners pointed out that people of color are disproportionately represented in the database, one of England's most experienced appeals court judges, Lord Justice Sedley, argued that every UK resident and visitor should be included on it. Yes, that's definitely the way to bring the tourists in: demand a DNA sample. Just look how they're flocking to the US to give fingerprints, and how many more flooded in when they upped the number to ten earlier this year. (And how little we're getting for it: in the first two years of the program, fingerprinting 44 million visitors netted 1,000 people with criminal or immigration violations.)

At last week's A Fine Balance conference on privacy-enhancing technologies, there was a lot of discussion of the key technique of data minimization. That is the principle that you should not collect or share more data than is actually needed to do the job. Someone checking whether you have the right to drive, for example, doesn't need to know who you are or where you live; someone checking you have the right to borrow books from the local library needs to know where you live and who you are but not your age or your health records; someone checking you're the right age to enter a bar doesn't need to care if your driver's license has expired.
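
For the curious, here is a minimal sketch of what that principle looks like in code. The attribute names and checks are invented for illustration, not taken from any real credential system; the point is simply that the verifier asks a narrow yes/no question and never sees the record behind it:

    from datetime import date

    # Hypothetical sketch of data minimization: each check answers one
    # narrow question without disclosing anything else in the record.
    class Credential:
        def __init__(self, name, address, date_of_birth, licence_expiry):
            self._name = name
            self._address = address
            self._dob = date_of_birth
            self._licence_expiry = licence_expiry

        def is_over(self, age, today):
            # The bar's age check: returns a boolean, not the birth date.
            years = today.year - self._dob.year - (
                (today.month, today.day) < (self._dob.month, self._dob.day))
            return years >= age

        def lives_in(self, postcode_prefix):
            # The library's residency check: area only, not the full address.
            return self._address.startswith(postcode_prefix)

    cred = Credential("A. Resident", "SW19 5AE, London",
                      date(1990, 6, 1), date(2012, 6, 1))
    print(cred.is_over(18, date(2008, 12, 5)))   # True - and that is all the bar learns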

This is an idea that's been around a long time - I think I heard my first presentation on it in about 1994 - but whose progress towards a usable product has been agonizingly slow. IBM's PRIME project, which Jan Camenisch presented, and Microsoft's purchase of Credentica (which wasn't shown at the conference) suggest that mainstream technology products may finally be getting there. If only we could convince politicians that these principles are a necessary adjunct to storing all the data they're collecting.

What makes the DNA database more than just a high-tech fingerprint database is that over time the DNA stored in it will become increasingly revealing of intimate secrets. As Ray Kurzweil kept saying at the Singularity Summit, Moore's Law is hitting DNA sequencing right now; the cost is accordingly plummeting by factors of ten. When the database was set up, it was fair to characterize DNA as a high-tech version of fingerprints or iris scans. Five - or 15, or 25, we can't be sure - years from now, we will have learned far more about interpreting genetic sequences. The coded, unreadable messages we're storing now will be cleartext one day, and anyone allowed to consult the database will be privy to far more intimate information about our bodies, ourselves than we think we're giving them now.

Unfortunately, the people in charge of these things typically think it's not going to affect them. If the "little people" have no privacy, well, so what? It's only when the powers they've granted are turned on them that they begin to get it. If a conservative is a liberal who's been mugged, and a liberal is a conservative whose daughter has needed an abortion, and a civil liberties advocate is a politician who's been arrested...maybe we need to arrest more of them.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 24, 2008

Living by numbers

"I call it tracking," said a young woman. She had healthy classic-length hair, a startling sheaf of varyingly painful medical problems, and an eager, frequent smile. She spends some minutes every day noting down as many as 40 different bits of information about herself: temperature, hormone levels, moods, the state of the various medical problems, the foods she eats, the amount and quality of sleep she gets. Every so often, she studies the data looking for unsuspected patterns that might help her defeat a problem. By this means, she says she's greatly reduced the frequency of two of them and was working on a third. Her doctors aren't terribly interested, but the data helps her decide which of their recommendations are worth following.

And she runs little experiments on herself. Change a bunch of variables, track for a month, review the results. If something's changed, go back and look at each variable individually to find the one that's making the difference. And so on.

Of course, everyone with the kind of medical problem that medicine can't really solve - diabetes, infertility, allergies, cramps, migraines, fatigue - has done something like this for generations. Diabetics in particular have long had to track and control their blood sugar levels. What's different is the intensity - and the computers. She currently tracks everything in an Excel spreadsheet, but what she's longing for is good tools to help her with data analysis.
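
As a rough sketch of the kind of tool she's after - with a hypothetical CSV export from that spreadsheet and made-up column names - the analysis itself needn't be exotic:

    import pandas as pd

    # Hypothetical sketch: daily tracking data, one row per day, one column
    # per tracked variable (file name and column names are invented).
    df = pd.read_csv("tracking.csv", parse_dates=["date"]).set_index("date")

    # Rank every tracked variable by how strongly it moves with the symptom
    # under investigation - a first pass at finding "unsuspected patterns".
    corr = df.corr(numeric_only=True)["symptom_severity"].drop("symptom_severity")
    print(corr.sort_values(key=abs, ascending=False).head(10))

    # Crude before/after comparison for one of her month-long experiments.
    change = pd.Timestamp("2008-09-01")
    print("before:", df.loc[:change, "symptom_severity"].mean())
    print("after: ", df.loc[change:, "symptom_severity"].mean())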

From what Gary Wolf, the organizer of this group, Quantified Self, says - about 30 people are here for its second meeting, after hours at Palo Alto's Institute for the Future to swap notes and techniques on personal tracking - getting out of the Excel spreadsheet is a key stage in every tracker's life. Each stage of improvement thereafter gets much harder.

Is this a trend? Co-founder Kevin Kelly thinks so, and so does the Washington Post, which covered this group's first meeting. You may not think you will ever reach the stage of obsession that would lead you to go to a meeting about it, but in fact, if the interviews I did with new-style health companies in the past year are any guide, we're going to be seeing a lot of this in the health side of things. Home blood pressure monitors, glucose tests, cholesterol tests, hormone tests - these days you can buy these things in Wal-Mart.

The key question is clearly going to be: who owns your health data? Most of the medical devices in development assume that your doctor or medical supplier will be the one doing the monitoring; the dozens of Web sites highlighted in that Washington Post article hope there's a business in helping people self-track everything from menstrual cycles to time management. But the group in Palo Alto are more interested in self-help: in finding and creating tools everyone can use, and in interoperability. One meeting member shows off a set of consumer-oriented prototypes - bathroom scale, pedometer, blood pressure monitor - that send their data to software on your computer to display and, prospectively, to a subscription Web site. But if you're going to look at those things together - charting the impact of how much you walk on your weight and blood pressure - wouldn't you also want to be able to put in the foods you eat? There could hardly be an area where open data formats will be more important.

All of that makes sense. I was less clear on the usefulness of an idea another meeting member has - he's doing a start-up to create it - a tiny, lightweight recording camera that can clip to the outside of a pocket. Of course, this kind of thing already has a grand old man in the form of Steve Mann, who has been recording his life with an increasingly small sheaf of devices for a couple of decades now. He was tired, this guy said, of cameras that are too difficult to use and too big and heavy; they get left at home and rarely used. This camera they're working on will have a wide-angle lens ("I don't know why no one's done this") and take two to five pictures a second. "That would be so great," breathes the guy sitting next to me.

Instantly, I flash on the memory of Steve Mann dogging me with flash photography at Computers, Freedom, and Privacy 2005. What happens when the police subpoena your camera? How long before insurance companies and marketing companies offer discounts as inducements to people to wear cameras and send them the footage unedited so they can study behavior they currently can't reach?

And then he said, "The 10,000 greatest minutes of your life that your grandchildren have to see," and all you can think is, those poor kids.

There is a certain inevitable logic to all this. If retailers, manufacturers, marketers, governments, and security services are all convinced they can learn from data mining us, why shouldn't we be able to gain insights by doing it ourselves?

At the moment, this all seems to be for personal use. But consider the benefits of merging it with Web 2.0 and social networks. At last you'll be able to answer the age-old question: why do we have sex less often than the Joneses?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 10, 2008

Data mining snake oil

The basic complaints we've been making for years about law enforcement's and government's desire to collect masses of data have primarily focused on the obvious set of civil liberties issues: the chilling effect of surveillance, the right of individuals to private lives, the risk of abuse of power by those in charge of all that data. On top of that we've worried about the security risks inherent in creating such large targets from which data will, inevitably, leak sometimes.

This week, along came the National Research Council to offer a new problem with dataveillance: it doesn't actually work to prevent terrorism. Even if it did work, the tradeoff of the loss of personal liberties against the security allegedly offered by policies that involve tracking everything everyone does from cradle to grave would be hard to justify. But if it doesn't work - if all surveillance all the time won't make us actually safer - then the discussion really ought to be over.

The NRC report, Protecting Individual Privacy in the Struggle Against Terrorists: A Framework for Assessment, makes its conclusions clear: "Modern data collection and analysis techniques have had remarkable success in solving information-related problems in the commercial sector... But such highly automated tools and techniques cannot be easily applied to the much more difficult problem of detecting and preempting a terrorist attack, and success in doing so may not be possible at all."

Actually, the many of us who have had our cards stopped for no better reason than that the issuing bank didn't like the color of the Web site we were buying from might question how successful these tools have been in the commercial sector. At the very least, it has become obvious to everyone how much trouble is being caused by false positives. If a similar approach is taken to all parts of everyone's lives instead of just their financial transactions, think how much more difficult it's going to be to get through life without being arrested several times a year.

The report again: "Even in well-managed programs such tools are likely to return significant rates of false positives, especially if the tools are highly automated." Given the masses of data we're talking about - the UK wants to store all of the nation's communications data for years in a giant shed, and a similar effort in the US would have to be many times as big - the tools will have to be highly automated. And - the report yet again - the difficulty of detecting terrorist activity "through their communications, transactions, and behaviors is hugely complicated by the ubiquity and enormity of electronic databases maintained by both government agencies and private-sector corporations." The bigger the haystack, the harder it is to find the needle.
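
The arithmetic behind that is worth spelling out. These numbers are purely illustrative - mine, not the report's - but the shape of the result survives however you tune them:

    # Illustrative base-rate arithmetic: every figure here is invented.
    population     = 60_000_000   # records scanned
    true_targets   = 3_000        # people actually worth investigating
    sensitivity    = 0.99         # fraction of real targets the tool flags
    false_positive = 0.001        # fraction of innocents it also flags (0.1%)

    true_hits  = true_targets * sensitivity
    false_hits = (population - true_targets) * false_positive

    print(f"true hits:  {true_hits:,.0f}")      # roughly 3,000
    print(f"false hits: {false_hits:,.0f}")     # roughly 60,000
    print(f"chance a flagged person is a real target: "
          f"{true_hits / (true_hits + false_hits):.1%}")   # under 5 percent

That's twenty false alarms for every genuine hit, from a hypothetical tool far more accurate than anything that actually exists.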

In a recent interview, David Porter, CEO of Detica, who has spent his entire career thinking about fraud prevention, said much the same thing. Porter's proposed solution - the basis of the systems Detica sells - is to vastly shrink the amount of data to be analyzed by throwing out everything we know is not fraud (or, as his colleague, Tom Black, said at the Homeland and Border Security conference in July, terrorist activity). To catch your hare, first shrink your haystack.

This report, as the title suggests, focuses particularly on balancing personal privacy against the needs of anti-terrorist efforts. (Although, any terrorist watching the financial markets the last couple of weeks would be justified in feeling his life's work had been wasted, since we can do all the damage that's needed without his help.) The threat from terrorists is real, the authors say - but so is the threat to privacy. Personal information in databases cannot be fully anonymized; the loss of privacy is real damage; and data varies substantially in quality. "Data derived by linking high-quality data with data of lesser quality will tend to be low-quality data." If you throw a load of silly string into your haystack, you wind up with a big mess that's pretty much useless to everyone and will be a pain in the neck to clean up.

As a result, the report recommends requiring systematic and periodic evaluation of every information-based government program against core values and proposes a framework for carrying that out. There should be "robust, independent oversight". Research and development of such programs should be carried out with synthetic data, not real data "anonymized"; real data should only be used once a program meets the proposed criteria for deployment and even then only phased in at a small number of sites and tested thoroughly. Congress should review privacy laws and consider how best to protect privacy in the context of such programs.

These things seem so obvious; but to get to this point it's taken three years of rigorous documentation and study by a 21-person committee of unimpeachable senior scientists and review by members of a host of top universities, telephone companies, and top technology companies. We have to think the report's sponsors, who include the National Science Foundation and the Department of Homeland Security, will take the results seriously. Writing for Cnet, Declan McCullagh notes that the similar 1996 NRC CRISIS report on encryption was followed by decontrol of the export and use of strong cryptography two years later. We can but hope.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

July 25, 2008

Who?

A certain amount of government and practical policy is being made these days based on the idea that you can take large amounts of data and anonymize it so researchers and others can analyze it without invading anyone's privacy. Of particular sensitivity is the idea of giving medical researchers access to such anonymized data in the interests of helping along the search for cures and better treatments. It's hard to argue with that as a goal - just like it's hard to argue with the goal of controlling an epidemic - but both those public health interests collide with the principle of medical confidentiality.

The work of Latanya Sweeney was, I think, the first hint that anonymizing data might not be so straightforward; I've written before about her work. This week, at the Privacy Enhancing Technologies Symposium in Leuven, Belgium (which I regrettably missed), researchers Arvind Narayanan and Vitaly Shmatikov from the University of Texas at Austin won an award sponsored by Microsoft for taking the reidentification of supposedly anonymized data a step further.

The pair took a database released by the online DVD rental company Netflix last year as part of the $1 million Netflix Prize, a project to improve upon the accuracy of the system's predictions. You know the kind of thing, since it's built into everything from Amazon to Tivos - you give the system an idea of your likes and dislikes by rating the movies you've rented and the system makes recommendations for movies you'll like based on those expressed preferences. To enable researchers to work on the problem of improving these recommendations, Netflix released a dataset containing more than 100 million movie ratings contributed by nearly 500,000 subscribers between December 1999 and December 2005 with, as the service stated in its FAQ, all customer identifying information removed.

Maybe in a world where researchers only had one source of information that would be a valid claim. But just as Sweeney showed in 1997 that it takes very little in the way of public records to re-identify a load of medical data supplied to researchers in the state of Massachusetts, Narayanan and Shmatikov's work reminds us that we don't live in a world like that. For one thing, people tend disproportionately to rate their unusual, quirky favorites. Rating movies takes time; why spend it on giving The Lord of the Rings another bump when what people really need is to know about the wonders of King of Hearts, All That Jazz, and The Tall Blond Man with One Black Shoe? The consequence is that the Netflix dataset is what they call "sparse" - that is, few subscribers have very similar records.

So: how much does someone need to know about you to identify a particular user from the database? It turns out, not much. The key is the public ratings at the Internet Movie Database, which include dates and real names. Narayanan and Shmatikov concluded that 99 percent of records could be uniquely identified from only eight matching ratings (of which two could be wrong); for 68 percent of the records you only need two (and reidentifying the rest becomes easier). And of course, if you know a little bit about the particular person whose record you want to identify things get a lot easier - the three movies I've just listed would probably identify me and a few of my friends.
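
To make the mechanism concrete, here is a toy sketch of the matching idea - emphatically not Narayanan and Shmatikov's actual algorithm, just the intuition that a few (movie, rating, rough date) pairs gleaned from public IMDb reviews are enough to score and rank the "anonymous" records:

    # Toy sketch of the re-identification intuition, not the paper's method.
    def match_score(known, record, date_slack_days=14):
        # known and record both map movie title -> (rating, date rated)
        score = 0
        for movie, (rating, when) in known.items():
            if movie in record:
                r2, when2 = record[movie]
                if rating == r2 and abs((when - when2).days) <= date_slack_days:
                    score += 1
        return score

    def best_candidate(known, dataset):
        # dataset maps an anonymous subscriber id to that subscriber's ratings;
        # with sparse data, a handful of quirky titles usually singles one out,
        # while blockbusters everyone rates contribute almost nothing.
        return max(dataset.items(), key=lambda item: match_score(known, item[1]))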

Even if you don't care if your tastes in movies are private - and both US law and the American Library Association's take on library loan records would protect you more than you yourself would - there are a couple of notable things here. First of all, the compromise last week whereby Google agreed to hand Viacom anonymized data on YouTube users isn't as good a deal for users as they might think. A really dedicated searcher might well think it worth the effort to come up with a way to re-identify the data - and so far rightsholders have shown themselves to be very dedicated indeed.

Second of all, the Thomas-Walport review on data-sharing actually recommends requiring NHS patients to agree to sharing data with medical researchers. There is a blithe assumption running through all the government policies in this area that data can be anonymized, and that as long as they say our privacy is protected it will be. It's a perfect example of what someone this week called "policy-based evidence-making".

Third of all, most such policy in this area assumes it's the past that matters. What may be of greater significance, as Narayanan and Shmatikov point out, is the future: forward privacy. Once a virtual identity has been linked to a real-world identity, that linkage is permanent. Yes, you can create a new virtual identity, but any slip that links it to either your previous virtual or your real-world identity blows your cover.

The point is not that we should all rush to hide our movie ratings. The point is that we make optimistic assumptions every day that the information we post and create has little value and won't come back to bite us on the ass. We do not know what connections will be possible in the future.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

July 11, 2008

Voters for sale

It must be hard to be the Direct Marketing Association. All individuals in the DMA must know that they themselves hate getting marketing calls during dinner, weeding the real post out from the junk mail, and constantly having to unsubscribe from email lists that they're only on because they had the misfortune to buy something from the sender. Collectively, the DMA remains firmly convinced that people want advertising really, it just has to be targeted right (at which point people no longer call it advertising). It must be very hard for everyone involved to maintain this level of cognitive dissonance.

And it leads them to do things as an organization that probably each individual would oppose if they were working for someone else. Today the DMA is opposing the withdrawal of the edited electoral register, a recommendation appearing in the Data-Sharing Review, published by the Ministry of Justice and written by Information Commissioner Richard Thomas and Dr Mark Walport. There's a lot of interesting stuff to digest; the electoral register issue is one of the simpler bits.

To recap: historically the UK, like the US, treated the electoral rolls as public information. In the UK every household gets sent a canvassing form once a year that comes with a stern warning that you are legally required to register.

Starting in the 1830s, the British electoral rolls have been available for public inspection and sale; what a godsend for direct marketers as their industry grew up. As of 2001, electoral registration officers are required to sell a copy of the register at a specified price to anyone who wants it under Regulation 48 of the Representation of the People (England and Wales) Regulations. Almost immediately there were objections on privacy grounds, most notably a complaint by Pontefract-based Brian Robertson, a retired accountant, against Wakefield City Council because there was no provision for him to prevent the sale of his information for commercial use. He refused to register, took them to court - and won.

The regulations were promptly amended to require councils to maintain two registers: the full public register and an edited version that could be sold to commercial organizations and others and to which voters would be added automatically - but with the right to opt out. The first edited registers appeared in 2002.

And there was a lot of confusion. The canvassing forms that first year didn't make it very clear what the edited register was, and it was easy to make the mistake of thinking that if you opted out you would not be able to vote. Subsequent years saw amended forms that made it more clear just what you were opting out of. And the results really shouldn't surprise anyone: in the latest rolls 40 percent of voters opted out, double the percentage in the first years. Given that, it's not entirely clear why the government needs to withdraw the register. If they just wait a few more years everyone of any value to marketers will have opted out, and the edited rolls will become useful again as a list of all the people who aren't worth marketing to. Anyone left presumably either didn't understand the form, is so lonely they enjoy the attention, or is so mentally afflicted that someone else filled out the form for them.

The full register is available - at least in theory - only to a select group of people and organizations: political parties for electoral purposes, credit reference agencies to check names and addresses when people apply for credit, and law enforcement. The main purchasers of the edited register, the Thomas-Walport report notes, are direct marketing companies and companies compiling directories.

Thomas and Walport disapprove of its existence on these grounds: "It sends a particularly poor message to the public that personal information collected for something as vital as participation in the democratic process can be sold to 'anyone for any purpose'."

A key data protection principle is that a change of use of personal information requires the consent of the individual. If ever there were a more significant change of use than selling information collected to enable people to vote to third-party companies for general marketing purposes, I don't know what it would be.

The DMA's objection to its withdrawal is that its members won't be able to clean their lists and keep them accurate and up-to-date. And it happily sees the direct mail envelope as more than half full: "Some householders have opted out, but around 60 percent have chosen to remain on the edited register." They don't believe the forms are at all confusing. And the DMA plays the environmental card: targeting reduces the amount of waste paper the industry produces.

One issue neither group tackles is whether the register represents a significant source of income for councils. How much are we willing to pay for privacy? This warrants more research; a quick glance turns up figures from Bath and North East Somerset Council. In 2005-2006, the council netted £1,553 and £380.50 for the sales of the full and edited registers respectively; in 2006-2007 those figures were £1,558.50 and £681. If that's indicative of national trends, we can afford it, especially given the savings on administering the opt-out process.

"The edited register does serve a purpose," the DMA concludes, "and so should not be abolished." A purpose, yes. Just not our purpose.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

June 27, 2008

Mistakes were made

This week we got the detail on what went wrong at Her Majesty's Revenue and Customs that led to the loss of those two CDs full of the personal details of 25 million British households last year with the release of the Poynter Review (PDF). We also got a hint of how and whether the future might be different with the publication yesterday of Data Handling: Procedures in Government (PDF), written by Sir Gus O'Donnell and commissioned by the Prime Minister after the HMRC loss. The most obvious message of both reports: government needs to secure data better.

The nicest thing the Poynter review said was that HMRC has already made changes in response to its criticisms. Otherwise, it was pretty much a surgical demonstration of "institutional deficiencies".

The chief points:


- Security was not HMRC's top priority.

- HMRC in fact had the technical ability to send only the selection of data that NAO actually needed, but the staff involved didn't know it.

- There was no designated single point of contact between HMRC and NAO.

- HMRC used insecure methods for data storage and transfer.

- The decision to send the CDs to the NAO was taken by junior staff without consulting senior managers - which under HMRC's own rules they should have done.

- The reason HMRC's junior staff did not consult managers was that they believed (wrongly) that NAO had absolute authority to access any and all information HMRC had.

- The HMRC staffer who dispatched the discs incorrectly believed the TNT Post service was secure and traceable, as required by HMRC policy. A different TNT service that met those requirements was in fact available.

- HMRC policies regarding information security and the release of data were not communicated sufficiently through the organization and were not sufficiently detailed.

- HMRC failed on accountability, governance, information security...you name it.

The real problem, though, isn't any single one of these things. If junior staff had consulted senior staff, it might not have mattered that they didn't know what the policies were. If HMRC had used proper information security and secure methods for data storage (that is, encryption rather than simple password protection), the data would have been unreadable by whoever ended up with the discs. If the staff had understood TNT's services correctly, the discs wouldn't have gotten lost - or at least would have been traceable if they had.
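
For anyone wondering what "encryption rather than simple password protection" means in practice, here is a minimal sketch using the Python cryptography package's Fernet recipe. It's purely illustrative and has nothing to do with HMRC's actual systems or the software its staff used:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()       # held by the sender and sent separately,
                                      # never on the same disc as the data
    f = Fernet(key)

    records = b"name,address,national insurance number,bank details\n..."
    ciphertext = f.encrypt(records)   # this is what actually goes on the disc

    # Whoever finds the disc sees only ciphertext; a password-protected zip,
    # by contrast, can often simply be cracked or the protection stripped.
    assert f.decrypt(ciphertext) == records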

The real problem was the interlocking effect of all these factors. That, as Nassim Nicholas Taleb might say, was the black swan.

For those who haven't read Taleb's The Black Swan: The Impact of the Highly Improbable, the black swan stands for the event that is completely unpredictable - because, like black swans until one was spotted in Australia, no such thing has ever been seen - until it happens. Of course, data loss is pretty much a white swan; we've seen lots of data breaches. The black swan, really, is the perfectly secure system that is still sufficiently open for the people who need to use it.

That challenge is what O'Donnell's report on data handling is about and, as he notes, it's going to get harder rather than easier. He recommends a complete rearrangement of how departments manage information as well as improving the systems within individual departments. He also recommends greater openness about how the government secures data.

"No organisation can guarantee it will never lose data," he writes, "and the Government is no exception." O'Donnell goes on to consider how data should be protected and managed, not whether it should be collected or shared in the first place. That job is being left for yet another report in progress, due soon.

It's good to read that some good is coming out of the HMRC data loss: all departments are, according to the O'Donnell report, reviewing their data practices and beginning the process of cultural change. That can only be a good thing.

But the underlying problem is outside the scope of these reports, and it's this government's fondness for creating giant databases: the National Identity Register, ContactPoint, the DNA database, and so on. If the government really accepted the principle that it is impossible to guarantee complete data security, what would they do? Logically, they ought to start by cancelling the data behemoths on the understanding that it's a bad idea to base public policy on the idea that you can will a black swan into existence.

It would make more sense to create a design for government use of data that assumes there will be data breaches and attempts to limit the adverse consequences for the individuals whose data is lost. If my privacy is compromised alongside 50 million other people's and I am the victim of identity theft, does it help me that the government department that lost the data knows which staff member to blame?

As Agatha Christie said long ago in one of her 80-plus books, "I know to err is human, but human error is nothing compared to what a computer can do if it tries." The man-machine combination is even worse. We should stop trying to breed black swans and instead devise systems that don't create so many white ones.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

May 30, 2008

Ten

It's easy to found an organization; it's hard to keep one alive even for as long as ten years. This week, the Foundation for Information Policy Research celebrated its tenth birthday. Ten years is a long time in Internet terms, and even longer when you're trying to get government to pay attention to expertise in a subject as difficult as technology policy.

My notes from the launch contain this quote from FIPR's first director, Caspar Bowden, which shows you just how difficult FIPR's role was going to be: "An educational charity has a responsibility to speak the truth, whether it's pleasant or unpleasant." FIPR was intended to avoid the narrow product focus of corporate laboratory research and retain the traditional freedoms of an academic lab.

My notes also show the following list of topics FIPR intended to research: the regulation of electronic commerce; consumer protection; data protection and privacy; copyright; law enforcement; evidence and archiving; electronic interaction between government, businesses, and individuals; the risks of computer and communications systems; and the extent to which information technologies discriminate against the less advantaged in society. Its first concern was intended to be researching the underpinnings of electronic commerce, including the then recent directive launched for public consultation by the European Commission.

In fact, the biggest issue of FIPR's early years was the crypto wars leading up to and culminating in the passage of the Regulation of Investigatory Powers Act (2000). It's safe to say that RIPA would have been a lot worse without the time and energy Bowden spent listening to Parliamentary debates, decoding consultation papers, and explaining what it all meant to journalists, politicians, civil servants, and anyone else who would listen.

Not that RIPA is a fountain of democratic behavior even as things are. In the last couple of weeks we've seen the perfect example of the kind of creeping functionalism that FIPR and Privacy International warned about at the time: the Poole council using the access rules in RIPA to spy on families to determine whether or not they really lived in the right catchment area for the schools their children attend.

That use of the RIPA rules, Bowden said at FIPR's half-day anniversary conference last Wednesday, sets a precedent for accessing traffic data for much lower level purposes than the government originally claimed it was collecting the data for. He went on to call the recent suggestion that the government may be considering a giant database, updated in real time, of the nation's communications data "a truly Orwellian nightmare of data mining, all in one place."

Ross Anderson, FIPR's founding and current chair and a well-known security engineer at Cambridge, noted that the same risks apply to the NHS database. A clinic that owns its own data will tell police asking for the names of all its patients under 16 to go away. "If," said Anderson, "it had all been in the NHS database and they'd gone in to see the manager of BT, would he have been told to go and jump in the river? The mistake engineers make too much is to think only technology matters."

That point was part of a larger one that Anderson made: that hopes that the giant databases under construction will collapse under their own weight are forlorn. Think of developing Hulk-Hogan databases and the algorithms for mining them as an arms race, just like spam and anti-spam. The same principle that holds that today's cryptography, no matter how strong, will eventually be routinely crackable means that today's overload of data will eventually, long after we can remember anything we actually said or did ourselves, be manageable.

The most interesting question is: what of the next ten years? Nigel Hickson, now with the Department of Business, Enterprise, and Regulatory Reform, gave some hints. On the European and international agenda, he listed the returning dominance of the large telephone companies on the excuse that they need to invest in fiber. We will be hearing about quality of service and network neutrality. Watch Brussels on spectrum rights. Watch for large debates on the liability of ISPs. Digital signatures, another battle of the late 1990s, are also back on the agenda, with draft EU proposals to mandate them for the public sector and other services. RFID, the "Internet of things", and the ubiquitous Internet will spark a new round of privacy arguments.

Most fundamentally, said Anderson, we need to think about what it means to live in a world that is ever more connected through evolving socio-technological systems. Government can help when markets fail; though governments themselves seem to fail most notoriously with large projects.

FIPR started by getting engineers, later engineers and economists, to talk through problems. "The next growth point may be engineers and psychologists," he said. "We have to progressively involve more and more people from more and more backgrounds and discussions."

Probably few people feel that their single vote in any given election really makes a difference. Groups like FIPR, PI, No2ID, and ARCH remind us that even a small number of people can have a significant effect. Happy birthday.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).


May 23, 2008

The haystack conundrum

Early this week the news broke that the Home Office wants to create a giant database in which will be stored details of all communications sent in Britain. In other words, instead of data retention, in which ISPs, telephone companies, and other service providers would hang onto communications data for a year or seven in case the Home Office wanted it, everything would stream to a Home Office data center in real time. We'll call it data swallowing.

Those with long memories - who seem few and far between in the national media covering this sort of subject - will remember that in about 1999 or 2000 there was a similar rumor. In the resulting outraged media coverage it was more or less thoroughly denied and nothing had been heard of it since, though privacy advocates continued to suspect that somewhere in the back of a drawer the scheme lurked, dormant, like one of those just-add-water Martians you find in the old Bugs Bunny cartoons. And now here it is again in another leak that the suspicious veteran watcher of Yes, Minister might think was an attempt to test public opinion. The fact that it's been mooted before makes it seem so much more likely that they're actually serious.

This proposal is not only expensive, complicated, slow, and controversial/courageous (Yes, Minister's Fab Four deterrents), but risk-laden, badly conceived, disproportionate, and foolish. Such a database will not catch terrorists, because given the volume of data involved trying to use it to spot any one would-be evil-doer will be the rough equivalent of searching for an iron filing in a haystack the size of a planet. It will, however, make it possible for anyone trawling the database to make any given individual's life thoroughly miserable. That's so disproportionate it's a divide-by-zero error.

The risks ought to be obvious: this is a government that can't keep track of the personal details of 25 million households, which fit on a couple of CDs. Devise all the rules and processes you want, the bigger the database the harder it will be to secure. Besides personal information, the giant communications database would include businesses' communication information, much of it likely to be commercially sensitive. It's pretty good going to come up with a proposal that equally offends civil liberties activists and businesses.

In a short summary of the proposed legislation, we find this justification: "Unless the legislation is updated to reflect these changes, the ability of public authorities to carry out their crime prevention and public safety duties and to counter these threats will be undermined."

Sound familiar? It should. It's the exact same justification we heard in the late 1990s for requiring key escrow as part of the nascent Regulation of Investigatory Powers Act. The idea there was that if the use of strong cryptography to protect communications became widespread law enforcement and security services would be unable to read the content of the messages and phone calls they intercepted. This argument was fiercely rejected at the time, and key escrow was eventually dropped in favor of requiring the subjects of investigation to hand over their keys under specified circumstances.

There is much, much less logic to claiming that police can't do their jobs without real-time copies of all communications. Here we have real analogies: postal mail, which has been with us since 1660. Do we require copies of all letters that pass through the post office to be deposited with the security services? Do we require the Royal Mail's automated sorting equipment to log all address data?

Sanity has never intervened in this government's plans to create more and more tools for surveillance. Take CCTV. Recent studies show that despite the millions of pounds spent on deploying thousands of cameras all over the UK, they don't cut crime, and, more important, the images help solve crime in only 3 percent of cases. But you know the response to this news will not be to remove the cameras or stop adding to their number. No, the thinking will be like the scheme I once heard for selling harmless but ineffective alternative medical treatments, in which the answer to all outcomes is more treatment. (Patient gets better - treatment did it. Patient stays the same - treatment has halted the downward course of the disease. Patient gets worse - treatment came too late.)

This week at Computers, Freedom, and Privacy, I heard about the Electronic Privacy Information Center's work on fusion centers, relatively new US government efforts to mine many commercial and public sources of data. EPIC is trying to establish the role of federal agencies in funding and controlling these centers, but it's hard going.

What do these governments imagine they're going to be able to do with all this data? Is the fantasy that agents will be able to sit in a control room somewhere and survey it all on some kind of giant map on which criminals will pop up in red, ready to be caught? They had data before 9/11 and failed to collate and interpret it.

Iron filing; haystack; lack of a really good magnet.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

May 9, 2008

Swings and roundabouts

There was a wonderful cartoon that cycled frequently around computer science departments in the pre-Internet 1970s - I still have my paper copy - that graphically illustrated the process by which IT systems get specified, designed, and built, and showed precisely why and how far they failed the user's inner image of what it was going to be. There is a scan here. The senior analyst wanted to make sure no one could possibly get hurt; the sponsor wanted a pretty design; the programmers, confused by contradictory input, wrote something that didn't work; and the installation was hideously broken.

Translate this into the UK's national ID card. Consumers, Sir James Crosby wrote in March (PDF), want identity assurance. That is, they - or rather, we - want to know that we're dealing with our real bank rather than a fraud. We want to know that the thief rooting through our garbage can't use any details he finds on discarded utility bills to impersonate us, change our address with our bank, clean out our accounts, and take out 23 new credit cards in our name before embarking on a wild spending spree that leaves us to foot the bill. And we want to know that if all that ghastliness happens to us we will have an accessible and manageable way to fix it.

We want to swing lazily on the old tire and enjoy the view.

We are the users with the seemingly simple but in reality unobtainable fantasy.

The government, however - the project sponsor - wants the three-tiered design that barely works because of all the additional elements in the design but looks incredibly impressive. ("Be the envy of other major governments," I feel sure the project brochure says.) In the government's view, they are the users and we are the database objects.

Crosby nails this gap when he draws the distinction between ID assurance and ID management:

The expression 'ID management' suggests data sharing and database consolidation, concepts which principally serve the interests of the owner of the database, for example, the Government or the banks. Whereas we think of "ID assurance" as a consumer-led concept, a process that meets an important consumer need without necessarily providing any spin-off benefits to the owner of any database.

This distinction is fundamental. An ID system built primarily to deliver high levels of assurance for consumers and to command their trust has little in common with one inspired mainly by the ambitions of its owner. In the case of the former, consumers will extend use both across the population and in terms of applications such as travel and banking. While almost inevitably the opposite is true for systems principally designed to save costs and to transfer or share data.

As writer and software engineer Ellen Ullman wrote in her book Close to the Machine, databases infect their owners, who may start with good intentions but are ineluctably drawn to surveillance.

So far, the government pushing the ID card seems to believe that it can impose anything it likes and if it means the tree collapses with the user on the swing, well, that's something that can be ironed out later. Crosby, however, points out that for the scheme to achieve any of the government's national security goals it must get mass take-up. "Thus," he writes, "even the achievement of security objectives relies on consumers' active participation."

This week, a similarly damning assessment of the scheme was released by the Independent Scheme Assurance Panel (PDF) (you may find it easier to read this clean translation - scroll down to policywatcher's May 8 posting). The gist: the government is completely incompetent at handling data, and creating massive databases will, as a result, destroy public trust in it and all its systems.

Of course, the government is in a position to compel registration, as it's begun doing with groups who can't argue back, like foreigners, and proposes doing for employees in "sensitive roles or locations, such as airports". But one of the key indicators of how little its scheme has to do with the actual needs and desires of the public is the list of questions it's asking in the current consultation on ID cards, which focus almost entirely on how to get people to love, or at least apply for, the card. To be sure, the consultation document pays lip service to accepting comments on any ID card-related topic, but the consultation is specifically about the "delivery scheme".

This is the kind of consultation where we're really damned if we do and damned if we don't. Submit comments on, say, how best to "encourage" young people to sign up ("Views are invited particularly from young people on the best way of rolling out identity cards to them") without objecting to the government asking how best to market its unloved policy to vulnerable groups, and when the responses are eventually released the government can say there are no objectors to the scheme. Submit comments to the effect that the whole National Identity scheme is poorly conceived and inappropriate, and anything else you say is likely to be ignored on the grounds that they've heard all that before and it's irrelevant to the present consultation. Comments are due by June 30.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

May 2, 2008

Bet and sue

Most net.wars are not new. Today's debates about free speech and censorship, copyright and control, nationality and disappearing borders were all presaged by the same discussions in the 1980s even as the Internet protocols were being invented. The rare exception: online gambling. Certainly, there were debates about whether states should regulate gambling, but a quick Usenet search does not seem to throw up any discussions about the impact the Internet was going to have on this particular pastime. Just sex, drugs, and rock 'n' roll.

The story started in March, when the French Tennis Federation (FFT - Fédération Française de Tennis) filed suit in Belgium against Betfair, Bwin, and Ladbrokes to prevent them from accepting bets on matches played at the upcoming French Open tennis championships, which start on May 25. The FFT's arguments are rather peculiar: that online betting stains the French Open's reputation; that only the FFT has the right to exploit the French Open; that the online betting companies are parasites using the French Open to make money; and that online betting corrupts the sport. Bwin countersued for slander.

On Tuesday of this week, the Liège court ruled comprehensively against the FFT and awarded the betting companies costs.

The FFT will still, of course, control the things it can: fans will be banned from using laptops and mobile phones in the stands. The convergence of wireless telephony, smart phones, and online sites means that in the second or two between the end of a point and the electronic scoreboard updating, there's a tiny window in which people could bet on a sure thing. Why this slightly improbable scenario concerns the FFT isn't clear; that's a problem for the betting companies. What should concern the FFT is ensuring a lack of corruption within the sport. That means the players and their entourages.

The latter issue has been a touchy subject in the tennis world ever since last August, when Russian player Nikolay Davydenko, currently fourth in the world rankings, retired in the third and final set of a match in Poland against 87th ranked Marin Vassallo Arguello, citing a foot injury. Davydenko was accused of match-fixing; the investigation still drags on. In the resulting publicity, several other players admitted being approached to fix matches. As part of subsequent rule-tightening by the Association of Tennis Professionals, the governing body of men's professional tennis, three Italian players were suspended briefly late last year for betting on other players' matches.

Probably the most surprising thing is that tennis, along with soccer and horse racing, is actually among the most popular sports for betting. A minority sport like tennis? Yet according to USA Today, the 2007 Paris Masters event saw $750 million to $1.5 billion in bets. I can only assume that the inverted pyramid of matches every week involving individual players fits well with what bettors like to do.

Fixing matches seems even more unlikely. The best payouts come from correctly picking upsets, the bigger the better. But top players are highly unlikely to throw matches to order. Most of them play a relatively modest number of events (Davydenko is admittedly the exception) and need all the match wins and points from those events to sustain their rankings. Plus, they're just too damn rich.

In 2007, Roger Federer, the ultra-dominant number one player since the end of 2003, earned upwards of $10 million in prize money alone; Davydenko picked up over $2 million (and has already won another $1 million in 2008). All of the top 12 earned over $1 million. Add in endorsements, and even after you subtract agents' fees, tax, and travel costs for self and entourage, you're still looking at wealthy guys. They might tank matches at events where they're being paid appearance fees (which are legal on the men's tour at all but the top 14 events), but proving they've done so is exceptionally difficult. Fixing matches, which could cost them in lost endorsements on top of the tour's own sanctions, surely can't be worth it.

There are several ironies about the FFT's action. First of all (something most of the journalists covering this story don't mention, probably because they don't spend a lot of time watching tennis on TV), Bwin has been an important advertiser sponsoring tennis on Eurosport. It's absolutely typical of the counter-productive and intricately incestuous politics that characterize the tennis world that one part of the sport would sue someone who pays money into another part of the sport.

Second of all, as Betfair and Bwin pointed out, all three of these companies are highly regulated European licensed operations. Ruling them out of action would simply shift online betting to less well regulated offshore companies. They also pointed out the absurdity of the parasites claim: how could they accept bets on an event without using its name? Betfair in particular documented its careful agreements with tennis's many governing bodies.

Third of all, the only reason match-fixing is an issue in the tennis world right now is that Betfair spotted some unusual betting patterns during that Polish Davydenko match, cancelled all the bets, and went public with the news. Without that, Davydenko would have avoided the fight over his family's phone records. Come to think of it, making the issue public probably explains the FFT's behavior: it's revenge.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

April 11, 2008

My IP address, my self

Some years back when I was writing about the data protection directive, Simon Davies, director of Privacy International, predicted a trade war between the US and Europe over privacy laws. It didn't happen, or at least it hasn't happened yet.

The key element to this prediction was the rule in the EU's data protection laws that prohibited sending data on for processing to countries whose legal regimes aren't as protective as those of the EU. Of course, since then we've seen the EU sell out on supplying airline passenger data to the US. Even so, this week the Article 29 Data Protection Working Party made recommendations about how search engines save and process personal data that could drive another wedge between the US and Europe.

The Article 29 group is one of those arcane EU phenomena that you probably don't know much about unless you're a privacy advocate or paid to find out. The short version: it's a sort of think tank of data protection commissioners from all over Europe. The UK's Information Commissioner, Richard Thomas, is a member, as are his equivalents in countries from France to Lithuania.

The Working Party (as it calls itself) advises and recommends policies based on the data protection principles enshrined in the EU Data Protection Directive. It cannot make law, but both its advice to the European Commission and the Commission's action (or lack thereof) are publicly reported. It's arguable that in a country like the UK, where the Information Commissioner operates with few legal teeth to bite with, the existence of such a group may help strengthen the Commissioner's hand.

(Few legal teeth, at least in respect of government activities: the Information Commissioner has issued an opinion about Phorm indicating that the service must be opt-in only. As Phorm and the ISPs involved are private companies, if they persisted with a service that contravened data protection law, the Information Commissioner could issue legal sanctions. But while the Information Commissioner can, for example, rule that for an ISP to retain users' traffic data for seven years is disproportionate, if the government passes a law saying the ISP must do so then within the UK's legal system the Information Commissioner can do nothing about it. Similarly, the Information Commissioner can say, as he has, that he is "concerned" about the extent of the information the government proposes to collect and keep on every British resident, but he can't actually stop the system from being built.)

The group's key recommendation: search engines should not keep personally identifiable search histories for longer than six months, and it specifically includes search engines whose headquarters are based outside the EU. The group does not say which search engines it studied, but it was reported to be studying Google as long ago as last May. The report doesn't look at requirements to keep traffic data under the Data Retention Directive, as it does not apply to search engines.

Google's shortening the life of its cookies and anonymizing its search history logs after 18 months turns out to have a significance I didn't appreciate when I dismissed it at the time as insultingly trivial (which it was): it showed the Article 29 working group that the company doesn't really need to keep all that data for so long.

One of the key items the Article 29 group had to decide in writing its report on data protection issues related to search engines (PDF) is this: are IP addresses personal information? It sounds like one of those bits of medieval sophistry, like asking how many angels can dance on the head of a pin. In the dial-up days, it might not have mattered, at least in Britain, where local phone charges forced limited usage, so users were assigned a different IP address every time they logged in. But in the world of broadband, even the supposedly dynamic IP addresses issued by cable suppliers may remain with a single subscriber for years on end. Being able to track your IP address's activities is increasingly like being able to track your library card, your credit card, and your mobile phone all at the same time. Fortunately, the average ISP doesn't have the time to be that interested in most of its users.

The fact is that any single piece of information that identifies your activities over a long period and can be mapped to your real-life identity has to be considered personal information, or the data protection laws make no sense. The libertarian view, of course, would be that there are other search engines. You do not actually have to use Google, Gmail, or even YouTube. But if all search engines adopted Google's habits the choice would be more apparent than real. Time was when the US was the world's policeman. With respect to data, it seems that the EU has taken on this role. It will be interesting to see whether this decision has any impact on Google's business model and practices. If it does, that trade war could finally be upon us. If not, then Google has been building up a vast data store just because it can.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 28, 2008

Leaving Las Vegas

Las Vegas shouldn't exist. Who drops a sprawling display of electric lights, huge fountains, and luxury hotels into the best desert scenery on the planet during an energy crisis? Indoors, it's Britain in mid-winter; outdoors you're standing in a giant exhaust fan. The out-of-proportion scale means that everything is four times as far away as you think, including the jackpot you're not going to win at one of its casinos. It's a great place to visit if you enjoy wallowing in self-righteous disapproval.

This all makes it the stuff of song, story, and legend and explains why Jeff Jonas's presentation at etech was packed.

The way Jonas tells it in his blog and at his presentation, he got into the gaming industry by driving through Las Vegas in 1989 idly wondering what was going on behind the scenes at the casinos. A year later he got the tiny beginnings of an answer when he picked up a used couch he'd found in the newspaper classified ads (boy, that dates it, doesn't it?) and found that its former owner played blackjack "for a living". Jonas began consulting to the gaming industry in 1991, helping to open Treasure Island, Bellagio, and Wynn.

"Possibly half the casinos in the world use technology we created," he said at etech.

Gaming revenues are now less than half of total revenues, he said, and despite the apparent financial win they might represent, problem gamblers are in fact bad for business. The goal is for people to have fun. And because of that, he said, a place like the Bellagio is "optimized for consumer experience over interference. They don't want to spend money on surveillance."

Jonas began with a slide listing some common ideas about how Las Vegas works, culled from movies like Ocean's 11 and the TV show Las Vegas. Does the Bellagio have a vault? (No.) Do casinos perform background checks on guests based on public records? (No.) Is there a gaming industry watch list you can put yourself on but not take yourself off? (Yes, for people who know they have a gambling addiction.) Do casinos deliberately hire ex-felons? (Yes, to rehabilitate them.) Do they really send private jets for high rollers? (Cue story.)

There was, he said, a casino high roller who had won some $18 million. A win like that is going to show up in a casino's quarterly earnings. So, yes, they sent a private jet to his town and parked a limo in front of his house for the weekend. If you've got the bug, we're here for you, that kind of thing. He took the bait, and lost $22 million.

Do they help you create cover stories? (Yes.) "What happens in Vegas stays in Vegas" is an important part of ensuring that people can have fun that does not come back to bite them when they go home. The casinos' problem is with identity, not disguises, because they are required by anti-money laundering rules to report it any time someone crosses the $10,000 threshold for cash transactions. So if you play at several different tables, then go upstairs and change disguises, and come back and play some more, they have to be able to track you through all that. ID, therefore, is extremely important. Disguises are welcome; fake ID is not.
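
Tracking a player through table changes and costume changes is, at bottom, an aggregation problem: resolve the person to one identity, then sum their cash transactions. Here is a minimal sketch of that reporting rule - not casino code; the per-gaming-day window, record layout, and names are assumptions for illustration:

```python
# Aggregate cash transactions per *resolved* player identity across tables
# and sessions, and flag when the running total crosses the $10,000
# currency-transaction-report threshold. Disguises don't matter; identity does.
from collections import defaultdict

CTR_THRESHOLD = 10_000  # USD, assumed to apply per gaming day

def flag_reportable(transactions):
    """transactions: iterable of (player_id, gaming_day, amount_usd).
    Returns the set of (player_id, gaming_day) pairs that must be reported."""
    totals = defaultdict(float)
    reportable = set()
    for player_id, day, amount in transactions:
        totals[(player_id, day)] += amount
        if totals[(player_id, day)] > CTR_THRESHOLD:
            reportable.add((player_id, day))
    return reportable

# Three buy-ins at different tables by the same (de-disguised) player:
txns = [("P123", "2008-03-01", 4000),
        ("P123", "2008-03-01", 3500),
        ("P123", "2008-03-01", 3200)]
print(flag_reportable(txns))  # {('P123', '2008-03-01')}
```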

Do they use facial recognition to monitor the doors to spot cheaters on arrival? (Well...)

Of course technology-that-is-indistinguishable-from-magic-because-it-actually-is-magic appears on every crime-solving TV show these days. You know, the stuff where Our Heroes start with a fuzzy CCTV image and they punch in on a tiny piece of it and blow it up. And then someone says, "Can you enhance that?" and someone else says, "Oh, yes, we have new software," and a second later a line goes down the picture filling in detail. And a second after that you can read the brand on the face of a wrist watch (Numb3rs) or the manufacturer's coding on a couple of pills (Las Vegas). Or they have a perfect matching system that can take a partial fingerprint lifted off a strand of hair or something and bang! the database can find not only the person's identity but their current home address and phone number (Bones). And who can ever forget the first episode of 24, when Jack Bauer, alarmed at the disappearance of his daughter, tosses his phone number to an underling and barks, "Find me all the Internet passwords associated with this phone number."

And yet...a surprising number of what ought to be the technically best-educated audience on the planet thought facial recognition was in operation to catch cheaters. Folks, it doesn't work in airports, either.

Which brings us to the most interesting thing Jonas said: he now works for IBM (which bought his company) on privacy and civil liberties issues, including work on software to help the US government spot terrorists without invading privacy. It's an interesting concept, partly because security at airports and other locations is now so invasive. But also because if Las Vegas can find a way to deploy surveillance such that only the egregious problems are caught and everyone else just has a good time...why can't governments?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 14, 2008

Uninformed consent

Apparently the US Congress is now being scripted by Jon Stewart of the Daily Show. In a twist of perfect irony, the House of Representatives has decided to hold its first closed session in 25 years to debate - surveillance.

But it's obvious why they want closed doors: they want to talk about the AT&T case. To recap: AT&T is being sued for its complicity in the Bush administration's warrantless surveillance of US citizens after its technician Mark Klein blew the whistle by taking documents to the Electronic Frontier Foundation (which a couple of weeks ago gave him a Pioneer Award for his trouble).

Bush has, of course, resisted any effort to peer into the innards of his surveillance program by claiming it's all a state secret, and that's part of the point of this Congressional move: the Democrats have fielded a bill that would give the whole program some more oversight and, significantly, reject the idea of giving telecommunications companies - that is, AT&T - immunity from prosecution for breaking the law by participating in warrantless wiretapping. 'Snot fair that they should deprive us of the fun of watching the horse-trading. It can't, surely, be that they think we'll be upset by watching them slag each other off. In an election year?

But it's been a week for irony, as Wikipedia founder Jimmy Wales has had his sex life exposed after he dumped his girlfriend, and has been accused of - let's call it sloppiness - in his expense accounts. Worse, he stands accused of trading favorable page edits for cash. There's always been a strong element of Schadenpedia around, but the edit-for-cash thing really goes to the heart of what Wikipedia is supposed to be about.

I suspect that nonetheless Wikipedia will survive it: if the foundation has the sense it seems to have, it will display zero tolerance. But the incident has raised valid questions about how Wikipedia can possibly sustain itself financially going forward. The site is big and has enviable masses of traffic; but it sells no advertising, choosing instead to live on hand-outs and the work of volunteers. The idea, I suppose, is that accepting advertising might taint the site's neutral viewpoint, but donations can do the same thing if they're not properly walled off: just ask the US Congress. It seems to me that an automated advertising system they did not control would be, if anything, safer. And then maybe they could pay some of those volunteers, even though it would be a pity to lose some of the site's best entertainment.

With respect to advertising, it's worth noting that Phorm is under increasing pressure. Earlier this week, we had an opportunity to talk to Kent Ertegrul, CEO of Phorm, who continues to maintain that Phorm's system, because it does not store data, is more protective of privacy than today's cookie-driven Web. This may in fact be true.

Less certain is Ertegrul's belief that the system does not contravene the Regulation of Investigatory Powers Act, which lays down rules about interception. Ertegrul has some support from an informal letter from the Home Office whose reasoning seems to be that if users have consented and have been told how they can opt out, it's legal. Well, we'll see; there's a lot of debate going on about this claim and it will be interesting to hear the Information Commissioner's view. If the Home Office's interpretation is correct, it could open a lot of scope for abusive behavior that could be imposed upon users simply by adding it to the terms of service to which they theoretically consent when they sign up, and a UK equivalent of AT&T wanting to assist the government with wholesale warrantless wiretapping would have only to add it to the terms of service.

The real problem is that no one really knows how Phorm's system works. Phorm doesn't retain your IP address, but the ad servers surely have to know it when they're sending you ads. If you opt out but can still opt back in (as Ertegrul said you can), doesn't that mean you still have a cookie on your system and that your data is still passed to Phorm's system, which discards it instead of sending you ads? If that's the case, doesn't that mean you cannot opt out of having your data shared? If that isn't how it works, then how does it work? I thought I understood it after talking to Ertegrul, I really did - and then someone asked me to explain how Phorm's cookie's usefulness persisted between sessions, and I wasn't sure any more. I think the Open Rights Group has it right: Phorm should publish details of how its system works for experts to scrutinize. Until Phorm does that, the misinformation Ertegrul is so upset about will continue. (More disclosure: I am on ORG's Advisory Council.)

But maybe the Home Office is on to something. Bush could solve his whole problem by getting everyone to give consent to being surveilled at the moment they take US citizenship. Surely a newborn baby's footprint is sufficient agreement?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

February 22, 2008

Strikeout

There is a certain kind of mentality that is actually proud of not understanding computers, as if there were something honorable about saying grandly, "Oh, I leave all that to my children."

Outside of computing, only television gets so many people boasting of their ignorance. Do we boast how few books we read? Do we trumpet our ignorance of other practical skills, like balancing a cheque book, cooking, or choosing wine? When someone suggests we get dressed in the morning do we say proudly, "I don't know how"?

There is so much insanity coming out of the British government on the Internet/computing front at the moment that the only possible conclusion is that the government is made up entirely of people who are engaged in a sort of reverse pissing contest with each other: I can compute less than you can, and see? here's a really dumb proposal to prove it.

How else can we explain yesterday's news that the government is determined to proceed with Contactpoint even though the report it commissioned and paid for from Deloitte warns that the risk of storing the personal details of every British child under 16 can only be managed, not eliminated? Lately, it seems that there's news of a major data breach every week. But the present government is like a batch of 20-year-olds who think that mortality can't happen to them.

Or today's news that the Department of Culture, Media, and Sport has launched its proposals for "Creative Britain", and among them is a very clear diktat to ISPs: deal with file-sharing voluntarily or we'll make you do it. By April 2009. This bit of extortion nestles in the middle of a bunch of other stuff about educating schoolchildren about the value of intellectual property. Dare we say: if there were one thing you could possibly do to ensure that kids sneer at IP, it would be to teach them about it in school.

The proposals are vague in the extreme about what kind of regulation the DCMS would accept as sufficient. Despite the leaks of last week, culture secretary Andy Burnham has told the Financial Times that the "three strikes" idea was never in the paper. As outlined by Open Rights Group executive director Becky Hogge in New Statesman, "three strikes" would mean that all Internet users would be tracked by IP address and warned by letter if they are caught uploading copyrighted content. After three letters, they would be disconnected. As Hogge says (disclosure: I am on the ORG advisory board), the punishment will fall equally on innocent bystanders who happen to share the same house. Worse, it turns ISPs into a squad of private police for a historically rapacious industry.

Charles Arthur, writing in yesterday's Guardian, presented the British Phonographic Institute's case about why the three strikes idea isn't necessarily completely awful: it's better than being sued. (These are our choices?) ISPs, of course, hate the idea: this is an industry with nanoscale margins. Who bears the liability if someone is disconnected and starts to complain? What if they sue?

We'll say it again: if the entertainment industries really want to stop file-sharing, they need to negotiate changed business models and create a legitimate market. Many people would be willing to pay a reasonable price to download TV shows and music if they could get in return reliable, fast, advertising-free, DRM-free downloads at or soon after the time of the initial release. The longer the present situation continues the more entrenched the habit of unauthorized file-sharing will become and the harder it will be to divert people to the legitimate market that eventually must be established.

But the key damning bit in Arthur's article (disclosure: he is my editor at the paper) is the BPI's admission that they cannot actually say that ending file-sharing would make sales grow. The best the BPI spokesman could come up with is, "It would send out the message that copyright is to be respected, that creative industries are to be respected and paid for."

Actually, what would really do that is a more balanced copyright law. Right now, the law is so far from what most people expect it to be - or rationally think it should be - that it is breeding contempt for itself. And it is about to get worse: term extension is back on the agenda. The 2006 Gowers Review recommended against it, but on February 14, Irish EU Commissioner Charlie McCreevy (previously: champion of software patents) announced his intention to propose extending performers' copyright in sound recordings from the current 50-year term to 95 years. The plan seems to go something like this: whisk it past the Commission in the next two months. Then the French presidency starts and whee! new law! The UK can then say its hands are tied.

That change makes no difference to British ISPs, however, who are now under the gun to come up with some scheme to keep the government from clomping all over them. Or to the kids who are going to be tracked from cradle to alcopop by unique identity number. Maybe the first target of the government computing literacy programs should be...the government.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

December 7, 2007

Data hogs

If a data point falls in the forest and there's no database to pick it up, is it still private?

There is a general view that people do not care about privacy, particularly younger people. They blog the names of all their favorite bands and best friends, post their drunken photographs on Facebook, and tell all of MySpace who they slept with last night. No one, the argument goes – actually 22 percent – reads the privacy policies Web sites pay their lawyers to draw up so unreadably.

And yet the perception is wrong. People do, clearly, care about privacy – when the issues are made visible to them. Unfortunately, the privacy-invasiveness of a service, policy, or Web site usually only becomes visible after the horse has escaped and is comfortably grazing in the field of three-leaf clover.

A lot of this is, as Charles Arthur blogged recently in relation to the loss of the HMRC discs holding the Child Benefit database, an education issue: if we taught kids important principles of computer science, like security, privacy, and the value of data, instead of boring things like how to format an Excel spreadsheet, some of the most casual data violations wouldn't happen.

A lot of recent privacy failures seem to have happened in just this same unconscious way. Google's various privacy invasions, for example, seem to be a peculiarly geeky failure to connect with the general public's view of things. You can just imagine the techies at the Googleplex saying, "Oh, cool! Look, you can see right into the windows of those houses!" and utterly failing at simple empathy.

The continuing Facebook privacy meltdown seems to include the worst aspects of both the HMRC incident and Google's blind spot. If you haven't been following it, the story in brief is that Facebook created a new advertising program it calls Beacon, which collects tracking data from a variety of partner sites such as Blockbuster.com. Beacon then uses the data to display your latest purchases so your friends can see them.

The blind spot is, of course, the utter surprise with which the company greeted the discovery that people have all sorts of reasons why they don't want their purchase history displayed to their friends. They might be gifts for said friends. The friends, as so often on Facebook and the other social networks, may not be really friends but acquaintances chosen to make you look well-connected, or relatives you assiduously avoid in real life. And even your closest real friends may prefer not to know too much about the porn DVDs you rent. American librarians are militant about protecting the reading lists of library patrons; but Facebook would gleefully expose the books you buy. Are you kidding me? Facebook CEO Mark Zuckerberg can apologize all he wants, but his apparent surprise at the size of the fuss suggests that he's as inexperienced at shopping as those women in front of you in the grocery checkout who seem not to know they'll need to pay until after everything's been bagged up.

What Facebook shares with HMRC, though, is the underlying principle that it's cheaper to send the full set of data and let the recipients delete what they don't want than to be selective. And so, as the story has developed, it turns out that all sorts of data is being sent to Facebook, some of it even relating to non-users. They just delete what they don't want, so they say.

Facebook was briefly defensive, then allowed users to opt out, and then finally allowed users to delete the thing entirely. But the whole thing highlights one of the very real problems with social network sites that net.wars first wrote about in connection with (the now more responsibly designed) Plaxo: they grow by getting people to invade their own and their friends' privacy. The Australian computer scientist and privacy advocate Roger Clarke, whose paper Very Black "Little Black Books" is the seminal work in this area, predicted in 2003 that the social networks' business models would force them to become extremely invasive. And so it has proved.

How do we make privacy a choice? We know people care about privacy when they can see its loss: the reactions to the Facebook and HMRC incidents have made this plain. We also know that making the trade-offs visible changes behavior: recent research by Lorrie Cranor at Carnegie-Mellon (PDF) suggests, for example, that people's purchasing habits will change if you give them an easy-to-understand graphical representation of how well an ecommerce site's practices match their privacy preferences.

But visibility to users, helpful though it would be, is not the root of the problem. What privacy advocates need going forward is a way to persuade companies and governments to make privacy choices easy and visible when their mindset is to collect and keep all data, all the time. These organisations do not perceive giving users control over their privacy as being in their own best interests. Maybe plummeting stock prices and forced resignations, however brief, will get through to them. But to keep their attention focused on building better systems that put the user in control, we need to make the consequences of getting it wrong constantly visible and easily interpretable to the data hogs themselves.


Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 23, 2007

Road block

There are many ways for a computer system to fail. This week's disclosure that Her Majesty's Revenue and Customs has played lost-in-the-post with two CDs holding the nation's Child Benefit data is one of the stranger ones. The Child Benefit database includes names, addresses, identifying numbers, and often bank details, on all the UK's 25 million families with a child under 16. The National Audit Office requested a subset for its routine audit; the HMRC sent the entire database off by TNT post.

There are so many things wrong with this picture that it would take a village of late-night talk show hosts to make fun of them all. But the bottom line is this: when the system was developed no one included privacy or security in the specification or thought about the fundamental change in the nature of information when paper-based records are transmogrified into electronic data. The access limitations inherent in physical storage media must be painstakingly recreated in computer systems or they do not exist. The problem with security is it tends to be inconvenient.

With paper records, the more data you provide the more expensive and time-consuming it is. With computer records, the more data you provide the cheaper and quicker it is. The NAO's file of email relating to the incident (PDF) makes this clear. What the NAO wanted (so it could check that the right people got the right benefit payments): national insurance numbers, names, and benefit numbers. What it got: everything. If the discs hadn't gotten lost, we would never have known.
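
For comparison, here is a minimal sketch - not HMRC's actual system; the field names and file layout are invented for illustration - of how cheap the correct, minimal export would have been:

```python
# Strip each Child Benefit record down to the three fields the NAO actually
# asked for before anything leaves the building; bank details, addresses,
# and everything else are never copied into the export file.
import csv

REQUESTED_FIELDS = ["national_insurance_number", "name", "benefit_number"]

def export_subset(source_path, dest_path):
    with open(source_path, newline="") as src, \
         open(dest_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=REQUESTED_FIELDS)
        writer.writeheader()
        for record in reader:
            writer.writerow({f: record[f] for f in REQUESTED_FIELDS})

# export_subset("child_benefit_full.csv", "nao_audit_extract.csv")
```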

Ironically enough, this week in London also saw at least three conferences on various aspects of managing digital identity: Digital Identity Forum, A Fine Balance, and Identity Matters. All these events featured the kinds of experts the UK government has been ignoring in its mad rush to create and collect more and more data. The workshop on road pricing and transport systems at the second of them, however, was particularly instructive. Led by science advisor Brian Collins, the most notable thing about this workshop is that the 15 or 20 participants couldn't agree on a single aspect of such a system.

Would it run on GPS or GSM/GPRS? Who or what is charged, the car or the driver? Do all roads cost the same or do we use differential pricing to push traffic onto less crowded routes? Most important, is the goal to raise revenue, reduce congestion, protect the environment, or rebalance the cost of motoring so the people who drive the most pay the most? The more purposes the system is intended to serve, the more complicated and expensive it will become, and the less likely it is to answer any of those goals successfully. This point has of course also been made about the National ID card by the same sort of people who have warned about the security issues inherent in large databases such as the Child Benefit database. But it's clearer when you start talking about something as limited as road charging.

For example: if you want to tag the car you would probably choose a dashboard-top box that uses GPS data to track the car's location. It will have to store and communicate location data to some kind of central server, which will use it to create a bill. The data will have to be stored for at least a few billing cycles in case of disputes. Security services and insurers alike would love to have copies. On the other hand, if you want to tag the driver it might be simpler just to tie the whole thing to a mobile phone. The phone networks are already set up to do hand-off between nodes, and tracking the driver might also let you charge passengers, or might let you give full cars a discount.

The problem is that the discussion is coming from the wrong angle. We should not be saying, "Here is a clever technological idea. Oh, look, it makes data! What shall we do with it?" We should be defining the problem and considering alternative solutions. The people who drive most already pay most via the fuel pump. If we want people to drive less, maybe we should improve public transport instead. If we're trying to reduce congestion, getting employers to be more flexible about working hours and telecommuting would be cheaper, provide greater returns, and, crucially for this discussion, not create a large database system that can be used to track the population's movements.

(Besides, said one of the workshop's participants: "We live with the congestion and are hugely productive. So why tamper with it?")

It is characteristic of our age that the favored solution is the one that creates the most data and the biggest privacy risk. No one in the cluster of organisations opposing the ID card - No2ID, Privacy International, Foundation for Information Policy Research, or Open Rights Group - wanted an incident like this week's to happen. But it is exactly what they have been warning about: large data stores carry large risks that are poorly understood, and it is not enough for politicians to wave their hands and say we can trust them. Information may want to be free, but data want to leak.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 9, 2007

Watching you watching me

A few months ago, a neighbour phoned me and asked if I'd be willing to position a camera on my windowsill. I live at the end of a small dead-end street (or cul-de-sac), that ends in a wall about shoulder height. The railway runs along the far side of the wall, and parallel to it and further away is a long street with a row of houses facing the railway. The owners of those houses get upset because graffiti keeps appearing alongside the railway where they can see it and covers flat surfaces such as the side wall of my house. The theory is that kids jump over the wall at the end of my street, just below my office window, either to access the railway and spray paint or to escape after having done so. Therefore, the camera: point it at the wall and watch to see what happens.

The often-quoted number of times the average Londoner is caught on camera per day is scary: 200. (And that was a few years ago; it's probably gone up.) My street is actually one of those few that doesn't have cameras on it. I don't really care about the graffiti; I do, however, prefer to be on good terms with neighbours, even if they're all the way across the tracks. I also do see that it makes sense at least to try to establish whether the wall downstairs is being used as a hurdle in the getaway process. What is the right, privacy-conscious response to make?

I was reminded of this a few days ago when I was handed a copy of Privacy in Camera Networks: A Technical Perspective, a paper published at the end of July. (We at net.wars are nothing if not up-to-date.)

Given the amount of money being spent on CCTV systems, it's absurd how little research there is covering their efficacy, their social impact, or the privacy issues they raise. In this paper, the quartet of authors – Marci Lenore Meingast (UC Berkeley), Sameer Pai (Cornell), Stephen Wicker (Cornell), and Shankar Sastry (UC Berkeley) – are primarily concerned with privacy. They ask a question every democratic government deploying these things should have asked in the first place: how can the camera networks be designed to preserve privacy? For the purposes of preventing crime or terrorism, you don't need to know the identity of the person in the picture. All you want to know is whether that person is pulling out a gun or planting a bomb. For solving crimes after the fact, of course, you want to be able to identify people – but most people would vastly prefer that crimes were prevented, not solved.

The paper cites model legislation (PDF) drawn up by the Constitution Project. Reading it is depressing: so many of the principles in it are such logical, even obvious, derivatives of the principles that democratic governments are supposed to espouse. And yet I can't remember any public discussion of the idea that, for example, all CCTV systems should be accompanied by identification of and contact information for the owner. "These premises are protected by CCTV" signs are everywhere; but they are all anonymous.

Even more depressing is the suggestion that the proposals for all public video surveillance systems should specify what legitimate law enforcement purpose they are intended to achieve and provide a privacy impact assessment. I can't ever remember seeing any of those either. In my own local area, installing CCTV is something politicians boast about when they're seeking (re)election. Look! More cameras! The assumption is that more cameras equals more safety, but evidence to support this presumption is never provided and no one, neither opposing politicians nor local journalists, ever mounts a challenge. I guess we're supposed to think that they care about us because they're spending the money.

The main intention of Meingast, Pai, et al, however, is to look at the technical ways such networks can be built to preserve privacy. They suggest, for example, collecting public input via the Internet (using codes to identify the respondents on whom the cameras will have the greatest impact). They propose an auditing system whereby these systems and their usage are reviewed. As the video streams become digital, they suggest using layers of abstraction of the resulting data to limit what can be identified in a given image. "Information not pertinent to the task in hand," they write hopefully, "can be abstracted out leaving only the necessary information in the image." They go on into more detail about this, along with a lengthy discussion of facial recognition.
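
To make the idea concrete, here is a minimal sketch of the kind of abstraction layer the authors describe, using OpenCV's stock face detector to redact who is in a frame while keeping what is happening; the redaction policy and function names are mine, not the paper's:

```python
# A toy "abstraction layer": before a frame is stored or transmitted, blur
# every detected face so the stream still shows *what* is happening (someone
# climbing a wall) without recording *who* is doing it. Thresholds are
# illustrative only.
import cv2

_face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def abstract_frame(frame):
    """Return a copy of the frame with all detected faces blurred out."""
    redacted = frame.copy()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in _face_detector.detectMultiScale(gray, 1.1, 5):
        roi = redacted[y:y+h, x:x+w]
        redacted[y:y+h, x:x+w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return redacted

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)          # or a CCTV stream URL
    ok, frame = cap.read()
    if ok:
        cv2.imwrite("redacted.png", abstract_frame(frame))
    cap.release()
```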

The most depressing thing of all: none of this will ever happen, and for two reasons. First, no government seems to have the slightest qualm of conscience about installing surveillance systems. Second, the mass populace don't seem to care enough to demand these sorts of protections. If these protections are to be put in place at all, it must be done by technologists. They must design these systems so that it's easier to use them in privacy-protecting ways than to use them in privacy-invasive ways. What are the odds?

As for the camera on my windowsill, I told my neighbour after some thought that they could have it there for a maximum of a couple of weeks to establish whether the end of my street was actually being used as an escape route. She said something about getting back to me when something or other happened. Never heard any more about it. As far as I am aware, my street is still unsurveilled.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 12, 2007

The permission-based society

It was Edward Hasbrouck who drew my attention to a bit of rulemaking being proposed by the Transportation Security Agency. Under current rules, if you want to travel on a plane out of, around, into, or over the US you buy a ticket and show up at the airport, where the airline compares your name and other corroborative details to the no-fly list the TSA maintains. Assuming you're allowed onto the flight, unbeknownst to you, all this information has to be sent to the TSA within 15 minutes of takeoff (before, if it's a US flight, after if it's an international flight heading for the US).

Under the new rules, the information will have to arrive at the TSA 72 hours before the flight takes off – after all, most people have finalised their travel plans by that time, and only 7 to 10 percent of itineraries change after that – and the TSA has to send back an OK to the airline before you can be issued a boarding pass.

There's a whole lot more detail in the Notice of Proposed Rulemaking, but that's the gist. (They'll be accepting comments until October 22, if you would like to say anything about these proposals before they're finalised.)

There are lots of negative things to say about these proposals – the logistical difficulties for the travel industry, the inadequacy of the mathematical model behind this (which at the public hearing the ACLU's Barry Steinhardt compared to trying to find a needle in a haystack by pouring more hay on the stack), and the privacy invasiveness inherent in having the airlines collect the many pieces of data the government wants and, not unnaturally, retaining copies while forwarding it on to the TSA. But let's concentrate on one: the profound alteration such a scheme will make to American society at large. The default answer to the question of whether you had the right to travel anywhere, certainly within the confines of the US, has always been "Yes". These rules will change it to "No".

(The right to travel overseas has, at times, been more fraught. The folk scene, for example, can cite several examples of musicians who were denied passports by the US State Department in the 1950s and early 1960s because of their left-wing political beliefs. It's not really clear to me why the US wanted to keep people whose views it disapproved of within its borders but some rather hasty marriages took place in order to solve some of these immigration problems, though everyone's friends again now and it's fresh passports all round.)

Hasbrouck, Steinhardt, and EFF founder John Gilmore, who sued the government over the right to travel anonymously within the US, have all argued that the key issue here is the right to assemble guaranteed in the First Amendment. If you can't travel, you can't assemble. And if you have to ask permission to travel, your right of assembly is subject to disruption at any time. The secrecy with which the TSA surrounds its decision-making doesn't help.

Nor does the amount of personal data the TSA is collecting from airline passenger name records. The Identity Project's recent report on the subject highlights that these records may include considerable detail: what books the passenger is carrying, what answer you give when asked where you've been or are going, names and phone numbers given as emergency contacts, and so on. Despite the data protection laws, it isn't always easy to find out what information is being stored; when I made such a request of US Airways last year, the company refused to show me my PNR from a recent flight and gave as the reason: "Security." Civilisation as we know it is at risk if I find out what they think they know about me? We really are in trouble.

In Britain, the chief objections to the ID card and, more important, the underlying database, have of course been legion, but they have generally focused on the logistical problems of implementing it (huge cost, complex IT project, bound to fail) and its general privacy-invasiveness. But another thing the ID card – especially the high-tech, biometric, all-singing, all-dancing kind – will do is create a framework that could support a permission-based society in which the ID card's interaction with systems is what determines what you're allowed to do, where you're allowed to go, and what purchases you're allowed to make. There was a novel that depicted a society like this: Ira Levin's This Perfect Day, in which these functions were all controlled by scanner bracelets and scanners everywhere that lit up green to allow or red to deny permission. The inhabitants of that society were kept drugged, so they wouldn't protest the ubiquitous controls. We seem to be accepting the beginnings of this kind of life stone-cold sober.

American children play a schoolyard game called "Mother, May I?" It's one of those games suitable for any number of kids, and it involves a ritual of asking permission before executing a command. It's a fine game, but surely it isn't how we want to live.


Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

September 14, 2007

Nothing to hide, no one to trust

The actor David Hyde Pierce is widely reported to have once responded to a TV interviewer inquiring as to whether he was gay, "My life is an open book, but that doesn't mean I'm going to read it to you." (Or something very like that.)

This seems to me a both witty and intelligent response to the seemingly ever-present mantra these days, "If you have nothing to hide, you have nothing to fear," invoked every time someone wants to institute some new, egregious privacy-invasive surveillance practice. And there are a lot of these.

Last week, a British judge came up with a brilliant scheme for eliminating the racial bias of the 3 million-entry DNA database: collect samples from everyone, even visitors. I may have nothing to fear from this - after all, DNA testing has, in the US, been used to exonerate the innocent on Death Row - but it invokes in me what British politicos sometimes call the "yuck factor". Normally, this is reserved for such science-related ethical dilemmas as human cloning, but for me at least it applies much more strongly here. I loved the movie Gattaca, but I don't want to live there.

In fact, there are considerable risks in DNA-printing the entire population (aside from killing tourism). For one thing, we do not know how we will be able to use or interpret DNA in the future as sequencing plummets in price (as it's expected to do). Say the UK had considered compiling a nationwide fingerprint database back in 1970 (there would have been riots, but leave that aside): no one would then have foreseen the widespread availability of cheap fingerprint scanners and online communications that could turn that database into a central authority.

We can surmise that the DNA database will contain sufficient information to allow anyone who can gain access to it to impersonate anyone at any time. Conversely, as we get better and better at understanding what individual genes mean and sequencing drops precipitously in price, the DNA database will grant those who have access to it unprecedented amounts of information about each person's biological or medical prospects and those of their immediate relatives. While there are many diseases that do not have markers in our genes, there are plenty more that do. Does anyone really want the government to be the first to know that they carry the gene for cystic fibrosis or breast cancer?

I don't believe for a second that it was a serious proposal. This is the kind of thing someone says and then everyone holds their breath to gauge the reaction. Has the country been softened up enough to accept such a thing yet? But the fact that someone could say it at all shows how far we have moved away from the presumption of innocence on which both the UK and the US governments were founded.

Witty answers on talk shows aren't, however, quite enough to make a case to a government that what it wants to do is a bad idea. In his book The Digital Person, George Washington University law professor Daniel Solove compares privacy to environmental damage: not the single horror story implied by "nothing to hide, nothing to fear", but the result of the accumulating damage caused by a series of "small acts by different actors". The broader structural damage that happens in breaches of confidentiality (such as companies violating their own privacy policies by selling data to third parties) is a loss of trust.

I am not a supporter of open gun ownership, but the US Second Amendment has some merit in principle: the basic idea is to balance the power of the individual against the State. The EU's data protection laws do - or would, if the EU doesn't ignore them as it has in the case of passenger data - a reasonable job of balancing the power of the individual against commercial companies. But the data protection laws can be upended, it seems, whenever a national government wants to do so. All it has to do is pass a law making it legal or mandatory to supply the data it wants to collect, transfer, share, or sell. But the fact that such policies are possible doesn't make them a good idea, even with the best intentions of improving security or personal safety.

The San Francisco computer security expert Russell Brand once asked me, in the casual way he poses philosophical questions, "If you knew they would never be used against you, would you still have secrets?" After some thought, I came up with this: yes. Because one of the ways you show someone that they are important to you and you trust them is that you disclose to them things you don't normally tell other people. It is, in fact, one of the ways we show we love them. We don't tell governments secrets because we want to be intimate with them; we do it because we're required to do so by law. The more one-sided laws make the balance of power, the less we're going to trust our government. Is that really what they want?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

September 7, 2007

Was that private?

Time and again the Net has proved that anything large corporations can do to us – we can do to ourselves much more effectively and willingly. Churn, for example, used to be something people fired stockbrokers for; it is the practice of buying and selling stocks much more frequently than is rational in order to profit from the brokerage fees. But during the dot-com day-trading boom, as I remember brokers saying at the time, people churned their own accounts far more than any broker would ever have dared to do (and, we add, probably lost themselves far more money).

The same is happening now with privacy. If, say, Tesco posted the real names of its club card members on a Web site complete with little jokes about the foods they buy and the times of day they like deliveries, there would be considerable outrage. (And rightly so.) When a government allows the kinds of leaks detailed here on the Blindside site (to which I contribute) we are significantly unimpressed. On the other hand, over and over on the Net we see people invading their own privacy to a degree that probably no corporation or government would dare. We post about our friends, our habits, our flight times, the shinies we just bought, the books and newspapers we read, the TV programs we watch (and, in some cases, download), our religious feelings or lack thereof, and the organisations we join and promote. We do this for the same reason people feel safer driving in their own cars than they do flying on someone else's airplane: we think we're in control.

Which all leads to a 2003 European court decision that was noted this week by Oxford Internet Institute law professor Jonathan Zittrain (who had it from Karen McCullagh) about Mrs Bodil Lindqvist, a Swedish church catechist who apparently chatted rather liberally about some of her fellow church committee members on a Web site they didn't know she'd created. Among other information, she included names, the fact that a colleague was on partial medical leave because her foot was injured, phone numbers, and so on.

Your attitude about this sort of thing depends partly on your personality, your culture, and your own online habits. So many of my friends blog what seems to be their whole lives in copious detail that it never occurred to me I was doing anything privacy-invasive when I visited a foreign country and blogged the name of the friend of a friend I met there and an account of a day we spent together. The blogee, however, was unhappy and asked me to remove the name and other identifying details. I was surprised, but complied. Referring to someone's injured foot seems kind of harmless, too. On the other hand, I would never put someone's telephone number online without their consent, even though my own phone number is on my Web site. Yet that prejudice seems irrational if you instead call the information about the injured foot "personal medical data" but see the phone number as something anyone could find in the phone book.

Fortunately, I don't live in Sweden. Mrs Lindqvist also took down her pages when she found out the people she had mentioned were upset. But nonetheless she was prosecuted for several violations of the data protection laws – processing personal data without giving prior written notice to the data protection authorities, processing sensitive personal data (the medical information about the foot) without consent, and transferring personal data to a third country. The European Court of Justice, where the case eventually ended up, concluded the third of these did not apply – simply posting something on a Web page that can be read by someone in another country isn't enough for that. But the ECJ agreed she was guilty of the first two of these offenses – as are, by now, probably hundreds of thousands, if not millions, of other Europeans.

This judgement was rendered (and ignored) before Facebook and MySpace became phenomena, and before blogging became quite such a widespread pastime. It acknowledges the competing claims of laws guaranteeing freedom of expression, but still comes down against Mrs Lindqvist – who, frankly, seems like pretty small beer compared to this week's announcement that Facebook is to open its member listings to Google's search engine.

The problem with the Facebook decision is that one of the things that really does govern the Net while the law is still making up its mind is community standards. Based on Facebook's assurances that the system was closed to members only, people posted material about themselves and their friends that they thought would stay private. The decision to open the service makes them more like celebrities at Addictions Anonymous meetings. They are about to discover this in the same way that a generation of Usenet posters did when the archives of what they thought were ephemera were assembled and opened to the public by Deja News (now Google Groups). What they will also discover is that although they can delete their own accounts or mark them private, like Mrs Lindqvist's church colleagues, they have no control over what others have said about them in public when they weren't looking.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

August 24, 2007

Game gods

Virtual worlds have been with us for a long time. Depending who you listen to, they began in 1979, or 1982, or it may have been the shadows on the walls of Plato's cave. We'll go with the University of Essex MUD, on the grounds that its co-writer Richard Bartle can trace its direct influence on today's worlds.

At State of Play this week, it was clear that just as the issues surrounding the Internet in general have changed very little since about 1988, neither have the issues surrounding virtual worlds.

True, the stakes are higher now and, as Professor Yee Fen Lim noted, when real money starts to be involved people become protective.

Level 70 warrior accounts on World of Warcraft go for as little as $10 (though your level number cannot disguise your complete newbieness), but the unique magic sword you won in a quest may go for much more. The best-known pending case is Bragg versus Second Life over virtual property the world's owners confiscated when they realized that Bragg was taking advantage of a loophole in their system to buy "land" at exceptionally cheap prices. Lim had an interesting take on the Bragg case: as a legal concept, she argued, property is a right of control, even though Linden Labs itself defines its virtual property as rental of a processor. As computer science that's fine, but it's not law. Otherwise, she said, "Property is mere illusion."

Ultimately, the issues all come down to this: who owns the user experience? In subscription gaming worlds, the owners tend to keep very tight control of everything – they claim ownership in all intellectual property in the world, limit users' ability to create their own content, and block the sale of cheats as much as possible. In a free-form world like Second Life which may host games but is itself a platform rather than a game, users are much freer to do what they want but the EULAs or Terms of Service may be just as unfair.

Ultimately, no matter what the agreement says, today's privately owned virtual worlds all function under the same reality: the game gods can pull the plug at any time. They own and control the servers. Possession is nine-tenths of the law, and all that. Until someone implements open source world software on a P2P platform, this will always be the way. Linden Labs says, for what it's worth, that its long-term intention is to open-source its platform so that anyone may set up a world. This, too, has been done before, with The Palace.

One consequence of this is that there is no such thing as virtual privacy, a topic that everyone is aware of but no one's talking about. The piecemeal nature of the Net means that your friend's IRC channel doesn't know anything about your Web use, and Amazon.com doesn't track what you do on eBay. But virtual worlds log everything. If you buy a new shirt at a shop and then fly to a distant island to have sex in it, all that is logged. (Just try to ensure the shirt doesn't look like a child's shirt and you don't get into litigation over who owns the island…)

There are, as scholars say, legitimate reasons. Logging everything that happens is important in helping game developers pinpoint the source of crashes and eliminate bugs. Logs help settle disputes over who did what to whose magic sword. And in a court case, they may be important evidence (although how you can ensure that the logs haven't been adjusted to suit the virtual world provider, who is usually one of the parties to the litigation, I don't know).

As long as you think of virtual worlds as games, maybe this isn't that big a problem. After all, no one is forced to spend half their waking hours killing enough monsters in World of Warcraft to join a guild for a six-hour quest.

But something like Second Life aspires to be a lot more than that. The world is adding voice communication, which will be interesting: if you have to use your real voice, the relative anonymity conferred by the synthetic world is gone. Quite apart from bandwidth demands (lag is the bane of every SLer's existence), exploring what virtual life is like in the opposite gender isn't going to work. They're going to need voice synthesizers.

Much of the law in this area is coming out of Asia, where massively multi-player online games took off so early with such ferocity that, according to Judge Unggi Yoon, in a recent case a member of a losing team in one such game ran to the café where the winning team was playing and physically battered one of its members. Yoon, who explained some of the new laws, is an experienced online gamer, all the way back to playing Ultima Online in middle school. In his country, a law has recently come into force taxing virtual world transactions (it works like a VAT threshold – under $100 a month you don't owe anything). For Westerners, who are used to the idea that we make laws and export them rather than the other way around, this is quite a reality shift.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

July 27, 2007

There ain't no such thing as a free Benidorm

This has been the week for reminders that the border between real life and cyberspace is a permeable blood-brain barrier.

On Wednesday, Linden Labs announced that it was banning gambling in Second Life. The resentment expressed by some SL residents is understandable but naive. We're not at the beginning of the online world any more; Second Life is going through the same reformation to take account of national laws as Usenet and the Web did before it.

Second, this week MySpace deleted the profiles of 29,000 American users identified as sex offenders. That sounds like a lot, but it's a tiny percentage of MySpace's 180 million profiles. None of them, be it noted, are Canadian.

There's no question that gambling in Second Life spills over into the real world. Linden dollars, the currency used in-world, have active exchange rates, like any other currency, currently running about L$270 to the US dollar. (When I was writing about a virtual technology show, one of my interviewees was horrified that my avatar didn't have any distinctive clothing; she was and is dressed in the free outfit you are issued when you join. He insisted on giving me L$1,000 to take her shopping. I solemnly reported the incident to my commissioning editor, who felt this wasn't sufficiently corrupt to worry about: US$3.75! In-world, however, that could buy her several cars.) Therefore: the fact that the wagering takes place online in a simulated casino with pretty animated decorations changes nothing. There is no meaningful difference between craps on an island in Second Life and poker on an official Web-based betting site. If both sites offer betting on real-life sporting events, there's even less difference.

But the Web site will, these days, have spent considerable time and money setting up its business. Gaming, even outside the US, is quite difficult to get into: licenses are hard to get, and without one banks won't touch you. Compared to that, the $3,800 and 12 to 14 hours a day Brighton's Anthony Smith told Information Week he'd invested in building his SL Casino World is risibly small. You have to conclude that there are only two possibilities. Either Smith knew nothing about the gaming business - had he known anything, he would have known that the US has repeatedly cracked down on online gambling over the last ten years and that ultimately US companies will be forced to decide to live within US law, and he would also have known how hard and how expensive it is to set up an online gambling operation even in Europe. Or he did know all those things and thought he'd found a loophole he could exploit to avoid all the red tape and regulation and build a gaming business on the cheap.

I have no personal interest in gaming; risking real money on the chance draw of a card or throw of dice seems to me a ridiculous waste of the time it took to earn it. But any time you sell something that involves real money - an experience (gaming), a service, or a retail product - governments are going to be interested once the money you handle reaches a certain amount. Not only that, but people want them involved; people want protection from rip-off artists.

The MySpace decision, however, is completely different. Child abuse is, rightly, illegal everywhere. Child pornography is, more controversially, illegal just about everywhere. But I am not aware of any laws that ban sex offenders from using Web sites, even if those Web sites are social networks. Of course, in the moral panic following the MySpace announcement, someone is proposing such a law. The MySpace announcement sounds more like corporate fear (since the site is now owned by News Corporation) than a rational response. There is a legitimate subject for public and legislative debate here: how much do we want to cut convicted sex offenders out of normal social interaction? And a question for scientists: will greater isolation and alienation be effective strategies to keep them from reoffending? And, I suppose, a question for database experts: how likely is it that those 29,000 profiles all belonged to correctly identified, previously convicted sex offenders? But those questions have not been discussed. Still, this problem, at least in regard to MySpace, may solve itself: if parents become better able to track their kids' MySpace activities, all but the youngest kids will surely abandon it in favour of sites that afford them greater latitude and privacy.

A dozen years ago, John Perry Barlow (in)famously argued that national governments had no place in cyberspace. It was the most hyperbolic demonstration of what I call the "Benidorm syndrome": every summer thousands of holidaymakers descend on Benidorm, in Spain, and behave in outrageous and sometimes lawless ways that they would never dare indulge in at home in the belief that since they are far away from their normal lives there are no consequences. (Rinse and repeat for many other tourist locations worldwide, I'm sure.) It seems to me only logical that existing laws apply to behaviour in cyberspace. What we have to guard against is deforming cyberspace to conform to laws that don't exist.


Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

July 20, 2007

The cookie monster

Google announced this week that in order to improve user privacy it would cut the length of time its cookies stay on our systems to two years. The clock will start ticking again every time you use Google or a site using a Google application. The company also plans to anonymize its user logs after 18 months.
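
To see what a sliding two-year expiry means in practice, here is a minimal sketch in Python of a server that re-issues its tracking cookie with a fresh two-year lifetime on every request. The cookie name and the tiny WSGI app are illustrative assumptions, not Google's actual code.

    # Illustrative sketch only: a "two-year" cookie whose expiry is refreshed
    # on every request never actually lapses for a regular visitor.
    from datetime import datetime, timedelta
    from wsgiref.simple_server import make_server

    TWO_YEARS = timedelta(days=730)

    def app(environ, start_response):
        # Every request restarts the clock: expiry = now + 2 years.
        expires = (datetime.utcnow() + TWO_YEARS).strftime("%a, %d %b %Y %H:%M:%S GMT")
        headers = [
            ("Content-Type", "text/plain"),
            # "PREF" and the cookie value are made-up placeholders.
            ("Set-Cookie", f"PREF=unique-browser-id; Expires={expires}; Path=/"),
        ]
        start_response("200 OK", headers)
        return [b"Cookie refreshed for another two years.\n"]

    if __name__ == "__main__":
        make_server("localhost", 8000, app).serve_forever()

In other words, only a browser that stays away from every page serving that cookie for a full two years would ever see it expire.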

The company has been under attack lately for its privacy policies. First, a few weeks ago, Privacy International published its report on the privacy practices of key Internet companies, A Race to the Bottom, and Google came last. Not, you understand, that many companies did all that much better.

Privacy International didn't think any company was "privacy-friendly and privacy-enhancing", but it classed several as "generally privacy-aware but in need of improvement". These were: eBay, LiveJournal, the BBC, Wikipedia, and Last.fm. The report excluded travel sites and financial services, on the grounds that these are subject to regulations beyond their control.

You may notice that these sites aren't exactly comparable with Google, to say nothing of other companies included in the survey, such as AOL, Friendster, Microsoft, and Skype. This seems to me a real problem. The BBC, which does not rely on advertising or commercial sales, needs no registration system, and outside its ecommerce sales has no reason to track anybody beyond generating usage statistics to show the patterns of how content is accessed on its site. Skype, on the other hand, can hardly offer a service without retaining user information, call records, and financial data. Practices that Skype must engage in to operate would be shockingly privacy-invasive if adopted by the BBC. More difficult to assess is user privacy on the social networking sites; on a site like Friendster or LiveJournal, users may, by publishing the details of their lives and thoughts, invade their own privacy far more comprehensively than the site itself can.

The sites are also not comparable in terms of how necessary they are. Hardly anyone really needs AOL. Keeping a blog on LiveJournal is optional; my life proceeds quite happily without Friendster or Facebook. But it's almost impossible these days to look anything up without at least considering looking on Wikipedia, and while there are many VoIP services, peer pressure makes a lot of people sign up for Skype. Therefore, while it's reasonable to compare the companies' corporate behavior, the impact of that behavior is not comparable, nor is the amount of effort and money that respecting privacy costs each company. It's a lot harder for Google to respect privacy and maintain its revenue stream than it is for the BBC.

It's also ironic that eBay should have scored so well. Police forces all over Britain agree that online auction fraud is one of the biggest sources of complaints they have. Google's ability to track everyone's search history, reading habits, and general interests may be, long-term, the worse privacy invasion. But to most people it's worse to be ripped off, and while eBay says it takes fraud seriously, the site is still awash in counterfeit DVDs, and does nothing to warn people with transactions in progress when a user's account is suspended for fraud. Which is worse? Being marketed at and tracked or being ripped off? Given that so many people are happy to hand over their privacy in return for some money off groceries (loyalty cards) or a truly modest amount of better treatment from their airline, I'd guess most people would think the latter.

But even given all that, Google's announcement this week is so trivial that it's insulting. For one thing, as Google Watch points out, Google assigns your computer a unique ID that persists through rain, snow, IP address change and cookie rewrite. For another, you have no idea when you click on a URL whether a Web site you're about to visit uses a Google service. The point of privacy practices is to give users control; this does anything but that. Why not instead widen the user-configurable preferences to include whether or not to accept cookies and for how long? How hard can that be for all those Google geniuses?

The Article 29 working group also pointed out that the bigger problem is Google's storing of search histories and IP addresses. As Privacy International noted, most companies regard individual IP addresses as essentially anonymous, impersonal data – absurd in this time of broadband, when people have the same addresses for years on end. My IP address identifies my computer system more tightly than a library card.

To a large extent, Privacy International blames advertising. As long as content and services are going to be paid for by advertising, sites must track user statistics and supply the data that keeps advertisers happy. There's some justice to that.

But the real problem is the users: who is going to stop using Google because of its privacy policies? You might decide to avoid Gmail, or to delete patiently, one by one, the Usenet postings you crazily typed one night while drunk in 1982, but if you want search, or advertising on your own site… Google is successful as a business because it's made itself indispensable.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

April 20, 2007

Green spies

Some months back I blogged a breakdown of the various fees that are added on to each airline ticket and tagged it "What we're paying." A commenter took issue: society at large, he wrote, was paying a good deal more than that for my evil flying habits, and I shouldn't be going to Miami anyway. He had a point. What's offending one niece by missing her wedding? I have more.

The intemperateness of the conversation is the kind of thing smokers used to get from those who've already quit.

Just how acrimonious the whole thing is getting was brought home to me this week when Ian Angell surfaced to claim that it is not really possible to be a privacy advocate and an environmentalist at the same time. Of course, Angell was in part just trying to make trouble and get people arguing. But he says he has a serious point.

"The green issues are providing a moral justification for the invasion of privacy," he says, "and the green lobby must take it on board as part of what they're doing. And the fact that they're not taking it on board makes them guilty."

I wouldn't go that far – I do not think you can blame people for unintended consequences. But there are a number of proposals floating around in the UK that could provide yet more infrastructure for endemic surveillance, even if the intention at the moment is to protect the environment.

For example: the idea of the personal carbon allowance, first mooted in 2005 with the notion that it could be linked to the ID card. Last July, the environment minister, David Miliband, proposed issuing swipe cards to all consumers, which you'd have to produce whenever you bought anything like petrol or heating – or plane tickets. That at least would give me ammunition against my blog commenter, because other than flying my carbon footprint is modest. In fact, we could have whole forums of moral superiors boasting about how few carbon points they used, like we now have people who boast about how early they get up in the morning. And we could have billboards naming and shaming those who – oh, the horror – had to buy extra carbon points, like they do for TV license delinquents.

Or take the latest idea in waste management, the spy bin fitted with a microchip sensor that communicates with the garbage truck to tell your local council how much you've contributed to the landfill. Given the apparent eagerness of manufacturers to enhance their packaging with RFID chips, this could get really interesting over time.

This is also a country where the congestion charge – a scheme intended to reduce the amount of traffic in central London – is enforced by cameras that record the license plates of every vehicle as it crosses the border. Other countries have had road tolls for decades, but London's mayor, formerly known as "Red Ken" Livingstone because of his extreme left-wing leanings, chose the most privacy-invasive way to do it. Proposals for nationwide road charging follow the same pattern, although the claim is that there will be safeguards against using the installed satellite tracking boxes to actually track motorists. Why on earth is this huge infrastructure remotely necessary? We already have per-mile road use charging. It's called buying fuel.

Privacy International's executive director, Simon Davies, points out that none of these proposals – nor those to expand the use of CCTV (talking cameras!) – are supported by research to show how the environment will benefit.

Of course, if there's one rule about environmentalism it is, as Angell says, "The best tax is the tax the other guy pays." Personally, I'd ban air conditioning; it doesn't get that hot in the UK anyway, and a load of ceiling fans and exhaust fans would take care of all but the most extreme cases of medical need. It certainly does seem ironic that just at the moment when everyone's getting exercised about saving energy and global warming, they're all putting in air conditioning so cold you have to carry a sweater with you if you go anywhere in the "summer".

So, similarly, when Angell says there are "straightforward, immediate answers" he's perfectly right. The problem is they'll all enrage some large group of businesses. "You could reduce garbage by 80 percent by banning packaging in shops. We are squabbling about tiny little changes when quite substantial changes are just not on the cards."

And then, he adds, "They jump on airline travel because you can bump up the taxes and it's morally justified."

I am convinced, however, that it's possible to be a privacy advocate and an environmentalist simultaneously. This is a type of issue that has come up before, most notably in connection with epidemiology. If you make AIDS a notifiable disease you make it easier to track the patterns of infection and alert those most at risk; but doing so invades patient privacy. But in the end, although Angell's primary goal was to stir up trouble, he's right to say that environmentalists need to ensure that their well-meaning desire to save the planet is not hijacked. Or, he says, "they will be blamed for the taxation and the intrusion."

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

January 19, 2007

Spineless

A friend went to India recently and got sick. Unsurprising, you might think, except that the reason she got sick was that her doctor in Pennsylvania didn't know that the anti-malaria medication he prescribed would interact badly with the anti-acid reflux drug he had also prescribed. The Indian doctor (who, ironically, had trained in Pennsylvania) knew all about it. Geography.

Preventing this kind of situation, at least on a national level, is part of the theory behind the NHS data spine; for Americans, it's a giant database onto which patient information from all parts of Britain's National Health Service is going to be put. It's also presumably part of the reason that pharmacists think they should be allowed to edit and add to patient records; American pharmacies have for years marketed the notion that if you fill all your prescriptions at the same place they'll be able to tell you if you're prescribed something stupid.

Personally, I can see a lot of merit in this idea. Also in the idea that medical personnel would have access to my records, that my allergies would be known to all and sundry (it would be a help, in a medical crisis, if someone didn't try to revive me by feeding me peanut butter).

The problem is that this is a fantasy. It seems appealing to me, I suppose, only because a) I have hardly any medical records – all my doctors either died, refused to give me my records, or destroyed them after they hadn't heard from me for too long – and b) I can't imagine anything bad happening to me if any of my medical history were disclosed. This is not true for most people, and it ignores the most important thing we know about all databases: they contain errors. This is how the campaign to opt out of the database was born: its organiser, Helen Wilkinson, discovered that her medical records had erroneously labeled her an alcoholic. (Not that there's anything wrong with that.) Getting that corrected took years and questions in Parliament.

The problem with all these systems is that they seek to replace knowledge with information. Your GP may know you; the database merely holds information about you and can make no intelligent judgments about what's relevant to a particular situation or distinguish true from false.

Last year, the World Privacy Forum released a report on medical identity theft. Identity fraud, they concluded, happens at all levels in the US medical system. Medical personnel or clinics seeking to pad their income may add treatments they have never delivered to patient records and present them to insurers for payment. Thieves may use doctors' information to forge prescriptions. Patients without insurance or who do not want particular types of treatment to appear on their own records may steal another's identity. In the US, where most treatment is funded by private medical insurance, the consequences can be far-reaching for the victims of such fraud: their credit ratings, employment prospects, and ability to get medical insurance can all be hit hard. A lot of the complaints, therefore, about the Health Insurance Portability and Accountability Act are that it opens medical records to far too many people and, like the NHS Data Spine, does not provide a way for individuals to correct their own records.

In the UK, things are a bit different. Here, GPs are gatekeepers to all care. According to Fleur Fisher, a consultant on ethics and health care practice and probably the UK's leading expert on medical privacy, there have nonetheless been serious frauds in dentistry here, where dentists may claim for big treatments they haven't actually performed.

The problem in the UK, she says, "is not that people will assume your medical identity so they can get treatment. It's much more that it will open people's health records."

A key part of the NHS plan seems to be to provide data to researchers to help determine public policy. Again, in a lot of ways this makes sense; but there is an old and recurring conflict between the desire for privacy of patients with, say, AIDS, and the legitimate interest of society at large to halt the disease's spread. One thing you should not rely on is that the data will be unidentifiable, even if the NHS confirms that it will be "anonymized". Years ago, Latanya Sweeney showed just how unreliable this is by taking supposedly anonymized data from the health system in the state of Massachusetts and matching it against publicly available voter registration rolls. With only a few database fields – ZIP code, date of birth, and sex – she was able to identify almost all of the individuals in the medical data.
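
For readers who wonder how such a linkage works, here is a toy sketch in Python. Every record and field name is invented for illustration, but the mechanism – joining an "anonymized" table to a public roll on the handful of quasi-identifiers they share – is the one Sweeney demonstrated.

    # Toy illustration of a re-identification ("linkage") attack.
    # All records and field names below are invented for this sketch.
    anonymized_medical = [
        {"zip": "02138", "dob": "1945-07-31", "sex": "M", "diagnosis": "hypertension"},
        {"zip": "02139", "dob": "1972-03-14", "sex": "F", "diagnosis": "asthma"},
    ]

    public_roll = [  # e.g. a voter registration list, bought or downloaded
        {"name": "A. Smith", "zip": "02138", "dob": "1945-07-31", "sex": "M"},
        {"name": "B. Jones", "zip": "02139", "dob": "1972-03-14", "sex": "F"},
    ]

    def quasi_id(record):
        # The quasi-identifier: for most of the population this triple is unique.
        return (record["zip"], record["dob"], record["sex"])

    roll_index = {quasi_id(r): r["name"] for r in public_roll}

    for med in anonymized_medical:
        name = roll_index.get(quasi_id(med))
        if name:
            print(f"{name}: {med['diagnosis']}")

No name ever appears in the medical file; the public roll supplies it.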

Opting out has turned out not to be so simple, even though, according to Ross Anderson, who has been working on medical privacy for well over a decade, most GPs are unhappy about the forced uploading of patient data to a centralised database. That being the case, as Phil Booth notes in the No2ID forum on the topic, if you want to opt out, treat your GP as your ally unless he proves otherwise.

Meantime, if you really want emergency personnel to know the important stuff about you, wear an alert bracelet or some other identifier.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

December 29, 2006

Resolutions for 2007

A person can dream, right?

- Scrap the UK ID card. Last week's near-buried Strategic Action Plan for the National Identity Scheme (PDF) included two big surprises. First, that the idea of a new, clean, all-in-one National Identity Register is being scrapped in favor of using systems already in use in government departments; second, that foreign residents in the UK will be tapped for their biometrics as early as 2008. The other thing that's new: the bald, uncompromising statement that it is government policy to make the cards compulsory.

No2ID has pointed out the problems with the proposal to repurpose existing systems, chiefly that they were not built to provide the security the legislation promised. The notion is still that everyone will be re-enrolled with a clean, new database record (at one of 69 offices around the country), but we still have no details of what information will be required from each person or how the background checks will be carried out. And yet, this is really the key to the whole plan: the project to conduct background checks on all 60 million people in the UK and record the results. I still prefer my idea from 2005: have the ID card if you want, but lose the database.

The Strategic Action Plan includes the list of purposes of the card; we're told it will prevent illegal immigration and identity fraud, become a key "defence against crime and terrorism", "enhance checks as part of safeguarding the vulnerable", and "improve customer service".

Recall that none of these things was the stated purpose of bringing in an identity card when all this started, back in 2002. Back then, first it was to combat terrorism, then it was an "entitlement card" and the claim was that it would cut benefit fraud. I know only a tiny mind criticizes when plans are adapted to changing circumstances, but don't you usually expect the purpose of the plans to be at least somewhat consistent? (Though this changing intent is characteristic of the history of ID card proposals going back to the World Wars. People in government want identity cards, and try to sell them with the hot-button issue of the day, whatever it is.)

As far as customer service goes, William Heath has published some wonderful notes on the problem of trust in egovernment that are pertinent here. In brief: trust is in people, not databases, and users trust only systems they help create. But when did we become customers of government, anyway? Customers have a choice of supplier; we do not.

- Get some real usability into computing. In the last two days, I've had distressed communications from several people whose computers are, despite their reasonable and best efforts, virus-infected or simply non-functional. My favourite recent story, though, was the US Airways telesales guy who claimed that it was impossible to email me a ticket confirmation because according to the information in front of him it had already been sent automatically and bounced back, and they didn't keep a copy. I have to assume their software comes with a sign that says, "Do not press this button again."

Jakob Nielsen published a fun piece this week, a list of top ten movie usability bloopers. Throughout movies, computers only crash when they're supposed to, there is no spam, on-screen messages are always easily readable by the camera, and time travellers have no trouble puzzling out long-dead computer systems. But of course the real reason computers are usable in movies isn't some marketing plot by the computer industry but the same reason William Goldman gave for the weird phenomenon that movie characters can always find parking spaces in front of their destination: it moves the plot along. Though if you want to see the ultimate in hilarious consumer struggles with technology, go back to the 1948 version of Unfaithfully Yours (out on DVD!) starring Rex Harrison as a conductor convinced his wife is having an affair. In one of the funniest scenes in cinema, ever, he tries to follow printed user instructions to record a message on an early gramophone.

- Lose the DRM. As Charlie Demerjian writes, the high-def wars are over: piracy wins. The more hostile the entertainment industries make their products to ordinary use, the greater the motivation to crack the protective locks and mass-distribute the results. It's been reasonably argued that Prohibition in the US paved the way for organized crime to take root because people saw bootleggers as performing a useful public service. Is that the future anyone wants for the Internet?

Losing the DRM might also help with the second item on this list, usability. If Peter Gutmann is to be believed, Vista will take a nosedive in that department because of embedded copy protection requirements.

- Converge my phones. Please. Preferably so people all use just the one phone number, but all routing is least-cost to both them and me.

- One battery format to rule them all. Wouldn't life be so much easier if there were just one battery size and specification, and to make a bigger battery you'd just snap a bunch of them together?

Happy New Year!

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

December 1, 2006

A SWIFT kick

One of the clear trends of the last five years has been increasing international surveillance, especially by or on behalf of the US. Foreign visitors to the US now are welcomed with demands for fingerprints and other biometrics; airlines flying to the US are required to hand over passenger data even before the plane pushes back; and, behind the scenes, the cooperative that handles interbank transfers within Europe has been sending the US Treasury department banking records that the average European citizen almost certainly assumes are confidential.

This week, the Article 29 Working Party – the panel of data protection commissioners from the EU member states – ruled that the interbank money transfer service SWIFT (Society for Worldwide Interbank Financial Telecommunication) has failed to respect the provisions of the EU Data Protection Directive by transferring personal financial data to the US in a manner the press release describes as "hidden, systematic, massive, and long-term."

It doesn't sound like much when you say that a few people brought a complaint about an obscure organization to an equally obscure branch of the EU government and won. It sounds like a lot more when you say that a few people brought a complaint that, upheld, means that the European financial world will have to change its behavior.

The transfers are part of anti-terrorist programs put in place after the September 11, 2001 attacks to allow American intelligence agency analysts to spot funds being sent to finance terrorists. The problem is that, under EU law, the Data Protection Directive forbids the transfer of personal data to countries that do not have the same level of protection in place; the US is most certainly in that category. Simon Davies, executive director of Privacy International, says the goal in making the complaint that led to the Article 29 group's decision was not to stop all data transfers. "The data should be transferred when there's some level of evidence," he says. What PI objected to was the lack of oversight from anyone outside the cooperative, which is owned by many private companies – banks, brokers, investment managers, and corporations.

"Now that we know SWIFT was acting illegally," says Davies, "the aim is to bring SWIFT and the banks to account, first by establishing a meaningful oversight mechanicsm, and second by bringing some transparency to the whole arrangement." Part of Privacy International's involvement was, together with the American Civil Liberties Union, to prepare a report on the involvement of consulting firm Booz Allen Hamilton, which is SWIFT's supposedly independent auditor but which, according to the report, has been deeply involved with American surveillance programs for the last ten years. Booz Allen told the New York Times that it rejected PI's charges.

PI's next step, Davies says, will be to contact the banks to ask what they intend to do or have done to comply with the decision. Under the law, they have 30 days to reply. "At the end of the 30 days, unless they provide evidence that they have complied, we then follow up with a second round of complaints to all commissioners worldwide." The US, of course, has no data protection commissioner – and even if it did, the transfers are legal there – so the list Davies is talking about is all the EU countries, Canada, Hong Kong, Australia, New Zealand, and a smattering of others.

"What they do depends on their powers in each country," says Davies, noting that "the UK is particularly weak." Unlimited fines can be imposed, should the commissioners so choose. "If SWIFT doesn't make an adult decision to deal with the situation, then it's up to member banks to use their voting rights within SWIFT to force change."

Meanwhile, he says, "SWIFT is also stuck. They have to comply with subpoenas issued by US authorities." Otherwise, SWIFT would be incurring criminal liability.

Davies' belief is that what's needed is either a truly independent oversight body or perhaps a former judge, to review proposed data transfers and ensure they comply with the law.

That, of course, is not what the US wants; Jane Horvath, chief privacy and civil liberties officer for the US Department of Justice, told the recent international conference of data protection officers that the US does, too, have privacy laws, and that everyone should get together and agree on some kind of global data law. Under EU law, however, the US would have to raise its privacy protections to EU standards before sending data there would be legal.

This seems unlikely, but you never know. A couple of years ago, when the EU had the choice of honoring data protection law or sending the US government all the airline passenger data it wanted, it caved and sent the passenger data. Still, in this era when people seem willing to justify almost any amount of privacy invasion with the words "anti-terrorism", it was heartening to read the Working Party's final comment on the whole thing:

"The Working Party recalls that any measures taken in the fight against crime and terrorism should not and must not reduce standards of protection of fundamental rights which characterise democratic societies."

It's up to us to make them stick to that.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 3, 2006

Where can we go?

One of the things about being an expatriate is this: whenever there’s a problem you always think that changing countries might be the solution. So one of the games we like to play is where-can-we-go. Global warming means the Gulf Stream will stop or change direction and Britain will freeze? Where can we go? Britain is awash in CCTV cameras and wants to bring in a national identity card. Where can we go?

My usual answer is New Zealand, based on no knowledge whatsoever: it just seems so far from here that anything that’s a problem here surely can’t be one there, too. And I know so little about the country that it’s easy to fantasize being left alone to roam the hills among the sheep. Yes, I know: it can’t really be like that, and I’d hate it if it were.

But apparently when it comes to privacy the answers are Germany or Canada. In a pinch, Belgium, Austria, Greece, Hungary, or Argentina. At least, that’s the situation according to Privacy International’s new human rights survey. New Zealand’s score is a good bit lower, although you could improve it by avoiding employment; workplace monitoring was the one area in which it scored really badly. Of course, the raw numbers never really tell you the quality of life in those countries; and most people don’t want to play where-can-we-go. Most people, being sensible, want the place where they do live, and where they have their cultural and social ties, to be better.

The problem with talking about privacy is that it's so abstract. It’s arguable, for example, that the biggest worry in many people’s lives in the US is not privacy but how to pay for health care. A friend, for example, recently had occasion to go to the emergency room for a few tests; she estimated the bill at $2,000 and her share of it at $500.

In that sense, the Information Commissioner’s new survey, A Surveillance Society, launched alongside the annual data protection conference, is more alarming, in part because it investigates the consequences of constant surveillance and the impact it has on the realities of daily life. What can be done and is being done is to create a class system of great rigidity: surveillance, it says, brings social sorting to define target markets and risky populations. The airline that knows how much you travel decides accordingly how to treat you; the health service decides how to treat you based on its assessment of your worthiness for treatment. Welfare becomes an exercise in deterring fraud rather than assuring safety. It is, the IC’s survey says, risk management rather than the original promise of universal health care. That it’s not a conspiracy, as the IC survey repeats several times, makes it almost more alarming: there is no one specific enemy to fight.

This should not be surprising to anyone who read, some years back, the software engineer Ellen Ullman’s wonderful essay collection, Close to the Machine. Everything the IC is talking about was laid out there in detail, including the exact process by which it happens. Her story concerned a database created to help ensure that people with AIDS got all the help that was available to them. Slowly, the system morphed; in her words, it “infected” its users. The fuzzy, human logic by which one person might get an extra blanket was replaced by inexorable computer rules. Then it became hostile, trying to ensure that no one got more than they were entitled to – the precise stage Britain is at right now with respect to welfare.

In another case, the fact of a system’s existence led a boss to wonder whether he could monitor his sole employee to find out what she did all day. The employee had worked for him for decades and had picked up his children from school. This is, I suppose, where we are with National Identity cards. We *can* find out what everyone does all day, so why shouldn’t we?

Of course, Britain is famous for its class system and the anti-democratic nature of it. But if there was one thing you could say for the old ways, the class differences were clearly visible on the outside. Accent and habits of speech, as George Bernard Shaw observed more than a century ago in Pygmalion, determined how you were treated, and you knew what to expect. Social sorting via surveillance is more democratic in the sense that you don’t have to have a title or the right accent to be a big spender the airlines will treat like gold dust. But the rules are hidden and insidious, rather than open and well understood.

So: where can we go (PDF)? A lot of people like the sound of Ireland – they speak English, it’s close, and it’s pretty. But it ranks only a tier above the UK and will always be under pressure to adopt British standards because of the common travel area. Some people like Sweden, for its longstanding commitment to social welfare. It, too, ranks low on the privacy scale. It will have to be Canada. But it’s cold, I hear you cry. Nah. Global warming. Those igloos will be melting any day now. Off to Winnipeg (DOC)!

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 20, 2006

Spam, spam, spam, and spam

Illinois is a fine state. It is the Land of Lincoln. It is home to such well-known Americans as Oprah Winfrey, Roger Ebert, and Ronald Reagan. It has a baseball team so famous that even I know it's called the Chicago Cubs. Illinois can also claim the philosopher John Dewey (not the Dewey of the decimal system for cataloguing library books; that was Melvil), the famous pro-evolution lawyer Clarence Darrow, Mormon church founder Joseph Smith, the nuclear physicist Enrico Fermi, semiconductor inventor William Shockley, and Frank Lloyd Wright.

I say all this because I don't want anyone to think I don't like or respect Illinois or the intelligence and honor of its judges, including Judge Charles Kocoras, who awarded $11.7 million in damages to e360Insight, a company branded a spammer by the Spamhaus Project.

The story has been percolating for a while now, but is reasonably simple. e360Insight says it's not a bad spammer guy but a good opt-in marketing guy; Spamhaus first said the Illinois court didn't have jurisdiction over a British company with no offices, staff, or operations in the US, then decided to appeal against the court's $11.7 million judgement. e360Insight filed a motion asking the court to have ICANN and/or Spamhaus's domain registrar, the Canadian company Tucows, remove Spamhaus's domain from the Net. The judge refused to grant this request, partly because doing so would cut off Spamhaus's lawful activities, not just those in contravention of the order he issued against Spamhaus. And a good time is being had by all the lawyers.

The case raises so many problems you almost don't know where to start. For one thing, there's the arms race that is spam and anti-spam. This lawsuit escalates it, in that if you can't get rid of an anti-spammer through DDoS attacks, well, hey, bankrupt them through lawsuits.

Spam, as we know, is a terrible, intractable problem that has broken email, and is trying to break blogs, instant messaging, online chat, and, soon, VOIP. (The net.wars blog, this week, has had hundreds of spam comments, all appearing to come from various Gmail addresses, all landing in my inbox, breaking both blogs and email in one easy, low-cost plan.) The breakage takes two forms. One is the spam itself – up to 90 percent of all email. But the second is the steps people take to stop it. No one can use email with any certainty now.

Some have argued that real-time blacklists are censorship. I don't think it's fair to invoke the specter of Joseph McCarthy. For one thing, using these blacklists is voluntary. No one is forced to subscribe, not even free Webmail users. That single fact ought to be the biggest protection against abuse. For another thing, spam email in the volumes it's now going out is effectively censorship in itself: it fills email boxes, often obscuring and sometimes blocking entirely wanted email. The fact that most of it either is a scam or advertises something illegal is irrelevant; what defines spam, I have long argued, is the behavior that produces it. I have also argued that the most effective way to put spammers out of business is to lean on the credit card companies to pull their authorisations.
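
For the technically curious, subscribing to such a blacklist is an active choice by the receiving server's operator: per connection, the server asks the list's DNS zone whether the sending IP address is listed, and decides for itself what to do with the answer. Here is a minimal sketch in Python; the zone name used is Spamhaus's published combined list, but the surrounding code is illustrative, not any particular mail server's, and results will depend on which DNS resolver you query.

    # Minimal DNSBL check: reverse the octets of the sending IP and look it up
    # as a hostname under the blacklist's zone. A successful resolution
    # (conventionally to a 127.0.0.x address) means "listed"; no record means not.
    import socket

    def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
        reversed_ip = ".".join(reversed(ip.split(".")))
        try:
            socket.gethostbyname(f"{reversed_ip}.{zone}")
            return True       # the zone answered: the IP is on the list
        except socket.gaierror:
            return False      # no record: the IP is not listed

    # The operator, not the blacklist, decides what happens next.
    if is_listed("127.0.0.2"):  # 127.0.0.2 is the conventional DNSBL test entry
        print("Reject, defer, or merely flag the connection - your choice")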

Mail servers are private property; no one has the automatic right to expect mine to receive unwanted email just as I am not obliged to speak to a telemarketer who phones during dinner.

That does not mean all spambusters are perfect. Spamhaus provides a valuable public service. But not all anti-spammers are sane; in 2004 journalist Brian McWilliams made a reasonable case in his book Spam Kings that some anti-spammers can be as obsessive as the spammers they chase.

The question that's dominated a lot of the Spamhaus coverage is whether an Illinois court has jurisdiction over a UK-based company with no offices or staff in the US. In the increasingly connected world we live in, there are going to be a lot of these jurisdictional questions. The first one I remember – the 1996 case United States vs. Thomas – came down in favor of the notion that Tennessee could impose its community decency standards on a bulletin board system in California. It may be regrettable – but consumers are eager enough for their courts to have jurisdiction in case of fraud. Spamhaus is arguably as much in business in the US as any foreign organisation whose products are bought or used in the US. Ultimately, "Come here and say that" just isn't much of a legal case.

The really tricky and disturbing question is: how should blacklists operate in future? Publicly listing the spammers whose mail is being blocked is an important – even vital – way of keeping blacklists honest. If you know what's being blocked and can take steps to correct it, it's not censorship. But publishing those lists makes legal action against spam blockers of all types – blacklists, filtering software, you name it – easier.

Spammers themselves, however, should not rejoice if Spamhaus goes down. Spam has broken email, that's not news. But if Spamhaus goes and we actually receive all the spam it's been weeding out for us – the flood will be so great that spam will finally break spam itself.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 6, 2006

A different kind of poll tax

Elections have always had two parts: the election itself, and the dickering beforehand (and occasionally afterwards) over who gets to vote. The latest move in that direction: at the end of September the House of Representatives passed the Federal Election Integrity Act of 2006 (H.R. 4844), which from 2010 will prohibit election officials from giving anyone a ballot who can't present a government-issued photo ID whose issuing requirements included proof of US citizenship. (This lets out driver's licenses, which everyone has, though I guess it would allow passports, which relatively few have.)

These days, there is a third element: specifying the technology that will tabulate the votes. Democracy depends on the voters' being able to believe that what determines the election is the voters' choices rather than the latter two.

The last of these has been written about a great deal in technology circles over the last decade. Few security experts are satisfied with the idea that we should trust computers to do "black box voting" where they count up and just let us know the results. Even fewer security experts are happy with the idea that so many politicians around the world want to embrace: Internet (and mobile phone) voting.

The run-up to this year's mid-term US elections has seen many reports of glitches. My favorite recent report comes from a test in Maryland, where it turned out that the machines under test did not communicate with each other properly when the touch screens were in use. If they don't communicate correctly, voters might be able to vote more than once. Attaching mice to the machines solves the problem – but the incident is exactly the kind of wacky glitch that's familiar from everyday computing life and that can take absurd amounts of time to resolve. Why does anyone think that this is a sensible way to vote? (Internet voting has all the same risks of machine glitches, and then a whole lot more.)

The 2000 US Presidential election isn’t as famous for the removal from the electoral rolls in Florida of a few hundred thousand voters as it is for hanging chad – but there is plenty to read or watch on the subject. Of course, wrangling over who gets to vote didn't start then. Gerrymandering districts, fighting over giving the right to vote to women, slaves, felons, expatriates…

The latest twist in this fine, old activity is the push in the US towards requiring Voter ID. Besides the federal bill mentioned above, a couple of dozen states have passed ID requirements since 2000, though state courts in Missouri, Kentucky, Arizona, and California are already striking them down. The target here seems to be that bogeyman of modern American life, illegal immigrants.

Voter ID isn't so obviously a poll tax. After all, this is just about authenticating voters, right? Every voter a legal voter. But although these bills generally include a requirement to supply a voter ID free of charge to people too poor to pay for one, the supporting documentation isn't free: try getting a free copy of your birth certificate, for example. The combination of those costs and the effort involved in getting the ID is a burden that falls disproportionately on the usual already disadvantaged groups (the same ones stopped from voting in the past by road blocks, insufficient provision of voting machines in some precincts, and indiscriminate cleaning of the electoral rolls). Effectively, voter ID creates an additional barrier between the voter and the act of voting. It may not be the letter of a poll tax, but it is the spirit of one.

This is in fact the sort of point that opponents are making.

There are plenty of other logistical problems, of course, such as: what about absentee voters? I registered in Ithaca, New York, in 1972. A few months before federal primaries, the Board of Elections there mails me a registration form; returning it gets me absentee ballots for the Democratic primaries and the elections themselves. I've never known whether my vote is truly anonymous, nor whether it's actually counted. I take those things on trust, just as, I suppose, the Board of Elections trusts that the person sending back these papers is not some stray British person who does my signature really well. To insert voter ID into that process would presumably require turning expatriate voters over to, say, the US Embassies, who are familiar with authentication and checking identity documents.

Given that most countries have few such outposts, the barriers to absentee voting would be substantially raised for many expatriates. Granted, we're a small portion of the problem. But there's a direct clash between the trend to embrace remote voting - the entire state of Oregon votes by mail – and the desire to authenticate everyone.

We can fix most of the voting technology problems by requiring voter-verifiable, auditable paper trails, as Rebecca Mercuri began pushing for all those years ago (a position most computer scientists now endorse), and there seem to be substantial moves in that direction as state election officials test the electronic equipment and scientists find more and more serious potential problems. Twenty-seven states now have laws requiring paper trails. But how we control who votes is the much more difficult and less talked-about frontier.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

August 4, 2006

Hard times at the identity corral

If there is one thing we always said about the ID card it's that it was going to be tough to implement. About ten days ago, the Sunday Times revealed how tough: manufacturers are oddly un-eager to bid to make something that a) the Great British Public is likely to hate, and b) they're not sure they can manufacture anyway. That suggests (even more strongly than before) that in planning the ID card the government operated like an American company filing a dodgy patent: if we specify it, they will come.

I sympathize with IBM and the other companies, I really do. Anyone else remember 1996, when nearly all the early stories coming out of the Atlanta Olympics prominently blamed IBM for every logistical snafu? Some really weren't IBM's fault (such as the traffic jams). Given the many failures of UK government IT systems, being associated with the most public, widespread, visible system of all could be real stock market poison.

But there's a secondary aspect to the ID card that I, at least, never considered before. It's akin to the effect often seen in the US when an amendment to the Constitution is proposed. Even if it doesn't get ratified in enough states – as, for example, the Equal Rights Amendment did not – the process of considering it often inspires a wave of related legislation. The fact that ID cards, biometric identifiers, and databases are being planned and thought about at such a high level seems to be giving everyone the idea that identity is the hammer for every nail.

Take, for example, the announcement a couple of days ago of NetIDme, a virtual ID card intended to help kids identify each other online and protect them from the pedophiles our society apparently now believes are lurking behind every electron.

There are a lot of problems with this idea, worthy though the intentions behind it undoubtedly are. For one thing, placing all your trust in an ID scheme like this is a risk in itself. To get one of these IDs, you fill out a form online and then a second one that's sent to your home address and must be counter-signed by a professional person (how like a British passport) and a parent if you're under 18. It sounds to me as though this system would be relatively easy to spoof, even if you assume that no professional person could possibly be a bad actor (no one has, after all, ever fraudulently signed passports). For another, no matter how valid the ID is when it's issued, in the end it's a computer file protected by a password; it is not physically tied to the holder in any way, any more than your Hotmail ID and password are. For a third thing, "the card removes anonymity," the father who designed the card, Alex Hewitt, told The Times. But anonymity can protect children as well as crooks. And you'd only have to infiltrate the system once to note down a long list of targets for later use.

But the real kicker is in NetIDme's privacy policy, in which the fledgling company makes it absolutely explicit that the database of information it will collect to issue IDs is an asset of a business: it may sell the database, the database will be "one of the transferred assets" if the company itself is sold, and you explicitly consent to the transfer of your data "outside of your country" to wherever NetIDme or its affiliates "maintain facilities". Does this sound like child safety to you?

But NetIDme and other systems – fingerprinting kids for school libraries, iris-scanning them for school cafeterias – have the advantage that they can charge for their authentication services. Customers (individuals, schools) have at least some idea of what they're paying for. This is not true for the UK's ID card, whose costs and benefits are still unclear, even after years of dickering over the legislation. A couple of weeks ago, it became known that as of October 5 British passports will cost £66, a 57 percent increase that No2ID attributes in part to the costs of infrastructure needed for ID cards but not for passports. But if you believe the LSE's estimates, we're not done yet. The most recent government estimate is that an ID card/passport will cost £93, up from £85 at the time of the LSE report. So, a little quick math: the LSE report also guessed that entry into the national register would cost £35 to £40 with a small additional charge for a card, so revising that gives us a current estimate of £38.15 to £43.60 for registration alone. If no one can be found to make the cards but the government tries to forge ahead with the database anyway, it will be an awfully hard sell. "Pay us £40 to give us your data, which we will keep without any very clear idea of what we're going to do with it, and in return maybe someday we'll sell you a biometric card whose benefits we don't know yet." If they can sell that, they may have a future in Alaska selling ice boxes to Eskimos.
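
For anyone who wants to reproduce that back-of-the-envelope scaling, here is a minimal sketch in Python (the figures are the ones quoted above; the rounding convention is my assumption, not anything the LSE or the government published):

# Scale the LSE's registration-only estimate by the rise in the combined
# ID card/passport price quoted above. Figures from the column; rounding assumed.
lse_combined = 85        # GBP: ID card/passport estimate at the time of the LSE report
current_combined = 93    # GBP: most recent government estimate
ratio = current_combined / lse_combined   # ~1.094, i.e. roughly a 9 percent rise

low, high = 35, 40       # GBP: the LSE's original guess for registration alone
print(f"registration alone: about £{low * ratio:.2f} to £{high * ratio:.2f}")
# The exact ratio gives about £38.29 to £43.76; rounding the rise to 9 percent
# gives the £38.15 to £43.60 used above.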

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

June 16, 2006

Security vs security, part II

It's funny. Half the time we hear that the security of the nation depends on the security of its networks. The other half of the time we're being told by governments that if the networks are too secure the security of the nation is at risk.

This schizophrenia was on display this week in a ruling by the US Court of Appeals in the District of Columbia, which ruled in favor of the Federal Communications Commission: yes, the FCC can extend the Communications Assistance for Law Enforcement Act to VoIP providers. Oh, yeah, and other people providing broadband Internet access, like universities.

Simultaneously, a clutch of experts – to wit, Steve Bellovin (Columbia University), Matt Blaze (University of Pennsylvania), Ernest Brickell (Intel), Clinton Brooks (NSA, retired), Vinton Cerf (Google), Whitfield Diffie (Sun), Susan Landau (Sun), Jon Peterson (NeuStar), and John Treichler (Applied Signal Technology) – released a paper explaining why requiring voice over IP to accommodate wiretapping is dangerous. Not all of these folks are familiar to me, but the ones who are could hardly be more distinguished, and it seems to me that when experts on security, VoIP, Internet protocols, and cryptography all get together to tell you there's a problem, you (as in the FCC) should listen. The paper they released this week, Security Implications of Applying the Communications Assistance to Law Enforcement Act to Voice over IP (PDF), carefully documents the problems.

First of all – and they of course aren't the only ones to have noticed this – the Internet is not your father's PSTN. On the public switched telephone network, you have fixed endpoints, you have centralized control, and you have a single, continuously open circuit. The whole point of VoIP is that you take advantage of packet switching to turn voice calls into streams of data that are more or less indistinguishable from all the other streams of data whose packets are flying alongside. Yes, many VoIP services give you phone numbers that sound the same as geographically fixed numbers – but the point is that neither caller nor receiver needs to wait by the phone. The phone is where your laptop is. Or, possibly, where your secretary's laptop is. Or you're using Skype instead of Vonage because your contact also uses Skype.

Nonetheless, as the report notes, the apparent simplicity of VoIP, its design that makes it look as though it functions the same as old-style telephones, means that people wrongly conclude that anything you can do on the PSTN you should be able to do just as easily with VoIP.

But the real problems lie in security. There's no getting round the fact that when you make a hole in something you've made a hole through which stuff leaks out. And where in the PSTN world you had just a few huge service providers and a single wire you could follow along and place your wiretap wherever was most secure, in the VoIP world you have dozens of small providers and an unpredictable selection of switching and routing equipment. You can't be sure any wiretap you insert will be physically controlled by the VoIP provider. Your targets can create new identities at no cost faster than you can say "pre-pay mobile phone". You can't be sure the signals you intercept can be securely transported to Wiretap Central. The smart terminals we use have a better chance of detecting the wiretap – which is both good and bad, in terms of civil liberties. Under US law, you're supposed to tap only the communications pertaining to the court authorization; that's difficult to do because of all the foregoing. And then there's the hole itself, which, as the IETF observed in 2000, could be exploited by someone else. Who do you fear more will gain access to your communications: government, crook, hacker, credit reporting agency, boss, child, parent, or spouse? Fun, isn't it?

And then there's the money. American ISPs can look forward to the cost of CALEA with all the enthusiasm that European ISPs had for data retention. Here, the government helpfully provided its own data: a VoIP provider paid $100,000 to a contractor to develop its CALEA solution, plus a monthly fee of $14,000 to $15,000 and, on top of that, $2,000 for each intercept.

Two obvious consequences. First: VoIP will be primarily sold by companies overseas into the US because in general the first reason people buy VoIP is that it's cheap. Second: real-time communications will migrate to things that look a lot less like phone calls. The report mentions massively multi-player online role-playing games and instant messaging. Why shouldn't criminals adopt pink princess avatars and kill a few dragons while they plot?

It seems clear that all of this isn't any way to run a wiretap program, though even the report (two of whose authors, Landau and Diffie, have written a history of wiretapping) allows that governments have a legitimate need to wiretap, within limits. But the last paragraph sounds like a pretty good way to write a science fiction novel. In fact, something like the opening scenes of Vernor Vinge's new Rainbows End.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

May 5, 2006

Computers, Freedom, and Privacy XVI

“Everyone seems depressed,” someone said a half-day into this year’s Computers, Freedom, and Privacy conference.

It’s true. Databases are everywhere this year. FEMA databases made of records from physicians, pharmacists, insurers. The databases we used to call the electoral rolls. Choicepoint. The National Health infrastructure they want to build. Real ID. RFID tracks and trails – coming soon to a database near you. And so on.

The only really positive moment is when Senator Leahy (Democrat – VT) bounds in to deliver a keynote, saying that the society we are creating is “a different US society than the one we know”. He ringingly denounces the claim that voting for the resolution to pursue Al-Qaida included voting for warrantless wiretapping, and everyone applauds.

Wait. CFP is being bucked up by a senator? Ten years ago, this conference thought it could code its way out of anything. PGP and Internet architecture could beat any lawmaker.

Now, someone says, “The governments are moving in in a big way” and there is a sense that the only hope lies in policy-making and persuasion. Privacy advocates are brainstorming legislative proposals. The Electronic Frontier Foundation is opening an office in DC.

Even the government types are depressed. Stewart Baker, who in 1994 baited this group by claiming that opposition to key escrow was coming only from those who couldn’t go to Woodstock because they had to finish their math homework, is now at the Department of Homeland Security, and tells us that in an emergency we should save ourselves.

Actually, that was one of the few moments of levity. What he did was ask how many libertarians there were in the room who believed that a government governs best that governs least. About ten percent of the crowd raised their hands (another major change from ten years ago, when at least half would have done so). How many actually had provisions of food and water for 72 hours? Most hands dropped. "Who," Baker asked, "are you expecting to rescue you?" Gotcha.

The science fiction writer Vernor Vinge, who’s been wandering the conference to sample the zeitgeist preparatory to delivering his wrap-up late Friday, summed it up in an advance sample.

“The angle that’s somewhat discouraging,” he says, “is the sense I have in many of these issues that they do reflect an almost implacable advance and on many different fronts on the part of the government in support of the fundamental government idea that total information awareness (no trademark) is absolutely essential to the national interest. I see that as clearly and explicitly recognized by the government as essential to national security, so that in the long run opposing it is at best a matter of slowing its advance down and at worst giving it the appearance of slowing its advance down.”

Vinge was last at CFP in 1996, when he, Bruce Sterling, and Pat Cadigan all participated in a panel called “We Know Where You Will Live”. I remember it as one of the best CFP panels, ever. It was, to be sure, somewhat gloomy. I remember, for example, predictions that a supermarket might know what foods you had been eating from your sales records and, in cahoots with your medical insurer, order you off the potato chips and onto the celery sticks. But being able to imagine this dysfunctional future gave the sense that we would be able to avert it. And without, as one prominent CFPer has done since last year, moving to Canada.

This year, Katherine Albrecht, the leading campaigner against RFID tags and their prospective use to tag and track goods and people, presented her latest findings. Some of her scenarios are far-fetched enough to be truly lame (for example, the idea that someone could sit next to you in a plane and scan your bag so they could steal exactly what they wanted while you were in the lavatory) and others are too clearly chosen to try to manipulate emotional hot buttons (such as the idea that someone passing on the street could point a cellphone reader at a woman and be able to tell what model and color bra she was wearing; I mean, so what?). But the tracking, storing, and eventually sharing of data are all logical consequences of the infrastructure her research shows they are building. I’m not convinced we will go there. But the possibility is no longer outlandish enough for me to feel empowered by considering it any more.

“We haven’t,” a privacy activist now in the corporate sector said to me over dinner, “had one single success. It’s just a long list of failures.”

It is the health care situation that is particularly depressing. The UK has its flaws, but one benefit of nationalized health care is a real reduction in the number of people and organizations who are intensely interested in your medical records. In the US, it seems as though everyone is lining up hoping to get a glimpse of what might be wrong with you.

In one panel we learn that medical identity theft is one of the biggest and fastest growing problems. Now, I wouldn’t mind that so much if they’d take my ailments, too. Such as this growing sensation of being surrounded, spied upon, watched by cameras…

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 31, 2006

Protect people, not data

I spent some of this week talking to parents about the phenomenon of fingerprinting kids in schools for the Guardian. (Surely fingerpainting was more fun.) One of the real frustrations among the people I spoke to was the lack of (helpful) response from the Information Commissioner's Office.

The systems that are being deployed in many school libraries in the UK (with doubtless other countries to follow if they succeed here) are made by Micro Librarian Systems. The fingerprinting side of it is really an add-on; without fingerprint readers, the kids use barcodes. One of the system's selling points seems to be that it doesn't need adult supervision, unlike library cards.

You can see why fingerprints sound appealing as a way to unlock the system: quick, easy, efficient, nothing to lose and/or replace. Slight problem, maybe, that kids' fingers are often dirty, sticky, or damaged, but at least they can't lose them.

What took me aback a bit was discovering that MLS has on its Web site – and quotes to schools – letters from the Information Commissioner's Office and from the Department of Education saying they saw nothing wrong with the system.

It's not my purpose here to rehash whether fingerprinting kids to let them take out library books is appropriate; the parents who were against it had plenty to say in my Guardian piece. But the whole incident has made me think about the role of the Information Commissioner's Office. Whenever I've spoken to anyone there it's seemed clear that the ICO's job (PDF) is to explain the law and ensure that organizations obey it. They don't go around looking for things to investigate; they respond to complaints from the public. In this case, they say, they haven't had many. They do advise, as the letter MLS received says, that schools consult parents before instituting fingerprinting "as it may be a sensitive issue".

Indeed, it might. It's a measure of how much both technology and the willingness to be monitored are infiltrating everyone's consciousness that there has been so little public outcry over this. The manufacturers are, of course, very reassuring: the system doesn't store whole fingerprints but an encrypted, very large number mathematically derived from the scanned finger. The image cannot be reconstructed from the number.

But that doesn't actually help, because if what unlocks the system is the number, the number is more easily forged than a fingerprint image would be. Now, no one's suggesting that some crook is going to break into a school and steal the computer that holds the kids' fingerprints just so he can take out all the library books. But if you were told that your government records were protected by a very large number in encrypted form, would you feel reassured that it wasn't an image? I'm not sure you should, because first of all, even encryption that can't be cracked today probably can be tomorrow. Second of all, there are plenty of people out there with good reasons to try to deconstruct how these systems work. And third of all, a number is, well…sometimes it's just a number. In the case of MLS, it's a number generated by a system created by Digital Persona, who supply enterprise biometric solutions to all sorts of other clients. How many of them will accept the same numbers because they use the same algorithms?
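
To make that point concrete, here is a toy sketch of my own in Python. It is emphatically not MLS's or Digital Persona's design, and it assumes, for simplicity, that matching is an exact comparison of stored values, which real fingerprint matchers are not; the underlying point survives either way.

import hashlib

def template_from_scan(scan_bytes: bytes) -> str:
    # Stand-in for "a very large number mathematically derived from the
    # scanned finger". Real systems derive feature templates, not hashes,
    # and match them fuzzily.
    return hashlib.sha256(scan_bytes).hexdigest()

# Enrolment: the school's database stores the derived number, not the image.
enrolled = {"pupil_42": template_from_scan(b"ridge-pattern-of-pupil-42")}

def checkout_allowed(user: str, presented_template: str) -> bool:
    # The system never sees a finger, only the number derived from one.
    return enrolled.get(user) == presented_template

# Anyone who holds a copy of the stored number (from the database or in
# transit) holds the credential; no finger is needed.
stolen_number = enrolled["pupil_42"]
print(checkout_allowed("pupil_42", stolen_number))  # True

And if a second system derives its numbers with the same algorithm, the same stolen value unlocks that one too, which is exactly the worry above.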

In this case, it seems to me that it's not sufficient to ask whether the precautions taken to protect the data are adequate. The real question is whether the proposed system is proportionate to the problem it's being installed to solve. Is the desire to provide a quick and easy method for kids to check out their own library books sufficient justification for fingerprinting children? This is a question the ICO is not in business to answer.

It's always seemed to me that no amount of data protection really solves anything: data always seems to go where it's not supposed to, whether that's because someone leaves a laptop in a cab or a CD in the back of an airplane seat, or because the database's owner has become infected with what the writer Ellen Ullman has called "the fever of the system". The MLS literature states that fingerprints are removed from the system when the child leaves school. But how would a parent check this? And how often do people really throw out data they hold? You never know; you might need it someday.

It seems to be an inescapable truth of human nature: if you have two databases you want to link them together; if you have one database you want to keep adding to it and using it for more and more stuff. In the end it's like Benjamin Franklin's old adage, that three may keep a secret if two of them are dead. Databases are not designed to keep secrets; they are designed to help people find things out. If you really want to protect privacy, the only certain option is not to create the database.

In the meantime, it would be nice if the ICO's job description went further, to ask: "Is this an appropriate use of technology? Is it possible there will be consequences down the line that make this a bad idea?" Now that the government has got its way on building the national identity register, these are questions we should all be asking.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. She has an intermittent blog. Readers are welcome to post there, at the official net.wars blog, or to send email, but please turn off HTML.

March 24, 2006

The IDs of March

Ping-pong is so much fun. If you haven't been following the ins and outs of the latest incarnation of the Identity Cards Bill, it's on about its fifth bat between the House of Commons and the House of Lords, with doubtless more to come. In the fourth bat, the Lords had proposed an amendment making the system opt-in until 2012, which would have meant that when you applied for your new passport (or other designated document, such as a residence permit or driver's license) you would have the option of also being added to the National Identity Register and issued an ID card. That, of course, is not what the government wants.

Despite all the lip service paid in the election manifesto to the scheme's being voluntary, the government wants registration to be compulsorily tied to the issuance of documents that most people want to have. In the government's scheme, the ID card will be "voluntary" in the sense that having a passport is "voluntary". Don't want an ID card? Fine. Don't travel, and if you drive, don't lose your old license, change address, or turn 70. (For Americans: an old-style British license is a fancily printed piece of folded paper that is valid until you're 70; these are gradually being replaced by new-style plastic photo licenses that are valid for ten years, but it's a very long process. Few people in Britain carry their licenses as daily identification, and the only time you need to produce one is within ten days of being stopped for a traffic violation, so there's little incentive to update them.)

The Commons rejected that (PDF). What's supposed to happen next: the Commons will reject the Lords' amendments again, and the Lords will amend the bill again.

We even know something about how the Lords will try to amend it: Lord Armstrong of Ilminster has already tabled the amendment to be proposed. The new amendment will offer people the chance to opt out of registering in the national identity database and acquiring an ID card alongside a passport application. It's an interesting idea for a compromise. Most people will not bother to opt out. The ones who do will be the ones who otherwise might decide to forgo travelling in order to avoid registering as long as possible.

If that amendment were to succeed – and it seems likely to garner more support than the last round – there is one significant class of people we know would not opt out of getting ID cards: criminals. Just as you'd probably opt to wear a business suit and tie and get your hair cut respectably short if you were a dope dealer traveling internationally, if you are up to any nefarious schemes you will want the credibility an ID card will lend you.

Eventually, the most likely outcome is that the Commons will win. There are several reasons for that, none of them really to do with ID cards. The most important is the balance of power between the Commons and the Lords: everyone agrees that the Commons, as the elected body, has supremacy. But the whole mess could easily drag on long enough to block the bill until everyone goes on their summer vacation. If the Commons has to invoke the Parliament Act to override the Lords' dissent, we could be into November before the government can even get going on it. The ID card is already behind the government's original schedule. But, hey, what's the hurry?

The ID card is becoming secondary in these debates to the question of how much say the Lords should have and how much the government is railroading its proposals through. The ID card is increasingly controversial; the most recent YouGov/Daily Telegraph survey (PDF) showed only 45 percent in favor of the thing, and support drops further when the estimated cost quoted rises from £6 billion (the government's figure) to £18 billion (the LSE's upper figure). Are the Lords in fact more representative than the elected Commons?

It's interesting to consider this question in the light of the Power Inquiry, which is considering the question of voter alienation. Two of its conclusions: that there should be a "rebalancing of power away from the executive and unaccountable bodies towards Parliament and local government" and that electoral systems should be more responsive "allowing citizens a much more direct and focused say over political decisions and policies". Exactly the opposite is happening with respect to the ID Cards bill which, given the size of the shift it would cause in British national life and its cost, arguably should be the subject of a referendum.

In this context it's also worth reading testimony given recently to the US Congress by Stephen T. Kent, author of two books on ID card systems, about the difficulties of doing what the UK government is proposing (and the US government has in mind with Real ID). Kent reminds us that what we are proposing to build is not an ID card but an ID system, and asks the same questions that even technology vendors have been asking the British government all along: what is the system for? What problem are you trying to solve? Without a clear set of goals, how can the technology fail to fail? In Britain's case, though, getting the ID card bill passed seems to be the problem the ID card is trying to solve.

You can, of course, still promise to refuse.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of links to all the earlier columns in this series. Readers are welcome to send email (but please turn off HTML), or to post comments at the net.wars blog.