February 27, 2015

Barbershop quartet

"It's unlikely that we'll have a barbershop quartet singing it out every time," said Emma Craddock. She was talking about ways to make it explicit to users when they are handing over data, kind of like the cookie directive on steroids. (You know, the thing that makes messages pop up on every website demanding that you accept cookies to make the site work.) She was speaking at this week's workshop run by the Meaningful Consent project at the University of Southampton. The main question up for consideration: what is meaningful consent, and how do we achieve it?

For a moment, I was entranced by the possibilities. A barbershop quartet! I know - or knew - someone who sang in one of those. It's beguiling to imagine him, as a retired engineer, touring around to people's houses to pop up with his buddies, like out of a cake, to sing out,

"You're paying with your data
For this thing you think is free."

It's easy to become inured to clicking "OK" to make these trades just to get on with things; but how much harder to ignore four guys in striped jackets and hats singing full-voice in harmony, arms outflung, two feet from your ears? Yes, yes, in real life it would be spectacularly annoying and wildly labor-intensive (although: jobs!), but for a moment, imagine...it would certainly get users' attention as they traded their data away.

Craddock's main point was that the data protection laws reflect the expectation of their mid-1990s time that we always knew when we were disclosing personal information, just as at one time we knew when we crossed the border into a foreign country's legal jurisdiction and now the crossing is invisible. Today, we disclose information unknowingly: it requires an exercise of deliberate thought to see every typed-in search query as a gift from us to GooBingYa, and the data brokers who swap and trade behind the scenes are completely unknown to the millions whose data they keep. You visit Google, not its fully owned subsidiary DoubleClick; only a tiny minority of obsessed privacy advocates visit Acxiom or Comscore. Under EU data protection law you have the right to file a subject access request for your data file. But who would know to ask these hidden third-party data brokers - and even if you did, you are not their customer. Use the source, Luke.

Cut to: Motherboard, where Brian Merchant lays out what goes on behind the scenes when you search for information on medical conditions. A search for information on diabetes may get you tagged as "diabetes-worried". US health insurers certainly would want to know if a prospective customer might be on the verge of developing a chronic, expensive condition - and prospective employers might like to know, too. Is this what you "consented" to when you typed in your search term and hit ENTER?

On Tuesday evening I checked in for a flight on Iberia. At completion of check-in, a message popped up, offering me the great idea of sharing my trip with my friends on Facebook. It offered three "practical" messages, one announcing my flight number and departure time; another announcing takeoff and expected flight time; a third announcing my arrival. We talk about the "sharing economy" and this ultimate product placement is an aspect of it: advertising integrated so seamlessly into activities that would formerly have been entirely separate that it's easy not to notice who benefits from the underlying data flow or that it actually *is* advertising. They can reasonably call it opt-in instead of "insidious propaganda".

The European NGO Alliance for Child Safety Online has been discussing the need for legislation to provide children with clearly understandable information, in simple language, about what they're sharing and with whom. It's an unobjectionable idea except for the recurring problem: how? We don't even know how to do this for adults: hence the Southampton workshop.

We do know some things. We know that asking anyone to read lengthy privacy policies and terms and conditions is a meaningless exercise. First, because people hate it and won't do it, even if you make it a bulleted summary. Second, because without the market power to do more than say yes or no, use the service or don't use the service, it's futile. We cannot bargain, object to, or negotiate these contracts. Even phone apps, which are a bit more explicit about what they're asking for, come on a take-it-or-leave-it basis. Over time, if the app world goes the way desktop software did, there may be fewer alternatives to turn to when you don't like the terms than there are now.

We also know that asking users to participate in a lengthy set-up process to embed their preferences into some kind of dashboard or basic settings does not work. Most people accept the defaults and get on with things. (I am a member of the weird minority who read all customization options at the outset and configure them all.)

We really do need context-based single questions a user can answer. We really do need it made clear where our data goes, how it's shared, and with whom. But most of all, we need real choices. People seem not to care about privacy because they believe they've already lost. That barbershop quartet needs to bring with it the ability to rewrite the contract.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


February 20, 2015

The gunpowder tea party

For several years in the mid-2000s, Privacy International ran annual Stupid Security Awards. The situation has not improved since.

Item: last week I took a small (under 100 grams) package destined for elsewhere within the UK to the post office. "What's in it?" the postmaster asked. I genuinely couldn't remember beyond that it was an item I'd found on my desk that I thought the recipient should have. "I can't send it if you don't tell me what's in it." He could, however, sell me stamps to put on the package so I could drop it in the post box outside.

Item: an absurd exchange with the now-departed-from-my-life Vodafone. On February 2, my number ported to the new supplier. As that was the day my bill was due, I thought I'd check the website to see if I had anything to pay and discovered porting the number had simultaneously shut down web access to my billing information - I say "my" billing information, but that's like "my library book". I initiated a web chat. All I wanted to know: would they send me a final bill?

"Dylan" (who I thought was a robot until he? started misspelling things, which, who knows, may be deliberate to make a bot look human) said he had to take me through security. Name, address, phone number, amount of my last bill. I gave the amount of the January bill and said I didn't have February. Dylan responded with a little encomium about how security is important and that's why he has to ask these questions. He did *not* indicate whether he'd accepted the January amount.

Which may be why his next question - "what is the IMEI number of your handset" - made me feel less confident that I was really chatting with someone from Vodafone. I know: I contacted them via their HTTPS-protected website. But malware...hacking...social engineering...someone asking a string of questions and providing no feedback...and I could see no logical reason why they needed this level of certainty in order to send a bill to an email address they already had on file. At that point my New York personality - impatience and distrust - kicked in and I said if they wanted payment they could let me know. End of conversation.

People confronted with situations like these do not conclude that there are terrible risks we must all work together to protect ourselves against. Instead, they conclude that security is stupid, inflexible, and a waste of their time, a result that makes solving the society-wide security problems we actually face even harder.

To be sure, a lot of the issue was a design problem. Vodafone did the right thing in telling me how long I would have to wait before my chat request was answered - but it then did the wrong thing by not telling me how many questions I might have to answer or how long the security process might take. This is a mismatch between their perception of the task and mine. I want an answer to my question and anything leading up to that is "waiting". They think once they have connected to me I am no longer "waiting" and am now being served. Answering security questions is not being served; to the customer it's still waiting. "Being served" is: I'm looking for the answer to your question and here it is.

The other really significant thing Vodafone did wrong was to fail to offer any acknowledgment that we were making progress toward a defined goal. I understand that security people do not want to give a miscreant clues that might help them game the system. I get that. I also get that the procedure and number of questions may vary. But there still needs to be some feedback. I'm still *waiting* here. The even more significant failing was the depressingly standard behavior of not offering any information to confirm itself. If all organizations handling sensitive information had adopted two-way (not two-*factor*) authentication when telephone banking began and made it a standard practice that grew up alongside the internet, there would be few phishing problems now.

That's becoming an increasing issue because the other side of stupid security is that the people in charge of important building blocks are making the kinds of stupid decisions that make it impossible for us to make good ones. Just this week:

Item: Lenovo has been shipping PCs with adware that intercepts HTTPS connections in the interests of inserting ads. In the US, many companies do this, presumably with some idea that in-depth monitoring of their employees' web use will yield at least legal compliance, at best some ability to catch wrongdoing.

Item: Samsung has been shipping smart TVs that capture what's said in front of them and upload it unencrypted. Yes, Samsung will fix it, but here is the future: updating myriad "smart" inanimate objects because their makers have no...let's call it street smarts.

Item: GCHQ and the NSA hacked Gemalto's network to steal the encryption keys that protect many of the world's mobile phone conversations. What they failed to win legally when key escrow was defeated, they went ahead and stole.

Every part of this ecosystem matters, from bad design decisions to deliberate undermining. As Privacy International founder Simon Davies said in 2003: stupid security is a global menace.

