
Fool me once

Most of the "us" who might read this rarely stop to marvel at the wonder that is our daily trust in the society that surrounds us. One of the worst aspects of London Underground's incessant loud reminders to report anything suspicious - aside from the slogan, which is dumber than a bag of dead mice - is that it interrupts the flow of trust. It adds social friction. I hear it, because I don't habitually block out the world with headphones.

Friction is, of course, the thing that so many technologies are intended to eliminate. And they might, if only we could trust them.

Then you read things like this news, that Philip Morris wants to harvest data from its iQOS e-cigarette. If regulators allow, Philip Morris will turn on functions in the device's internal chips that capture data on its user's smoking habits, not unlike ebook readers' fine-grained data collection. One can imagine the data will be useful for testing strategies for getting people to e-smoke longer.

This example did not arrive in time for this week's Nuances of Trust event, hosted by the Alliance for Internet of Things Innovation (AIOTI) and aimed at producing intelligent recommendations for how to introduce trust into the Internet of Things. But, so often, it's the company behind the devices you can't trust. For another example: Volkswagen.

Partway through the problem-solving session, we realized we had regenerated Lawrence Lessig's four modalities of constraining behavior: technology/architecture, law, market, and social norms. The first changes device design to bar shipping loads of data about us to parts unknown; law pushes manufacturers into that sort of design, even if it costs more; market would mean people refusing to buy privacy-invasive devices; and social norms used to be known as "peer pressure".

Right now, technology is changing faster than we can create new norms. If a friend has an Amazon Echo at home, does entering their house constitute signing Amazon's privacy policy? Should they show me the privacy policy before I enter? Is it reasonable to ask them to turn it off while I'm there? We could have asked questions like "Are you surreptitiously recording me?" at any time since portable tape recorders were invented, but absent a red, blinking light we felt safe in assuming no. Now, suddenly, trusting my friend requires also trusting a servant belonging to a remote third party. If I don't, it's a social cost - to me, and maybe to my friend, but not to Amagoople.

On Tuesday, Big Brother Watch provided a far more alarming example when director Silkie Carlo launched BBW's report on automated facial recognition (PDF). Now, I know the technically minded will point out grumpily that all facial recognition is "automated" because it's a machine what does it, but what BBW means is a system in which CCTV and other cameras automatically feed everything they gather into a facial recognition system that sprinkles AI fairy dust and pops out Persons of Interest (I blame TV). Various UK police forces have deployed these AFR systems at concerts and football and rugby games; at the 2016 and 2017 Notting Hill Carnivals; on Remembrance Sunday 2017 to restrict "fixated individuals"; and at peaceful demonstrations. On average, fewer than 9% of matches were accurate, but that's little consolation when police pick you out of the hordes arriving by train for an event and insist on escorting you under watch. The system London's Met Police used had a false positive rate of over 98%! How does a system like that even get out of the lab?
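
Some rough back-of-the-envelope arithmetic shows why a figure like that is less surprising than it sounds: when genuine targets are a tiny fraction of the crowd being scanned, even a matcher with flattering-looking per-person error rates produces mostly false alerts. The crowd size, watchlist size, and error rates in the sketch below are invented for illustration only; they are not figures from BBW's report or from any police deployment.

```python
# Illustrative base-rate arithmetic; every number here is an assumption.
crowd = 100_000       # people scanned at an event (assumed)
targets = 50          # genuine persons of interest among them (assumed)
sensitivity = 0.95    # chance a real target is correctly flagged (assumed)
false_alarm = 0.01    # chance an innocent passer-by is flagged (assumed)

true_alerts = targets * sensitivity              # ~48 correct alerts
false_alerts = (crowd - targets) * false_alarm   # ~1,000 wrong alerts
share_wrong = false_alerts / (true_alerts + false_alerts)

print(f"{share_wrong:.0%} of alerts point at the wrong person")  # ~95%
```

The rarer the real targets in the crowd, the worse that ratio gets.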

Neither the police nor the Home Office seem to think that bringing in this technology requires any public discussion; when asked they play the Yes, Minister game of pass the policy. Within the culture of the police, it may in fact be a social norm that invasive technologies whose vendors promise magical preventative results should be installed as quickly as possible before anyone can stop them. Within the wider culture...not so much.

This is the larger problem with what AIOTI is trying to do. It's not just that the devices themselves are insecure, their risks capricious, and the motives of their makers suspect. It's that long after you've installed and stopped thinking about a system incorporating these devices, someone else can come along and subvert the whole thing. How do you ensure that the promise you make today cannot be broken, by you or by others, in future? The problem is near-identical to the one we face with databases: each may be harmless on its own, but mash them together and you have a reidentification dataset ripe for a maximum GDPR fine.
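
As a sketch of what mashing datasets together can mean in practice, the toy example below joins two individually innocuous-looking tables on the classic quasi-identifiers (postcode area, date of birth, sex); every record and field name in it is invented for illustration.

```python
# Toy reidentification sketch: two datasets, each "anonymous" on its own,
# joined on quasi-identifiers. All records and field names are invented.

usage = [  # released without names, e.g. device or usage data
    {"postcode": "W11 2", "dob": "1971-06-25", "sex": "F", "hours_online": 9.5},
    {"postcode": "SW1A 1", "dob": "1982-03-14", "sex": "M", "hours_online": 2.1},
]

directory = [  # a separate, name-bearing list such as a public register
    {"name": "A. Resident", "postcode": "W11 2", "dob": "1971-06-25", "sex": "F"},
    {"name": "B. Example", "postcode": "SW1A 1", "dob": "1982-03-14", "sex": "M"},
]

def key(row):
    return (row["postcode"], row["dob"], row["sex"])

names = {key(r): r["name"] for r in directory}

for row in usage:
    who = names.get(key(row))
    if who:  # the "anonymous" row has picked up a name again
        print(who, "spends", row["hours_online"], "hours online")
```

Neither table alone ties a name to anyone's habits; joined, they do - and the more datasets that accumulate, the more such joins become possible.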

Somewhere in the middle of this, an AIOTI participant suggested that the IoT rests on four pillars: people, processes, things, and data. Trust has pillars too, ones that take a long time to build but can be destroyed in an instant: choice, control, transparency, and familiarity - the one we talk about least, but perhaps the most important. The more something looks familiar, the more we trust it, even when we shouldn't. Both the devices AIOTI is fretting about and the police systems BBW deplores have this in common: they center on familiar things whose underpinnings have changed without our knowledge - yet their owners want us to trust them. We wish we could.


Illustrations: Orwell's house at 22 Portobello Road, London.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.
