" /> net.wars: May 2018 Archives


May 25, 2018

Who gets the kidney?

At first glance, Who should get the kidney? seemed more reasonable and realistic than MIT's Moral Machine.

To recap: about a year ago, MIT ran an experiment, a variation of the old trolley problem, in which it asked visitors to imagine themselves in charge of a vehicle about to crash and to decide which nearby beings (adults, children, pets) to sacrifice and which to save. Crash!

As we said at the time, people don't think like that. In charge of a car, you react instinctively to save yourself, whoever's in the car with you, and then try to cause the least damage to everything else. Plus, much of the information the Moral Machine imagined - this stick figure is a Nobel prize-winning physicist; this one is a sex offender - just is not available to a car driver in a few seconds and even if it were, it's cognitive overload.

So, the kidney: at this year's We Robot, researchers offered us a series of 20 pairs of kidney recipients and a small selection of factors to consider: age, medical condition, number of dependents, criminal convictions, drinking habits. And you pick. Who gets the kidney?

Part of the idea as presented is that these people have a kidney available to them but it's not a medical match, and therefore some swapping needs to happen to optimize the distribution of kidneys. This part, which made the exercise sound like a problem AI could actually solve, is not really incorporated into the tradeoffs you're asked to make. Shorn of this ornamentation, Who Gets the Kidney? is a simple and straightforward question of whom to save. Or, more precisely, who in future will prove to have deserved to have been given this second chance at life? You are both weighing the value of a human being as expressed through a modest set of known characteristics and trying to predict the future. In this, it is no different from some real-world systems, such as the benefits and criminal justice systems Virginia Eubanks studies in her recent book, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor.

I found, as did the others in our group, that decision fatigue sets in very quickly. In this case, the goal - to use the choices to form like-minded discussion clusters of We Robot attendees - was not life-changing, and many of us took the third option, flipping a coin.

At my table, one woman felt strongly that the whole exercise was wrong; she embraced the principle that all lives are of equal value. Our society often does not treat them that way, and one reason is obvious: most people, put in charge of a kidney allocation system, want things arranged so that if they themselves ever need a kidney they will get one.

Instinct isn't always a good guide, either. Many people, used to thinking in terms of protecting children and dismissing old people on the grounds that "they've had their chance at life", automatically opt to give the kidney to the younger person. Granted, I'm 64, and see above paragraph, but even so: as distressing as it is to the parents, a baby can be replaced very quickly with modest effort. It is *very* expensive and time-consuming to replace an 85-year-old. It may even be existentially dangerous, if that 85-year-old is the one holding your society's institutional memory. A friend advises that this is a known principle in population biology.

The more interesting point, to me, was discovering that this exercise really wasn't any more lifelike than the Moral Machine. It seemed more reasonable because unlike the driver in the crashing car, kidney patients have years of documentation of their illness and there is time for them, their families, and their friends to fill in further background. The people deciding the kidney's destination are much better informed, and they are operating in the all-too-familiar scenario of allocating scarce resources. And yet: it's the same conundrum, and in the end how many of us want the machine, rather than a human, to decide whether we live or die?

Someone eventually asked: what if we become able to make an oversupply of kidneys? This only solves the top layer of the problem. Each operation has costs in surgeons' time, medical equipment, nursing care, and hospital infrastructure. Absent a disruptive change in medical technology, it's hard to imagine it will ever be easy to give a kidney to everyone who needs one. Put it in terms of food: we actually do grow enough food to supply everyone, but it's not evenly distributed, so in some areas we have massive waste and in others horrible famine (and in some places, both).

Moving to current practice, in a Guardian article Eubanks documents the similar conundrums confronting those struggling to allocate low-income housing, welfare, and other basic needs to poor people in the US in a time of government "austerity". The social workers, policy makers, and data scientists doing these jobs have to make decisions that, like the kidney and driving examples, have life-or-death consequences. In this case, as Eubanks puts it, they decide who gets helped among "the most exploited and marginalized people in the United States". The automated systems Eubanks encounters do not lower barriers to programs as promised and, she writes, obscure the political choices that created these social problems in the first place. Automating the response doesn't change those.


Illustrations: Project screenshot.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 18, 2018

Fool me once

Most of the "us" who might read this rarely stop to marvel at the wonder that is our daily trust in the society that surrounds us. One of the worst aspects of London Underground's incessant loud reminders to report anything suspicious - aside from the slogan, which is dumber than a bag of dead mice - is that it interrupts the flow of trust. It adds social friction. I hear it, because I don't habitually block out the world with headphones.

Friction is, of course, the thing that so many technologies are intended to eliminate. And they might, if only we could trust them.

Then you read things like this news, that Philip Morris wants to harvest data from its iQOS e-cigarette. If regulators allow, Philip Morris will turn on functions in the device's internal chips that capture data on its user's smoking habits, not unlike ebook readers' fine-grained data collection. One can imagine the data will be useful for testing strategies for getting people to e-smoke longer.

This example did not arrive in time for this week's Nuances of Trust event, hosted by the Alliance for Internet of Things Innovation (AIOTI) and aimed at producing intelligent recommendations for how to introduce trust into the Internet of Things. But, so often, it's the company behind the devices you can't trust. For another example: Volkswagen.

Partway through the problem-solving session, we realized we had regenerated three of Lawrence Lessig's four modalities of constraining behavior: technology/architecture, law, market, social norms. The first changes device design to bar shipping loads of data about us to parts unknown; law pushes manufacturers into that sort of design, even if it costs more; market would mean people refused to buy privacy-invasive devices; and social norms used to be known as "peer pressure". Right now, technology is changing faster than we can create new norms. If a friend has an Amazon Echo at home, does entering their house constitute agreeing to Amazon's privacy policy? Should they show me the privacy policy before I enter? Is it reasonable to ask them to turn it off while I'm there? We could have asked questions like "Are you surreptitiously recording me?" at any time since portable tape recorders were invented, but absent a red, blinking light we felt safe in assuming no. Now, suddenly, trusting my friend requires also trusting a servant belonging to a remote third party. If I don't, it's a social cost - to me, and maybe to my friend, but not to Amagoople.

On Tuesday, Big Brother Watch provided a far more alarming example when director Silkie Carlo launched BBW's report on automated facial recognition (PDF). Now, I know the technically minded will point out grumpily that all facial recognition is "automated" because it's a machine what does it, but what BBW means is a system in which CCTV and other cameras automatically feed everything they gather into a facial recognition system that sprinkles AI fairy dust and pops out Persons of Interest (I blame TV). Various UK police forces have deployed these AFR systems at concerts and football and rugby games; at the 2016 and 2017 Notting Hill Carnivals; on Remembrance Sunday 2017 to restrict "fixated individuals"; and at peaceful demonstrations. On average, fewer than 9% of matches were accurate, but that's little consolation when police pick you out of the hordes arriving by train for an event and insist on escorting you under watch. The system London's Met Police used had a false positive rate of over 98% - that is, of every hundred people it flagged, fewer than two were actually the person sought. How does a system like that even get out of the lab?

Neither the police nor the Home Office seem to think that bringing in this technology requires any public discussion; when asked they play the Yes, Minister game of pass the policy. Within the culture of the police, it may in fact be a social norm that invasive technologies whose vendors promise magical preventative results should be installed as quickly as possible before anyone can stop them. Within the wider culture...not so much.

This is the larger problem with what AIOTI is trying to do. It's not just that the devices themselves are insecure, their risks capricious, and the motives of their makers suspect. It's that long after you've installed and stopped thinking about a system incorporating these devices someone else can come along to subvert the whole thing. How do you ensure that the promise you make today cannot be broken by yourself or others in future? The problem is near-identical to the one we face with databases: each may be harmless on its own, but mash them together and you have a GDPR fine-to-the-max dataset of reidentification.

Somewhere in the middle of this an AIOTI participant suggested that the IoT rests on four pillars: people, processes, things, data. Trust has pillars, too, that take a long time to build but that can be destroyed in an instant: choice, control, transparency, and, the one we talk about least, but perhaps the most important, familiarity. The more something looks familiar, the more we trust it, even when we shouldn't. Both the devices AIOTI is fretting about and the police systems BBW deplores have this in common: they center on familiar things whose underpinnings have changed without our knowledge - yet their owners want us to trust them. We wish we could.


Illustrations: Orwell's house at 22 Portobello Road, London.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 11, 2018

The third penguin

You never have time to disrupt yourself and your work by updating your computer's software until Bad Things happen and you're forced to find the time you don't have.

So last week the Ubuntu machine's system drive, which I had somehow failed to notice dated to 2012, lost the will to live. I had been putting off upgrading to 64-bit; several useful pieces of software are no longer available in 32-bit versions, such as Signal for Desktop, FreeFileSync, and Skype.

It transpired that 18.04 LTS had been released a few days earlier. Latest version means longer until forced to upgrade, right?

The good news is that Ubuntu's ease of installation continues to improve. The experience of my first installation, about two and a half years ago, of trying umpteen things and hoping one would eventually work, is gone. Both audio and video worked first time out, and although I still had to switch video drivers, I didn't have to search AskUbuntu to do it. Even more than with my second installation, Canonical has come very, very close to one-click installation. The video freezes that have been plaguing the machine since the botched 16.04 update in 2016 appear to have largely gone.

However, making it easy also makes some things hard. Reason: making it easy means eliminating things that require effort to configure and that might complicate the effortlessness. In the case of 18.04, that means that if you have a mixed network you still have to separately download and configure Samba, the thing that makes it possible for an Ubuntu machine to talk to a Windows machine. I understand this choice, I think: it's reasonable to surmise that the people who need an easy installation are unlikely to have mixed networks, and the people who do have them can cope with downloading extra software. But Samba is just mean.

An ideal installation routine would do something like this (a rough sketch follows the list):
- Ask the names and IP addresses of the machines you want to connect to;
- Ask what directories you want to share;
- Use that information to write the config file;
- Send you to pages with debugging information if it doesn't work.
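
For the curious, here is a minimal sketch in Python of what I mean - emphatically not real Ubuntu installer code; the output file name, prompts, and share settings are all invented for illustration - that asks those questions and writes one share stanza per directory:

    # Minimal sketch of the question-and-answer routine described above.
    # Nothing here is real installer code; names and paths are made up.
    from pathlib import Path

    OUTPUT = Path("smb-shares.conf")  # a real tool would edit /etc/samba/smb.conf

    def ask_list(prompt):
        """Ask for a comma-separated list and return the non-empty items."""
        return [item.strip() for item in input(prompt).split(",") if item.strip()]

    def write_shares(machines, directories):
        """Write one [share] stanza per directory, limited to the named hosts."""
        stanzas = []
        for directory in directories:
            path = Path(directory).expanduser()
            stanzas += [
                "[%s]" % path.name,
                "   path = %s" % path,
                "   read only = no",
                "   hosts allow = %s" % " ".join(machines),
                "",
            ]
        OUTPUT.write_text("\n".join(stanzas))
        print("Wrote %d share(s) to %s" % (len(directories), OUTPUT))
        print("If sharing still fails, see the Ubuntu community Samba documentation.")

    if __name__ == "__main__":
        machines = ask_list("Windows machines to reach (names or IPs, comma-separated): ")
        directories = ask_list("Directories to share (comma-separated): ")
        write_shares(machines, directories)

A real version would of course edit /etc/samba/smb.conf in place and restart the Samba service afterwards, but even this much would beat hand-editing config files.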

Of course, it doesn't work like that. I eventually found the page I think helped me most last time. That half-solved the problem, in that the Windows machines could see the Ubuntu machine but not the reverse. As far as I could tell, the Ubuntu machine had adopted the strategy of the Ravenous Bugblatter Beast of Traal and wrapped a towel around its head on the basis that if it couldn't see them they couldn't see *it*.

Many DuckDuckGo searches later the answer arrived: apparently for 18.04 the decision was made to remove a client protocol. The solution was to download and install a bit of software called smbclient, which would restore the protocol. That worked.
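
(For anyone retracing these steps: assuming the standard Ubuntu repositories, the install itself is a one-liner, sudo apt install smbclient.)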

Far more baffling was the mysterious, apparently random appearance of giant colored graphics in my Thunderbird inbox. All large enough to block numerous subject lines. This is not an easy search to frame, and I've now forgotten the magical combination of words that produced the answer: Ubuntu 18.04 has decorated itself with a colorful set of bright, shiny *emoji*. These, it turns out, you can remove easily. Once you have, the symbols sent to torture you shrink back down to tiny black and white blobs that disturb no one. Should you feel a desperate need to find out what one is, you can copy and paste it into Emojipedia, and there it is: that thing you thought was a balloon was in fact a crystal ball. Like it matters.
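
(If you want to do the same, the usual culprit - at least on a stock 18.04 install, as far as I can tell - is the fonts-noto-color-emoji package; sudo apt remove fonts-noto-color-emoji puts the monochrome versions back.)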

I knew going in that Unity, the desktop interface that came with my previous versions of Ubuntu, had been replaced by Gnome, which everyone predicted I would hate.

The reality is that it's never about whether a piece of software is good or bad; it's always about what you're used to. If your computer is your tool rather than your plaything, the thing you care most about is not having to learn too much that's new. I don't mind that the Ubuntu machine doesn't look like Windows; I prefer to have the reminder that it's different. But as much as I'd disliked it at first, I'd gotten used to the way Unity groups and displays windows, the size of the font it used, and the controls for configuring it. So, yes, Gnome annoyed, with its insistence on offering me apps I don't want, tiny grey fonts, wrong-side window controls, and pointless lockscreens that all wanted reconfiguration. The KDE desktop, which a friend insisted I should try, didn't seem much different. It took only two days to revert to Unity, which is now "community-maintained", polite GNU/Linux-speak for "may not survive for long". Back to some version of normal.

In my view, Ubuntu could still fix some things. It should be easier to add applications to the Startup list. The Samba installation should be automated and offered as an option in system installation with a question like, "Do you need to connect to a Windows machine on your network?" User answers yes or no, Samba is installed or not with a script like that suggested above.

But all told, it remains remarkable progress. I salute the penguin wranglers.


Illustrations: Penguins.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 3, 2018

Data protection panic

Wherever you go at the moment someone is asking panicked questions about the General Data Protection Regulation, which comes into effect on May 25, 2018. The countdown above appeared at a privacy engineering workshop on April 27, and looked ominous enough for Buffy to want to take a whack at it.

Every day new emails arrive asking me to confirm I want to stay on various mailing lists and announcing new privacy policies. Most seem to have grasped the idea that positive consent is required, but some arrive saying you need do nothing to stay on their list. I am not a lawyer, but I know that's backwards. The new regime is opt-in, not opt-out. You cannot extract consent from silence.

At the local computer repair place (hard drive failure, don't ask), where my desktop was being punished with diagnostics, the owner asks, "Is encryption necessary? A customer is asking." We agree, from our own reading, that encryption is not *required*, but that liability is less if the data is encrypted and therefore can't be read, and as a consequence sold, reidentified, sprayed across the internet, or used for blackmail. And you don't have to report it as a data breach or notify customers. I explain this to my tennis club and another small organization. Then I remember: crypto is ridiculously hard to implement.

The UK's Information Commissioner's Office has a helpful 12-step guide to assessing what you have to do. My reading, for example, is that a small community interest organization does not have to register or appoint a data controller, though it does need to agree who will answer any data protection complaints it gets. The organization's web host, however, has sent a contract written in data-protectionese, a particularly arcane subset of lawyerese. Asked to look at it, I blanched and started trying to think which of my privacy lawyer friends might be most approachable. Then I realized: tear up that contract and write a new one in English that says who's responsible for what. Someone probably found a model contract somewhere that was written for businesses with in-house lawyers who understood it.

So much is about questioning your assumptions. You think the organization you're involved with has acquired all its data one record at a time when people have signed up to become members. Well, is that true? Have you ever used anyone else's mailing list to trawl for new members? Have you ever shared yours with another organization because you were jointly running a conference? How many copies of the data exist and where are they stored, and how? These are audits few ever stop to do. The threat of the loss of 4% of global revenues is very effective in making them happen.

The computer repair store owner began to realize this aspect. The shop asks new customers to fill out a form, and then adds their information to their database, which means that the next time you bring your machine in they have its whole service history. We mulled over this form for a bit. "I should add a line at the bottom," he said. Yes: a line that asks for permission to include the person on their mailing list for offers and discounts and that says the data won't be shared.

Then I asked him, "How much benefit does the shop get from emailing these offers?" Um, well...none, really. People sometimes come in and ask about them, but they don't buy. So why do them? Good point. The line shrank to something on the order of: "We do not share your data with any third parties".

This is in fact the effect GDPR is intended to have: make people rethink their practices. Some people don't need to keep all the data they have - one organization I'm involved with has a few thousand long-lapsed members in its database with no clear way to find and delete them. For others, the marketing they do isn't really worth the customer irritation. Getting organizations to clean up just those two things seems worth the trouble.

But then he asked, "Who is going to enforce this?" And the reality is there is probably no one until there's a complaint. In the UK, the ICO's budget (PDF) is widely held to be inadequate, and it's not increasing. Elsewhere, it took the tenacity of Max Schrems to get regulators to take the actions that eventually brought down Safe Harbor. A small shop would be hugely unlucky to be a target of regulatory action unless customers were complaining and possibly not even then. Except in rare cases these aren't the people we want targeted; we want the regulators to focus first on egregious harms, repeat offenders with great power, such as Google, and incessant offenders, such as Facebook, whose list of apologies and missteps includes multiple entries for every year of its existence. No wonder the WhatsApp CEO quit (though there's little else he can do, since he sold his company).

Nonetheless, it's the smallest companies and charities who are in the greatest panic about this. Possibly for good reason: there is mounting concern that GDPR will be the lever via which the big data-driven companies lock out small competitors and start-ups. Undesirable unintended consequences, if that's the outcome.


Illustrations: GDPR countdown clock on April 27.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.