
Who gets the kidney?

At first glance, Who should get the kidney? seemed more reasonable and realistic than MIT's Moral Machine.

To recap: about a year ago, MIT ran an experiment, a variation on the old trolley problem, in which it asked visitors to imagine themselves in charge of a vehicle about to crash and to decide which nearby beings (adults, children, pets) to sacrifice and which to save. Crash!

As we said at the time, people don't think like that. In charge of a car, you react instinctively to save yourself and whoever's in the car with you, and then try to cause the least damage to everything else. Plus, much of the information the Moral Machine imagined - this stick figure is a Nobel prize-winning physicist; this one is a sex offender - simply isn't available to a driver in the few seconds before a crash, and even if it were, processing it would be cognitive overload.

So, the kidney: at this year's We Robot, researchers offered us a series of 20 pairs of prospective kidney recipients and a small selection of factors to consider: age, medical condition, number of dependents, criminal convictions, drinking habits. And you pick. Who gets the kidney?

Part of the idea as presented is that each of these people has a kidney available to them but it's not a medical match, and therefore some swapping needs to happen to optimize the distribution of kidneys. This part, which made the exercise sound like a problem AI could actually solve, is not really incorporated into the tradeoffs you're asked to make. Shorn of this ornamentation, Who Gets the Kidney? is a simple and straightforward question of whom to save. Or, more precisely, who will prove, in future, to have deserved this second chance at life? You are both weighing the value of a human being as expressed through a modest set of known characteristics and trying to predict the future. In this, it is no different from some real-world systems, such as the benefits and criminal justice systems Virginia Eubanks studies in her recent book, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor.
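That swapping part, for what it's worth, is the piece that really is algorithmic: paired-donation programs match incompatible donor-recipient pairs against each other. As a deliberately toy illustration - blood-type compatibility only, greedy two-way swaps, none of the tissue typing, waiting times, or longer donation chains real exchanges actually optimize over - the mechanics look something like this:

```python
# Toy sketch of two-way kidney "paired donation" swaps: each pair has a willing
# donor who is incompatible with their own recipient, and we look for another
# pair where a cross-exchange works for both. Blood-type compatibility only;
# real allocation systems weigh far more than this.

# Blood types each donor type can give to.
COMPATIBLE = {
    "O": {"O", "A", "B", "AB"},
    "A": {"A", "AB"},
    "B": {"B", "AB"},
    "AB": {"AB"},
}

def can_donate(donor_type, recipient_type):
    return recipient_type in COMPATIBLE[donor_type]

def find_two_way_swaps(pairs):
    """Greedily match incompatible donor-recipient pairs into two-way swaps.

    `pairs` is a list of (name, donor_blood_type, recipient_blood_type).
    Returns a list of (name_a, name_b) swaps; unmatched pairs are left out.
    """
    unmatched = list(pairs)
    swaps = []
    while unmatched:
        name_a, donor_a, recip_a = unmatched.pop(0)
        for i, (name_b, donor_b, recip_b) in enumerate(unmatched):
            if can_donate(donor_a, recip_b) and can_donate(donor_b, recip_a):
                swaps.append((name_a, name_b))
                unmatched.pop(i)
                break
    return swaps

if __name__ == "__main__":
    # Hypothetical pairs: each donor cannot give to their own recipient.
    pairs = [
        ("Pair 1", "A", "B"),   # A donor, B recipient
        ("Pair 2", "B", "A"),   # B donor, A recipient
        ("Pair 3", "AB", "O"),  # AB donor, O recipient - hard to match
    ]
    print(find_two_way_swaps(pairs))  # [('Pair 1', 'Pair 2')]
```

Note what's missing: nothing in that sketch decides who *deserves* a kidney; it only finds exchanges that are medically workable. The exercise asks the other question.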

I found, as did the others in our group, that decision fatigue sets in very quickly. In this case, the goal - to use the choices to form like-minded discussion clusters of We Robot attendees - was not life-changing, and many of us took the third option, flipping a coin.
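(How the clustering worked wasn't explained; the simplest version would just score pairwise agreement across the 20 choices and group the people who agree most, something like this hypothetical sketch.)

```python
# Toy sketch: grouping attendees by how similarly they answered the forced
# choices. The names, answers, and threshold here are made up; the workshop's
# actual clustering method wasn't described.

from itertools import combinations

def agreement(a, b):
    """Fraction of questions on which two attendees gave the same answer."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def pair_up(answers, threshold=0.6):
    """Greedily pair attendees whose answers agree at least `threshold` of the time."""
    paired, groups = set(), []
    candidates = sorted(combinations(answers, 2),
                        key=lambda ab: -agreement(answers[ab[0]], answers[ab[1]]))
    for a, b in candidates:
        if a not in paired and b not in paired and \
           agreement(answers[a], answers[b]) >= threshold:
            groups.append((a, b))
            paired.update((a, b))
    return groups

if __name__ == "__main__":
    # 1 = chose the first candidate, 0 = the second, for five sample questions.
    answers = {
        "Alice": [1, 0, 1, 1, 0],
        "Bob":   [1, 0, 1, 0, 0],
        "Cara":  [0, 1, 0, 0, 1],
    }
    print(pair_up(answers))  # [('Alice', 'Bob')]
```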

At my table, one woman felt strongly that the whole exercise was wrong; she embraced the principle that all lives are of equal value. Our society often does not treat them that way, and one reason is obvious: most people, put in charge of a kidney allocation system, want things arranged so that if they themselves ever need a kidney, they will get one.

Instinct isn't always a good guide, either. Many people, used to thinking in terms of protecting children and dismissing old people on the grounds that "they've had their chance at life", automatically opt to give the kidney to the younger person. Granted, I'm 64, and see above paragraph, but even so: as distressing as it is to the parents, a baby can be replaced very quickly with modest effort. It is *very* expensive and time-consuming to replace an 85-year-old. It may even be existentially dangerous, if that 85-year-old is the one holding your society's institutional memory. A friend advises that this is a known principle in population biology.

The more interesting point, to me, was discovering that this exercise really wasn't any more lifelike than the Moral Machine. It seemed more reasonable because, unlike the driver in the crashing car, kidney patients have years of documentation of their illness, and there is time for them, their families, and their friends to fill in further background. The people deciding the kidney's destination are much better informed, and they are working in the all-too-familiar scenario of allocating scarce resources. And yet: it's the same conundrum, and in the end how many of us want a machine, rather than a human, to decide whether we live or die?

Someone eventually asked: what if we become able to produce an oversupply of kidneys? That only solves the top layer of the problem. Each operation has costs in surgeons' time, medical equipment, nursing care, and hospital infrastructure. Absent a disruptive change in medical technology, it's hard to imagine it will ever be easy to give a kidney to everyone who needs one. Put it in terms of food: we actually do grow enough food to feed everyone, but it's not evenly distributed, so in some areas we have massive waste and in others horrible famine (and in some places, both).

Moving to current practice: in a Guardian article, Eubanks documents the similar conundrums confronting those struggling to allocate low-income housing, welfare, and other basic needs to poor people in the US in a time of government "austerity". The social workers, policy makers, and data scientists doing these jobs have to make decisions that, like the kidney and driving examples, have life-or-death consequences. In this case, as Eubanks puts it, they decide who gets helped among "the most exploited and marginalized people in the United States". The automated systems Eubanks encounters do not lower barriers to these programs as promised and, she writes, they obscure the political choices that created these social problems in the first place. Automating the response doesn't change those.


Illustrations: Project screenshot.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.
