
Life forms

Would you rather be killed by a human or a machine?

At this week's Royal Society meeting on AI and Society, Chris Reed recounted asking this question of an audience in Singapore. They all picked the human, even though they knew it was irrational, because they thought at least they'd know *why*.

A friend to whom I related this had another theory: maybe they thought there was a chance they could talk the human killer out of it, whereas the machine would be implacable. It's possible.

My own theory pins this distaste for machine killing on a different, crucial underlying factor: a sense of shared understanding. The human standing over you with the axe or driving the oncoming bus may be a professional paid to dispatch you, a serial killer, an angry ex, or mentally ill, but they all have a personal understanding of what a human life means because they all have one they know they, too, will one day lose. The meaning of removing someone else's life is thoroughly embedded in all of us. Not having that is more or less the definition of a machine, or was until Philip K. Dick and his replicants. But there is no reason to assume that every respondent had the same reason.

Similarly, a commenter in the audience described an Accenture poll he had encountered on Twitter asking whether respondents favored AI making health decisions. When he checked the results, 69% had said no. Here again, the death of a patient by medical mistake keeps a human doctor awake at night (if television is to be believed), while to a machine it's a statistic, no matter how heavily weighted in its inner backpropagating neural networks.

These two anecdotes resonated because earlier, Marion Oswald had opened her talk by asking whether, like Peter Godfrey-Smith's observation of cephalopods, interacting with AI was the closest we can come to interacting with an intelligent alien. Arguably, unless the aliens are immortal, on issues of life and death we can actually expect to have more shared understanding with them, as per above, than with machines.

The primary focus of Oswald's talk was actually to discuss her work studying HART, an algorithmic model used by Durham Constabulary to decide whether offenders qualified for deferred prosecution and help with their problems. The study raises all sorts of questions we're going to have to consider over the coming years about the role of police in society.

These issues were somewhat taken up later by Mireille Hildebrandt, who warned of the risks of transforming text-driven law - the messy stuff centuries of court cases have contested and interpreted - into data-driven law. Allowing that to happen, she argued, transforms law into administration. "Contestability is the heart of the rule of law," she said. "There is more to the law than predictability and expedience." A crucial part of that is being able to test the system, and here Hildebrandt was particularly gloomy: although legal systems that comb the legal corpus are currently being marketed as aids for lawyers, she views it as inevitable that at some point they will become replacements. Some time after that, the skills needed to test the inner workings of these systems will have vanished from the firms that own them.

At the annual We Robot conference, a recurring theme is the hard edges of computer systems, an aspect Ellen Ullman examined closely in her 1997 book, Close to the Machine. In Bill Smart's example, to a human the difference between 59.99 miles an hour and 60.01 miles an hour is indistinguishable, but to a computer fitted with the right sensors the difference is a speeding ticket. An insufficiently discussed aspect of this is that all biological beings have some level of unpredictability. Robots and AI with far greater sensing precision than is available to humans will respond to changes we can't detect, making them appear less predictable, and therefore more intelligent, than they actually are. This is a deception we will have to learn to decode.
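Smart's speed-limit example can be sketched in a few lines of code. This is a minimal illustration, not anyone's actual enforcement system; the limit value and function name are invented for the purpose:

```python
SPEED_LIMIT_MPH = 60.0  # hypothetical statutory limit

def issues_ticket(measured_speed_mph: float) -> bool:
    """A machine's hard edge: any reading above the limit is a violation.

    To a human observer, 59.99 and 60.01 mph are the same speed;
    to this rule they fall on opposite sides of a sharp boundary.
    """
    return measured_speed_mph > SPEED_LIMIT_MPH

print(issues_ticket(59.99))  # False
print(issues_ticket(60.01))  # True
```

The point is that the boundary is exact by construction: nothing in the code can represent "close enough", so the system's behavior flips discontinuously at a line humans can't even perceive.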

Already, machines that are billed as tools to aid human judgement are often trusted far more than they should be. Danielle Citron's 2008 paper Technological Due Process studied this in connection with benefits scoring systems in Texas and California, and found two problems. First, humans tended to trust the machine's decisions rather than apply their own judgement, a problem Hildebrandt referred to as "judgemental atrophy". Second, computer programmers are not trained lawyers, and are therefore not good at accurately translating legal text into decision-making systems. How do you express a fuzzy but widely understood and often-used standard like the UK's "reasonable person" in computer code? You'd have to precisely define the attopoint at which "reasonable" abruptly flicks to "unreasonable".

Ultimately, Oswald came down against the "intelligent alien" idea: "These are people-made, and it's up to us to find the benefits and tackle the risks," she said. "Ignorance of mathematics is no excuse."

That determination rests on the notion that the people building AI systems and the people using them have shared values. We already know that's not true, but even so: I vote less alien than a cephalopod on everything but the fear of death.

Illustrations: Cephalopod (via Obsidian Soul); Marion Oswald.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

