
Humans in, bugs out

At the Guardian, John Naughton ponders our insistence on holding artificial intelligence and machine learning to a higher standard of accuracy than the default standard - that is, us.

Sure. Humans are fallible, flawed, prejudiced, and inconsistent. We are subject to numerous cognitive biases. We see patterns where none exist. We believe liars we like and distrust truth-tellers for picayune reasons. We dislike people who tell unwelcome truths and like people who spread appealing, though shameless, lies. We self-destruct, and then complain when we suffer the consequences. We evaluate risk poorly, fearing novel and recent threats more than familiar and constant ones. And on and on. In 10,000 years we have utterly failed to debug ourselves.

My inner failed comedian imagines the frustrated AI engineer muttering, "Human drivers kill 40,000 people in the US alone every year, but my autonomous car kills *one* pedestrian *one* time, and everybody gets all 'Oh, it's too dangerous to let these things out on the roads'."

The new always scares people. But it seems natural to require new systems to do better than their predecessors; otherwise, why bother?

Part of the problem with Naughton's comparison is that machine learning and AI systems aren't really separate from us; they're humans all the way down. We create the algorithms, code the software, and allow them to mine the history of flawed human decisions, from which they make their new decisions. If humans are the problem with human-made decisions, then we are at least as much the problem with machine-made decisions.

I also think Naughton's frustrated AI researchers have a few details the wrong way round. While it's true that self-driving cars have driven millions of miles with very few deaths and human drivers were responsible for 36,560 deaths in 2018 in the US alone, it's *also* true that it's still rare for self-driving cars to be truly autonomous: Human intervention is still required startlingly often. In addition, humans drive in a far wider variety of conditions and environments than self-driving cars are as yet authorized to do. The idea that autonomous vehicles will be vastly safer than human drivers is definitely an industry PR talking point, but the evidence is not there yet.

We'd also note that a clear trend in AI books this year has been to expose all the places where "automated" systems are really "last-mile humans". In Ghost Work, Mary L. Gray and Siddharth Suri document an astonishing array of apparently entirely computerized systems in which remote humans intervene in all sorts of unexpected ways through task-based employment, while in Behind the Screen Sarah T. Roberts studies the specific case of the raters of online content. These workers are largely invisible (hence "ghost") because the companies that hire them, via subcontractors, think it sounds better to claim their work is really AI.

Throughout "automation's last mile", humans invisibly rate online content, check that the Uber driver picking you up is who they're supposed to be, and complete other tasks too hard for computers. As Janelle Shane writes in You Look Like a Thing and I Love You, the narrower the task you give an AI, the smarter it seems. Humans are the opposite: no one thinks we're smart while we're getting bored by small, repetitive tasks; it's the creative struggle of finding solutions to huge, complex problems that signals brilliance. Some of AI's most ardent boosters like to hope that artificial *general* intelligence will be able to outdo us in solving our most intractable problems, but who is going to invent that? Us, if it ever happens (and it's unlikely to be soon).

There is also a problem with scale and replication. While a single human decision may affect billions of people, there is always a next time when it will be reconsidered and reinterpreted by a different judge who takes into account differences of context and nuance. Humans have flexibility that machines lack, while computer errors can be intractable, especially when bugs are produced by complex interactions. The computer scientist Peter Neumann has been documenting the risks of over-relying on computers for decades.

However, a lot of our need for computers to prove themselves to a superhuman standard is social, cultural, and emotional. AI adds a layer of remoteness and removes some of our sense of agency. With humans, we think we can judge character, talk them into changing their mind, or at least get them to explain the decision. At a 2017 event, the legal scholar Mireille Hildebrandt differentiated between law - flexible, reinterpretable, modifiable - and administration, which is what you get if a rules-based expert computer system is in charge. "Contestability is the heart of the rule of law," she said.

At the very least, we hope that the human has enough empathy to understand the impact their decision will have on their fellow human, especially in matters of life and death.

We give the last word to Agatha Christie, who decisively backed humans in her 1969 novel Hallowe'en Party, in which her alter ego, Ariadne Oliver, tells Hercule Poirot, "I know there's a proverb which says, 'To err is human' but a human error is nothing to what a computer can do if it tries."


Illustrations: Artist Dominic Wilcox's concept self-driving car (as seen at the Science Museum, July 2019).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


