
Insert a human

Robots have stopped being robots. This is a good thing.

This is my biggest impression of this year's We Robot conference: we have moved from the yay! robots! of the first year, 2012, through the depressed doldrums of "AI" systems that make the already-vulnerable more vulnerable circa 2018 to this year, when the phrase that kept twanging was "sociotechnical systems". For someone with my dilettantish conference-hopping habit, this seems like the necessary culmination of a long-running trend away from robots as autonomous mobile machines to robots/AI as human-machine partnerships. We Robot has never talked much about robot rights, instead focusing on considering the policy challenges that arise as robots and AI become embedded in our lives. This is realism; as We Robot co-founder Michael Froomkin writes, we're a long, long way from a self-aware and sentient machine.

The framing of sociotechnical systems is a good thing in part because so much of what passes for modern "artificial intelligence" is humans all the way down, as Mary L. Gray and Siddharth Suri documented in their book, Ghost Work. Even the companies that make self-driving cars, which a few years ago were supposed to be filling the streets by now, are admitting that full automation is a long way off. "Admitting" as in consolidating or being investigated for reckless hyping.

If this was the emerging theme, it started with the first discussion, of a paper on humans in the loop by Margot Kaminski, Nicholson Price, and Rebecca Crootof. Too often, the proposed policy fix for problems with decision-making systems is to insert a human - a "solution" they called the "MABA-MABA trap", for "Machines Are Better At / Men Are Better At". While humans and machines obviously have differing capabilities - people are creative and flexible, machines don't get bored - just dropping in a human without considering what role that human is going to fill doesn't necessarily take advantage of the best capabilities of either. Hybrid systems are of necessity more complex - this is why cybersecurity keeps getting harder - but policy makers may not take this into account or think clearly about what the human's purpose is going to be.

At this conference in 2016, Madeleine Clare Elish foresaw that the human would become a moral crumple zone or liability sponge, absorbing blame without necessarily being at fault. No one will admit that this is the human's real role - but it seems an apt description of the "safety driver" watching the road, trying to stay alert in case the software driving the car needs backup, or the poorly-paid human given a scoring system and tasked with awarding welfare benefits. What matters, as Andrew Selbst said in discussing this paper, is the *loop*, not the human - and that may include humans with invisible control, such as someone who massages the data they enter into a benefits system in order to help a particularly vulnerable child, or humans with wide discretion, such as a judge who is ultimately responsible for parole decisions no matter what the risk assessment system says.

This is not the moment to ask what constitutes a human.

It might be, however, the moment to note the commentator who said that a lot of the problems people are suggesting robots/AI can solve have other, less technological solutions. As they said, if you are putting a pipeline through a community without its consent, is the solution to deploy police drones to protect the pipeline and the people working on it - or is it to put the pipeline somewhere else (or to move to renewables and not have a pipeline at all)? Change the relationship with the community and maybe you can partly disarm the police.

One unwelcome forthcoming issue, discussed in a paper by Kate Darling and Daniella DiPaola, is the threat that merging automation and social marketing poses to consumer protection. A truly disturbing note came from DiPaola, who investigated manipulation and deception using personal robots with 75 children. The children had three options: no ads, ads allowed only if they are explicitly disclosed to be ads, or advertising through casual conversation. The kids chose casual conversation because they felt it showed the robot *knew* them. They chose this even though they knew the robot was intentionally designed to be a "friend". Oy. In a world where this attitude spreads widely and persists into adulthood, no amount of "media literacy" or learning to identify deception will save us; these programmed emotional relationships will overwhelm all that. As DiPaola said, "The whole premise of robots is building a social relationship. We see over and over again that it works better if it is more deceptive."

There was much more fun to be had - steamboat regulation as a source of lessons for regulating AI (Bhargavi Ganesh and Shannon Vallor), police use of canid robots (Carolin Kemper and Michael Kolain), and - a new topic - planning for the end of life of algorithmic and robot systems (Elin Björling and Laurel Riek). The robots won't care, but the humans will be devastated.

Illustrations: Hanging out at We Robot with Boston Dynamics' "Spot".

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.
