
Humans all the way down

"I don't think my robot is like a blender," said Olivier Guilhem. He was referring to the French civil code, in which the key differentiator is whether or not an object has moving parts. Robot, blender, 18-ton truck: the law treats them all the same. But only the robot can hold its maker's hand while being walked around a conference, a sight at least one attendee at last week's We Robot found "creepy". Plus, the truck can't rifle through your drawers and upload images of the contents, and probably doesn't have a cute name like "Pepper". (Though it does look a lot like a ceramic pepper shaker.)

Guilhem was talking about his company, Aldebaran Robotics. Pepper is a Rorschach robot: apparently female to Americans, male to Asians. Biological supremacists - the it's-just-a-fancy-hammer school - see "it". But the US has ruled that corporations have free speech rights; why not robots, anthropomorphized or not? In their paper (PDF), Helen Norton and Toni Massaro argue that the law presents few barriers: the rights of listeners, not just speakers, matter here.

For Guilhem, Pepper is a friend; he espouses ethics-by-design: "We have no intention of creating a spy robot". I'd love to think he's right, but until security researchers bang on their designs for a while there's no telling how their intentions - to build interactive humanoid robots that understand emotions and adapt their behavior to your mood - can be subverted. Could Pepper work as a palm reader?

There has been much debate, both this and previous years, and elsewhere, about humans in the loop, particularly for life-critical decisions. "Killer robots!" makes great headlines. A growing body of literature, such as Ian Kerr's 2013 Prediction, Preemption, Presumption, explores the inscrutability of black-box Big Data systems and exposes the ways history and human creators embed biases, gaps, and prejudices within "neutral" technology. We want to believe that Peter Asaro's imaginary robocop (PDF) would conduct stop-and-search more fairly, but, as Asaro argues, it probably won't. Often, design specifications simply aren't broad enough - Siri can't understand braid Scots and, for a time, could give plentiful advice to people having a heart attack but had none for people who had been raped. So, Asaro asked, how do we make robocop care about #blacklivesmatter?

A Jamaican lawyer responded by asking how we can eliminate racism when we don't really understand what it is. "Why is the robot white?" she asked by way of illustrating this uncertainty. Pepper's manufacturer quickly tweeted: "It looks clean." Well, OK - but is that a sign of racism, germophobia, or design sensibilities too limited to imagine coverings such as cork, velcro, or fabric, all of them both more entertaining and more useful for attaching things? A Pepper in my house would probably be covered with Day-Glo colored Post-It notes and bits of tape. Or why couldn't the outer shell be customized to double as a dressmaker's dummy? Make yourself useful, 'bot.

Mary Anne Franks observed, in discussing Asaro's paper, that these concerns make her skeptical of techno-optimism: "You can't take yourself completely out of the system. It's humans all the way down." Removing the "race" variable doesn't help; others act as proxies, just as in Glasgow prospective employers can tell a job applicant's religion from which school they attended. This discussion deeply frustrated one commenter, who said, "I've been told all my life that I'm broken and can't see bias." After noting he was raised in Latin America, he went on, "I thought I would be able to code blind and defeat it with technology until I read this paper. Thank you...and screw you!"
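To see why "coding blind" fails, here is a minimal sketch in Python using invented toy data (the school names and records are hypothetical, echoing the Glasgow example): even after the protected attribute is deleted, a proxy column carries the same signal.

    # Hypothetical toy data: "school" acts as a proxy for the protected
    # "religion" attribute, as in the Glasgow hiring example above.
    from collections import Counter

    applicants = [
        {"school": "St. Mary's",    "religion": "Catholic",   "hired": False},
        {"school": "St. Mary's",    "religion": "Catholic",   "hired": False},
        {"school": "Hillhead High", "religion": "Protestant", "hired": True},
        {"school": "Hillhead High", "religion": "Protestant", "hired": True},
    ]

    # "Code blind": drop the protected attribute from every record.
    blinded = [{k: v for k, v in a.items() if k != "religion"}
               for a in applicants]

    # The proxy still splits the outcomes exactly as the removed column
    # did, so anything trained on the "blinded" data learns the same bias.
    print(Counter((a["school"], a["hired"]) for a in blinded))
    # Counter({("St. Mary's", False): 2, ('Hillhead High', True): 2})

Deleting the column changes nothing about the correlations left behind; any system fitted to the remaining data reconstructs the bias from the proxy.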

Besides, most robots and AI systems are designed to nudge interacting humans toward particular behavior. Cars won't start if you're not wearing your seatbelt; we frame queries to suit search engines; elder care robots ensure their charges take their pills. "Neutral" or not, behind each robot are designers convinced they have identified our best choices.

The human-robot interface is this year's emerging problem. Humans pose problems for robots in all sorts of ways, not just in the design phase. Harry Surden, in presenting Mary Anne Williams's paper on autonomous cars, asked how you know a robot has perceived you. Google's self-driving cars have gotten into accidents because the human drivers around them expect them to behave less cautiously than they do; or they get stuck at crosswalks because, programmed to pull back for pedestrians, they are perceived as "weak" and humans flood the street.

At last May's Royal Society meeting on AI, the predominant imagined future was human-machine partnerships. Data & Society researcher Madeleine Elish has conceptualized this kind of prospect as moral crumple zones (PDF). The idea derives from cars, whose crumple zones absorb the impact of collisions. In human-machine (or human-robot) systems, Elish argues, the humans will bear the brunt of the blame, like today's customer service representatives. Different designs handle hand-off between human and robot differently, but almost always the port of last resort is humans, who take over the most complex, ongoing crises when machines have already failed to solve them. This stacked deck inevitably makes human performance look worse.

"Crumple zone" is so evocative because although syntactically "shock absorbers" is more correct, it captures accurately the feelings of low-level corporate employees caught between the angry frustration of customers and the unyielding demands of employers: squished.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

