
Just a very clever bacterium

A couple of weeks back, the thinktank Cybersalon hosted a discussion derived, in part, from the back-and-forth net.wars had with Bill Thompson a few months back, when the TV series Humans convinced him that it would be morally wrong to embed Isaac Asimov's First Law of Robotics into the brain of a sentient robot. Thompson, I think, was feeling for the poor robot, trapped between sentience and its programmed-in cage; but I suppose there's also an argument to be made about the effect on natural-born humans of having a fully conscious thing it's OK to be cruel to.

In the event's expanded discussion, Martin Smith, the head of the UK's Cybernetics Society, noted the substantial percentage of DNA (a number I now can't find) that we share with bacteria; the percentage rises as you move through the animal kingdom to mammals and primates - we're about 95% the same as a chimpanzee. Even a fruit fly is about 60% the same as we are. Smith felt, therefore, that we might just as well recognize that we're "just a very clever bacterium". It's a little glib, given that each of us is actually home to a complex ecosystem of billions of bacteria, but OK.

Smith argued that while robots are getting closer to us, we're simultaneously getting closer to them, invoking examples such as pacemakers and other implanted devices that keep us alive or restore failing functions (or, of course, augment them, as demonstrated at a previous Cybersalon event), the sort of thing former head of BT research Peter Cochrane has also been saying for a decade or few. I recall, from his 2004 book, Uncommon Sense, a conversation with his wife in which he tried to get her to pinpoint the exact point at which replacement parts would make him no longer himself.

If one thing became clear to me in this discussion, it was that artificial intelligence - if we can ever agree that we've achieved it - will have less in common with our own intelligence than a bacterium's genome does with ours. For one thing, as Smith said, while we have five senses, some of which AI may never share, there's no reason for AI to be limited to five - it's easy to reel off a few dozen senses we could embed in AI-bearing gadgets that we don't have, or have in only limited ways: GPS, accelerometers, thermometers, chemical testers...all sorts of things.

But more important, the range of what the AI community thinks of as "intelligence" is narrow. Satinder Gill discussed "tacit knowledge", things we know but don't know we know. How do two people walking together fall unconsciously into step with one another? How do strangers know to perform corresponding movements without discussion? How would we teach a robot to navigate these social accommodations when we don't really understand them ourselves?

A few days later, at a Royal Society event on autonomous systems, the University of Pennsylvania professor Katherine Kuchenbecker outlined another large gap: robots' lack of touch. "Why don't modern robots have this?" she asked. When someone mentions "haptics" to most roboticists, they think of force sensing, but that on its own is not enough. To prove her point, she showed a video clip of a human whose thumb and forefinger had been anesthetized and who was then asked to pick up and light a match. The result was extraordinary clumsiness. Rather than forces, Kuchenbecker's group focuses on sensing vibrations. Humans, she said, have four different kinds of mechanoreceptors in our fingertips, plus sensitivity to pain. We always know how hard our muscles are working, and these cues are an essential kind of intelligence that allows us to operate in the world.

Gill's and Kuchenbecker's comments make sense because so much of how we experience the world is determined by the bodies through which we experience it. Whether you're attractive or not, whether you are able to move lightly or not, whether your body is in physical pain or not - all of these things change the lens through which you interpret what happens to you. It's one of the reasons each of us is unique. Do Google's cars understand that the world will judge them differently if they're dented and painted purple instead of perfectly formed and black?

All of this is, I suppose, part of why I feel no particular need to learn from Humans about the ethics of how to treat synthetic, though conscious, beings (there seem more urgent things to philosophize about). I'm aware that believing it matters what substrate intelligence is located in is not the rationality expected of a skeptic. But it has to matter that so many of the experiences that make us human will not apply to AIs, however perfectly formed they may be. Thompson argues that, fiction or not, stories can still help us find ethical principles. Fair enough. But if we're not going to be in control, then when those AIs are assigned to solve the problem of climate change, figure out that the cause is too many humans, and realize that the simplest solution is to kill off half of us...you'd better hope there's an off-switch you can hit before they get to you.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter. We apologize, but comments have been disabled for the time being (11/2015) because comment-bots were hammering the server.
