Robot wars

Who'd want to be a robot right now, branded a killer before you've even really been born? This week, Huw Price, a philosophy professor, Martin Rees, an emeritus professor of cosmology and astrophysics, and Jaan Tallinn, co-founder of Skype and a serial speaker at the Singularity Summit, announced the founding of the Cambridge Project for Existential Risk. I'm glad they're thinking about this stuff.

Their intention is to build a Centre for the Study of Existential Risk (CSER). There are many threats listed in the short introductory paragraph explaining the project - biotechnology, artificial life, nanotechnology, climate change - but the one everyone seems to be focusing on is: yep, you got it, KILLER ROBOTS - that is, artificial general intelligences so much smarter than we are that they may not only put us out of work but reshape the world for their own purposes, not caring what happens to us. Asimov would weep: his whole purpose in creating his Three Laws of Robotics was to provide a device that would let him tell interesting speculative, what-if stories and get away from the then-standard fictional assumption that robots were eeeevil.

The list of advisors to the Cambridge project has some interesting names: Hermann Hauser, now in charge of a venture capital fund, whose long history in the computer industry includes founding Acorn and an attempt to create the first mobile-connected tablet (it was the size of a 1990s phone book, and you had to write each letter in an individual box to get it to recognize handwriting - just way too far ahead of its time); and Nick Bostrom of the Future of Humanity Institute at Oxford. The other names are less familiar to me, but it looks like a really good mix of talents, everything from genetics to the public understanding of risk.

The killer robots thing goes quite a way back. A friend of mine grew up in the time before television when kids would pay a nickel for the Saturday show at a movie theatre, which would, besides the feature, include a cartoon or two and the next chapter of a serial. We indulge his nostalgia by buying him DVDs of old serials such as The Phantom Creeps, which features an eight-foot, menacing robot that scares the heck out of people by doing little more than wave his arms at them.

Actually, the really eeeevil guy in that movie is the mad scientist, Dr Zorka, who creates not only the robot but also a machine that makes him invisible and another that induces mass suspended animation. The robot is really just drawn that way. But, as with CSER, what grabs your attention is the robot.

I have a theory about this, developed over the last couple of months while working on a paper on complex systems, automation, and other computing trends: it's all to do with biology. We - and other animals - are pretty fundamentally wired to see anything that moves autonomously as more intelligent than anything that doesn't. In survival terms, that makes sense: the most poisonous plant can't attack you if you're standing out of reach of its branches. Something that can move autonomously can kill you - yet it also seems more cuddly. Consider the Roomba versus a modern dishwasher: the Roomba seems the smarter of the two, and counterintuitively, it isn't.

And so it was that on Wednesday, when Voice of Russia assembled a bunch of us for a half-hour radio discussion, the focus was on KILLER ROBOTS, not synthetic biology (which I think is a much more immediately dangerous field) or climate change (in which the scariest new development is the very sober, grown-up, businesslike this-is-getting-expensive report from the insurer Munich Re). The conversation was genuinely interesting, roaming from the mysteries of consciousness to the problems of automated trading and the 2010 flash crash. Pretty much everyone agreed that there really isn't sufficient evidence to predict a date at which machines might be intelligent enough to pose an existential risk to humans. You might be worried about self-driving cars, but they're likely to be safer than drunk humans.

There is a real threat from killer machines; it's just that super-human intelligence or consciousness isn't the source of it. Last week, Human Rights Watch and the International Human Rights Clinic published Losing Humanity: The Case Against Killer Robots, arguing that governments should act pre-emptively to ban the development of fully autonomous weapons. There is no way, the paper argues, for autonomous weapons (which the military wants so that fewer of *our* guys have to risk getting killed) to distinguish reliably between combatants and civilians.

There were some good papers on this at this year's We Robot conference from Ian Kerr and Katie Szilagyi (PDF) and Markus Wagner.

From various discussions, it's clear that you don't need to wait for *fully* autonomous weapons to reach the danger point. In today's partially automated systems, the operator may be under pressure to make a decision in seconds, and "automation bias" means the human will most likely accept whatever the machine suggests - the military equivalent of clicking OK. The human in the loop isn't as much of a protection against the humans designing these things as we might hope. Dr Zorka, indeed.
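
To make that mechanism concrete, here's a toy simulation of automation bias - mine, not anything from the HRW paper, and every probability in it is an invented assumption. The shape of the result is what matters: the more often the rushed operator defaults to the machine's call, the closer the overall error rate drifts toward the machine's own, and the human in the loop adds nothing.

```python
# Toy, hypothetical model of automation bias in a human-in-the-loop system.
# All numbers are invented assumptions for illustration, not measurements.
import random

MACHINE_ACCURACY = 0.90   # assumed: how often the machine's suggestion is right
HUMAN_ACCURACY = 0.95     # assumed: how often an unhurried reviewer is right
P_RUBBER_STAMP = 0.85     # assumed: chance a rushed operator just clicks OK

def flip(label: str) -> str:
    return "civilian" if label == "combatant" else "combatant"

def one_decision(rushed: bool) -> bool:
    """True if the human-approved decision matches the ground truth."""
    truth = random.choice(["combatant", "civilian"])
    suggestion = truth if random.random() < MACHINE_ACCURACY else flip(truth)
    if rushed and random.random() < P_RUBBER_STAMP:
        decision = suggestion  # automation bias: accept the machine unexamined
    else:
        decision = truth if random.random() < HUMAN_ACCURACY else flip(truth)
    return decision == truth

def error_rate(rushed: bool, trials: int = 100_000) -> float:
    return sum(not one_decision(rushed) for _ in range(trials)) / trials

if __name__ == "__main__":
    print(f"unhurried review error rate: {error_rate(False):.3f}")
    print(f"seconds-to-decide error rate: {error_rate(True):.3f}")
```

With these made-up figures the rushed error rate roughly doubles; the numbers are arbitrary, but the direction of the effect is the point.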

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series
