
The end

At the Royal Society's scientific meeting on machine learning on May 22, it was notable how many things the assembled scientists thought machine learning was going to bring to an end: the end of today's computer processor architectures (Simon Knowles, CTO of XMOS, and Oxford researcher Simon Benjamin); the end of the primacy of the program (also Knowles); the end of programming (Christopher Bishop); the end of the Royal Society (Nick Bostrom); and, quite possibly, the end of humanity (also Bostrom).

The last idea, which we'll call "terminator theory", was out of sync with the generally optimistic tone, which held that machine learning has the potential to...well, all the usual economic and social transformations that tend to be predicted for all technological breakthroughs. Still: you have to like an AI conference that talks more about what *we* will achieve in building and partnering with AIs than about what AI can achieve with or without us, and whether they will like us afterwards.

Nick Jennings stressed the value of partnering experienced humans with machine learning systems to allocate resources in crisis situations. Bishop's work on recommendation and matching systems - movies and subscribers, game players with each other based on constantly updating skill levels - is meant to serve us, if only to get us to consume entertainment.

Bostrom, who runs the Future of Humanity Institute at Oxford and therefore specializes in identifying existential threats, was the only one to seem worried that sufficiently advanced AIs - "superintelligences" - might prefer to run things to suit themselves. The only other speaker who came close was Demis Hassabis from (now Google) DeepMind, who said his company has two goals: "Crack intelligence. Then use it to solve everything else." Modest dreams are no fun.

Note that no one (not even Bostrom, or at least, not clearly) was talking about creating *consciousness*. This was a scientific meeting on machine learning to outline where we are, what happens next, what's needed, and how to use the results to date. Where we are is not the "artificial general intelligence" Hassabis wants to build, even though DeepMind has some algorithms that have impressively taught themselves superhuman expertise at a series of old computer games. A former child chess prodigy himself, Hassabis watched IBM's Deep Blue beat Garry Kasparov at chess in 1996 and came away more impressed by Kasparov, who managed to be competitive but could also speak three languages, drive cars, and tie shoelaces. Deep Blue "had to be reprogrammed to play noughts and crosses." (For Americans: tic tac toe.)

Still: Hassabis's games-player, given the chance to play long enough (overnight, hundreds of games), has found strategies its programmers didn't know about. Geoff Hinton and others he referenced have created systems that can reliably recognize the subjects of photographs and apply captions. We all know about voice recognition and driverless cars - to the point that, to borrow a quip I believe is from Bostrom's colleague Anders Sandberg, we no longer think of those functions as "AI" because they work (mostly).

In showing off a system that can create captions for photographs with surprising reliability, however, Hinton noted the next question. The machine looks at a picture of a toddler asleep cuddling a stuffed animal and describes it as: "child holding stuffed animal". This machine is good at objects, but not so good at relationships. So: when it says "holding", is that because it sees the relationship between the child's posture and the animal, or because its inner language model picks out "child" and "stuffed animal" and probabilistically predicts that "holding" is the correct intervening word? In other words, is it interpreting or guessing? This - like other questions of how much a machine can be said to "think" - is a problem for us, not them.
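For the curious, here is a toy sketch of the "guessing" option - mine, not Hinton's system, and with probabilities invented purely for illustration: once an object detector has produced "child" and "stuffed animal", a language model could fill in the verb from word statistics alone, without looking at the picture at all.

    # Toy illustration, not any real captioning system: the verb is chosen purely
    # from (invented) language statistics, with no reference to the image.
    verb_given_objects = {
        ("child", "stuffed animal"): {
            "holding": 0.62, "hugging": 0.25, "throwing": 0.08, "eating": 0.05,
        },
    }

    def guess_caption(obj1, obj2):
        # Pick the most probable intervening verb - guessing, not interpreting.
        verbs = verb_given_objects[(obj1, obj2)]
        return f"{obj1} {max(verbs, key=verbs.get)} {obj2}"

    print(guess_caption("child", "stuffed animal"))  # "child holding stuffed animal"

If the model works this way, the caption can be right without the machine "seeing" the relationship at all.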

The big winner in all this is Bayes' theorem, which is everywhere because every one of these systems has to deal with uncertainty. This is the difference between trying to program a machine to handle every conceivable eventuality (which beyond the simplest situations is infeasible) and giving a machine a set of rules for handling uncertainty, which is what Bayes is all about.
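To make that concrete, here is a minimal sketch of a single Bayesian update in Python; the scenario and the numbers are invented, not taken from any of the speakers.

    # Bayes' theorem as a rule for uncertainty:
    # P(hypothesis | evidence) = P(evidence | hypothesis) * P(hypothesis) / P(evidence)
    def bayes_update(prior, likelihood_if_true, likelihood_if_false):
        # Probability of seeing the evidence at all, then the updated belief.
        evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
        return likelihood_if_true * prior / evidence

    # Example: the system is 90% sure a photo shows a cat, then spots a feature
    # that cats show 60% of the time and non-cats only 20% of the time.
    print(round(bayes_update(0.9, 0.6, 0.2), 3))  # 0.964

The machine never needs a hand-written rule for that exact photo; it just updates its belief as the evidence arrives.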

About a month ago, NPR's Planet Money ran a face-off between a robot and a human in three areas: folding laundry (human won), therapy for those recovering from trauma and depression (robot won), and radio news reporting. Planet Money awarded the robot a win for reporting: the robot took two minutes and change to create a perfectly accurate and correctly highlighted report from financial data; the human, chosen for his speed, took seven and something. Yet there's no question that if you're a radio station wishing to keep your car-bound listeners during commute time, you'd run the human-written version: it was colorful and entertaining as well as accurate. It told you things humans, as opposed to high-speed traders, would enjoy hearing even if they didn't care that much about the quarterly results of that specific restaurant chain.

That aside, the most interesting conclusion of the program: the earlier in life a skill can be learned by a human, the harder it is to teach a robot. Put all these things together, and the logical conclusion is obvious: this means the end of shoelaces. Note that it's already started...


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

