
Late, noisy, and wrong

"All sensors are terrible," Bill Smart and Cindy Grimm explained as part of a pre-conference workshop at this year's We Robot. Smart, an engineer at Oregon State with prior history here, loves to explain why robots and AI aren't as smart as people think. "Just a fancy hammer," he said the first year.

Thursday's target was broad: the reality of sensors, algorithms, and machine learning.

One of his slides read:


  • It's all just math and physics.

  • There is no intelligence.

  • It's just a computer program.

  • Sensors turn physics into numbers.

That last one is the crucial bit, and it struck me as surprising only because in all the years I've read about and glibly mentioned sensors and how many there are in our phones, they've never really been explained to me. I'm not an electrical engineering student, so like most of us, I wave around the words. Of course I know that digital means numbers, and that computers do calculations with numbers, not fuzzy things like light and sound, and that therefore the camera in my phone (which is a sensor) is storing values describing light levels rather than photographing light in the way that analogue film did. But I don't - or didn't until Thursday - really know what sensors actually measure. For most purposes, it's OK that my understanding is...let's call it abstract. But it does make it easy to overestimate what the technology can do now and how soon it will be able to fulfil the fantasies of mad scientists.

Smart's point is that when you start talking about what AI can do - whether or not you're using my aspirational intelligence recasting of the term - you'd better have some grasp of what it really is. It means the difference between a blob on the horizon that can be safely ignored and a woman pushing a bicycle across a roadway in front of an oncoming LIDAR-equipped Uber self-driving car.

So he begins with this: "All sensors are terrible." We don't use better ones either because such a thing does not exist or because they're too expensive. They are all "noisy, late, and wrong" and "you can never measure what you want to."

What we want to measure are things like pressure, light, and movement, and because we imagine machines as analogues of ourselves, we want them to feel the pressure, see the light, and understand the movement. However, what sensors can measure is electrical current. So we are always "measuring indirectly through assumptions and physics". This is the point AI Weirdness makes too, more visually, by showing what happens when you apply a touch of surrealism to the pictures you feed through machine learning.
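To make that concrete, here's a sketch of my own - not Smart's, and every number in it is an assumption - of what "measuring indirectly through assumptions and physics" looks like for something as mundane as a temperature sensor. The chip never hands you a temperature; it hands you a raw count from an analogue-to-digital converter, and the "temperature" is inferred through an assumed reference voltage, an assumed resistor, and the manufacturer's model of the part:

    import math

    # Hypothetical thermistor circuit: every constant below is an assumption.
    ADC_MAX = 4095          # 12-bit analogue-to-digital converter
    V_REF = 3.3             # assumed reference voltage, volts
    R_FIXED = 10_000.0      # assumed divider resistor, ohms
    R_NOMINAL = 10_000.0    # thermistor resistance at 25C, per the datasheet
    BETA = 3950.0           # thermistor Beta coefficient, per the datasheet
    T_NOMINAL_K = 298.15    # 25C in kelvin

    def adc_to_celsius(count: int) -> float:
        """The sensor reports a number; the 'temperature' is inference."""
        voltage = count / ADC_MAX * V_REF                # assumes a linear, noise-free converter
        r_therm = R_FIXED * voltage / (V_REF - voltage)  # assumes an ideal voltage divider
        # Beta model: 1/T = 1/T0 + ln(R/R0)/B
        inv_t = 1.0 / T_NOMINAL_K + math.log(r_therm / R_NOMINAL) / BETA
        return 1.0 / inv_t - 273.15

    print(adc_to_celsius(2048))   # roughly 25C - if every assumption above holds

Change any one of those assumptions and the "measurement" changes with it, which is exactly the point.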

He described what a sensor does this way: "They send a ping of energy into the world. It interacts, and comes back." In the case of LIDAR - he used a group of humans to enact this - a laser pulse is sent out, and the time it takes to return is a number of oscillations of a crystal. This has some obvious implications: you can't measure anything shorter than one oscillation.
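Here's what that implies, in made-up numbers of mine rather than anything from the workshop (a real unit counts far faster than this or interpolates between ticks). If the crystal oscillates at 100MHz, one tick of round-trip time is 10 nanoseconds, which works out to about a metre and a half of range - so obstacles at 10 and 10.7 metres come back as the same number:

    C = 299_792_458.0      # speed of light, metres per second
    CLOCK_HZ = 100e6       # assumed 100MHz timing crystal

    def tof_distance_m(true_distance_m: float) -> float:
        """Round-trip time quantised to whole clock ticks, then turned back into metres."""
        round_trip_s = 2 * true_distance_m / C
        ticks = round(round_trip_s * CLOCK_HZ)    # the sensor can only count whole oscillations
        return ticks / CLOCK_HZ * C / 2

    print(tof_distance_m(10.0), tof_distance_m(10.7))   # both land in the same ~1.5m bin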

Grimm explains that a "time of flight" sensor like that is what cameras - back to old Kodaks - use to auto-focus. Smartphones are pretty good at detecting a cluster of pixels that looks like a face and using that to focus on. But now let's imagine it's being used in a knee-high robot on a sidewalk to detect legs. In an art installation Smart and Grimm did, they found that it doesn't work in Portland...because of all those hipsters wearing black jeans, which soak up the pulse instead of bouncing it back.

So there are all sorts of these artefacts, and we will keep tripping over them because most of us don't really know what we're talking about. With image recognition, the important thing to remember is that the sensor is detecting pixel values, not things - and a consequence of that is that we don't necessarily know *what* the system has actually decided is important and we can't guarantee what it might be recognizing. So turn machine learning loose on a batch of photos of Audis, and if they all happen to be photographed at the same angle the system won't recognize an Audi photographed at a different one. Teach a self-driving car all the roads in San Francisco and it still won't know anything about driving in Portland.

That circumscription is important. Train a machine learning system on a set of photos of Abraham Lincoln and a zebra fish, and you get a system that can't imagine that what it's looking at might be a cat. The computer - which, remember, is working with an array of numbers - looks at the numbers in the array and, based on what it has identified as significant in previous runs, makes the call for whichever known category is closest. It's numbers in, numbers out, and we can't guarantee what it's "recognizing".
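As a toy illustration of that - mine, not anything shown at the workshop - here is the whole "recognition" step reduced to its essentials: flatten each image into a vector of pixel values, keep one averaged vector per label seen in training, and answer with whichever stored vector is closest. There is no "none of the above": show it a cat and it will still say Lincoln or zebra fish, whichever array of numbers happens to be nearer.

    import numpy as np

    # Toy nearest-prototype "recogniser": it knows only arrays of pixel values.
    class PixelMatcher:
        def __init__(self):
            self.prototypes = {}    # label -> mean pixel vector

        def fit(self, label: str, images: np.ndarray):
            # images: (n, height, width); store one averaged vector per label
            self.prototypes[label] = images.reshape(len(images), -1).mean(axis=0)

        def predict(self, image: np.ndarray) -> str:
            v = image.reshape(-1)
            # Always answers with the nearest stored label - never "I don't know"
            return min(self.prototypes,
                       key=lambda label: np.linalg.norm(v - self.prototypes[label]))

    rng = np.random.default_rng(0)
    matcher = PixelMatcher()
    matcher.fit("lincoln", rng.random((20, 8, 8)))
    matcher.fit("zebra fish", rng.random((20, 8, 8)))
    print(matcher.predict(rng.random((8, 8))))   # a "cat" still comes back as one of the two

Real systems extract cleverer features than raw pixel averages, but the shape of the decision is the same: numbers in, nearest numbers out.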

A linguistic change would help make all this salient. LIDAR does not "see" the roadway in front of the car that's carrying it. Google's software does not "translate" language. Software does not "recognize" images. The machine does not think, and it has no gender.

So when Mark Zuckerberg tells Congress that AI will fix everything, consider those arrays of numbers that may interpret a clutch of pixels as Abraham Lincoln when what's there is a zebra fish...and conclude he's talking out of his ass.


Illustrations: Bill Smart at We Robot 2016.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

