Archives

April 20, 2018

Deception

werobot-pepper-head_zpsrvlmgvgl.jpg"Why are robots different?" 2018 co-chair Mark Lemley asked repeatedly at this year's We Robot. We used to ask this in the late 1990s when trying to decide whether a new internet development was worth covering. "Would this be a story if it were about telephones?" Tom Standage and Ben Rooney frequently asked at the Daily Telegraph.

The obvious answer is physical risk and our perception of danger. The idea that autonomously moving objects may be dangerous is deeply biologically hard-wired. A plant can't kill you if you don't go near it. Or, as Bill Smart put it at the first We Robot in 2012, "My iPad can't stab me in my bed." Autonomous movement fools us into thinking things are smarter than they are.

It is probably not much consolation to the driver of the crashed autopiloting Tesla or his bereaved family that his predicament was predicted two years ago at We Robot 2016. In a paper, Madeleine Elish called humans in these partnerships "Moral Crumple Zones", because, she argued, in a human-machine partnership the human absorbs the moral and legal impact when things go wrong, much as the crumple zone in a car absorbs the physical impact of a crash.

Today, Tesla is fulfilling her prophecy by blaming the driver for not getting his hands onto the steering wheel fast enough when commanded. (Other prior art on this: Dexter Palmer's brilliant 2016 book Version Control.)

As Ian Kerr pointed out, the user's instructions are self-contradictory. The marketing brochure uses the metaphors "autopilot" and "autosteer" to seduce buyers into envisioning a ride of relaxed luxury while the car does all the work. But the legal documents and user manual supplied with the car tell you that you can't rely on the car to change lanes, and you must keep your hands on the wheel at all times. A computer ingesting this would start smoking.

Granted, no marketer wants to say, "This car will drive itself in a limited fashion, as long as you watch the road and keep your hands on the steering wheel." The average consumer reading that says, "Um...you mean I have to drive it?"

The human as moral crumple zone also appears in analyses of the Arizona Uber crash. Even-handedly, Brad Templeton points plenty of blame at Uber and its decisions: the car's LIDAR should have spotted the pedestrian crossing the road in time to stop safely. He then writes, "Clearly there is a problem with the safety driver. She is not doing her job. She may face legal problems. She will certainly be fired." And yet humans are notoriously bad at the job required of her: monitoring a machine. Safety drivers are typically deployed in pairs to split the work - but also to keep each other attentive.

The larger We Robot discussion was partly about public perception of risk, based on a paper (PDF) by Aaron Mannes that discussed how easily public trust in a company or new technology can be derailed when statistically less significant incidents spark emotional public outrage. Self-driving cars may in fact be safer overall than human drivers despite the fatal crash in Arizona; among the other examples Mannes mentioned were Three Mile Island, which made the public much more wary of nuclear power, and the Ford Pinto, which spent the 1970s occasionally catching fire.

Mannes suggested that if you have that trust relationship you may be able to survive your crisis. Without it, you're trying to win the public over on "Frankenfoods".

So much was funnier and more light-hearted seven years ago, as a long-time attendee pointed out; the discussions have darkened steadily year by year as theory has become practice and we can no longer think the problems are as far away as the Singularity.

In San Francisco, delivery robots cause sidewalk congestion and make some homeless people feel surveilled; in Chicago and Durham we risk embedding automated unfairness into criminal justice; the egregious extent of internet surveillance has become clear; and the world has seen its first self-driving car road deaths. The last several years have been full of fear about the loss of jobs; now the more imminent dragons are becoming clearer. Do you feel comfortable in public spaces when there's a mobile unit pointing some of its nine cameras at you?

Karen Levy finds that truckers are less upset about losing their jobs than about automation invading their cabs, ostensibly for their safety. Sensors, cameras, and wearables that monitor them for wakefulness, heart health, and other parameters are painful and enraging to this group, who chose their job for its autonomy.

Today's drivers have the skills to step in; tomorrow's won't. Today's doctors are used to doing their own diagnostics; tomorrow's may not be. In their paper (PDF), Michael Froomkin, Ian Kerr, and Joëlle Pineau argue that automation may mean not only deskilling humans (doctors) but also a frozen knowledge base. Many hope that mining historical patient data will expose patterns that enable more accurate diagnostics and treatments. If the machines take over, where will the new approaches come from?

Worse, behind all that is sophisticated data manipulation for which today's internet is providing the prototype. When, as Woody Hartzog suggested, Rocco, your Alexa-equipped Roomba, rolls up to you, fakes a bum wheel, and says, "Daddy, buy me an upgrade or I'll die", will you have the heartlessness to say no?

Illustrations: Pepper and handler at We Robot 2016.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


April 14, 2018

Late, noisy, and wrong

"All sensors are terrible," Bill Smart and Cindy Grimm explained as part of a pre-conference workshop at this year's We Robot. Smart, an engineer at Oregon State with prior history here, loves to explain why robots and AI aren't as smart as people think. "Just a fancy hammer," he said the first year.

Thursday's target was broad: the reality of sensors, algorithms, and machine learning.

One of his slides read:


  • It's all just math and physics.

  • There is no intelligence.

  • It's just a computer program.

  • Sensors turn physics into numbers.

That last one is the crucial bit, and it struck me as surprising only because in all the years I've read about and glibly mentioned sensors and how many there are in our phones, they've never really been explained to me. I'm not an electrical engineering student, so like most of us, I wave the words around. Of course I know that digital means numbers, and that computers do calculations with numbers rather than fuzzy things like light and sound, and that therefore the camera in my phone (which is a sensor) stores values describing light levels rather than photographing light the way analogue film did. But I didn't - until Thursday - really know what sensors actually measure. For most purposes, it's OK that my understanding is...let's call it abstract. But it does make it easy to overestimate what the technology can do now and how soon it will be able to fulfil the fantasies of mad scientists.

Smart's point is that when you start talking about what AI can do - whether or not you're using my aspirational intelligence recasting of the term - you'd better have some grasp of what it really is. That grasp means the difference between a blob on the horizon that can safely be ignored and a woman pushing a bicycle across a roadway in front of an oncoming LIDAR-equipped Uber self-driving car.

So he begins with this: "All sensors are terrible." We don't use better ones because either such a thing does not exist or because they're too expensive. They are all "noisy, late, and wrong" and "you can never measure what you want to."

What we want to measure are things like pressure, light, and movement, and because we imagine machines as analogues of ourselves, we want them to feel the pressure, see the light, and understand the movement. However, what sensors can measure is electrical current. So we are always "measuring indirectly through assumptions and physics". This is the point AI Weirdness makes too, more visually, by showing what happens when you apply a touch of surrealism to the pictures you feed through machine learning.
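Here is a minimal sketch in Python of what "measuring indirectly through assumptions and physics" looks like in practice - my own toy example, not anything from the workshop, with every number hypothetical. A thermistor circuit never hands you a temperature: the chip reports ADC counts, and each step from counts to degrees leans on an assumption (reference voltage, divider resistor, the datasheet's beta coefficient), any of which can quietly be wrong.

    import math

    V_REF = 3.3           # assumed reference voltage, volts
    ADC_MAX = 4095        # assumed 12-bit ADC
    R_FIXED = 10_000.0    # assumed fixed resistor in the divider, ohms
    R_NOMINAL = 10_000.0  # assumed thermistor resistance at 25 C, ohms
    BETA = 3950.0         # assumed beta coefficient from the datasheet
    T_NOMINAL_K = 298.15  # 25 C in kelvin

    def temperature_from_counts(counts: int) -> float:
        """Raw ADC counts -> degrees C, via a chain of physics and assumptions."""
        voltage = counts / ADC_MAX * V_REF                  # counts -> volts
        resistance = R_FIXED * voltage / (V_REF - voltage)  # volts -> ohms (thermistor on the low side of the divider)
        inv_t = 1.0 / T_NOMINAL_K + math.log(resistance / R_NOMINAL) / BETA
        return 1.0 / inv_t - 273.15                         # ohms -> kelvin -> Celsius

    print(temperature_from_counts(2048))  # ~25 C, but only if every assumption above holds

The only thing the sensor ever gave us was the integer 2048; the temperature is an inference.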

He described what a sensor does this way: "They send a ping of energy into the world. It interacts, and comes back." In the case of LIDAR - he used a group of humans to enact this - a laser pulse is sent out, and the time it takes to return is measured as a number of oscillations of a crystal. This has an obvious implication: you can't measure any interval shorter than one oscillation.
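To make that arithmetic concrete, here's a toy version in Python - my numbers, not Smart's, and real units do far better with interpolation, but the quantization point stands. Range falls out of counting whole crystal oscillations during the pulse's round trip, so distances within one tick collapse to the same reading.

    C = 299_792_458.0   # speed of light, m/s
    CLOCK_HZ = 100e6    # assumed 100 MHz timing crystal

    def measured_range(true_distance_m: float) -> float:
        round_trip_s = 2 * true_distance_m / C    # out and back
        ticks = int(round_trip_s * CLOCK_HZ)      # can only count whole oscillations
        return ticks / CLOCK_HZ * C / 2           # convert the count back to metres

    for d in (10.0, 10.5, 11.0, 11.4):
        print(d, "->", round(measured_range(d), 2))
    # one tick at 100 MHz is about 1.5 m of range, so 10.5, 11.0, and 11.4 m
    # all come back as the same reading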

Grimm explained that a "time of flight" sensor like that is what cameras - back to old Kodaks - use to auto-focus. Smartphones are pretty good at detecting a cluster of pixels that looks like a face and using that to focus on. But now let's imagine it's being used in a knee-high robot on a sidewalk to detect legs. In an art installation Smart and Grimm did, they found that it doesn't work in Portland...because of all those hipsters wearing black jeans.

So there are all sorts of these artefacts, and we will keep tripping over them because most of us don't really know what we're talking about. With image recognition, the important thing to remember is that the sensor is detecting pixel values, not things - and a consequence of that is that we don't necessarily know *what* the system has actually decided is important and we can't guarantee what it might be recognizing. So turn machine learning loose on a batch of photos of Audis, and if they all happen to be photographed at the same angle the system won't recognize an Audi photographed at a different one. Teach a self-driving car all the roads in San Francisco and it still won't know anything about driving in Portland.
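A small illustration of "pixel values, not things", using synthetic 8x8 images in Python (my own construction, nothing to do with real Audis): to the computer an image is just an array of numbers, and the same object a few pixels over can be, numerically, further from the training image than an empty frame.

    import numpy as np

    def bright_square(top: int, left: int, size: int = 3) -> np.ndarray:
        """An 8x8 'photo': dark background with one bright square."""
        img = np.zeros((8, 8))
        img[top:top + size, left:left + size] = 1.0
        return img

    reference = bright_square(1, 1)           # the object as it appeared in training
    same_object_moved = bright_square(3, 4)   # same object, shifted - a new angle, say
    empty_frame = np.zeros((8, 8))            # nothing there at all

    def pixel_distance(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.abs(a - b).sum())

    print(pixel_distance(reference, same_object_moved))  # 18.0
    print(pixel_distance(reference, empty_frame))        # 9.0
    # By raw pixel arithmetic the empty frame is closer to the reference than
    # the same object in a new position: nothing in the numbers encodes "object".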

That circumscription is important. Train a machine learning system on a set of photos of Abraham Lincoln and a zebra fish, and you get a system that can't imagine the answer might be a cat. The computer - which, remember, is working with an array of numbers - looks at the numbers in the array and, based on what it has identified as significant in previous runs, picks whichever known class is closest. It's numbers in, numbers out, and we can't guarantee what it's "recognizing".
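As a sketch of that forced choice, here is a toy nearest-centroid "classifier" in Python - my construction, not anything from the workshop. Trained on only two classes, the math has no way to answer "neither", so a cat comes back as Lincoln or a zebra fish.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-ins for the arrays of numbers extracted from training photos.
    lincoln_examples = rng.normal(loc=0.2, scale=0.05, size=(50, 64))
    zebrafish_examples = rng.normal(loc=0.8, scale=0.05, size=(50, 64))

    centroids = {
        "Abraham Lincoln": lincoln_examples.mean(axis=0),
        "zebra fish": zebrafish_examples.mean(axis=0),
    }

    def classify(x: np.ndarray) -> str:
        # Pick whichever known class is numerically closest; "cat" isn't an option.
        return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

    cat_photo = rng.normal(loc=0.5, scale=0.3, size=64)  # numbers from outside both classes
    print(classify(cat_photo))  # always "Abraham Lincoln" or "zebra fish", never "cat"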

A linguistic change would help make all this salient. LIDAR does not "see" the roadway in front of the car that's carrying it. Google's software does not "translate" language. Software does not "recognize" images. The machine does not think, and it has no gender.

So when Mark Zuckerberg tells Congress that AI will fix everything, consider those arrays of numbers that may interpret a clutch of pixels as Abraham Lincoln when what's there is a zebra fish...and conclude he's talking out of his ass.


Illustrations: Bill Smart at We Robot 2016.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.