net.wars: April 2018 Archives


April 20, 2018

Deception

"Why are robots different?" 2018 co-chair Mark Lemley asked repeatedly at this year's We Robot. We used to ask this in the late 1990s when trying to decide whether a new internet development was worth covering. "Would this be a story if it were about telephones?" Tom Standage and Ben Rooney frequently asked at the Daily Telegraph.

The obvious answer is physical risk and our perception of danger. The idea that autonomously moving objects may be dangerous is deeply biologically hard-wired. A plant can't kill you if you don't go near it. Or, as Bill Smart put it at the first We Robot in 2012, "My iPad can't stab me in my bed." Autonomous movement fools us into thinking things are smarter than they are.

It is probably not much consolation to the driver of the Tesla that crashed while on Autopilot, or to his bereaved family, that his predicament was predicted two years ago at We Robot 2016. In a paper, Madeleine Elish called humans in these partnerships "Moral Crumple Zones" because, she argued, in a human-machine partnership the human absorbs all the pressure, like the crumple zone in a car.

Today, Tesla is fulfilling her prophecy by blaming the driver for not getting his hands onto the steering wheel fast enough when commanded. (Other prior art on this: Dexter Palmer's brilliant 2016 book Version Control.)

As Ian Kerr pointed out, the user's instructions are self-contradictory. The marketing brochure uses the metaphors "autopilot" and "autosteer" to seduce buyers into envisioning a ride of relaxed luxury while the car does all the work. But the legal documents and user manual supplied with the car tell you that you can't rely on the car to change lanes, and you must keep your hands on the wheel at all times. A computer ingesting this would start smoking.

Granted, no marketer wants to say, "This car will drive itself in a limited fashion, as long as you watch the road and keep your hands on the steering wheel." The average consumer reading that says, "Um...you mean I have to drive it?"

The human as moral crumple zone also appears in analyses of the Arizona Uber crash. Even-handedly, Brad Templeton points plenty of blame at Uber and its decisions: the car's LIDAR should have spotted the pedestrian crossing the road in time to stop safely. He then writes, "Clearly there is a problem with the safety driver. She is not doing her job. She may face legal problems. She will certainly be fired." And yet humans are notoriously bad at the job required of her: monitor a machine. Safety drivers are typically deployed in pairs to split the work - but also to keep each other attentive.

The larger We Robot discussion was partly about public perception of risk, based on a paper (PDF) by Aaron Mannes that discussed how easily public trust in a company or new technology can be derailed when statistically less-significant incidents spark emotional public outrage. Self-driving cars may in fact be safer overall than human drivers despite the fatal crash in Arizona; other examples Mannes mentioned were Three Mile Island, which made the public much more wary of nuclear power, and the Ford Pinto, which spent the 1970s occasionally catching fire.

Mannes suggested that if you have that trust relationship you may be able to survive your crisis. Without it, you're trying to win the public over on "Frankenfoods".

So much was funnier and more light-hearted seven years ago, as a long-time attendee pointed out; the discussions have darkened steadily year by year as theory has become practice and we can no longer think the problems are as far away as the Singularity.

In San Francisco, delivery robots cause sidewalk congestion and make some homeless people feel surveilled; in Chicago and Durham we risk embedding automated unfairness into criminal justice; the egregious extent of internet surveillance has become clear; and the world has seen its first self-driving car road deaths. The last several years have been full of fear about the loss of jobs; now the more imminent dragons are becoming clearer. Do you feel comfortable in public spaces when there's a mobile unit pointing some of its nine cameras at you?

Karen Levy finds that truckers are less upset about losing their jobs than about automation invading their cabs, ostensibly for their safety. Sensors, cameras, and wearables that monitor them for wakefulness, heart health, and other parameters are painful and enraging to this group, who chose their job for its autonomy.

Today's drivers have the skills to step in; tomorrow's won't. Today's doctors are used to doing their own diagnostics; tomorrow's may not be. A paper by Michael Froomkin, Ian Kerr, and Joëlle Pineau (PDF) argues that automation may mean not only deskilling humans (doctors) but also a frozen knowledge base. Many hope that mining historical patient data will expose patterns that enable more accurate diagnostics and treatments. If the machines take over, where will the new approaches come from?

Worse, behind all that is sophisticated data manipulation for which today's internet is providing the prototype. When, as Woody Hartzog suggested, Rocco, your Alexa-equipped Roomba, rolls up to you, fakes a bum wheel, and says, "Daddy, buy me an upgrade or I'll die", will you have the heartlessness to say no?

Illustrations: Pepper and handler at We Robot 2016.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


April 14, 2018

Late, noisy, and wrong

"All sensors are terrible," Bill Smart and Cindy Grimm explained as part of a pre-conference workshop at this year's We Robot. Smart, an engineer at Oregon State with prior history here, loves to explain why robots and AI aren't as smart as people think. "Just a fancy hammer," he said the first year.

Thursday's target was broad: the reality of sensors, algorithms, and machine learning.

One of his slides read:


  • It's all just math and physics.

  • There is no intelligence.

  • It's just a computer program.

  • Sensors turn physics into numbers.

That last one is the crucial bit, and it struck me as surprising only because in all the years I've read about and glibly mentioned sensors and how many there are in our phones they've never really been explained to me. I'm not an electrical engineering student, so like most of us, I wave around the words. Of course I know that digital means numbers, and computers do calculations with numbers, not fuzzy things like light and sound, and therefore the camera in my phone (which is a sensor) is storing values describing light levels rather than photographing light the way analogue film did. But I didn't - until Thursday - really know what sensors actually measure. For most purposes, it's OK that my understanding is...let's call it abstract. But it does make it easy to overestimate what the technology can do now and how soon it will be able to fulfil the fantasies of mad scientists.

Smart's point is that when you start talking about what AI can do - whether or not you're using my aspirational intelligence recasting of the term - you'd better have some grasp of what it really is. It means the difference between a blob on the horizon that can be safely ignored and a woman pushing a bicycle across a roadway in front of an oncoming LIDAR-equipped Uber self-driving car.

So he begins with this: "All sensors are terrible." We don't use better ones because either such a thing does not exist or because they're too expensive. They are all "noisy, late, and wrong" and "you can never measure what you want to."

What we want to measure are things like pressure, light, and movement, and because we imagine machines as analogues of ourselves, we want them to feel the pressure, see the light, and understand the movement. However, what sensors can measure is electrical current. So we are always "measuring indirectly through assumptions and physics". This is the point AI Weirdness makes too, more visually, by showing what happens when you apply a touch of surrealism to the pictures you feed through machine learning.

He described what a sensor does this way: "They send a ping of energy into the world. It interacts, and comes back." In the case of LIDAR - he used a group of humans to enact this - a laser pulse is sent out, and the time it takes to return is a number of oscillations of a crystal. This has some obvious implications: you can't measure anything shorter than one oscillation.
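That counting of crystal oscillations can be sketched in a few lines (the 100 MHz clock rate here is a hypothetical figure for illustration, not a real device's spec): the sensor never measures distance, only ticks, and anything shorter than one tick is invisible.

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second
CLOCK_HZ = 100e6                # hypothetical 100 MHz timing crystal

def lidar_distance(tick_count: int) -> float:
    """Convert a round-trip tick count into a distance in metres."""
    round_trip_time = tick_count / CLOCK_HZ
    # Halve the round trip: the pulse travels out and back.
    return round_trip_time * SPEED_OF_LIGHT / 2

# One tick is the smallest measurable unit, so readings are quantised:
resolution = SPEED_OF_LIGHT / CLOCK_HZ / 2  # roughly 1.5 m per tick
```

Everything the system "knows" about the world downstream is derived from that integer tick count - the physics-to-numbers conversion Smart's slide describes.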

Grimm explained that a "time of flight" sensor like that is what cameras - back to old Kodaks - use to auto-focus. Smartphones are pretty good at detecting a cluster of pixels that looks like a face and using that to focus on. But now imagine it's being used in a knee-high robot on a sidewalk to detect legs. In an art installation they did, Smart and Grimm found that it doesn't work in Portland...because of all those hipsters wearing black jeans.

So there are all sorts of these artefacts, and we will keep tripping over them because most of us don't really know what we're talking about. With image recognition, the important thing to remember is that the sensor is detecting pixel values, not things - and a consequence of that is that we don't necessarily know *what* the system has actually decided is important and we can't guarantee what it might be recognizing. So turn machine learning loose on a batch of photos of Audis, and if they all happen to be photographed at the same angle the system won't recognize an Audi photographed at a different one. Teach a self-driving car all the roads in San Francisco and it still won't know anything about driving in Portland.

That circumscription is important. Train a machine learning system on a set of photos of Abraham Lincoln and a zebra fish, and you get a system that can't imagine it might be looking at a cat. The computer - which, remember, is working with an array of numbers - looks at the numbers in the array and, based on what it has identified as significant in previous runs, makes the call based on what's closest. It's numbers in, numbers out, and we can't guarantee what it's "recognizing".
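To make that "numbers in, numbers out" point concrete, here is a toy sketch - entirely hypothetical data and a simple nearest-array rule, not any real vision system. The "model" knows exactly two labels, so every input, cat photos included, comes back as one of them:

```python
import numpy as np

# Hypothetical "training" data: tiny flattened pixel arrays with labels.
prototypes = {
    "lincoln": np.array([200, 180, 90, 40], dtype=float),
    "zebra fish": np.array([30, 210, 160, 220], dtype=float),
}

def classify(pixels: np.ndarray) -> str:
    """Return whichever known label's pixel array is numerically closest.

    The system cannot answer 'cat' - it can only pick from the arrays
    it was given. There are no 'things' here, just distances between numbers.
    """
    return min(prototypes, key=lambda label: np.linalg.norm(pixels - prototypes[label]))

# A grey "cat photo" still gets forced into one of the two trained labels:
cat_pixels = np.array([120.0, 120.0, 120.0, 120.0])
print(classify(cat_pixels))  # one of "lincoln" or "zebra fish", never "cat"
```

Real classifiers are vastly more elaborate, but the closed-world limitation is the same: the output is whichever known pattern the input numbers sit closest to.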

A linguistic change would help make all this salient. LIDAR does not "see" the roadway in front of the car that's carrying it. Google's software does not "translate" language. Software does not "recognize" images. The machine does not think, and it has no gender.

So when Mark Zuckerberg tells Congress that AI will fix everything, consider those arrays of numbers that may interpret a clutch of pixels as Abraham Lincoln when what's there is a zebra fish...and conclude he's talking out of his ass.





April 6, 2018

Leverage

Well, what's 37 million or 2 billion scraped accounts more or less among friends? The exploding hairball of the Facebook/Cambridge Analytica scandal keeps getting bigger. And, as Rana Dasgupta writes in the Guardian, we are complaining now because it's happening to us, but we did not notice when these techniques were tried out first in third-world countries. Dasgupta has much to say about how nation-states will have to adapt to these conditions.

Given that we will probably never pin down every detail of how much data and where it went, it's safest to assume that all of us have been compromised in some way. The smug "I've never used Facebook" population should remember that they almost certainly exist in the dataset, by either reference (your sister posts pictures of "my brother's birthday") or inference (like deducing the existence, size, and orbit of an unseen planet based on its gravitational pull on already-known objects).

Downloading our archives tells us far less than people recognize. My own archive had no real surprises (my account dates to 2007, but I post little and adblock the hell out of everything). The shock many people have experienced of seeing years of messages and photographs laid out in front of them, plus the SMS messages and call records that Facebook shouldn't have been retaining in the first place, hides the fact that these archives are a very limited picture of what Facebook knows about us. They show us nothing about information posted about us by others, photos others have posted and tagged, or comments made in response to things we've posted.

The "me-ness" of the way Facebook and other social media present themselves was called out by Christian Fuchs in launching his book Digital Demagogue: Authoritarian Capitalism in the Age of Trump and Twitter. "Twitter is a me-centred medium. 'Social media' is the wrong term, because it's actually anti-social, Me media. It's all about individual profiles, accumulating reputation, followers, likes, and so on."

Saying that, however, plays into Facebook's own public mythology about itself. Facebook's actual and most significant holdings about us are far more extensive, and the company derives its real power from the complex social graphs it has built and the insights that can be gleaned from them. None of that is clear from the long list of friends. Even more significant is how Facebook matches up user profiles to other public records and social media services and with other brokers' datasets - but the archives give us no sense of that either. Facebook's knowledge of you is also greatly enhanced - as is its ability to lock you in as a user - if you, like many people, have opted to use Facebook credentials to log into third-party sites. Undoing that is about as easy and as much fun as undoing all your direct debit payments in order to move your bank account.

Facebook and the other tech companies are only the beginning. There are a few people out there trying to suggest Google is better, but Zeynep Tufekci discovered it had gone on retaining her YouTube history even though she had withdrawn permission to do so. As Tufekci then writes, if a person with a technical background whose job it is to study such things could fail to protect her data, how could others hope to do so?

But what about publishers and the others dependent on that same ecosystem? As Doc Searls writes, the investigative outrage on display in many media outlets glosses over the fact that they, too, are compromised. Third party trackers, social media buttons, Google analytics, and so on all deliver up readers to advertisers in increasing detail, feeding the business plans of thousands of companies all aimed at improving precision and targeting.

And why stop with publishers? At least they have the defense of needing to make a living. Government sites, libraries, and other public services do the same thing, without that justification. The Richmond Council website shows no ads - but it still uses Google Analytics, which means sending a steady stream of user data Google's way. Eventbrite, which everyone now uses for event sign-ups, is constantly exhorting me to post my attendance to Facebook. What benefit does Eventbrite get from my complying? It never says.

Meanwhile, every club, member organization, and creative endeavor begs its adherents to "like my page on Facebook" or "follow me on Twitter". While they see that as building audience and engagement, the reality is that they are acting as propagandists for those companies. When you try to argue against doing this, people will say they know, but then shrug helplessly and say they have to go where the audience is. If the audience is on Facebook, and it takes page likes to make Facebook highlight your existence, then what choice is there? Very few people are willing to contemplate the hard work of building community without shortcuts, and many seem to have come to believe that social media engagement as measured in ticks of approval is community, like Mark Zuckerberg tried to say last year.

For all these reasons, it's not enough to "fix Facebook". We must undo its leverage.


Illustrations: Facebook logo.
