" /> net.wars: April 2019 Archives


April 26, 2019

This house

This house may be spying on me.

I know it listens. Its owners say, "Google, set the timer for one minute," and a male voice sounds: "Setting the timer for one minute."

I think, one minute? You need a timer for one minute? Does everyone now cook that precisely?

They say, "Google, turn on the lamp in the family room." The voice sounds: "Turning on the lamp in the family room." The lamp is literally sitting on the table right next to the person issuing the order.

I think, "Arm, hand, switch, flick. No?"

This happens every night because the lamp is programmed to turn off before we go to bed.

I do not feel I am visiting the future. Instead, I feel I am visiting an experiment that years from now people will look back on and say, "Why did they do that?"

I know by feel how long a minute is. A child growing up in this house would not. That child may not even know how to operate a light switch, even though one of the house's owners is a technical support guy who knows how to build and dismember computers, write code, and wire circuits. Later, this house's owner tells me, "I just wanted a reminder."

It's 16 years since I visited Microsoft's and IBM's visions of the smart homes they thought we might be living in by now. IBM imagined voice commands; Microsoft imagined fashion advice-giving closets. The better parts of the vision - IBM's dashboard with a tick-box so your lawn watering system would observe the latest municipal watering restrictions - are sadly unavailable. The worse parts - living in constant near-darkness so the ubiquitous projections are readable - are sadly closer. Neither envisioned giant competitors whose interests are served by installing in-house microphones on constant alert.

This house inaudibly alerts its owner's phones whenever anyone approaches the front door. From my perspective, new people mysteriously appear in the kitchen without warning.

This house has smartish thermostats that display little wifi icons to indicate that they're online. This house's owners tell me these are Ecobee Linux thermostats; the wifi connection lets them control the heating from their phones. The thermostats are not connected to Google.

None of this is obviously intrusive. This house looks basically like a normal house. The pile of electronics in the basement is just a pile of electronics. Pay no attention to the small blue flashing lights behind the black fascia.

One of this house's owners tells me he has deliberately chosen a male voice for the smart speaker so as not to suggest that women are or should be subservient to men. Both owners are answered by the same male voice. I can imagine personalized voices might be useful for distinguishing who asked what, particularly in a shared house or a company, and ensuring that only the right people get to issue orders. Google says its speakers can be trained to recognize six unique voices - a feature I can see would be valuable to the company as a vector for gathering more detailed information about each user's personality and profile. And, yes, it would serve users better.

Right now, I could come down in the middle of the night and say, "Google, turn on the lights in the master bedroom." I actually did something like this once by accident years ago in a friend's apartment that was wirelessed up with X10 controls. I know this system would allow it because I used the word "Google" carelessly in a sentence while standing next to a digital photo frame, and the unexpected speaker inside it woke up to say, "I don't understand". This house's owner stared: "It's not supposed to do that when Google is not the first word in the sentence". The photo frame stayed silent.

I think it was just marking its territory.

Turning off the fan in their bedroom would be more subtle. They would wake up more slowly, and would probably just think the fan had broken. This house will need reprogramming to protect itself from children. Once that happens, guests will be unable to do anything for themselves.

This house's owners tell me there are many upgrades they could implement, and they will, but managing them takes skill and thought: segmenting and securing the network, and implementing local data storage. Keeping Google and Amazon at bay requires an expert.

This house's owners do not get their news from their smart speakers, but it may be only a matter of time. At a recent Hacks/Hackers meeting, Nic Newman presented the findings of a Reuters Institute study: smart speakers are growing faster than smartphones did at the same stage, they are replacing radios, and they "will kill the remote control". So far, only 46% use them to get news updates. What was alarming was the gatekeeper control providers have: on a computer, the web could offer 20 links; on a smartphone there's room for seven; by voice...one. Just one answer to "What's the latest news on the US presidential race?"

At OpenTech in 2017, Tom Steinberg observed that now that his house was equipped with an Amazon Echo, homes without one seemed "broken". He predicted that this would become such a fundamental technology that "only billionaires will be able to opt out". Yet really, the biggest advance since the beginning of remote controls is that now your garage door opener can collect your data and send it to Google.

My house can stay "broken".


Illustrations: HAL (what else?).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 18, 2019

Math, monsters, and metaphors

"My iPhone won't stab me in my bed," Bill Smart said at the first We Robot, attempting to explain what was different about robots - but eight years on, We Robot seems less worried about that than about the brains of the operation. That is, AI, which conference participant Aaron Mannes described as "a pile of math that can do some stuff".

But the math needs data to work on, and so a lot of the discussion goes toward possible consequences: delivery drones displaying personalized ads (Ryan Calo and Stephanie Ballard); the wrongness of researchers who defend their habit of scraping publicly posted data by saying it's "the norm" when their unwitting experimental subjects have never given permission; the unexpected consequences of creating new data sources in farming (Solon Barocas, Karen Levy, and Alexandra Mateescu); and how to incorporate public values (Alicia Solow-Neiderman) into the control of...well, AI, but what is AI without data? It's that pile of math. "It's just software," Bill Smart (again) said last week. Should we be scared?

The answer seems to be "sometimes". Two types of robots were cited for "robotic space colonialism" (Kristen Thomasen), because they are here enough and now enough for legal cases to be emerging. These are 1) drones, and 2) delivery robots. Mostly. Mason Marks pointed out Amazon's amazing Kiva robots, but they're working in warehouses where their impact is more a result of the workings of capitalism than that of AI. They don't scare people in their homes at night or appropriate sidewalk space like delivery robots, which Paul Colhoun described as "unattended property in motion carrying another person's property". Which sounds like they might be sort of cute and vulnerable, until he continues: "What actions may they take to defend themselves?" Is this a new meaning for move fast and break things?

Colhoun's comment came during a discussion of using various forecasting methods - futures planning, design fiction, the futures wheel (which someone suggested might provide a usefully visual alternative to privacy policies) - that led Cindy Grimm to pinpoint the problem of when you regulate. Too soon, and you risk constraining valuable technology. Too late, and you're constantly scrambling to revise your laws while being mocked by technical experts calling you an idiot (see 25 years of Internet regulation). Still, I'd be happy to pass a law right now barring drones from advertising and data collection and damn the consequences. And then be embarrassed; as Levy pointed out, other populations have a lot more to fear from drones than being bothered by some ads...

The question remains: what, exactly, do you regulate? The Algorithmic Accountability Act recently proposed by Senators Cory Booker (D-NJ) and Ron Wyden (D-OR) would require large companies to audit machine learning systems to eliminate bias. Discrimination is much bigger than AI, said conference co-founder Michael Froomkin in discussing Alicia Solow-Neiderman's paper on regulating AI, but what is special to AI is unequal access to data.

Grimm also pointed out that there are three different aspects: writing code (referring back to Petros Terzis's paper proposing to apply the regime of negligence laws to coders); collecting data; and using data. While this is true, it doesn't really capture the experience Abby Jacques suggested could be a logical consequence of following the results collected by MIT's Moral Machine: save the young, fit, and wealthy, but splat the old, poor, and infirm. If, she argued, you followed the mandate of the popular vote, old people would be scrambling to save themselves in parking lots while kids ran wild knowing the cars would never hit them. An entertaining fantasy spectacle, to be sure, but not quite how most of us want to live. As Jacques tells it, the trolley problem the Moral Machine represents is basically a metaphor that has eaten its young. Get rid of it! This was a rare moment of near-universal agreement. "I've been longing for the trolley problem to die," robotics pioneer Robin Murphy said. Jacques herself was more measured: "Philosophers need to take responsibility for what happens when we leave our tools lying around."

The biggest thing I've learned in all the law conferences I go to is that law proceeds by analogy and metaphor. You see this everywhere: Kate Darling is trying to understand how we might integrate robots into our lives by studying the history of domesticating animals; Ian Kerr and Carys Craig are trying to deromanticize "the author" in discussions of AI and copyright law; the "property" in "intellectual property" draws an uncomfortable analogy to physical objects; and Hideyuki Matsumi is trying to think through robot registration by analogy to Japan's Koseki family registration law.

Getting the metaphors right is therefore crucial, which explains, in turn, why it's important to spend so much effort understanding what the technology can really do and what it can't. You have to stop buying the images of driverless cars to produce something like the "handoff model" proposed by Jake Goldenfein, Deirdre Mulligan, and Helen Nissenbaum to explore the permeable boundaries between humans and the autonomous or connected systems driving their cars. Similarly, it's easy to forget, as Mulligan said in introducing her paper with Daniel N. Kluttz, that in "machine learning", algorithms learn only from the judgments at the end; they never see the intermediary reasoning stages.

So metaphor matters. At this point I had a blinding flash of realization. This is why no one can agree about Brexit. *Brexit* is a trolley problem. Small wonder Jacques called the Moral Machine a "monster".

Previous We Robot events as seen by net.wars: 2018 workshop and conference; 2017; 2016 workshop and conference; 2015; 2013; and 2012. We missed 2014.

Illustrations: The Moral Labyrinth art installation, by Sarah Newman and Jessica Fjeld, at We Robot 2019; Google driverless car.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 12, 2019

The Algernon problem

Last week we noted that it may be a sign of a maturing robotics industry that it's possible to have companies specializing in something as small as fingertips for a robot hand. This week, the workshop day kicking off this year's We Robot conference provides a different reason to think the same thing: more and more disciplines are finding their way to this cross-the-streams event. This year, joining engineers, computer scientists, lawyers, and the odd philosopher are sociologists, economists, and activists.

The result is oddly like a meeting of the Research Institute for the Science of Cyber Security, where a large part of the point from the beginning has been that human factors and economics are as important to good security as technical knowledge. This was particularly true in the face-off between the economist Rob Seamans and the sociologist Beth Bechky, which pitted quantitative "things we can count" against qualitative "study the social structures" thinking. The range of disciplines needed to think about what used to be "computer" security keeps growing as the ways we use computers become more complex; robots are computer systems whose mechanical manifestations interact with humans. This move has to happen.

One sign is a change in language. Madeline Elish, currently in the news for her newly published 2016 We Robot paper, Moral Crumple Zones, said she's trying to replace the term "deploying" with "integrating" for arriving technologies. "They are integrated into systems," she explained, "and when you say 'integrate' it implies into what, with whom, and where." By contrast, "deployment" is military-speak, devoid of context. I like this idea, since by 2015 it was clear from a machine learning conference at the Royal Society that many had begun seeing robots as partners rather than replacements.

Later, three Japanese academics - the independent researcher Hideyuki Matsumi, Takayuki Kato, and Fumio Shimpo - tried to explain why Japanese people like robots so much - more, it seems, than "we" do (whoever "we" are). They suggested three theories: the influence of TV and manga; the influence of the mainstream Shinto religion, which sees a spirit in everything; and the Japanese government strategy to make the country a robotics powerhouse. The latter has produced a 356-page guideline for research and development.

"Japanese people don't like to draw distinctions and place clear lines," Shinto said. "We think of AI as a friend, not an enemy, and we want to blur the lines." Shimpo had just said that even though he has two actual dogs he wants an Aibo. Kato dissented: "I personally don't like robots."

The MIT researcher Kate Darling, who studies human responses to robots, found positive reinforcement in studies showing that autistic kids respond well to robots. "One theory is that they're social, but not too social." An experiment that placed these robots in homes for 30 days last summer had "stellar results". But: when the robots were removed at the end of the experiment, follow-up studies found that the kids were losing the skills the robots had brought them. The story evokes the 1959 Daniel Keyes story Flowers for Algernon, but then you have to ask: what were the skills? Did they matter to the children or just to the researchers, and how is "success" defined?

The opportunities anthropomorphization opens for manipulation are an issue everywhere. Woody Hartzog called the tendency to believe what the machine says "automation bias", but that understates the range of motivations: you may believe the machine because you like it, because it's manipulated you, or because you're working in a government benefits agency where you can't be sure you won't get fired if you defy the machine's decision. Would that everyone could see Bill Smart and Cindy Grimm follow up their presentation from last year to show: AI is just software; it doesn't "know" things; and it's the complexity that gets you. Smart hates the term "autonomous" for robots "because in robots it means deterministic software running on a computer. It's teleoperation via computer code."

This is the "fancy hammer" school of thinking about robots, and it can be quite valuable. Kevin Bankston soon demonstrated this: "Science fiction has trained us to worry about Skynet instead of housing discrimination, and expect individual saviors rather than communities working together to deal with community problems." AI is not taking our jobs; capitalists are using AI to take our jobs - a very different problem. As long as we see robots and AI as autonomous, we miss that instead they ares agents carrying out others' plans. This is a larger example of a pervasive problem with smartphones, social media sites, and platforms generally: they are designed to push us to forget the data-collecting, self-interested, manipulative behemoth behind them.

Returning to Elish's comment, we are one of the things robots integrate with. At the moment, this is taking the form of making random people research subjects: the pedestrian killed in Arizona by a supposedly self-driving car, the hapless prisoners whose parole is decided by it's-just-software, the people caught by the Metropolitan Police's staggeringly flawed facial recognition, the homeless people who feel threatened by security robots, the Caltrain passengers sharing a platform with an officious delivery robot. Did any of us ask to be experimented on?


Illustrations: Cliff Robertson in Charly, the movie version of "Flowers for Algernon".

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 5, 2019

The collaborative hand

The futurist Anders Sandberg has often observed that we call it "artificial intelligence" only as long as it doesn't work; after that it's simply "automation". This week, Rich Walker, the managing director of Shadow Robot, said the same thing about robotics. No one calls a self-driving car or a washing machine a robot, for example. Then again, a friend does indeed call the automated tea maker that reliably wakes up every morning before he does "the robot", which suggests we only call things "robots" when we can mock their limitations.

Walker's larger point was that robotics, like AI, suffers from confusion between the things people think it can do and the things it can actually do. The gap in AI is so large that the term now effectively has two meanings: a technological one, revolving around the traditional definition of AI, and a political one, which includes the many emerging new technologies - machine learning, computer vision, and so on - that we need to grapple with.

When, last year, we found that Shadow Robot was collaborating on research into care robots, it seemed time for a revisit: the band of volunteers I met in 1997 and the tiny business it had grown into in 2009 had clearly reached a new level.

Social care is just one of many areas Shadow is exploring; others include agritech and manufacturing. "Lots are either depending on other pieces of technology that are not ready or available yet or dependent on economics that are not working in our favor yet," Walker says. Social care is an example of the latter; using robots outside of production lines in manufacturing is an example of the former. "It's still effectively a machine vision problem." That is, machine vision is not accurate enough with high enough reliability. A 99.9% level of accuracy means a failure per shift in a car manufacturing facility.
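
To see the arithmetic behind that claim (the operations-per-shift figure here is an illustrative assumption, not Walker's): if a vision-guided cell performs on the order of 1,000 operations in a shift, then at 99.9% accuracy the expected number of misses is 1,000 × 0.001 = 1 - roughly one failure every shift, which a production line cannot tolerate.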

Getting to Shadow Robot's present state involved narrowing down the dream founder Richard Greenhill conceived after reading a 1980s computer programming manual: to build a robot that could bring him a cup of tea. The project, then struggling to be taken seriously as it had no funding and Greenhill had no relevant degrees, built the first robot outside Japan that could stand upright and take a step; the Science Museum included it in its 2017 robot exhibition.

Greenhill himself began the winnowing process, focusing on developing a physical robot that could function in human spaces rather than on AI and computer vision, reasoning that there were many others who would work on those. Greenhill recognized the importance of the hand, but it was Walker who recognized its commercial potential: "To engage with real-world, human-scale tasks you need hands."

The result, Walker says, is, "We build the best robot hand in the world." And, he adds, because several employees have worked on all the hands Shadow has ever built, "We understand all the compromises we've made in the designs, why they're there, and how they could be changed. If someone asks for an extra thumb, we can say why it's difficult but how we could do it."

Meanwhile, the world around Shadow has changed to include specialists in everything else. Computer vision, for example: "It's outside of the set of things we think we should be good at doing, so we want others to do it who are passionate about it," Walker says. "I have no interest in building robot arms, for example. Lots of people do that." And anyway, "It's incredibly hard to do it better than Universal Robots" - which itself became the nucleus of a world-class robotics cluster in the small Danish city of Odense.

Specialization may be the clearest sign that robotics is growing up. Shadow's current model, mounted on a UR arm, sports fingertips developed by SynTouch. With SynTouch and HaptX, Shadow collaborated to create a remote teleoperation system using HaptX gloves in San Francisco to control a robot hand in London following instructions from a businessman in Japan. The reason sounds briefly weird: All Nippon Airways is seeking new markets by moving into avatars and telepresence. It sounds less weird when Walker says ANA first thought of teleportation...and then concluded that telepresence might be more realistic.

Shadow's complement of employees is nearing 40, and they've moved from the undifferentiated north London house they'd worked in since the 1990s - a move dictated, Walker says, by the purchase of a new milling machine. Getting the previous one in, circa 2007, required taking out the front window and the stairs and building a crane. Walker's increasing business focus reflects the fact that the company's customers are now as often commercial companies as the academic and research institutions that used to form their entire clientele.

For the future, "We want to improve tactile sensing," Walker says. "Touch is really hard to get robots to do well." One aspect they're particularly interested in for teleoperation is understanding intent: when grasping something, does the controlling human want to pinch, twist, hold, or twist it? At the moment, to answer that he imagines "the robot equivalent" of Clippy that asks, "It looks like you're trying to twist the wire. Do you mean to roll it or twist it?" Or even: "It looks like you're trying to defuse a bomb. Do you want to cut the red wire or the black wire?" Well, do ya, punk?


Illustrations: Rich Walker, showing off the latest model, which includes fingertips from SynTouch and a robot arm from Universal Robots; the original humanoid biped, on display at the Science Museum.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.