
Multiplicity

"What would you tell them in 1983 about the development of the internet in 1983, knowing what we know now?" David Post asked in a panel on robotics governance at this week's We Robot conference. (My write-ups from previous years: 2013, 2012) Kristen Thomasen, whose paper proposed using the automobile industry as a source of lessons for how the law should consider robots, suggested creating an international body that could identify and collaborate on cross-border problems as they arose.

My first thought was that I'd tell them to expect abuse, because community does not scale. (It seems like small beer, but the early days of the internet were marked by such giddy communitarian hive-mind-is-wonderful hyperbole.) Post led the discussion to broader questions: if you're going to intervene in the development of new norms and law, when do you do it? How do you do it while remaining flexible enough to allow the technology to develop? Particularly with respect to privacy and teens' willingness to share information in a way that scares their elders, he asked, "Could we have had that conversation in 1983?"

This is the heart of We Robot: the co-chairs, Michael Froomkin and Ryan Calo, run the conference precisely to try to get ahead of prospective conflicts. So Froomkin's answer to Post's question was to note that being "in the room" matters. Had "just one lawyer" been present when engineers were creating the domain name system, its design could have been different, because that lawyer would have spotted the issues we have been grappling with ever since. "People with different backgrounds and perspectives spot problems," he said, "and also solutions." And, he added, those changes are easier at the beginning, when there's less deployment and less money invested.

Thomasen had a good example ready to hand: the frame still commonly used for cars was developed not for safety but because the French designer who invented it thought it looked good. "If he'd been thinking about safety, he would have used arm brakes," she said, "because they're faster than feet." But the original design is the norm we all know, and changing it now would be about as feasible as replacing all the world's keyboards with Dvorak models.

This year's conversation reflected the more general rise of interest in robot governance. The Brookings Institution, for example, has recently published two relevant reports. The first, by Calo, argues the case for a Federal Robotics Commission (in a conference paper, Woody Hartzog suggested the FTC might be an appropriate regulatory agency); the second, by Carnegie Mellon PhD student Heather Knight, discusses human responses to robots and advocates "smart social design". The two reports are helpfully summarized at Robot State. Internet pioneers were notoriously resistant to the idea of regulation; the issue looks different when you're talking about machines that interact with the physical world.

In deciding how the law should treat robots, how much does it matter if we anthropomorphize them? While largely accepting Bill Smart's characterization of robots at the first We Robot as "really fancy hammers", Kate Darling's paper discussed situations in which anthropomorphization might be useful. Could, she asked, robots be used in a prison context to rehabilitate or console? I'm of the fancy-hammers school myself - discussions of robots having sentience or rights rapidly bring out my inner biological supremacist. Ken Goldberg, discussing Darling's paper, noted stories from the military of humans putting themselves at risk to protect robots that had previously saved their peers' lives. "How do we design the system to avoid that?"

Goldberg's suggestion, later echoed in comments by Tony Dyson, the designer of the original R2D2, was to design things that are "just anthropomorphic enough". R2D2, which Dyson designed for comedic, rather than practical, function, is a case in point: it looks nothing like a human, yet is beloved. "I don't think anyone falls in love with C3PO," Dyson told me, noting that he has had hundreds of emails from people who say they now work in robotics because of R2D2.

Darling's studies of interaction with and empathy for robots expose interesting gaps in human reactions, as did the conference itself. An enterprising Irish farmer has demonstrated using a drone instead of a sheepdog to herd a flock. Is this more or less sad than reading calls to replace human pilots with automated flight systems in the wake of the Germanwings crash? It's worth noting a pilot's rebuttal of this notion: even when they're not directly flying the plane, pilots work plenty hard. The dichotomy immediately reminded me of Tom Paxton's song, Don't Slay That Potato: "Do you mean to say you'll eat us [potatoes] because we're not cute?"

Goldberg also noted the recent Moore's Curse article, which argues that the exponential increase in computing power over the last 20 years has led to unrealistic expectations. "Technology is a sigmoidal process," he said. "Reality is lagging quite far behind science fiction." Instead of the Singularity, Goldberg believes "the Multiplicity" is a better and stronger idea: many diverse machines working together with many humans is far more powerful. "What's important to make it work is diversity."


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

