
Block that metaphor

My favourite new term from this year's Privacy Law Scholars conference is "dishonest anthropomorphism". The term appeared in a draft paper written by Brenda Leong and Evan Selinger as part of a proposal for its opposite, "honest anthropomorphism". The authors' goal was to suggest a taxonomy that could be incorporated into privacy by design theory and practice, so that as household robots are developed and deployed they are less likely to do us harm. Not necessarily individual "harm" as in Isaac Asimov's Laws of Robotics, which tended to see robots as autonomous beings rather than as projections of their manufacturers into our personal space, and therefore glossed over this more intentional and diffuse kind of deception. Pause to imagine that Facebook goes into making robots, and you can see what we're talking about here.

"Dishonest anthropomorphism" derives from an earlier paper, Averting Robot Eyes by Margo Kaminski, Matthew Rueben, Bill Smart, and Cindy Grimm, which proposes "honest anthropomorphism" as a desirable principle in trying to protect people from the privacy problems inherent in admitting a robot, even something as limited as a Roomba, into your home. (At least three of these authors are regular attendees at We Robot since its inception in 2012.) That paper categorizes three types of privacy issues that robots bring: data privacy, boundary management, and social/relational.

The data privacy issues are substantial. A mobile phone or smart speaker may listen to or film you, but it has to stay where you put it (as Smart has memorably put it, "My iPad can't stab me in my bed"). Add movement and processing, and you have a roving spy that can collect myriad kinds of data to assemble an intimate picture of your home and its occupants. "Boundary management" refers to capabilities humans may not realize their robots have and therefore don't know to protect themselves against - thermal sensors that can see through walls, for example, or eyes that observe us even when the robot is apparently looking elsewhere (hence the title).

"Social/relational" refers to the our social and cultural expectations of the beings around us. In the authors' examples, unscrupulous designers can take advantage of our inclination to apply our expectations of other humans to entice us into disclosing more than we would if we truly understood the situation. A robot that mimics human expressions that we understand through our own muscle memory may be highly deceptive, inadvertently or intentionally. Robots may also be given the capability of identifying micro-reactions we can't control but that we're used to assuming go unnoticed.

A different session - discussing research by Marijn Sax, Natali Helberger, and Nadine Bol - provided a worked example, albeit one without the full robot component: mobile health apps. Most of these are ostensibly aimed at encouraging behavioral change - walk 10,000 steps, lose weight, do yoga. What the authors argue is that the apps are really aimed at effecting economic change rather than at improving health, an aspect often obscured from users. Quite apart from the wrongness of using an app marketed as improving your health as a vector for potentially unrelated commercial interests, the health framing itself may be questionable. For example, the famed 10,000 steps some apps push you to take daily has no evidentiary basis in medicine; the number appears to have originated as a Japanese pedometer marketing slogan in the 1960s. These apps can also be quite rigid: in one case that came up during the discussion, an injured nurse found she couldn't adapt her app to help her follow her doctor's orders to stay off her feet. In other words, these apps optimize one thing, which may or may not have anything to do with health or even health's vaguer cousin, "wellness".

Returning to dishonest anthropomorphism, one suggestion was to focus on abuse rather than dishonesty, since there are already laws that bar unfair and deceptive practices. After all, the entire discipline of user experience design is aimed at nudging users into certain behaviors and away from others. With more complex systems, even if the aim is to make the user feel good, it's not simple: the same user will react differently to the same choice at different times. Deciding which points to single out in order to calculate benefit is as difficult as deciding where to begin and end a movie's story, which the screenwriter William Goldman has likened to choosing where to cut a piece of string. Metaphor was harmless when we were talking about desktops and filing cabinets; it is much less so when we're talking about a robot cat that closely emulates a biological cat and lulls us into the false sense that we can understand it the same way.

Deception is becoming the theme of the year, perhaps partly inspired by Facebook and Cambridge Analytica. That should be a good thing. It's already clear that neither the European data protection approach nor the US consumer protection approach will, by itself, be sufficient to protect privacy against the incoming waves of the Internet of Things, big data, smart infrastructure, robots, and AI. As the threats to privacy expand, the field itself must grow in new directions. What made these discussions interesting is that they're trying to figure out which ones.

Illustrations: Recreation of the oldest known robot design (from the Ancient Greek Technology exhibition).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

