
Plausible diversions

If you want to shape a technology, the time to start is before it becomes fixed in the mindset of "'twas ever thus". This was the idea behind the creation of We Robot. At this year's event (see below for links to previous years), one clear example of this principle came from Thomas Krendl Gilbert and Roel I. J. Dobbe, whose study of autonomous vehicles coined "jaywalkification" to point out the way we've privileged cars. On the blank page in the lawbook, we chose to make it illegal for pedestrians to get in cars' way.

We Robot's ten years began with enthusiasm, segued through several depressed years of machine learning and AI, and this year has seemingly arrived at a twist on Arthur C. Clarke's famous dictum that any sufficiently advanced technology is indistinguishable from magic. To wit: maybe any technology sufficiently advanced to seem like magic can be well enough understood that we can assign responsibility and liability. You could say it's been ten years of progressively removing robots' glamor.

Something like this was at the heart of the paper by Andrew Selbst, Suresh Venkatasubramanian, and I. Elizabeth Kumar, which uses the computer science staple of abstraction as a model for assigning responsibility for the behavior of complex systems. Weed out debates over the innards - is the system's algorithm unfair, or was the training data biased? - and aim at the main point: this employer chose this system that produced these results. No one needs to be inside its "black box" if you can understand its boundaries. In one analogy, it's not the manufacturer's fault if a coffee maker fails to produce drinkable coffee from poisoned water and ground acorns; it *is* their fault if the machine turns potable water and ground coffee into toxic sludge. Find the decision points, and ask: how were those decisions made?

Gilbert and Dobbe used two other novel coinages: "moral crumple zoning" (after Madeleine Claire Elish's paper at We Robot 2016) and "rubblization", for altering the world to assist machines. Exhibit A, which exemplifies all three, is the 2018 incident in which an Uber car in self-driving mode killed a pedestrian in Tempe, Arizona. She was jaywalking; she and the inattentive safety driver were moral crumple zoned; and the rubblized environment prioritized cars.

Part of Gilbert and Dobbe's complaint was that much discussion of autonomous vehicles focused on the trolley problem, which has little relevance to how either humans or AIs drive cars. It's more useful to focus on how autonomous vehicles reshape public space as they begin to proliferate.

This reshaping issue also arose in two other papers: one on smart farming in East Africa by Laura Foster, Katie Szilagyi, Angeline Wairegi, Chidi Oguamanam, and Jeremy de Beer, and one by Annie Brett on the rapid yet largely overlooked expansion of autonomous vehicles in ocean shipping, exploration, and data collection. In the first case, part of the concern is the extension of colonization by framing precision agriculture and smart farming as more valuable than the local knowledge held by small farmers, the majority of whom are black women, and by viewing that knowledge as freely available for appropriation. As in the Western world, where companies like John Deere and Monsanto claim intellectual property rights in seeds and knowledge that formerly belonged to farmers, the arrival of AI alienates local knowledge by stowing it in algorithms, software, sensors, and equipment, and turns the plants on which our continued survival depends into inert raw material. Brett's paper highlights the growing gaps in international regulation as the Internet of Things goes maritime and changes what's possible.

A slightly different conflict - between privacy and the need to not be "mis-seen" - lies at the heart of Alice Xiang's discussion of computer vision. Elsewhere, Agathe Balayn and Seda Gürses make a related point in a new EDRi report that warns against relying on technical debiasing tweaks to datasets and algorithms at the expense of seeing the larger social and economic costs of these systems.

In a final example, Marc Canellas studied whole cybernetic systems and found that they create gaps where it's impossible for any plaintiff to prove liability, in part because of the complexity and interdependence inherent in these systems. Canellas proposes redefining intentional discrimination and applying strict liability as the way forward. You do not, Cynthia Khoo observed in discussing the paper, have to understand the inner workings of complex technology to understand that the system is reproducing the same problems and the same long history, if you focus on the outcomes rather than the process - especially if you know the process is rigged to begin with. The widespread ethos of "move fast and break things", Canellas noted, mostly encumbers people who are already vulnerable.

I like this overall approach of stripping away the shiny distraction of new technology and focusing on its results. If, as a friend says, Facebook accurately described setting up an account as "adding a line to our database" instead of "connecting with your friends", who would sign up? Similarly, don't let Amazon get cute about its new "Astro", a comprehensive in-home data collector.

Many look at Astro and see the science fiction robot butler of decades hence. As Frank Pasquale noted, we tend to overemphasize the far future at the expense of today's decisions. In the same vein, Deborah Raji called robot rights a way of absolving people of their responsibility. Today's greater threat is that gig employers are undermining workers' rights, not that robots will become sentient overlords. Today's problem is not that one day autonomous vehicles may be everywhere, but that the infrastructure needed to make partly-autonomous vehicles safe will roll over us. Or, as Gilbert put it: don't ask how you want cars to drive; ask how you want cities to work.


Previous years: 2013; 2015; 2016 workshop; 2017; 2018 workshop and conference; 2019 workshop and conference; 2020.

Illustrations: Amazon photo of Astro.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.
