
Moral machines

What are AI ethics boards for?

I've been wondering about this for some months now, particularly in April, when Google announced the composition of its new Advanced Technology External Advisory Council (ATEAC) - and a week later announced its dissolution. The council was dropped after a media storm that began with a letter from 50 of Google's own employees objecting to the inclusion of Kay Coles James, president of the Heritage Foundation.

At The Verge, James Vincent suggests the boards are for "ethics washing" rather than instituting change. The aborted Google board, for example, was intended, as member Joanna Bryson writes, to "stress test" policies Google had already formulated.

However, corporations are not the only active players. The new Ada Lovelace Institute's research program is intended to shape public policy in this area. The AI Now Institute is studying social implications. Data & Society is studying AI use and governance. Altogether, Brent Mittelstadt counts 63 public-private initiatives, and says the principles they're releasing "closely resemble the four classic principles of medical ethics" - an analogy he questions.

Last year, when Steven Croft, the Bishop of Oxford, proposed ten commandments for artificial intelligence, I was similarly dismissive: who's going to listen? What company is going to choose a path against its own financial interests? A machine learning expert friend has a different complaint: corporations are not the problem, governments are. No matter what companies decide, governments always demand carve-outs for intelligence and security services, and once they have them, game over.

I did appreciate Croft's contention that all commandments are aspirational. An agreed set of principles would at least provide a standard against which to measure technology and decisions. Principles might be particularly valuable for guiding academic researchers, some of whom currently regard social media as a convenient public laboratory.

Still, human rights law already supplies that sort of template. What can ethics boards do that the law doesn't already? If discrimination is already wrong, why do we need an ethics board to add that it's wrong when an algorithm does it?

At a panel kicking off this year's Privacy Law Scholars conference, Ryan Calo suggested an answer: "We need better moral imagination." In his view, a lot of the discussion of AI ethics centers on form rather than content: how should ethics be applied? Should there be a certification regime? Or perhaps compliance requirements? Instead, he proposed that we should be looking at how AI changes the affordances available to us. His analogy: retrieving the sailors left behind in the water after you destroyed their ship was an ethical obligation until the arrival of a new technology - submarines - made it infeasible.

For Calo, too many conversations about AI avoid considering the content. As a frustrating example: "The primary problem around the ethics of driverless cars is not how they will reshape cities or affect people with disabilities and ownership structures, but whether they should run over the nuns or the schoolchildren."

As anyone who's ever designed a survey knows, defining the questions is crucial. In her posting, Bryson expresses regret that the intended board will not now be called into action to consider and perhaps influence Google's policy. But the fact that Google, not the board, was to devise policies and set the questions about them makes me wonder how effective it could have been. So much depends on who imagines the prospective future.

The current Kubrick exhibition at London's Design Museum pays considerable homage to Kubrick's vision and imagination in creating the mysterious and wonderful universe of 2001: A Space Odyssey. Both the technology and the furniture still look "futuristic" despite having been designed more than 50 years ago. What *has* dated is the women: they are still wearing 1960s stewardess uniforms and hats, and the one woman with more than a few lines spends them discussing her husband and his whereabouts; the secrecy surrounding the appearance of a monolith in a crater on the moon is a matter for the men. Calo found the same thing rereading Isaac Asimov's Foundation trilogy: "Not one woman leader for four books," he said. "And people still smoke!" Yet they are surrounded by interstellar travel and mind-reading devices.

So while what these boards are doing now is not inspiring - as Helen Nissenbaum said in the same panel, "There are so many institutes announcing principles as if that's the end of the story" - maybe what they *could* do might be. What if, as Calo suggested, there are human and civil rights commitments AI allows us to make that were impossible before?

"We should be imagining how we can not just preserve extant ethical values but generate new ones based on affordances that we now have available to us," he said, suggesting as one example "mobility as a right". I'm not really convinced that our streets are going to be awash in autonomous vehicles any time soon, but you can see his point. If we have the technology to give independent mobility to people who are unable to drive themselves...well, shouldn't we? You may disagree on that specific idea, but you have to admit: it's a much better class of conversation.tw


Illustrations: Space Station receptionist from 2001: A Space Odyssey.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.
