
Aspirational intelligence

"All commandments are ideals," he said. He - Steven Croft, the Bishop of Oxford - had just finished reading out to the attendees of Westminster Forum's seminar (PDF) his proposed ten commandments for artificial intelligence. He's been thinking about this on our behalf; an attendee had objected that you can't expect malware writers not to adopt AI enhancements. Hence the reply.

The first problem is: what counts as AI? Anders Sandberg has quipped that it's only called AI until it starts working, and then it's called automation. Right now, though, to many people "AI" seems to mean "any technology I don't understand".

Croft's commandment number nine seems particularly ironic: this week saw the first pedestrian killed by a self-driving car. Early guesses are that the weakest links were the underemployed human backup driver and the vehicle's LIDAR misreading of a person walking a bicycle. Whatever the jaywalking laws are in Arizona, most of us instinctively believe that in a cage match between a two-ton automobile and an unprotected pedestrian, the car is always at fault.

Thinking locally, self-driving cars ought to be the most ethics-dominated use of AI, if only because people don't like being killed by machines. Globally, however, you could argue that AI might be better turned to finding the best ways to phase out cars entirely.

We may have better luck persuading criminal justice systems either to require transparency, fairness, and accountability in the machine learning systems that predict recidivism and decide who can be helped, or to drop such systems entirely.

The less-tractable issues with AI are on display in the still-developing Facebook and Cambridge Analytica scandals. You may argue that Facebook is not AI, but the platform certainly uses AI for fraud detection, to determine what we see, and to decide which pieces of our data to deploy on behalf of advertisers. All on its own, Facebook is a perfect exemplar of the problems the Australian privacy advocate Roger Clarke foresaw in 2004 after examining the first social networks. In 2012, Clarke wrote, "From its beginnings and onward throughout its life, Facebook and its founder have demonstrated privacy-insensitivity and downright privacy-hostility." The same could be said of other actors throughout the tech industry.

Yonatan Zunger is undoubtedly right when he argues in the Boston Globe that computer science has an ethics crisis. However, just fixing computer scientists isn't enough if we don't also fix the business and regulatory environment built on "ask forgiveness, not permission". Matt Stoller writes in the Atlantic about the decline since the 1970s of American political interest in supporting small, independent players and limiting monopoly power. The tech giants have exported this approach widely; now the only other government big enough to counter them is the EU.

The meetings I've attended of academic researchers considering ethics issues with respect to big data have demonstrated all the careful thoughtfulness you could wish for. The November 2017 meeting of the Research Institute in Science of Cyber Security provided numerous worked examples in talks from Kat Hadjimatheou of the University of Warwick, C Marc Taylor of the UK Research Integrity Office, and Paul Iganski of the Centre for Research and Evidence on Security Threats (CREST). Their explanations of the decisions they've had to make about the practical applications and cases that have come their way are particularly valuable.

On the industry side, the problem is not just that Facebook has piles of data on all of us but that the feedback loop from us to the company is indirect. Since the Cambridge Analytica scandal broke, some commenters have indicated that being able to do without Facebook is a luxury many can't afford and that in some countries Facebook *is* the internet. That in itself is a global problem.

Croft's is one of at least a dozen efforts to come up with an ethics code for AI. The Open Data Institute has its Data Ethics Canvas framework to help people working with open data identify ethical issues. The IEEE has published some proposed standards (PDF) that focus on various aspects of inclusion - language, cultures, non-Western principles. Before all that, in 2011, Danah Boyd and Kate Crawford penned Six Provocations for Big Data, which included a discussion of the need for transparency, accountability, and consent. The World Economic Forum published its top ten ethical issues in AI in 2016. Also in 2016, a Stanford University group published a report that tried to fend off regulation by arguing it was impossible.

If the industry proves to be right and regulation really is impossible, it won't be because of the technology itself but because of the ecosystem that nourishes amoral owners. "Ethics of AI", as badly as we need it, will be meaningless if the large piles of data needed for training are all owned by a few very large organizations and well-financed criminals; it's equivalent to talking about "ethics of agriculture" when all the seeds and land are owned by a child's handful of global players. A pre-emptive antitrust movement in 2018 would find a way to separate ownership of data from ownership of the AI, algorithms, and machine learning systems that work on them.


Illustrations: HAL.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.
