
There may be trouble ahead...

One of the first things the magician and paranormal investigator James Randi taught all of us in the skeptical movement was the importance of consulting the right kind of expert.

Randi made this point with respect to tests of paranormal phenomena such as telekinesis and ESP. At the time - the 1970s and 1980s - there was a vogue for sending psychic claimants to physicists for testing. A fair amount of embarrassment ensued. As Randi liked to say, physicists, like many other scientists, are not experienced in the art of deception. Instead, they are trained to assume that things in their lab do not lie to them.

Not a safe assumption when they're trying to figure out how a former magician has moved an empty plastic film can a few millimeters, apparently with just the power of their mind. Put in a magician who knows how to set up the experiment so the claimant can't cheat, and *then* if the effect still occurs you know something genuinely weird is going on.

I was reminded of this while reading this quote from Fabio Urbina, Filippa Lentzos, Cédric Invernizzi, and Sean Ekins, writing in Nature Machine Intelligence: "When we think of drug discovery, we normally do not consider technology misuse potential. We are not trained to consider it, and it is not even required for machine learning research."

The article itself is scary enough that one friend reacted to it with, "This is the apocalypse". The researchers undertook a "thought experiment" after the Swiss Federal Institute for NBC Protection (Spiez Laboratory) asked their company, Collaborations Pharmaceuticals Inc, to give a presentation on how their AI technology could be misused in drug discovery at its biennial conference on new technologies and their implications for the Chemical and Biological Weapons Conventions. They work, they write, in an entirely virtual world; their molecules exist only in their computers. It had never previously occurred to them to wonder whether the machine learning models they were building to help design new molecules that could be developed into life-saving drugs could be turned to generating toxins instead. Asked to consider it, they quickly discovered that it was disturbingly easy to generate prospective lethal neurotoxins. Because: generating potentially helpful molecules required creating models to *avoid* toxicity - which meant being able to predict its appearance.

As they go on to say, our general discussions of the potential harms AI can enable are very limited. The biggest headlines go to putting people out of work; the rest is privacy, discrimination, fairness, and so on. Partly, that's because those are the ways AI has generally been most visible: automation that deskills or displaces humans, or algorithms that make decisions about government benefits, employment, education, content recommendations, or criminal justice outcomes. But it's also because the researchers working on this technology blinker their imaginations, focusing on how they want their new idea to work.

The demands of marketing don't help. Anyone pursuing any form of research, whether funded by industry or government grant, has to make the case for why they should be given the money. So of course in describing their work they focus on the benefits. Those working on self-driving cars are all about how they'll be safer than human drivers, not scary possibilities like widespread hundred-car pileups if hackers were to find a way to exploit unexpected software bugs to make them all go haywire at the same time.

Sadly, many technology journalists pick up only the happy side. On Wednesday, as one tiny example, the Washington Post published a cheery article about ElliQ, an Alexa-like AI device "designed for empathy" and meant to keep lonely older people company. The commenters saw more of the dark side than the writer did: the ongoing $30 subscription, data collection and potential privacy invasion, and, especially, the potential for emotional manipulation as the robot tells its renter what it (not she, as per writer Steven Zeitchik) calculates they want to hear.

It's not as though this is the first such discovery. Generative Adversarial Networks (GANs), turned to malicious use, are the basis of deepfakes. If you can use a new technology for good, why *wouldn't* you be able to use it for evil? Cars drive sick kids to hospitals and help thieves escape. Computer programmers write word processors and viruses; the Internet connects us directly to medical experts and sends us misinformation; cryptography protects both good and bad secrets; robots help us and collect our data. Why should AI be different?

I'd like to think that this paper will succeed where decades of prior experience have failed, and make future researchers think more imaginatively about how their work can be abused. Sadly, it seems a forlorn hope.

In her 2020 book Smoke and Mirrors, which examines how hype interferes with our ability to make good decisions about new technology, Gemma Milne warns that hype keeps us from asking the crucial question: is this new technology worth its cost? Potential abuse is part of that cost-benefit assessment. We need researchers to think about what can go wrong much earlier in the development cycle - and we need them to add experts in the art of forecasting trouble (science fiction writers, perhaps?) to their teams. Even technology that looks like magic...isn't.

Illustrations: ElliQ (company PR photo).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.
