" /> net.wars: December 2019 Archives


December 27, 2019

Runaway

For me, the scariest presentation of 2019 was a talk given by Cornell University professor Vitaly Shmatikov about computer models. It's partly a matter of reframing the familiar picture; for years, Bill Smart and Cindy Grimm have explained to attendees at We Robot that we don't necessarily really know what it is that neural nets are learning when they're deep learning.

In Smart's example, changing a few pixels in an image can change the machine learning algorithm's perception of it from "Abraham Lincoln" to "zebrafish". Misunderstanding what's important to an algorithm is the kind of thing research scientist Janelle Shane exploits when she pranks neural networks and asks them to generate new recipes or Christmas carols from a pile of known examples. In her book, You Look Like a Thing and I Love You, she presents the inner workings of many more examples.
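For a rough sense of how such a perturbation is found, here is a minimal sketch of one standard technique, the fast gradient sign method; the model and image are random stand-ins, so this illustrates only the mechanics, not the specific Lincoln-to-zebrafish example Smart describes:

    # A rough sketch of the fast gradient sign method on a stand-in, untrained
    # model; the classes are meaningless here, but the mechanics are the same
    # as for a real image classifier.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Stand-in classifier: 3x32x32 images -> 10 classes.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    model.eval()

    image = torch.rand(1, 3, 32, 32, requires_grad=True)  # pretend photograph
    original_class = model(image).argmax(dim=1)            # what the model "sees"

    # Gradient of the loss with respect to the *input*, not the weights.
    loss = nn.functional.cross_entropy(model(image), original_class)
    loss.backward()

    # Nudge every pixel a tiny amount in the direction that increases the loss.
    epsilon = 0.1
    adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

    new_class = model(adversarial).argmax(dim=1)
    print("before:", original_class.item(), "after:", new_class.item())

Against a trained classifier, a perturbation like this is usually imperceptible to humans yet often enough to change the predicted label.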

All of this explains why researchers Kate Crawford and Trevor Paglen's ImageNet Roulette experiment tagged my Twitter avatar as "the Dalai Lama". I didn't dare rerun it, because how can you beat that? The experiment over, would-be visitors are now redirected to Crawford and Paglen's thoughtful examination of the problems they found in the tagging and classification system used to train these algorithms.

Crawford and Paglen write persuasively about the world view captured by the inclusion of categories such as "Bad Person" and "Jezebel" - real categories in the Person classification subsystem. This aspect has gone largely unnoticed until now because conference papers focused on the non-human images in ten-year-old ImageNet and its fellow training databases. Then there is the *other* problem: the pictures of people used to train the algorithm were appropriated from search engines, photo-sharing sites such as Flickr, and video of students walking their university campuses. Even if you would have approved the use of your forgotten Flickr feed to train image recognition algorithms, I'm betting you wouldn't have agreed to be literally tagged "loser" so the algorithm can apply that tag later to a child wearing sunglasses. Why is "gal" even a Person subcategory, still less the most-populated one? Crawford and Paglen conclude that datasets are "a political intervention". I'll take "Dalai Lama", gladly.

Again, though, all of this fits with and builds upon an already known problem: we don't really know which patterns machine learning algorithms identify as significant. In his recent talk to a group of security researchers at UCL, however, Shmatikov, whose previous work includes training an algorithm to recognize faces despite obfuscation, outlined a deeper problem: these algorithms "overlearn". How do we stop them from "learning" (and then applying) unwanted lessons? He says we can't.

"Organically, the model learns to recognize all sorts of things about the original data that were not intended." In his example, in training an algorithm to recognize gender using a dataset of facial images, alongside it will learn to infer race, including races not represented in the training dataset, and even identities. In another example, you can train a text classifier to infer sentiment - and the model also learns to infer authorship.

Options for counteraction are limited. Censoring unwanted features doesn't work because a) you don't know what to censor; b) you can't censor something that isn't represented in the training data; and c) that type of censoring damages the algorithm's accuracy on the original task. "Either you're doing face analysis or you're not." Shmatikov and Congzheng Song explain their work more formally in their paper Overlearning Reveals Sensitive Attributes.

"We can't really constrain what the model is learning," Shmatikov told a group of security researchers at UCL recently, "only how it is used. It is going to be very hard to prevent the model from learning things you don't want it to learn." This drives a huge hole through GDPR, which relies on a model of meaningful consent. How do you consent to something no one knows is going to happen?

What Shmatikov was saying, therefore, is that from a security and privacy point of view, the typical question we ask, "Did the model learn its task well?", is too limited. "Security and privacy people should also be asking: what else did the model learn?" Some possibilities: it could have memorized the training data; discovered orthogonal features; performed privacy-violating tasks; or incorporated a backdoor. None of these are captured in assessing the model's accuracy in performing the assigned task.

My first reaction was to wonder whether a data-mining company like Facebook could use Shmatikov's explanation as an excuse when it's accused of allowing its system to discriminate against people - for example, in digital redlining. Shmatikov thought not - at least, no more than their work helps people find out what their models are really doing.

"How to force the model to discover the simplest possible representation is a separate problem worth invdstigating," he concluded.

So: we can't easily predict what computer models learn when we set them a task involving complex representations, and we can't easily get rid of these unexpected lessons while retaining the usefulness of the models. I was not the only person who found this scary. We are turning these things loose on the world and incorporating them into decision making without the slightest idea of what they're doing. Seriously?


Illustrations: Vitaly Shmatikov (via Cornell).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 20, 2019

Humans in, bugs out

At the Guardian, John Naughton ponders our insistence on holding artificial intelligence and machine learning to a higher standard of accuracy than the default standard - that is, us.

Sure. Humans are fallible, flawed, prejudiced, and inconsistent. We are subject to numerous cognitive biases. We see patterns where none exist. We believe liars we like and distrust truth-tellers for picayune reasons. We dislike people who tell unwelcome truths and like people who spread appealing, though shameless, lies. We self-destruct, and then complain when we suffer the consequences. We evaluate risk poorly, fearing novel and recent threats more than familiar and constant ones. And on and on. In 10,000 years we have utterly failed to debug ourselves.

My inner failed comedian imagines the frustrated AI engineer muttering, "Human drivers kill 40,000 people in the US alone every year, but my autonomous car kills *one* pedestrian *one* time, and everybody gets all 'Oh, it's too dangerous to let these things out on the roads'."

New always scares people. But it seems natural to require new systems to do better than their predecessor; otherwise, why bother?

Part of the problem with Naughton's comparison is that machine learning and AI systems aren't really separate from us; they're humans all the way down. We create the algorithms, code the software, and allow them to mine the history of flawed human decisions, from which they make their new decisions. If humans are the problem with human-made decisions, then we are as much or more the problem with machine-made decisions.

I also think Naughton's frustrated AI researchers have a few details the wrong way round. While it's true that self-driving cars have driven millions of miles with very few deaths and human drivers were responsible for 36,560 deaths in 2018 in the US alone, it's *also* true that it's still rare for self-driving cars to be truly autonomous: Human intervention is still required startlingly often. In addition, humans drive in a far wider variety of conditions and environments than self-driving cars are as yet authorized to do. The idea that autonomous vehicles will be vastly safer than human drivers is definitely an industry PR talking point, but the evidence is not there yet.

We'd also note that a clear trend in AI books this year has been to point out all the places where "automated" systems are really "last-mile humans". In Ghost Work, Mary L. Gray and Siddharth Suri document an astonishing array of apparently entirely computerized systems where remote humans intervene in all sorts of unexpected ways through task-based employment, while in Behind the Screen Sarah T. Roberts studies the specific case of the raters of online content. These workers are largely invisible (hence "ghost") because the companies who hire them, via subcontractors, think it sounds better to claim their work is really AI.

Throughout "automation's last mile", humans invisibly rate online content, check that the Uber driver picking you up is who they're supposed to be, and complete other tasks to hard for computers. As Janelle Shane writes in You Look Like a Thing and I Love You, the narrower the task you give an AI the smarter it seems. Humans are the opposite: no one thinks we're smart while we're getting bored by small, repetitive tasks; it's the creative struggle of finding solutions to huge, complex problems that signals brilliance. Some of AI's most ardent boosters like to hope that artificial *general* intelligence will be able to outdo us in solving our most intractable problems, but who is going to invent that? Us, if it ever happens (and it's unlikely to be soon).

There is also a problem with scale and replication. While a single human decision may affect billions of people, there is always a next time when it will be reconsidered and reinterpreted by a different judge who takes into account differences of context and nuance. Humans have flexibility that machines lack, while computer errors can be intractable, especially when bugs are produced by complex interactions. The computer scientist Peter Neumann has been documenting the risks of over-relying on computers for decades.

However, a lot of our need for computers to prove themselves to a superhuman standard is social, cultural, and emotional. AI adds a layer of remoteness and removes some of our sense of agency. With humans, we think we can judge character, talk them into changing their mind, or at least get them to explain the decision. In the just-linked 2017 event, the legal scholar Mireille Hildebrandt differentiated between law - flexible, reinterpretable, modifiable - and administration, which is what you get if a rules-based expert computer system is in charge. "Contestability is the heart of the rule of law," she said.

At the very least, we hope that the human has enough empathy to understand the impact their decision will have on their fellow human, especially in matters of life and death.

We give the last word to Agatha Christie, who decisively backed humans in her 1969 book, Hallowe'en Party, in which her alter ego, Ariadne Oliver, tells Hercule Poirot, "I know there's a proverb which says, 'To err is human' but a human error is nothing to what a computer can do if it tries."


Illustrations: Artist Dominic Wilcox's concept self-driving car (as seen at the Science Museum, July 2019).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 13, 2019

Becoming a science writer

As an Association of British Science Writers board member, I occasionally speak to science PhD students and postdocs about science writing. Since the most recent of these excursions was just this week, I thought I'd summarize some of what I've said.

To trained scientists aiming to make the switch: you are starting from a more knowledgeable place than the arts graduates who mostly populate this field. You already know how to investigate and add to a complex field of study, have a body of knowledge from which to reliably evaluate new claims, and know the significant contributors to your field and adjacent ones. What you need to learn are basic journalism skills such as interviewing, identifying stories, pitching them to venues where they might fit, remaining on the right side of libel law, and journalistic ethics and culture. Your new deadlines will seem really short!

Figuring out what kind of help you need is where an organization like the ABSW (and its counterparts in other countries) can help, first by offering opportunities for networking with other science writers, and second by providing training and resources. ABSW maintains, for example, a page that includes some basics and links.

Besides that, if you put "So You Want to Be a Science Writer" into your favorite search engine, you will find many guides from reputable sources such as other science writers' associations and university programs. I particularly like Ivan Oransky's talk for the National Association of Science Writers, because he begins with "my first failures".

Every career path is idiosyncratic enough that no one can copy its specifics. I began my writing career by founding The Skeptic magazine in 1987. Through the skeptics, I met all sorts of people, including one who got me my first writing-related job as a temporary subeditor on a computer magazine. Within weeks, I knew the editors of all the other magazines on its floor, and began writing features for them. In 1991, when I got online and sent my first email, I decided to specialize in the Internet because it was obviously the future of communication. A friend advises that if you find a fast-moving field, there will always be people willing to pay you to explain it to them.

So: I self-published, networked early and often - I joined the ABSW as soon as I was qualified - and luckily landed on a green field at the beginning of a complex and far-reaching social, cultural, political, and technological revolution. Today's early-career science writers will have to work harder to build their own networks than in the early 1990s, when we all met regularly at press conferences and shows - but they have vastly further reach than we had.

I have never had a job, so I can't tell people how to get one. I can, however, observe that if you focus solely on traditional media you will be aiming at a shrinking number of slots. Think more broadly about what science communication is, who does it, and in what context. The kind of journalism that used to be the sole province of newspapers and news magazines now has a home in NGOs, which also hire people who can do solid research, crunch data, and think creatively about new areas for investigation. You should also broaden your idea of "media" and "science communication". Few can be Robin Ince or Richard Wiseman, who combine comedy, magic, and science into sell-out shows, but anyone can find non-traditional contexts in which to communicate science.

At the moment, commercial money is going into podcasts; people are building big followings for niche interests on YouTube and through self-publishing ebooks; and constant tweeters are important communicators, as botanist James Wong proves every day. Edward Hasbrouck, at the National Writers Union, has published solid advice on writing for digital formats: look to build revenue streams. The Internet offers many opportunities, but, as Hasbrouck writes, many are invisible to traditional publishing; as he also writes, traditional employment is just one of writers' many business models.

The big difficulty for trained academics is rethinking how you approach telling a story. Forget the academic structure of: 1) here is what I am going to say; 2) this is what I'm saying; 3) this is my summary of what I just said. Instead, when writing for the general public, put your most important findings first and tell your specific audience why it matters to *them*. Then show why they can have confidence in your claim by explaining your methods and how your findings fit into the rest of the relevant body of scientific knowledge. (Do not use net.wars as your model!)

Over time, you will probably want to branch out into other fields. Do not fear this; you know how to learn a complex field, and if you can learn one you can learn another.

It's inevitable that you will make some mistakes. When it happens, do your best to correct them, learn from how you made them, and avoid making the same one again.

Finally, just a couple of other resources. My favorite book on writing is William Goldman's Adventures in the Screen Trade. He has solid advice for story structure no matter what you're writing. A handout I wrote for a blogging workshop for scientists (PDF) has some (I hope, useful) writing tips. Good luck!


Illustrations: Magician James Randi communicates science, Florida 2016.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 6, 2019

The dot-org of our discontents

On so many sides, in so many ways, the sale of .org is a tangled thicket of stakeholder unhappiness. There seem to be six major categories of complaints: sentimental, financial, practical, structural, ethical, and paranoid.

First, some background. The domain name system of which .org is a part was devised in the mid-1980s by Paul Mockapetris. Even though no rules limit registrations in .org, the idea that it is used by non-commercial organizations persists. In 1998, as the Internet was being commercialized, the original one-man manager, Jon Postel, was replaced by the Internet Corporation for Assigned Names and Numbers, which oversees the network of registries (each manages one top-level domain) and registrars (which sell domain names inside those TLDs). At the same time, the gTLDs were opened up to competition; the change led in 2002 to .org's being handed off to the Internet Society-created, non-profit Public Interest Registry. Six months ago, the Internet Society announced that PIR would drop its non-profit status; two weeks ago came the sale to newly-formed Ethos Capital. Suspicious minds shouted betrayal, and the Nonprofit Technology Enterprise Network promptly set up SaveDotOrg. Its nearly 13,000 signatories include hundreds of NGOs and Internet organizations.

At The Register, Kieren McCarthy - the leading journalistic expert on the DNS and its governance - laid out what little was known about the sale and the resulting conflict. He followed up with the silence that met the complaints; the discontent on view at the Internet Governance Forum; an interview with ISOC CEO Andrew Sullivan, who said "Most people don't care one way or another" and that only a court order would stop the sale; and finally the news that the opportunity to sell PIR came out of the blue. Only late on did the price emerge: $1.14 billion. The Internet Society says it will use the money to further its mission to promote the good of the Internet community at large.

So, to the six categories. The Old Net remains sentimentally attached to the idea of .org as a home for non-profits such as the Internet infrastructure managers IETF and ISOC, human rights NGOs such as the ACLU and Amnesty International, and worldwide resources such as Wikipedia, the Internet Archive, and Project Gutenberg. But, as the New York Times comments, this image of .org is deceptive; it's also the home of the commercial entity Craigslist and dozens of astroturf fronts for corporate interests.

At Techdirt, Mike Masnick traces the ethics, finding insider connections. The slightly paranoid concerns surround the potential for the registry owner to engage in censorship. The practical issue is the removal of the price cap; an organization with a long-time Internet presence can in theory simply register a new domain name if the existing one becomes too expensive, but in practice the switching costs are substantial, and we all pay them as links break all over the web. A site like Public Domain Review could be held to ransom by rapidly rising prices. Finally, the structural concern is that yet another piece of the Internet infrastructure is being sold off to centralized private interests who will conveniently forget the promises they make at the time of the sale.

The most interesting is financial: New Zealand fund manager Lance Wiggs thinks ISOC is undercharging by $1 billion; he also thinks they should publish far more detail.

Most of Wiggs' questions remained unanswered after yesterday evening's community call organized by SaveDotOrg and NTEN. The question-answering session included Sullivan, Ethos CEO Erik Brooks, Ethos Chief Purpose Officer Nora Abusitta-Ouri, EFF attorneys Mitch Stoltz and Cara Gagliano, PIR CEO Jon Nevett, and ISOC Ireland head Brandt Dainow.

Sullivan said both that there were other suitors for PIR and that, because speed was of the essence in closing the deal, it was necessary to negotiate under non-disclosure agreements. The EFFers were skeptical. The Ethos Capital folks sprayed reassurances and promises of community outreach, but were evasive about nailing these down with binding, legally enforceable contracts. Least impressed was Dainow, which shows there's disagreement within ISOC itself. I'm no financier, but I do know this: the more you're pressured to close a deal quickly, the more you should distrust the terms.

To some extent, all of this is a consequence of the fundamental problem with the DNS: we have no consensus on what it's for. This was already obvious in 1997, when I first wrote about it. Then, as now, insiders were enraging the broader community by making deals among themselves without wider consultation - a habit that agreed principles and purposes could constrain. In 2004 I asked, "Does it follow geography, trademarks and company names, or types of users? Is it a directory or a marketing construct? Should names be automatically guessable? What about, instead, dividing up the Net by language? Or registered company names, like .plc.uk and .ltd.uk? Or content, like .xxx or .kids? Why not have an electronic commerce space where retailers register according to the areas they deliver to?" None of these is more obviously right than another, but as long as there are no agreed answers, disputes like these will keep emerging.


Illustrations: Copernican map of the universe (from the Stanford collection, via Public Domain Review).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.