" /> net.wars: May 2019 Archives


May 31, 2019

Moral machines

What are AI ethics boards for?

I've been wondering about this for some months now, particularly in April, when Google announced the composition of its new Advanced Technology External Advisory Council (ATEAC) - and a week later announced its dissolution. The council was dropped after a media storm that began with a letter from 50 of Google's own employees objecting to the inclusion of Kay Coles James, president of the Heritage Foundation.

At The Verge, James Vincent suggests the boards are for "ethics washing" rather than instituting change. The aborted Google board, for example, was intended, as member Joanna Bryson writes, to "stress test" policies Google had already formulated.

However, corporations are not the only active players. The new Ada Lovelace Institute's research program is intended to shape public policy in this area. The AI Now Institute is studying social implications. Data & Society is studying AI use and governance. Altogether, Brent Mittelstadt counts 63 public-private initiatives, and says the principles they're releasing "closely resemble the four classic principles of medical ethics" - an analogy he finds uncertain.

Last year, when Steven Croft, the Bishop of Oxford, proposed ten commandments for artificial intelligence, I also tended to be dismissive: who's going to listen? What company is going to choose a path against its own financial interests? A machine learning expert friend has a different complaint: corporations are not the problem; governments are. No matter what companies decide, governments always demand carve-outs for the intelligence and security services, and once they have them, game over.

I did appreciate Croft's contention that all commandments are aspirational. An agreed set of principles would at least provide a standard against which to measure technology and decisions. Principles might be particularly valuable for guiding academic researchers, some of whom currently regard social media as a convenient public laboratory.

Still, human rights law already supplies that sort of template. What can ethics boards do that the law doesn't already? If discrimination is already wrong, why do we need an ethics board to add that it's wrong when an algorithm does it?

At a panel kicking off this year's Privacy Law Scholars conference, Ryan Calo suggested an answer: "We need better moral imagination." In his view, a lot of the discussion of AI ethics centers on form rather than content: how should it be applied? Should there be a certification regime? Or perhaps compliance requirements? Instead, he proposed that we should be looking at how AI changes the affordances available to us. His analogy: retrieving the sailors left behind in the water after you destroyed their ship was an ethical obligation until the arrival of a new technology - submarines - made it infeasible.

For Calo, too many conversations about AI avoid considering the content. As a frustrating example of how those conversations go: "The primary problem around the ethics of driverless cars is not how they will reshape cities or affect people with disabilities and ownership structures, but whether they should run over the nuns or the schoolchildren."

As anyone who's ever designed a survey knows, defining the questions is crucial. In her posting, Bryson expresses regret that the intended board will not now be called into action to consider and perhaps influence Google's policy. But the fact that Google, not the board, was to devise policies and set the questions about them makes me wonder how effective it could have been. So much depends on who imagines the prospective future.

The current Kubrick exhibition at London's Design Museum pays considerable homage to Kubrick's vision and imagination in creating the mysterious and wonderful universe of 2001: A Space Odyssey. Both the technology and the furniture still look "futuristic" despite having been designed more than 50 years ago. What *has* dated is the women: they are still wearing 1960s stewardess uniforms and hats, and the one woman with more than a few lines spends them discussing her husband and his whereabouts; the secrecy surrounding the appearance of a monolith in a crater on the moon is left to the men to discuss. Calo found the same thing in rereading Isaac Asimov's Foundation trilogy: "Not one woman leader for four books," he said. "And people still smoke!" Yet they are surrounded by interstellar travel and mind-reading devices.

So while what these boards are doing now is not inspiring - as Helen Nissenbaum said in the same panel, "There are so many institutes announcing principles as if that's the end of the story" - maybe what they *could* do might be. What if, as Calo suggested, there are human and civil rights commitments AI allows us to make that were impossible before?

"We should be imagining how we can not just preserve extant ethical values but generate new ones based on affordances that we now have available to us," he said, suggesting as one example "mobility as a right". I'm not really convinced that our streets are going to be awash in autonomous vehicles any time soon, but you can see his point. If we have the technology to give independent mobility to people who are unable to drive themselves...well, shouldn't we? You may disagree on that specific idea, but you have to admit: it's a much better class of conversation.tw


Illustrations: Space Station receptionist from 2001: A Space Odyssey.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 24, 2019

Name change

In 2014, six months after the Snowden revelations, engineers began discussing how to harden the Internet against passive pervasive surveillance. Among the results have been efforts like Let's Encrypt, EFF's Privacy Badger, and HTTPS Everywhere. Real inroads have been made into closing some of the Internet's affordances for surveillance and improving security for everyone.

Arguably the biggest remaining serious hole is the domain name system, which was created in 1983. The DNS's historical importance is widely underrated; it was essential in making email and the web usable enough for mass adoption before search engines. Then it stagnated. Today, this crucial piece of Internet infrastructure still behaves as if everyone on the Internet can trust each other. We know the Internet doesn't live there any more; in February the Internet Corporation for Assigned Names and Numbers, which manages the DNS, warned of large-scale spoofing and hijacking attacks. The NSA is known to have exploited it, too.

The problem is the unprotected channel between the computer into which we type human-readable names such as pelicancrossing.net and the computers that translate those names into numbered addresses the Internet's routers understand, such as 216.92.220.214. The fact that routers all trust each other is routinely exploited for the captive portals we often see when we connect to public wi-fi systems. These are the pages that universities, cafes, and hotels set up to redirect Internet-bound traffic to their own page so they can force us to log in, pay for access, or accept terms and conditions. Most of us barely think about it, but old-timers and security people see it as a technical abuse of the system.
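
To make that exposure concrete, here is a minimal sketch of how a lookup works today - my illustration, using only Python's standard library and the example name above, not anything drawn from the column or from any particular resolver's documentation. The operating system's stub resolver sends the query, in cleartext, to whatever DNS server the local network supplied.

    import socket

    # Ask the OS stub resolver - and, through it, whatever DNS server the
    # network handed out via DHCP - for the addresses behind a name. The
    # query and answer travel as unencrypted UDP (or TCP) on port 53, so
    # the cafe wi-fi, the ISP, or a captive portal can read, log, or
    # rewrite them in transit.
    for family, _, _, _, sockaddr in socket.getaddrinfo(
            "pelicancrossing.net", 80, proto=socket.IPPROTO_TCP):
        if family == socket.AF_INET:
            print(sockaddr[0])  # an IPv4 address such as 216.92.220.214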

Several hijacking incidents raised awareness of the DNS's vulnerability as long ago as 1998, when security researchers Matt Blaze and Steve Bellovin discussed it at length at Computers, Freedom, and Privacy. Twenty-one years on, there have been numerous proposals for securing the DNS, most notably DNSSEC, which offers an upward chain of authentication. However, while DNSSEC solves validation, it still leaves the connection open to logging and passive surveillance, and the difficulty of implementing it has meant that since 2010, when ICANN signed the global DNS root, uptake has barely reached 14% worldwide.

In 2018, the IETF adopted DNS-over-HTTPS (DoH) as a standard. Essentially, this sends DNS requests over the same secure channel browsers use to visit websites. Adoption is expected to proceed rapidly because it's being backed by Mozilla, Google, and Cloudflare, who jointly intend to turn it on by default in Chrome and Firefox. In a public discussion at this week's Internet Service Providers Association conference, a fellow panelist suggested that moving DNS queries to the application level opens up the possibility that two different apps on the same device might use different DNS resolvers - and get different responses to the same domain name.
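
For comparison, here is an equally minimal sketch of the same lookup done the DoH way. It assumes Cloudflare's public resolver and its JSON convenience endpoint - my choice for illustration; RFC 8484 itself specifies a binary wire format, which is what a browser would actually use. The point is that, to the local network, the query is indistinguishable from any other HTTPS request.

    import json
    import urllib.request

    # Resolve the same name over HTTPS. The request is encrypted all the
    # way to the chosen resolver, so it can't be logged or rewritten by
    # the local network - which is the whole point, and the whole problem
    # for filters and captive portals that rely on doing exactly that.
    url = "https://cloudflare-dns.com/dns-query?name=pelicancrossing.net&type=A"
    request = urllib.request.Request(url, headers={"Accept": "application/dns-json"})
    with urllib.request.urlopen(request) as response:
        reply = json.load(response)
    for record in reply.get("Answer", []):
        print(record["data"])  # the A record(s) returned by the resolver

Run either snippet and you should get the same addresses; the difference is who else gets to see, log, or rewrite the question along the way.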

Britain's first public notice of DoH came a couple of weeks ago in the Sunday Times, which billed it as a "Warning over Google Chrome's new threat to children". This is a wild overstatement, but it's not entirely false: DoH will allow users to bypass the parts of Britain's filtering system that depend on hijacking DNS requests to divert visitors to blank pages or warnings. An engineer would probably argue that if Britain's many-faceted filtering system is affected, it's because the system relies on workarounds that shouldn't have existed in the first place. In addition, because DoH sends DNS requests over web connections, the traffic can't be logged or distinguished from the mass of web traffic, so it will also render moot some of the UK's (and EU's) data retention rules.

For similar reasons, DoH will break captive portals in unfriendly ways. A browser with DoH turned on by default will ignore the hotel/cafe/university settings and instead direct DNS queries via an encrypted channel to whatever resolver it's been set to use. If the network requires authentication via a portal, the connection will fail - a usability problem that will have to be solved.

There are other legitimate concerns. Bypassing the DNS resolvers run by local ISPs in favor of those belonging to, say, Google, Cloudflare, and Cisco, which bought OpenDNS in 2015, will weaken local ISPs' control over the connections they supply. This is both good and bad: ISPs will be unable to insert their own ads - but they also can't use DNS data to identify and block malware as many do now. The move to DoH risks further centralizing the Internet's core infrastructure and strengthening the power of companies most of us already feel have too much control.

The general consensus, however, is that like it or not, this thing is coming. Everyone is still scrambling to work out exactly what to think about it and what needs to be done to mitigate the accompanying risks, as well as to find solutions to the resulting problems. It was clear from the ISPA conference panel that everyone has mixed feelings, though the exact mix of those feelings - and which aspects are identified as problems - differs among ISPs, rights activists, and security practitioners. But it comes down to this: whether you like this particular proposal or not, the DNS cannot be allowed to remain in its present insecure state. If you don't want DoH, come up with a better proposal.


Illustrations: DNS diagram (via Б.Өлзий at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 17, 2019

Genomics snake oil

In 2011, as part of an investigation she conducted into the possible genetic origins of the streak of depression that ran through her family, the Danish neurobiologist Lone Frank had her genome sequenced and interviewed many participants in the newly opening field of genomics that followed the first complete sequencing of the human genome. In her resulting book, My Beautiful Genome, she commented on the "Wild West" developing around retail genetic testing being offered to consumers over the web. Absurd claims such as using DNA testing to find your perfect mate or direct your child's education abounded.

This week, at an event organized by Breaking the Frame, New Zealand researcher Andelka M. Phillips presented the results of her ongoing study of the same landscape. The testing is just as unreliable, the claims even more absurd - choose your diet according to your DNA! find out what your superpower is! - and the number of companies she's collected has reached 289 while the cost of the tests has shrunk and the size of the databases has ballooned. Some of this stuff makes astrology look good.

To be perfectly clear: it's not, or not necessarily, the gene sequencing itself that's the problem. To be sure, the best lab cannot produce a reading that represents reality from poor-quality samples. And many samples are indeed poor, especially those snatched from bed sheets or excavated from garbage cans to send to sites promising surreptitious testing (I have verified these exist, but I refuse to link to them) to those who want to check whether their partner is unfaithful or whether their child is in fact a blood relative. But essentially, for health tests at least, everyone is using more or less the same technology for sequencing.

More crucial is the interpretation and analysis, as Helen Wallace, the executive director of GeneWatch UK, pointed out. For example, companies differ in how they identify geographical regions and frame populations, and in the makeup of their databases of reference contributions. This is how a pair of identical Canadian twins got varying and non-matching test results from five companies, one Ashkenazi Jew got six different ancestry reports, and, according to one study, up to 40% of DNA results from consumer genetic tests are false positives. As I type, the UK Parliament is conducting an inquiry into commercial genomics.

Phillips makes the data available to anyone who wants to explore it. Meanwhile, so far she's examined the terms of service and privacy policies of 71 companies, and finds them filled with technology company-speak, not medical information. They do not explain these services' technical limitations or the risks involved. Yet it's so easy to think of disastrous scenarios: this week, an American gay couple reported that their second child's birthright citizenship is being denied under new State Department rules. A false DNA test could make a child stateless.

Breaking the Frame's organizer, Dave King, believes that a subtle consequence of the ancestry tests - the things everyone was quoting in 2018 that tell you that you're 13% German, 1% Somalian, and whatever else - is to reinforce the essentially racist notion that "Germanness" has a biological basis. He also particularly disliked the services claiming they can identify children's talents; these claim, as Phillips highlighted, that testing can save parents money they might otherwise waste on impossible dreams. That way lies Gattaca and generations of children who don't get to explore their own abilities because they've already been written off.

Even more disturbing questions surround what happens with these large databases of perfect identifiers. In the UK, last October the Department of Health and Social Care announced its ambition to sequence 5 million genomes. Included was the plan to begin, in 2019, offering whole genome sequencing to all seriously ill children and to adults with specific rare diseases or hard-to-treat cancers as part of their care. In other words, the most desperate people are being asked first, a prospect Phil Booth, coordinator of medConfidential, finds disquieting. Because so much of this is still research, not medical care, he said, like the late, despised care.data it "blurs the line around what is your data, and between what the NHS was and what some would like it to be". Exploitation of the nation's medical records as raw material for commercial purposes is not what anyone thought they were signing up for. And once you have that giant database of perfect identifiers...there's the Home Office, which has already been caught using NHS data to hunt illegal immigrants and demanding DNA tests from immigrants.

So Booth asked this: why now? Genetic sequencing is 20 years old, and to date it has yet to come close to producing the benefits predicted for it. We do not have personalized medicine or, except in a very few cases (such as a percentage of breast cancers), drugs tailored to genetic makeup. "Why not wait until it's a better bet?" he asked. Instead of spending billions today - billions that, as an audience member pointed out, would produce better health more widely if spent on improving the environment, nutrition, and water - the proposal is to spend them on a technology that may still not be producing results 20 years from now. Why not wait, say, ten years and see if it's still worth doing?


Illustrations: DNA double helix (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 10, 2019

Slime trails

In his 2000 book, Which Lie Did I Tell?, the late, great screenwriter William Goldman called the brilliant 1963 Stanley Donen movie Charade a "money-loser". Oh, sure, it was a great success - for itself. But it cost Hollywood hundreds of millions of dollars in failed attempts to copy its magical romantic-comedy-adventure-thriller mixture. (Goldman's own version, 1992's The Year of the Comet, was - his words - "a flop".) In this sense, Amazon may be the most expensive company ever launched in Silicon Valley because it encouraged everyone to believe losing money in 17 of its first 18 years doesn't matter.

Uber has been playing up this comparison in the run-up to its May 2019 IPO. However, two things make it clear the comparison is false. First - duh - losing money just isn't a magical sign of a good business, even in the Internet era. Second, Amazon had scale on its side, as well as a pioneering infrastructure it was able later to monetize. Nothing about transport scales, as Hubert Horan laid out in 2017; even municipalities can't make Uber cheaper than public transit. Horan's analysis of Uber's IPO filing is scathing. Investment advisers love to advise investing in companies that make popular products, but *not this time*.

Meanwhile, network externalities abound. The Guardian highlights the disparity between Uber's drivers, who have been striking this week, and its early investors, who will make billions even while the company says it intends to continue slicing drivers' compensation. The richest group, says the New York Times, have already decamped to lower-tax states.

If Horan is right, however, the impending shift of billions of dollars from drivers and greater fools to already-wealthy early investors will arguably be a regulatory failure on the part of the Securities and Exchange Commission. I know the rule of the stock market is "buyer beware", but without the trust conferred by regulators there will *be* no buyers, not even pension funds. Everyone needs government to ensure fair play.

Somewhere in one of his 500-plus books, the science/fiction writer Isaac Asimov commented that he didn't like to fly because in case of a plane crash his odds of survival were poor: "It's not sporting." In fact, most passengers in plane crashes survive unharmed - though not, obviously, in the recent Boeing crashes. Blame, as Madeleine Elish correctly predicted in her paper on moral crumple zones, is being sprayed widely across the humans and systems that build and operate these things: faulty sensors, pilots, and software issues.

The reality seems more likely to be a perfect storm comprising numerous components: 1) the same kind of engineering-management disconnect that doomed Challenger in 1986, 2) trying to compensate with software for a hardware problem, 3) poorly thought-out cockpit warning light design, 4) the number and complexity of vendors involved, and 5) receding regulators. As hybrid cyber-physical systems become more pervasive, it seems likely we will see many more situations where small decisions made by different actors will collide to create catastrophes, much like untested drug interactions.

Again, regulatory failure is the most alarming. Any company can screw up. The failure of any complex system can lead to companies all blaming each other. There are always scapegoats. But in an industry where public perception of safety is paramount, regulators are crucial in ensuring trust. The flowchart at the Seattle Times says it all about how the FAA has abdicated its responsibility. It's particularly infuriating because many in the cybersecurity industry cite aviation as a fine example of what an industry can do to promote safety and security when the parties recognize their collective interests are best served by collaborating and sharing data. Regulators who audit and test provide an essential backstop.

The 6% of the world that flies relies on being able to trust regulators to ensure their safety. Even if the world's airlines now decide that they can't trust the US system, where are they going to go for replacement aircraft? Their own governments will have to step in where the US is failing, as the EU already does in privacy and antitrust. Does the environment win, if people decide it's too risky to fly? Is this a plan?

I want regulators to work. I want to be able to fly with reasonable odds of survival, have someone on the job to detect financial fraud, and be able to trust that medical devices are safe. I don't care how smart you are, no consumer can test these things for themselves, any more than we can tell if a privacy policy is worth the electrons it's printed on.

On that note, last week on Twitter Demos researcher Carl Miller, author of The Death of the Gods, made one of his less-alarming suggestions. Let's replace "cookie": "I'm willing to bet we'd be far less willing to click yes, if the website asked if we [are] willing to have a 'slime trail', 'tracking beacon' or 'surveillance agent' on our browser."

I like "slime trail", which extends to cover the larger use of "cookie" in "cookie crumbs" to describe the lateral lists that show the steps by which you arrived at the current page. Now, when you get a targeted ad, people will sympathize as you shout, "I've been slimed!"


Illustrations: Bill Murray, slimed in Ghostbusters (1984).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 3, 2019

Reopening the source

"There is a disruption coming." Words of doom?

Several months back we discussed Michael Salmony's fear that the Internet is about to destroy science. Salmony reminded me that his comments came in a talk on the virtues of the open economy, and then noted the following dangers:

- Current quality-assurance methods (peer-review, quality editing, fact checking etc) are being undermined. Thus potentially leading to an avalanche of attention-seeking open garbage drowning out the quality research;
- The excellent high-minded ideals (breaking the hold of the big controllers, making all knowledge freely accessible etc) of OA are now being subverted by models that actually ask authors (or their funders) to spend thousands of dollars per article to get it "openly accessible". Thus again privileging the rich and well connected.

The University of Bath associate professor Joanna Bryson rather agreed with Salmony, also citing the importance of peer review. So I stipulate: yes, peer review is crucial for doing good science.

In a posting deploring the death of the monograph, Bryson notes that, like other forms of publishing, many academic publishers are small and struggle for sustainability. She also points to a Dutch presentation arguing that open access costs more.

Since she, as an academic researcher, has skin in this game, we have to give weight to her thoughts. However, many researchers dissent, arguing that academic publishers like Elsevier and Springer profit from an unfair and unsustainable business model. Either way, an existential crisis is rolling toward academic publishers like a giant spherical concrete cow.

So to yesterday's session on the ten-year future of research, hosted by European Health Forum Gastein and sponsored by Elsevier. The quote of doom we began with was voiced there.

The focal point was a report (PDF), the result of a study by Elsevier and Ipsos MORI. Their efforts eventually generated three scenarios: 1) "brave open world", in which open access publishing, collaboration, and extensive data sharing rule; 2) "tech titans", in which technology companies dominate research; 3) "Eastern ascendance", in which China leads. The most likely future is a mix of the three - and several of us agreed that that mix is already our present. We surmised, cattily, that this was really an event looking for a solution to Elsevier's future. That remains cloudy.

The rest does not. For the last year I've been listening to discussions about how academic work can find greater and more meaningful impact. While journal publication remains essential for promotions and tenure within academia, funders increasingly demand that research produce new government policies, change public conversations, and provide fundamentally more effective practice.

Similarly, is there any doubt that China is leading innovation in areas like AI? The country is rising fast. As for "tech titans", while there's no doubt that these companies lead in some fields, it's not clear that they are following the lead of the great 1960s and 1970s corporate labs like Bell Labs, Xerox PARC, and IBM's Watson research center, which invested in fundamental research with no connection to products. While Google, Facebook, and Microsoft researchers do impressive work, Google is the only one publicly showing off research that seems unrelated to its core business.

So how long is ten years? A long time in technology, sure: in 2009, Twitter, Android, and "there's an app for that" were new(ish), the iPad was a year from release, smartphones got GPS, netbooks were rising, and 3D was poised to change the world of cinema. "The academic world is very conservative," someone at my table said. "Not much can change in ten years."

Despite Sci-Hub, the push to open access is not just another Internet plot to make everything free. Much of it is coming from academics, funders, librarians, and administrators. In the last year, the University of California dropped Elsevier rather than modify its open access policy or pay extra for the privilege of keeping it. Research consortia in Sweden, Germany, and Hungary have had similar disputes; a group of Norwegian institutions recently agreed to pay €9 million a year to cover access to Elsevier's journals and the publishing costs of its expected 2,000 articles.

What is slow to change is incentives within academia. Rising scholars are judged much as they were 50 years ago: how much have they published, and where? The conflict means that younger researchers whose work has immediate consequences find themselves forced to choose between prioritizing career management - via journal publication - and more immediately effective efforts such as training workshops and newspaper coverage that alert practitioners in the field to new problems and solutions. Choosing the latter may help tens of thousands of people - at the cost of a "You haven't published" stall to their careers. Equally difficult, today's structure of departments and journals is poorly suited to the increasing range of multi-, inter-, and trans-disciplinary research. Where such projects can find publication remains a conundrum.

All of that is without considering the other misplaced or perverse incentives in the present system: novel ideas struggle to emerge, replication largely does not happen or fails, and journal impact factors are overvalued. The Internet has opened up beneficial change: Ben Goldacre's COMPare project identifies dubious practices such as outcome switching and misreported findings, the push to publish data sets is growing, and preprint servers give much wider access to new work. It may not be all good, but it certainly isn't all bad.


Illustrations: A spherical cow jumping over the moon (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.