" /> net.wars: May 2019 Archives


May 17, 2019

Genomics snake oil

In 2011, as part of an investigation into the possible genetic origins of the streak of depression that ran through her family, the Danish neurobiologist Lone Frank had her genome sequenced and interviewed many participants in the newly opening field of genomics that followed the first complete sequencing of the human genome. In her resulting book, My Beautiful Genome, she commented on the "Wild West" developing around retail genetic testing offered to consumers over the web. Absurd claims abounded, such as using DNA testing to find your perfect mate or direct your child's education.

This week, at an event organized by Breaking the Frame, New Zealand researcher Andelka M. Phillips presented the results of her ongoing study of the same landscape. The testing is just as unreliable, the claims even more absurd - choose your diet according to your DNA! find out what your superpower is! - and the number of companies she's collected has reached 289 while the cost of the tests has shrunk and the size of the databases has ballooned. Some of this stuff makes astrology look good.

To be perfectly clear: it's not, or not necessarily, the gene sequencing itself that's the problem. To be sure, even the best lab cannot produce a reading that represents reality from poor-quality samples. And many samples are indeed poor, especially those snatched from bed sheets or excavated from garbage cans and sent to sites promising surreptitious testing (I have verified these exist, but I refuse to link to them) by people who want to check whether their partner is unfaithful or whether their child is in fact a blood relative. But essentially, for health tests at least, everyone is using more or less the same sequencing technology.

More crucial is the interpretation and analysis, as Helen Wallace, the executive director of GeneWatch UK, pointed out. For example, companies differ in how they identify geographical regions and frame populations, and in the makeup of their databases of reference contributions. This is how a pair of identical Canadian twins got varying, non-matching test results from five companies, one Ashkenazi Jew got six different ancestry reports, and, according to one study, up to 40% of DNA results from consumer genetic tests are false positives. As I type, the UK Parliament is conducting an inquiry into commercial genomics.
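
To make the reference-database point concrete, here is a toy sketch of my own - not any testing company's actual pipeline - in which the SNPs, allele frequencies, and population labels are all invented. It shows how the same genotype can come out with different "ancestry" percentages simply because two reference panels were built from different samples.

    # Toy sketch only: invented SNPs, frequencies, and population labels.
    # Not any company's real method; it just shows that the reported
    # percentages depend on which reference panel the DNA is compared against.
    import math

    # Two reference panels for the "same" two populations, built from
    # different samples, so their allele-frequency estimates differ.
    PANEL_X = {"pop_A": [0.10, 0.80, 0.30, 0.60],
               "pop_B": [0.40, 0.50, 0.70, 0.20]}
    PANEL_Y = {"pop_A": [0.25, 0.65, 0.45, 0.50],
               "pop_B": [0.30, 0.55, 0.60, 0.35]}

    genotype = [1, 2, 1, 0]  # count of the "1" allele at each SNP (0, 1, or 2)

    def log_likelihood(freqs, geno):
        """Binomial log-likelihood of the genotype given a population's allele frequencies."""
        return sum(math.log(math.comb(2, g) * p**g * (1 - p)**(2 - g))
                   for p, g in zip(freqs, geno))

    for name, panel in (("Panel X", PANEL_X), ("Panel Y", PANEL_Y)):
        lls = {pop: log_likelihood(freqs, genotype) for pop, freqs in panel.items()}
        peak = max(lls.values())
        weights = {pop: math.exp(ll - peak) for pop, ll in lls.items()}
        total = sum(weights.values())
        shares = {pop: round(100 * w / total) for pop, w in weights.items()}
        print(name, shares)  # Panel X: ~19/81; Panel Y: ~43/57 for the same DNA

The numbers themselves mean nothing; the point is that identical DNA plus a different reference database produces a different-looking pie chart, which is roughly what the Canadian twins discovered.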

Phillips makes the data available to anyone who wants to explore it. Meanwhile, so far she's examined the terms of service and privacy policies of 71 companies, and finds them filled with technology company-speak, not medical information. They do not explain these services' technical limitations or the risks involved. Yet it's so easy to think of disastrous scenarios: this week, an American gay couple reported that their second child's birthright citizenship is being denied under new State Department rules. A false DNA test could make a child stateless.

Breaking the Frame's organizer, Dave King, believes that a subtle consequence of the ancestry tests - the ones everyone was quoting in 2018 that tell you you're 13% German, 1% Somali, and whatever else - is to reinforce the essentially racist notion that "Germanness" has a biological basis. He particularly disliked the services that purport to identify children's talents; as Phillips highlighted, these claim that testing can save parents money they might otherwise waste on impossible dreams. That way lies Gattaca and generations of children who don't get to explore their own abilities because they've already been written off.

Even more disturbing questions surround what happens with these large databases of perfect identifiers. In the UK, last October the Department of Health and Social Care announced its ambition to sequence 5 million genomes. Included was a plan to begin offering, in 2019, whole genome sequencing to all seriously ill children and to adults with specific rare diseases or hard-to-treat cancers as part of their care. In other words, the most desperate people are being asked first, a prospect Phil Booth, coordinator of medConfidential, finds disquieting. Because so much of this is still research, not medical care, he said, it - like the late, despised care.data - "blurs the line around what is your data, and between what the NHS was and what some would like it to be". Exploitation of the nation's medical records as raw material for commercial purposes is not what anyone thought they were signing up for. And once you have that giant database of perfect identifiers... there's the Home Office, which has already been caught using NHS data to hunt illegal immigrants and demanding DNA tests from immigration applicants.

So Booth asked this: why now? Genetic sequencing is 20 years old, and it has yet to come close to producing the benefits predicted for it. We do not have personalized medicine or, except in a very few cases (such as a percentage of breast cancers), drugs tailored to genetic makeup. "Why not wait until it's a better bet?" he asked. Instead of spending billions today - billions that, as an audience member pointed out, would produce better health more widely if spent on improving the environment, nutrition, and water - the proposal is to spend them on a technology that may still not be producing results 20 years from now. Why not wait, say, ten years and see if it's still worth doing?


Illustrations: DNA double helix (via Wikimedia)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 10, 2019

Slime trails

In his 2000 book, Which Lie Did I Tell?, the late, great screenwriter William Goldman called the brilliant 1963 Stanley Donen movie Charade a "money-loser". Oh, sure, it was a great success - for itself. But it cost Hollywood hundreds of millions of dollars in failed attempts to copy its magical romantic-comedy-adventure-thriller mixture. (Goldman's own version, 1992's The Year of the Comet, was - his words - "a flop".) In this sense, Amazon may be the most expensive company ever launched in Silicon Valley because it encouraged everyone to believe losing money in 17 of its first 18 years doesn't matter.

Uber has been playing up this comparison in the run-up to its May 2019 IPO. However, two things make it clear the comparison is false. First - duh - losing money just isn't a magical sign of a good business, even in the Internet era. Second, Amazon had scale on its side, as well as a pioneering infrastructure it was able later to monetize. Nothing about transport scales, as Hubert Horan laid out in 2017; even municipalities can't make Uber cheaper than public transit. Horan's analysis of Uber's IPO filing is scathing. Investment advisers love to advise investing in companies that make popular products, but *not this time*.

Meanwhile, network externalities abound. The Guardian highlights the disparity between Uber's drivers, who have been striking this week, and its early investors, who will make billions even while the company says it intends to continue slicing drivers' compensation. The richest group, says the New York Times, have already decamped to lower-tax states.

If Horan is right, however, the impending shift of billions of dollars from drivers and greater fools to already-wealthy early investors will arguably be a regulatory failure on the part of the Securities and Exchange Commission. I know the rule of the stock market is "buyer beware", but without the trust conferred by regulators there will *be* no buyers, not even pension funds. Everyone needs government to ensure fair play.

Somewhere in one of his 500-plus books, the science and science fiction writer Isaac Asimov commented that he didn't like to fly because in a plane crash his odds of survival would be poor: "It's not sporting." In fact, most passengers in plane crashes survive unharmed - but not, obviously, in the recent Boeing crashes. Blame, as Madeleine Elish correctly predicted in her paper on moral crumple zones, is being sprayed widely - at faulty sensors, at software issues, and particularly at the humans who build and operate these things.

The reality seems more likely to be a perfect storm comprising numerous components: 1) the same kind of engineering-management disconnect that doomed Challenger in 1986, 2) trying to compensate with software for a hardware problem, 3) poorly thought-out cockpit warning light design, 4) the number and complexity of vendors involved, and 5) receding regulators. As hybrid cyber-physical systems become more pervasive, it seems likely we will see many more situations where small decisions made by different actors will collide to create catastrophes, much like untested drug interactions.

Again, it's the regulatory failure that is most alarming. Any company can screw up. The failure of any complex system can lead to companies all blaming each other. There are always scapegoats. But in an industry where public perception of safety is paramount, regulators are crucial in ensuring trust. The flowchart at the Seattle Times says it all about how the FAA has abdicated its responsibility. It's particularly infuriating because many in the cybersecurity industry cite aviation as a fine example of what an industry can do to promote safety and security when the parties recognize their collective interests are best served by collaborating and sharing data. Regulators who audit and test provide an essential backstop.

The 6% of the world that flies relies on being able to trust regulators to ensure their safety. Even if the world's airlines now decide that they can't trust the US system, where are they going to go for replacement aircraft? Their own governments will have to step in where the US is failing, as the EU already does in privacy and antitrust. Does the environment win, if people decide it's too risky to fly? Is this a plan?

I want regulators to work. I want to be able to fly with reasonable odds of survival, have someone on the job to detect financial fraud, and be able to trust that medical devices are safe. I don't care how smart you are, no consumer can test these things for themselves, any more than we can tell if a privacy policy is worth the electrons it's printed on.

On that note, last week on Twitter Demos researcher Carl Miller, author of The Death of the Gods, made one of his less-alarming suggestions. Let's replace "cookie": "I'm willing to bet we'd be far less willing to click yes, if the website asked if we [are] willing to have a 'slime trail', 'tracking beacon' or 'surveillance agent' on our browser."

I like "slime trail", which extends to cover the larger use of "cookie" in "cookie crumbs" to describe the lateral lists that show the steps by which you arrived at the current page. Now, when you get a targeted ad, people will sympathize as you shout, "I've been slimed!"


Illustrations: Bill Murray, slimed in Ghostbusters (1984).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 3, 2019

Reopening the source

"There is a disruption coming." Words of doom?

Several months back we discussed Michael Salmony's fear that the Internet is about to destroy science. Salmony reminded me that his comments came in a talk on the virtues of the open economy, and then noted the following dangers:

- Current quality-assurance methods (peer review, quality editing, fact checking, etc.) are being undermined, potentially leading to an avalanche of attention-seeking open garbage drowning out quality research;
- The excellent, high-minded ideals of open access (breaking the hold of the big controllers, making all knowledge freely accessible, etc.) are being subverted by models that ask authors (or their funders) to spend thousands of dollars per article to make it "openly accessible", again privileging the rich and well-connected.

The University of Bath associate professor Joanna Bryson rather agreed with Salmony, also citing the importance of peer review. So I stipulate: yes, peer review is crucial for doing good science.

In a posting deploring the death of the monograph, Bryson notes that, like publishers in other fields, many academic publishers are small and struggle for sustainability. She also points to a Dutch presentation arguing that open access costs more.

Since she, as an academic researcher, has skin in this game, we have to give weight to her thoughts. However, many researchers dissent, arguing that academic publishers like Elsevier and Springer profit from an unfair and unsustainable business model. Either way, an existential crisis is rolling toward academic publishers like a giant spherical concrete cow.

So to yesterday's session on the ten-year future of research, hosted by the European Health Forum Gastein (https://www.ehfg.org/projects-events/health-futures/) and sponsored by Elsevier. The quote of doom we began with was voiced there.

The focal point was a report (PDF), the result of a study by Elsevier and Ipsos MORI. Their efforts eventually generated three scenarios: 1) "brave open world", in which open access publishing, collaboration, and extensive data sharing rule; 2) "tech titans", in which technology companies dominate research; 3) "Eastern ascendance", in which China leads. The most likely future is a mix of the three - and several of us agreed that the mix is already our present. We surmised, cattily, that the event was really looking for a solution to Elsevier's future. That remains cloudy.

The rest does not. For the last year I've been listening to discussions about how academic work can find greater and more meaningful impact. While journal publication remains essential for promotions and tenure within academia, funders increasingly demand that research produce new government policies, changed public conversations, and fundamentally more effective practice.

Similarly, is there any doubt that China is leading innovation in areas like AI? The country is rising fast. As for "tech titans", while there's no doubt that these companies lead in some fields, it's not clear that they are following the lead of the great 1960s and 1970s corporate labs like Bell Labs, Xerox PARC, and IBM Watson, which invested in fundamental research with no connection to products. While Google, Facebook, and Microsoft researchers do impressive work, Google is the only one publicly showing off research that seems unrelated to its core business.

So how long is ten years? A long time in technology, sure: in 2009, Twitter, Android, and "there's an app for that" were new(ish), the iPad was a year from release, smartphones were getting GPS, netbooks were rising, and 3D was poised to change the world of cinema. "The academic world is very conservative," someone at my table said. "Not much can change in ten years."

Despite Sci-Hub, the push for open access is not just another Internet plot to make everything free. Much of it is coming from academics, funders, librarians, and administrators. In the last year, the University of California dropped Elsevier rather than modify its open access policy or pay extra for the privilege of keeping it. Research consortia in Sweden, Germany, and Hungary have had similar disputes; a group of Norwegian institutions recently agreed to pay €9 million a year to cover both access to Elsevier's journals and the publishing costs of their researchers' expected 2,000 articles a year.

What is slow to change is the incentive structure within academia. Rising scholars are judged much as they were 50 years ago: how much have they published, and where? The conflict means that younger researchers whose work has immediate consequences find themselves forced to choose between prioritizing career management - via journal publication - and more immediately effective efforts such as training workshops and newspaper coverage that alert practitioners in the field to new problems and solutions. Choosing the latter may help tens of thousands of people - at the cost of a "You haven't published" stall to their careers. Equally difficult, today's structure of departments and journals is poorly suited to the increasing range of multi-, inter-, and trans-disciplinary research. Where such projects can find publication remains a conundrum.

All of that is without considering other misplaced or perverse incentives in the present system: novel ideas struggle to emerge, replication largely either does not happen or fails, and journal impact factors are overvalued. The Internet has opened up beneficial change: Ben Goldacre's COMPare project identifies dubious practices such as outcome switching and misreported findings, there is a growing push to publish data sets, and preprint servers give much wider access to new work. It may not be all good, but it certainly isn't all bad.


Illustrations: A spherical cow jumping over the moon (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.