" /> net.wars: July 2018 Archives


July 27, 2018

Think horses, not zebras

These two articles made a good pairing: Oscar Schwartz's critique of AI hype in the Guardian, and Jennings Brown's takedown of IBM's Watson in real-world contexts. Brown's tl;dr: "This product is a piece of shit," a Florida doctor reportedly told IBM in the leaked memos on which Gizmodo's story is based. "We can't use it for most cases."

Watson has had a rough ride lately: in August 2017 Brown catalogued mounting criticisms of the company and its technology; that June, MIT Technology Review did, too. All three agree: IBM's marketing has outstripped Watson's technical capability.

That's what Schwartz is complaining about: even when scientists make modest claims, media and marketing hype them to the hilt. As a result, instead of focusing on design and control issues such as how to encode social fairness into algorithms, we're reading Nick Bostrom's suggestion that an uncontrolled superintelligent AI would kill humanity in the interests of making paper clips or the EU's deliberation about whether robots should have rights. These are not urgent issues, and focusing on them benefits only vendors who hope we don't look too closely at what they're actually doing.

Schwartz's own first example is the Facebook chat bots that were intended to simulate negotiation-like conversations. Just a couple of days ago someone referred to this as bots making up their own language and cited it as an example of how close AI is to the Singularity. In fact, because they lacked the right constraints, they just made strange sentences out of normal English words. The same pattern is visible with respect to self-driving cars.

You can see why: wild speculation drives clicks - excuse me, monetized eyeballs - but understanding what's wrong with how most of us think about accuracy in machine learning is *mathy*. Yet understanding the technology's very real limits is crucial to making good decisions about it.
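One way to make that "mathy" point concrete is the base-rate arithmetic behind this column's title: for a rare condition, even an impressively "accurate" diagnostic system produces mostly false alarms. The numbers below are hypothetical, chosen only to illustrate the effect:

```python
# Base-rate sketch: why "99% accurate" misleads for rare conditions.
# All figures are illustrative, not drawn from any real diagnostic system.
prevalence = 0.001    # 1 in 1,000 patients actually has the condition
sensitivity = 0.99    # P(test positive | has condition)
specificity = 0.99    # P(test negative | healthy)

true_pos = prevalence * sensitivity              # sick patients flagged
false_pos = (1 - prevalence) * (1 - specificity) # healthy patients flagged

# Positive predictive value: chance a flagged patient is actually sick
ppv = true_pos / (true_pos + false_pos)
print(f"PPV: {ppv:.1%}")  # roughly 9% - most positives are false alarms
```

With these numbers, a "99% accurate" test is wrong about nine times out of ten when it says yes, which is exactly the kind of counterintuitive result that gets lost in the hype.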

With medicine, we're all particularly vulnerable to wishful thinking, since sooner or later we all rely on it for our own survival (something machines will never understand). The UK in particular is hoping AI will supply significant improvements because of the vast amount of patient data - that is, training data - the NHS has to throw at these systems. To date, however, medicine has struggled to use information technology effectively.

Attendees at We Robot have often discussed what happens when the accuracy of AI diagnostics outstrips that of human doctors. At what point does defying the AI's decision become malpractice? At this year's conference, Michael Froomkin presented a paper studying the unwanted safety consequences of this approach (PDF).

The presumption is that the AI system's ability to call on the world's medical literature on top of generations of patient data will make it more accurate. But there's an underlying problem that's rarely mentioned: the reliability of the medical literature these systems are built on. The true extent of this issue began to emerge in 2005, when John Ioannidis published a series of papers estimating that 90% of medical research is flawed. In 2016, Ioannidis told Retraction Watch that systematic reviews and meta-analyses are also being gamed because of the rewards and incentives involved.

The upshot is that when doctors and AI disagree, it is likely to be unclear where to point the skepticism. Is the AI genuinely seeing patterns and spotting things the doctor can't? (In some cases, such as radiology, apparently yes. But clinical trials and peer review are needed.) Does common humanity mean the doctor finds clues in the patient's behavior and presentation that an AI can't? (Almost certainly.) Is the AI neutral in ways that biased doctors may not be? Stories of doctors not listening to patients, particularly women, are legion. Yet the most likely scenario is that the doctor will be the person entering data - which means the machine will rely on the doctor's interpretation of what the patient says. In all these conflicts, what balance do we tell the AI to set?

Long before Watson cures cancer, we will have to grapple with which AIs have access to which research. In 2015, the team responsible for drafting Liberia's 2014 ebola recovery plan wrote a justifiably angry op-ed in the New York Times. They had discovered that thousands of Liberians could have been spared ebola had a 1982 paper in Annals of Virology been affordable for them to read; it warned that Liberia needed to be included in the ebola virus endemic zone. Discussions of medical AI to date appear to handwave away this sort of issue, yet cost structures, business models, and use of medical research are crucial. Is the future open access, licensing and royalties, or all-you-can-eat subscriptions?

The best selling point for AI is that its internal corpus of medical research can be updated far faster than doctors' brains can be. As David Epstein wrote at ProPublica in 2017, many procedures and practices become entrenched, and doctors are difficult to dissuade from prescribing them even after they've been found useless. In the US, he added, the 21st Century Cures Act, passed in December 2016, threatens to make all this worse by lowering standards of evidence.

All of these are pressing problems no medical AI can solve. The problem, as usual, is us.

Illustrations: Watson wins at Jeopardy (via Wikimedia)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 20, 2018

Competing dangerously

It is just over a year since the EU fined Google what seemed a huge amount (€2.42 billion), and here we are again: this week the EU commissioner for competition, Margrethe Vestager, levied an even bigger €4.34 billion fine over "serious illegal behavior". At issue were Google's licensing terms for its Android apps and services, which essentially leveraged its ownership of the operating system to ensure its continued market dominance in search as the world moved to mobile. Google has said it will appeal; it is also appealing the 2017 fine. The present ruling gives the company 90 days to change behaviour or face further fines of up to 5% of daily worldwide turnover.

Google's response is that its rules have allowed it to offer Android to manufacturers at no charge, have made Android phones easier to use, and are efficient for both developers and consumers. The ruling, writes CEO Sundar Pichai, will "upset the balance of the Android ecosystem".

Google's claim that users are free to install other browsers and search engines, and are used to downloading apps, is true but specious. Research has repeatedly found that some 95% of users never change default settings. Defaults *matter*, and Google certainly knows this. When you reach a certain size - Android runs on 80% of European and worldwide smart mobile devices, and 95% of the licensable mobile market outside of China - the decisions you make about choice architecture determine the behavior of large populations.

Also, the EU's ruling isn't about a user's specific choice on their individual smartphone. Instead, it's based on three findings: 1) Google's licensing terms made access to the Play Store contingent on pre-installing Google's search app and Chrome; 2) Google paid some large manufacturers and network operators to exclusively pre-install Google's search app; 3) Google prevented manufacturers that pre-install Google apps from selling *any* devices using non-Google-approved ("forked") versions of Android. It puts the starting date at 2011, "when Google became dominant".

There are significant similarities here to the US's 1998 antitrust case against Microsoft over tying Internet Explorer to Windows. Back then, Microsoft was the Big Evil on the block, and there were serious concerns that it would use Internet Explorer as a vector for turning the web into a proprietary system under its control. For a good account, see Charles H. Ferguson's 1999 book, High St@kes, No Prisoners. Ferguson would know: his web page design start-up, Vermeer, was the subject of an acquisition battle between Microsoft and Netscape. Google, which was founded in 1998, ultimately benefited from the outcome, because it helped keep the way open for "alternative" browsers such as Google's own Chrome.

There are also similarities to the EU's 2004 ruling against Microsoft, which required the company to stop bundling its media player with Windows and to disclose the information manufacturers needed to integrate non-Microsoft networking and streaming software. The EU's fine was the largest-ever at the time: €497 million. At that point, media players seemed like important gateways to content. The significant gateway drug turned out to be Web browsers; either way, Microsoft and streaming have both prospered.

Since 1998, however, in another example of EU/US divergence, the US has largely abandoned enforcing antitrust law. As Lina M. Khan pointed out last year, it's no longer the case that waiting will produce two guys in a garage with a new technology that up-ends the market and its biggest players. The EU explains carefully in its announcement that Android is different from Apple's iOS or Blackberry because, as vertically integrated companies that do not license their products, those firms are not part of the same market. In the Android market, however, it says, "...it was Google - and not users, app developers, and the market - that effectively determined which operating systems could prosper."

Too little, too late, some are complaining, and more or less correctly: the time for this action was 2009; even better, says the New York Times, block in advance the mergers that are creating these giants. Antitrust actions against technology companies are almost always a decade late. Others buy Google's argument that consumers will suffer, but Google is a smart company full of smart engineers who are entirely capable of figuring out well-designed yet neutral ways to present choices, just as Microsoft did before it.

There's additional speculation that Google might have to recoup lost revenues by charging licensing fees; that Samsung might be the big winner, since it already has its own full competitive suite of apps; and that the EU should fine Apple, too, on the basis that the company's closed system bars users from making *any* unapproved choices.

Personally, I wish the EU had paid more attention to the ways Google leverages the operating system to enable the user tracking that fuels its advertising business. The requirement to tie every phone to a Gmail address is an obvious candidate for regulatory disruption; so is the requirement to use it to access the Play Store. The difficulty of operating a phone without being signed into Google has ratcheted up over time - and it seems wholly unnecessary *unless* the purpose is to make user tracking easier. This issue may yet find focus under GDPR.

Illustrations: Margrethe Vestager.



July 13, 2018

Exporting the Second Amendment

One thing about a time of fast technological change is that it's easy to lose track of things in the onslaught. This week alone, the UK Information Commissioner's Office fined Facebook the pre-GDPR maximum of £500,000; Uber is firing human safety drivers as it scales back its tests of autonomous vehicles; and Twitter is currently deleting more than 1 million fake accounts a *day*.

Until a couple of days ago, one such forgotten moment in internet history was the 2013 takedown of the 3D printing designs site Defcad after the US government demanded removal of its blueprints for various guns. In the years since, Andy Greenberg writes at Wired, Defcad owner Cody Wilson went on to sue the US State Department, arguing that in demanding removal of his gun blueprints from the internet the government was violating both the First Amendment (freedom of speech) and the Second (the right to bear arms). Wilson has now won his case in a settlement.

It's impossible for anyone with a long memory of the internet's development to read this and not be immediately reminded of the early 1990s battles surrounding the PC-based encryption software PGP. In a 1993 interview for a Guardian piece about his investigation, PGP creator Phil Zimmermann explicitly argued that keeping strong cryptography available for public use, like the right to bear arms enshrined in the Second Amendment, was essential to limit the power of the state.

The reality is that crypto is much more of a leveler than guns are: few governments are so small that a group of armed civilians can match their military might. In World War II, only governments had enough resources to devise and crack the strongest encryption. Today, which government has a cluster the size of GAFA's?

More immediately relevant is the fact that the law invoked in both cases - Wilson's and Zimmermann's - is the same one: the International Traffic in Arms Regulations. Based on crypto's role in World War II, ITAR restricted strong encryption as a weapon of strategic importance. The Zimmermann investigation focused on whether he had exported PGP to other countries by uploading it to the internet. The contemporaneous Computers, Freedom, and Privacy conferences quivered with impassioned fury over the US's insistence that export restrictions were essential. It all changed around 1996, when cryptographer Daniel Bernstein won his court case against the US government over ITAR's restrictions. By then, cryptography's importance to ecommerce had made restrictions untenable anyway. Lifting the restrictions did not end the arguments over law enforcement access; those continue today.

The battles over cryptography, however, are about a technology that is powerfully important in preserving the privacy and security of everyone's data, from banks to retailers to massive numbers of innocent citizens. Human rights organizations argue that that vast, innocent majority has a right to protect the confidentiality of its conversations with doctors, lawyers, and best friends. In addition, the issues surrounding encryption are the same irrespective of location and time. For nearly three decades, myriad governments have cited the dangers of terrorists, drug dealers, pedophiles, and organized crime in demanding free access to encrypted data. Similarly, privacy activists worldwide have responded with the need to protect journalists, whistleblowers, human rights activists, victims of domestic violence, and other vulnerable people from secret snooping, and with the wrongness of mass surveillance.

Arguments over guns, however, play out as differently outside the US as arguments about data protection, competition, and antitrust laws do. Put simply, outside the US there is no Second Amendment, and the idea that guns should be restricted is much less controversial. European friends often comment on how little Americans trust their government.

For this reason, it's likely that publishing blueprints for DIY guns, though now explicitly ruled legal in the US, will give other governments a new excuse for censoring the internet. In the US, the Electronic Frontier Foundation backed Wilson as a matter of protecting free speech; it's doubtful that human rights organizations elsewhere will see gun designs in the same way.

One major change since this case first came up: 3D printing has not become anything like the mass phenomenon its proponents were predicting in 2013, when many thought it was the coming thing, and scientists like Hod Lipson were imagining the new shapes and functions that strange materials, composited molecule by molecule, would imminently create. Even then, though, few people had 3D printers in their homes.

But today...although 3D printing has made some inroads in manufacturing and prototyping, consumers still find 3D printers too expensive for their limited usefulness, even though they can be fun. Some gain access to them through Hackspaces/FabLabs/Makerspaces, but that movement, though important and valuable, seems similarly to have largely stalled a few years back. Lipson's future may still happen. But it isn't happening yet to any appreciable degree.

Instead, the future that's rushing at us is the Internet of Things, where the materials are largely familiar and what's different is that they're laced with electronics that make them programmable. There is more to worry about in "smart" guns than in readily downloadable designs for guns.


Illustrations: Ultimaker 3D printer in London, 2014 (via Wikimedia)


July 6, 2018

This is us

After months of anxiety among digital rights campaigners such as the Open Rights Group and the Electronic Frontier Foundation, the European Parliament has voted 318-278 against fast-tracking a particularly damaging set of proposed changes to copyright law.

There will be a further vote on September 10, so, as a number of commentators are reminding us on Twitter, it's not over yet.

The details of the European Commission's alarmingly wrong-headed approach have been thoroughly hashed out for the last year by Glyn Moody. The two main bones of contention are euphoniously known as Article 11 and Article 13. Article 11 (the "link tax") would give publishers the right to require licenses (that is, payment) for the text accompanying links shared on social media, and Article 13 (the "upload filter") would require sites hosting user content to block uploads of copyrighted material.

Muffett quite rightly points out the astonishing characterization, in a Billboard interview with MEP Helga Trüpel, of the objections to Articles 11 and 13 as "pro-Google". There's a sudden outburst of people making a similar error: even the Guardian's initial report saw the vote as letting tech giants (specifically, YouTube) off the hook for sharing their revenues. Paul McCartney's last-minute plea hasn't helped this perception. What was an argument about the open internet is now being characterized as a tussle over revenue share between a much-loved billionaire singer/songwriter and a greedy tech giant that exploits artists.

Yet the opposition was never about Google. In fact, most of the active opponents of this expansion of copyright and liability would probably be lobbying *against* Google on subjects like privacy, data protection, tax avoidance, and market power. We just happen to agree with Google on this particular topic because we are aware that forcing all sites to assume liability for the content their users post will damage the internet for everyone *else*. Google - and its YouTube subsidiary - has both the technology and the financing to play the licensing game.

But licensing and royalties are a separate issue from mandating that all sites block unauthorized uploads. The former is about sharing revenues; the latter is about copyright enforcement, and conflating them helps no one. The preventive "copyright filter" that appears essential for compliance with Article 13 would fail the "prior restraint" test of the US First Amendment - not that the EU needs to care about that. As copyright-and-technology consultant Bill Rosenblatt writes, licensing is a mess that this law will do nothing to fix. If artists and their rights holders want a better share of revenues, they could make it a *lot* easier for people to license their work. This is a problem they have to fix themselves, rather than requiring lawmakers to solve it for them by placing the burden on the rest of us. The laws are what they are because for generations rights holders made them.

Article 11, which is or is not a link tax depending on who you listen to, is another matter. Germany (2013) and Spain (2014) have already tried something similar, and in both cases it was widely acknowledged to have been a mistake - so much so that one of the opponents of this new attempt is the Spanish newspaper El País.

My guess is that the focus on Google's role in lobbying against these laws - for example, Digital Music News reports that Google spent more than $36 million opposing Article 13 - is preparation for the next round in September. Google and Facebook are increasingly the targets people focus on when they're thinking about internet regulation. Recast the battle as one between deserving artists and a couple of greedy American big businesses, proponents think, and it will be an easier sell to legislators.

But there are two of them and billions of us, and the opposition to Articles 11 and 13 was never about them. The 2012 SOPA and PIPA protests and the street protests against ACTA were certainly not about protecting Google or any other large technology company. No one goes out on the street or dresses up their website in protest banners in order to advocate for *Google*. They do it because what's been proposed threatens to affect them personally.

There's even a sound economic argument: had these proposed laws been in place in 1998, when Sergey Brin and Larry Page were still graduate students at Stanford, Google would not exist. Nor would thousands of other big businesses. Granted, most of these did not originate in the EU, but that's not a reason to wreck the open internet. Instead, it's a reason to find ways to make the internet hospitable to newcomers with bright ideas.

This debate is about the rest of us and our access to the internet. We - for some definition of "we" - were against these kinds of measures when they first surfaced in the early 1990s, when there were no tech giants to oppose them, and for the same reasons: the internet should be open to all of us.

Let the amendments begin.

Illustrations: Protesters against ACTA in London, 2012 (via Wikimedia)
