" /> net.wars: November 2017 Archives


November 23, 2017

Twister

"We were kids working on the new stuff," said Kevin Werbach. "Now it's 20 years later and it still feels like that."

Werbach was opening last weekend's "radically interdisciplinary" (Geoffrey Garrett) After the Digital Tornado, at which a roomful of internet policy veterans tried to figure out how to fix the internet. As Jaron Lanier showed last week, there's a lot of this where-did-we-all-go-wrong happening.

The Digital Tornado in question was a working paper Werbach wrote in 1997, when he was at the Federal Communications Commission. In it, Werbach sought to pose questions for the future, such as what the role of regulation would be around...well, around now.

Some of the paper is prescient: "The internet is dynamic precisely because it is not dominated by monopolies or governments." Parts are quaint now. Then, the US had 7,000 dial-up ISPs and AOL was the dangerous giant. It seemed reasonable to think that regulation was unnecessary because public internet access had been solved. Now, with minor exceptions, the US's four ISPs have carved up the country among themselves to such an extent that most people have only one ISP to "choose" from.

To that, Gigi Sohn, the co-founder of Public Knowledge, named the early mistake from which she'd learned: "Competition is not a given." Now, 20% of the US population still have no broadband access. Notably, this discussion was taking place days before current FCC chair Ajit Pai announced he would end the network neutrality rules adopted in 2015 under the Obama administration.

Everyone had a pet mistake.

Tim Wu, regarding decisions that made sense for small companies but are damaging now they're huge: "Maybe some of these laws should have sunsetted after ten years."

A computer science professor bemoaned the difficulty of auditing protocols for fairness now that commercial terms and conditions apply.

Another wondered if our mental image of how competition works is wrong. "Why do we think that small companies will take over and stay small?"

Yochai Benkler argued that the old way of reining in market concentration, by watching behavior, no longer works; we understood scale effects but missed network effects.

Right now, market concentration looks like Google-Apple-Microsoft-Amazon-Facebook. Rapid change has meant that the past Big Tech we feared would break the internet has typically been overrun. Yet we can't count on that. In 1997, market concentration meant AOL and, especially, desktop giant Microsoft. Brett Frischmann paused to reminisce that in 1997 AOL's then-CEO Steve Case argued that Americans didn't want broadband. By 2007 the incoming giant was Google. Yet, "Farmville was once an enormous policy concern," Christopher Yoo reminded; so was Second Life. By 2007, Microsoft looked overrun by Google, Apple, and open source; today it remains the third largest tech company. The garage kids can only shove incumbents aside if the landscape lets them in.

"Be Facebook or be eaten by Facebook", said Julia Powles, reflecting today's venture capital reality.

Wu again: "A lot of mergers have been allowed that shouldn't have been." On his list, rather than AOL and Time-Warner, cause of much 1999 panic, was Facebook and Instagram, which the Office of Fair Trading approved because Facebook didn't have cameras and Instagram didn't have advertising. Unrecognized: they were competitors in what Wu has dubbed the attention economy.

Both Bruce Schneier, who considered a future in which everything is a computer, and Werbach, who found early internet-familiar rhetoric hyping the blockchain, saw more oncoming gloom. Werbach noted two vectors: remediable catastrophic failures, and creeping recentralization. His examples of the DAO hack and the Parity wallet bug led him to suggest the concept of governance by design. "This time," Werbach said, adding his own entry onto the what-went-wrong list, "don't ignore the potential contributions of the state."

Karen Levy's "overlooked threat" of AI and automation is a far more intimate and intrusive version of Shoshana Zuboff's "surveillance capitalism"; it is already changing the nature of work in trucking. This resonated with Helen Nissenbaum's "standing reserves": an ecologist sees a forest; a logging company sees lumber-in-waiting. Zero hours contracts are an obvious human example of this, but look how much time we spend waiting for computers to load so we can do something.

Levy reminded that surveillance has a different meaning for vulnerable groups, linking back to Deirdre Mulligan's comparison of algorithmic decision-making in healthcare and the judiciary. The first is operated cautiously with careful review by trained professionals who have closely studied its limits; the second is off-the-shelf software applied willy-nilly by untrained people who change its use and lack understanding of its design or problems. "We need to figure out how to ensure that these systems are adopted in ways that address the fact that...there are policy choices all the way down," Mulligan said. Levy, later: "One reason we accept algorithms [in the judiciary] is that we're not the ones they're doing it to."

Yet despite all this gloom - cognitive dissonance alert - everyone still believes that the internet has been and will be positively transformative. Julia Powles noted, "The tornado is where we are. The dandelion is what we're fighting for - frail, beautiful...but the deck is stacked against it." In closing, Lauren Scholz favored a return to basic ethical principles following a century of "fallen gods" including really big companies, the wisdom of crowds, and visionaries.

Sohn, too, remains optimistic. "I'm still very bullish on the internet," she said. "It enables everything important in our lives. That's why I've been fighting for 30 years to get people access to communications networks."


Illustrations: After the Digital Tornado's closing panel (left to right): Kevin Werbach, Karen Levy, Julia Powles, Lauren Scholz; tornado (Justin1569 at Wikipedia)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 17, 2017

Counterfactuals

On Tuesday evening, virtual reality pioneer and musician Jaron Lanier, in London to promote his latest book, Dawn of the New Everything, suggested the internet took a wrong turn in the 1990s by rejecting the idea of combating spam by imposing a tiny - "homeopathic" - charge to send email. Think where we'd be now, he said. The mindset of paying for things would have been established early, and instead of today's "behavior modification empires" we'd have a system where people were paid for the content they produce.

Lanier went on to invoke the ghost of Ted Nelson, who began his earliest work on Project Xanadu in 1960, before ARPAnet, the internet, and the web. The web fosters copying. Xanadu instead gave every resource a permanent and unique address, and linking instead of copying meant nothing ever lost its context.

The problem, as Nelson's 2011 autobiography, Possiplex, and a 1995 Wired article made plain, is that trying to get the thing to work was a heartbreaking journey filled with cycles of despair and hope, one increasingly orthogonal to where the rest of the world was going. While efforts continue, Xanadu is still difficult to comprehend, no matter how technically visionary and conceptually advanced it was. The web wins on simplicity.

But the web also won because it was free. Tim Berners-Lee is very clear about the importance he attaches to deciding not to patent the web and charge licensing fees. Lanier, whose personal stories about internetworking go back to the 1980s, surely knows this. When the web arrived, it had competition: Gopher, Archie, WAIS. Each had its limitations in terms of user interface and reach. The web won partly because it unified all their functions and was simpler - but also because it was freer than the others.

Suppose those who wanted minuscule payments for email had won? Lanier believes today's landscape would be very different. Most of today's machine learning systems, from IBM Watson's medical diagnostician to the various quick-and-dirty translation services, rely on mining an extensive existing corpus of human-generated material. In Watson's case, it's medical research, case studies, peer review, and editing; in the case of translation services it's billions of side-by-side human-translated pages that are available on the web (though later improvements have taken a new approach). Lanier is right that the AIs built by crunching found data are parasites on generations of human-created and curated knowledge. By his logic, establishing payment early as a fundamental part of the internet would have ensured that the humans who created all that data would be paid for their contributions when machine learning systems mined it. Clarity would result: instead of the "cruel" trope that AIs are rendering humans unnecessary, it would be obvious that AI progress relied on continued human input. For that we could all be paid rather than being made "wards of the state".

Consider a practical application. Microsoft's LinkedIn is in court opposing HiQ, a company that scrapes LinkedIn's data to offer employers services that LinkedIn might like to offer itself. The case, which was decided in HiQ's favor in August but is appeal-bound, pits user privacy (argued by EPIC) against innovation and competition (argued by EFF). Everyone speaks for the 500 million whose work histories are on LinkedIn, but no one speaks for our individual ownership of our own information.

Let's move to Lanier's alternative universe and say the charge had been applied. Spam dropped out of email early on. We developed the habit of paying for information. Publishers and the entertainment industry would have benefited much sooner, and if companies like Facebook and LinkedIn had started, their business models would have been based on payments to posters and charges for readers (he claims to believe that Facebook will change its business model in this direction in the coming years; it might, but if so I bet it keeps the advertising).

In that world, LinkedIn might be our broker or agent negotiating terms with HiQ on our behalf rather than in its own interests. When the web came along, Berners-Lee might have thought pay-to-click logical, and today internet search might involve deciding which paid technology to use. If, that is, people found it economic to put the information up in the first place. The key problem with Lanier's alternative universe: there were no micropayments. A friend suggests that China might be able to run this experiment now: Golden Shield has full control, and everyone uses WeChat and AliPay.

I don't believe technology has a manifest destiny, but I do believe humans love free and convenient, and that overwhelms theory. The globally spreading all-you-can-eat internet rapidly killed the existing paid information services after commercial access was allowed in 1994. I'd guess that the more likely outcome of charging for email would have been the rise of free alternatives to email - instant messaging, for example, which happened in our world to avoid spam. The motivation to merge spam with viruses and crack into people's accounts to send spam would have arisen earlier than it did, so security would have been an earlier disaster. As the fundamental wrong turn, I'd instead pick centralization.

Lanier noted the culminating irony: "The left built this authoritarian network. It needs to be undone."

The internet is still young. Undoing that wrong turn might be possible, if we can agree on a path.


Illustrations: Jaron Lanier in conversation with Luke Robert Mason (Eva Pascoe).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


November 10, 2017

Regulatory disruption

The financial revolution due to hit Britain in mid-January has had surprisingly little publicity and has little to do with the money-related things making news headlines over the last few years. In other words, it's not a new technology, not even a cryptocurrency. Instead, this revolution is regulatory: banks will be required to open up access to their accounts to third parties.

The immediate cause of this change is two difficult-to-distinguish pieces of legislation, one UK-specific and one EU-wide. The EU piece is Payment Services Directive 2, which is intended to foster standards and interoperability in payments across Europe. In the UK, Open Banking requires the nine biggest retail banks to create APIs that, given customer consent, will give third parties certified by the Financial Conduct Authority direct access to customer accounts. Account holders have begun getting letters announcing new terms and conditions, although recipients report that the parts that refer to open banking and consent are masterfully vague.
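In practice, the model is consent-first: a certified third party first obtains a token representing the customer's permission, then uses it to read account data. The sketch below, in Python, is purely illustrative; the endpoint URLs, scope names, and field names are invented for the example, not taken from the actual Open Banking specification, which defines its own OAuth2 flows and data schemas.

```python
# Illustrative consent-first account access. All endpoints and field
# names here are hypothetical; the real Open Banking standard defines
# its own OAuth2 flows, scopes, and schemas.
import requests

BANK_API = "https://api.examplebank.co.uk"  # hypothetical base URL

def get_access_token(client_id: str, client_secret: str) -> str:
    """Exchange the third party's credentials for a token that
    encodes the customer's consent (granted via the bank)."""
    resp = requests.post(f"{BANK_API}/oauth2/token", data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "accounts",  # read-only access to account data
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def list_accounts(token: str) -> list:
    """Fetch the accounts the customer has consented to share."""
    resp = requests.get(f"{BANK_API}/accounts",
                        headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return resp.json()["accounts"]

if __name__ == "__main__":
    token = get_access_token("certified-third-party", "secret")
    for account in list_accounts(token):
        print(account["id"], account["balance"])
```

Whatever the final specification looks like, the point of the structure is that access is mediated by an explicit, revocable grant rather than by the credential-sharing screen-scraping some earlier account-aggregation services relied on.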

As anyone attending the annual Tomorrow's Transactions Forum knows, open banking has been creeping up on us for the last few years. Consult Hyperion's Tim Richards has a good explanation of the story so far. At this year's event, Dave Birch, who has a blog posting outlining PSD2's background and context, noted that in China, where the majority of non-cash payments are executed via mobile, Alipay and Tencent are already executing billions of transactions a year, bypassing banks entirely. While the banks aren't thrilled about losing the transactions and their associated (dropping) revenue, the bigger issue is that they are losing the data and insight into their customers that traditionally has been exclusively theirs.

We could pick an analogy from myriad internet-disrupted sectors, but arguably the best fit is telecoms deregulation, which saw AT&T (in the US) and BT (in the UK) forced to open up their networks to competitors. Long distance revenues plummeted and all sorts of newcomers began leaching away their customers.

For banks, this story began the day Elon Musk's x.com merged with Peter Thiel's money transfer business to create the first iteration of Paypal so that anyone with an email address could send and receive money. Even then, the different approach of cryptocurrencies was the subject of experiments, but for most people the rhetoric of escaping government was less a selling point than being able to trade small sums with strangers who didn't take credit cards. Today's mobile payment users similarly don't care whether a bank is involved or not as long as they get their money.

Part of the point is to open up competition. In the UK, consumer-bank relationships tend to be lifelong, partly because so much of banking here has been automated for decades. For most people, moving their account involves not only changing arrangements for inbound payments like salary, but also all the outbound payments that make up a financial life. The upshot is to give the banks impressive customer lock-in, which the Competition and Markets Authority began trying to break with better account portability.

The larger point of Open Banking, however, is to drive innovation in financial services. Why, the reasoning goes, shouldn't it be easier to aggregate data from many sources - bank and other financial accounts, local transport, government benefits - and provide a dashboard to streamline management or automatically switch to the cheapest supplier of unavoidable services? At Wired, Rowland Manthorpe has a thorough outline of the situation and its many uncertainties. Among these are the impact on the banks themselves - will they become, as the project's leader and the telecoms analogy suggest, plumbing for the financial sector or will they become innovators themselves? Or, despite the talk of fintech startups, will the big winners be Google and Facebook?

The obvious concerns in all this are security and privacy. Few outside the technology sector understand what an API is; how do we explain it to the broad range of the population so they understand how to protect themselves? Assuming that start-ups emerge, what mechanisms will we have to test how well our data is secured or trace how it's being used? What about the potential for spoof apps that steal people's data and money?

It's also easy to imagine that "consent" may be more than ordinarily mangled, a problem a friend calls the "tendency to mandatory". The companies to whom we apply for insurance, a loan, or a job may demand an opened gateway to account data as part of the approvals process, which is extortion rather than consent.

This is also another situation where almost all of "my" data inevitably involves exposing third parties, the other halves of our transactions who have never given consent for that to happen. Given access to a large enough percentage of the population's banking data, triangulation should make it possible to fill in a fair bit of the rest. Amazon already has plenty of this kind of data from its own customers; for Facebook and Google this must be an exciting new vista.

Understanding what this will all mean will take time. But it represents a profound change, not only in the landscape of financial services but in the area of technical innovation. This time, those fusty old government regulators are the ones driving disruption.


Illustrations: Northern Rock in 2007 (Dominic Alves); Dave Birch.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 3, 2017

Life forms

Would you rather be killed by a human or a machine?

At this week's Royal Society meeting on AI and Society, Chris Reed recounted asking this question of an audience in Singapore. They all picked the human, even though they knew it was irrational, because they thought at least they'd know *why*.

A friend to whom I related this had another theory: maybe they thought there was a chance they could talk the human killer out of it, whereas the machine would be implacable. It's possible.

My own theory pins this distaste for machine killing on a different, crucial underlying factor: a sense of shared understanding. The human standing over you with the axe or driving the oncoming bus may be a professional paid to dispatch you, a serial killer, an angry ex, or mentally ill, but they all have a personal understanding of what a human life means because they all have one they know they, too, will one day lose. The meaning of removing someone else's life is thoroughly embedded in all of us. Not having that is more or less the definition of a machine, or was until Philip K. Dick and his replicants. But there is no reason to assume that every respondent had the same reason.

A commenter in the audience described similar responses to an Accenture poll he encountered on Twitter, which asked whether he would be in favor of AI making health decisions. When he checked the voting results, 69% had said no. Here again, the death of a patient by medical mistake keeps a human doctor awake at night (if television is to be believed), while to a machine it's a statistic, no matter how heavily weighted in its inner backpropagating neural networks.

These two anecdotes resonated because earlier, Marion Oswald had opened her talk by asking whether, like Peter Godfrey-Smith's observation of cephalopods, interacting with AI was the closest we can come to interacting with an intelligent alien. Arguably, unless the aliens are immortal, on issues of life and death we can actually expect to have more shared understanding with them, as per above, than with machines.

The primary focus of Oswald's talk was actually to discuss her work studying HART, an algorithmic model used by Durham Constabulary to decide whether offenders qualified for deferred prosecution and help with their problems. The study raises all sorts of questions we're going to have to consider over the coming years about the role of police in society.

These issues were somewhat taken up later by Mireille Hildebrandt, who warned of the risks of transforming text-driven law - the messy stuff centuries of court cases have contested and interpreted - into data-driven law. Allowing that to happen, she argued, transforms law into administration. "Contestability is the heart of the rule of law," she said. "There is more to the law than predictability and expedience." A crucial part of that is being able to test the system, and here Hildebrandt was particularly gloomy: although systems that comb the legal corpus are currently being marketed as aids for lawyers, she views it as inevitable that at some point they will become replacements. Some time after that, the skills necessary to test the inner workings of these systems will have vanished from the systems' human owners' firms.

At the annual We Robot conference, a recurring theme is the hard edges of computer systems, an aspect Ellen Ullman examined closely in her 1997 book, Close to the Machine. In Bill Smart's example, the difference between 59.99 miles an hour and 60.01 miles an hour is indistinguishable to a human, but to a computer fitted with the right sensors the difference is a speeding ticket. An aspect of this that is insufficiently discussed is that all biological beings have some level of unpredictability. Robots and AI with far greater sensing precision than is available to humans will respond to changes we can't detect, making them appear less predictable, and therefore more intelligent, than they actually are. This is a deception we will have to learn to decode.
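As a toy illustration of Smart's hard edge (the 60 mph cutoff is an assumption for the example, not any real statute), a few lines of Python are enough: the code has no notion of "close to the limit"; one reading passes and the other triggers, with nothing in between.

```python
SPEED_LIMIT_MPH = 60.0  # illustrative threshold

def issue_ticket(measured_speed_mph: float) -> bool:
    """A hard edge: any reading above the limit triggers a ticket,
    however tiny the margin or noisy the sensor."""
    return measured_speed_mph > SPEED_LIMIT_MPH

print(issue_ticket(59.99))  # False - no ticket
print(issue_ticket(60.01))  # True - ticket, though the 0.02 mph gap
                            # is far below any real sensor's accuracy
```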

Already, machines that are billed as tools to aid human judgement are often much more trusted than they should be. Danielle Citron's 2008 paper Technological Due Process studied this in connection with benefits scoring systems in Texas and California, and found two problems. First, humans tended to trust the machine's decisions rather than apply their own judgement, a problem Hildebrandt referred to as "judgemental atrophy". Second, computer programmers are not trained lawyers, and are therefore not good at accurately translating legal text into decision-making systems. How do you express a fuzzy but widely understood and often-used standard like the UK's "reasonable person" in computer code? You'd have to precisely define the attopoint at which "reasonable" abruptly flicks to "unreasonable".

Ultimately, Oswald came down against the "intelligent alien" idea: "These are people-made, and it's up to us to find the benefits and tackle the risks," she said. "Ignorance of mathematics is no excuse."

That determination rests on the notion that the people building AI systems and the people using them have shared values. We already know that's not true, but even so: I vote AI less alien than a cephalopod on everything but the fear of death.

Illustrations: Cephalopod (via Obsidian Soul); Marion Oswald.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.