Archives

November 16, 2018

Septet

This week catches up on some things we've overlooked. Among them, in response to a Twitter comment: two weeks ago, on November 2, net.wars started its 18th unbroken year of Fridays.

Last year, the writer and documentary filmmaker Astra Taylor coined the term "fauxtomation" to describe things that are hyped as AI but that actually rely on the low-paid labor of numerous humans. In The Automation Charade, she examines the consequences: undervaluing human labor and making it both invisible and insecure. Along these lines, it was fascinating to read that in Kenya, workers drawn from one of the poorest places in the world are paid to draw outlines around every object in an image in order to help train AI systems for self-driving cars. How many of us look at a self-driving car and see someone tracing every pixel?
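
For a sense of what that hand labor produces, here is a minimal sketch of a single traced object as a training record, loosely modeled on the COCO segmentation format; the field names and numbers are illustrative, not taken from any particular annotation pipeline.

    # A minimal sketch of one hand-traced object as a training record, loosely
    # modeled on the COCO segmentation format. Field names and numbers are
    # illustrative only.
    annotation = {
        "image_id": 4071,                 # the street-scene photo being labeled
        "category": "pedestrian",         # what the worker decided the object is
        "segmentation": [                 # the polygon traced by hand, as x,y pairs
            [612.0, 334.5, 618.2, 330.1, 624.7, 341.8, 619.3, 352.6, 611.1, 348.0]
        ],
        "bbox": [611.1, 330.1, 13.6, 22.5],   # x, y, width, height of the outline
    }

    # A self-driving training set needs millions of records like this, each one
    # a few minutes of a human being's attention.
    print(f"{annotation['category']}: "
          f"{len(annotation['segmentation'][0]) // 2} hand-placed points")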

***

Last Friday, Index on Censorship launched Demonising the media: Threats to journalists in Europe, which documents journalists' diminishing safety in western democracies. Italy takes the EU prize, with 83 verified physical assaults, followed by Spain with 38 and France with 36. Overall, the report found 437 verified incidents of arrest or detention and 697 verified incidents of intimidation. It's tempting - as in the White House dispute with CNN's Jim Acosta - to hope for solidarity in response, but it's equally likely that years of politicization have left whole sectors of the press as divided as any bullying politician could wish.

***

We utterly missed the UK Supreme Court's June decision in the dispute pitting ISPs against "luxury" brands including Cartier, Mont Blanc, and International Watch Company. The goods manufacturers wanted to force BT, EE, and the three other original defendants, which jointly provide 90% of Britain's consumer Internet access, to block more than 46,000 websites that were marketing and selling counterfeits. In 2014, the High Court ordered the blocks. In 2016, the Court of Appeal upheld that ruling on the basis that without ISPs no one could access those websites. The final appeal was solely about who pays for these blocks. The Court of Appeal had said: ISPs. The Supreme Court decided instead that under English law innocent bystanders shouldn't pay for solving other people's problems, especially when solving them benefits only those others. This seems a good deal for the rest of us, too: being required to pay may constrain blocking demands to reasonable levels. It's particularly welcome after years in which ISPs' obligations have expanded, from blocking for copyright, hate speech, and libel to data retention and interception that neither we nor they much want in the first place.

***

For the first time the Information Commissioner's Office has used the Computer Misuse Act rather than data protection law in a prosecution. Mustafa Kasim, who worked for Nationwide Accident Repair Services, will serve six months in prison for using former colleagues' logins to access thousands of customer records and spam the owners with nuisance calls. While the case reminds us that the CMA still catches only the small fry, we see the ICO's point.

***

In finally catching up with Douglas Rushkoff's Throwing Rocks at the Google Bus, the section on cashless societies and local currencies reminded us that in the 1960s and 1970s, New Yorkers considered it acceptable to tip with subway tokens, even in the best restaurants. Who now would leave a MetroCard? Currencies may be local or national; cashlessness is global. It may be great for those who don't need to think about how much they spend, but it means all transactions are intermediated, with a percentage skimmed off the top for the middlefolk. The costs of cash have been invisible to us, as Dave Birch says, but cash is public infrastructure. Cashlessness privatizes that without any debate about the social benefits or costs. How centralized will this new infrastructure become? What happens to sectors that aren't commercially valuable? When do those commissions start to rise? What power will we have to push back? Even on-the-brink Sweden is reportedly rethinking its approach for just these reasons: in a survey, only 25% wanted a fully cashless society.

***

Incredibly, 18 years after chad hung and people disposed in Bush versus Gore, ballots are still being designed in ways that confuse voters, even in Broward County, which should have learned better. The Washington Post tells us that in both New York and Florida ballot designs left people confused (seeing them, we can see why). For UK voters accustomed to a bit of paper with big names and boxes to check with a stubby pencil, it's baffling. Granted, the multiple federal races, state races, local offices, judges, referendums, and propositions in an average US election make ballot design a far more complex problem. There is advice available from the US Election Assistance Commission, which publishes design best practices, but I'm reliably told it's nonetheless difficult to do well. On Twitter, Dana Chisnell provides a series of links that, taken together, explain some background. Among them is this one from the Center for Civic Design, which explains why voting in the US is *hard* - and not just because of the ballots.

***

Finally, a word of advice. No matter how cool it sounds, you do not want a solar-powered, radio-controlled watch. Especially not for travel. TMOT.

Illustrations: Chad 2000.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 9, 2018

Escape from model land

"Models are best for understanding, but they are inherently wrong," Helen Dacre said, evoking robotics engineer Bill Smart on sensors. Dacre was presenting a tool that combines weather forecasts, air quality measurements, and other data to help airlines and other stakeholders quickly assess the risk of flying after a volcanic eruption. In April 2010, when Iceland's Eyjafjallajökull blew its top, European airspace shut down for six days at an estimated overall cost of £1.1 billion. Since then, engine manufacturers have studied the effect of atmospheric volcanic ash on aircraft engines, and are finding that a brief excursion through peak levels of concentration is less damaging than prolonged exposure at lower levels. So, do you fly?

This was one of the projects presented at this week's conference of the two-year-old network Challenging Radical Uncertainty in Science, Society and the Environment (CRUISSE). To understand "radical uncertainty", start with Frank Knight, who in 1921 differentiated between "risk", where the outcomes are unknown but the probabilities are known, and uncertainty, where even the probabilities are unknown. Timo Ehrig summed this up as "I know what I don't know" versus "I don't know what I don't know", evoking Donald Rumsfeld's "unknown unknowns". In radical uncertainty decisions, existing knowledge is not relevant because the problems are new: the discovery of metal fatigue in airline jets; the 2008 financial crisis; social media; climate change. The prior art, if any, is of questionable relevance. And you're playing with live ammunition - real people's lives. By the million, maybe.

How should you change the planning system to increase the stock of affordable housing? How do you prepare for unforeseen cybersecurity threats? What should we do to alleviate the impact of climate change? These are some of the questions that interested CRUISSE founders Leonard Smith and David Tuckett. Such decisions are high-impact, high-visibility, with complex interactions whose consequences are hard to foresee.

It's the process of making them that most interests CRUISSE. Smith likes to divide uncertainty problems into weather and climate. With "weather" problems, you make many similar decisions based on changing input; with "climate" problems your decisions are either one-offs or each is massively different from the last. Either way, you can't learn from your mistakes: radical uncertainty. You can't reuse the decisions, but you *could* reuse the process by which you made them. CRUISSE is trying to understand - and improve - those processes.

This is where models come in. This field has been somewhat overrun by a specific type of thinking they call OCF, for "optimum choice framework". The idea there is that you build a model, stick in some variables, and tweak them to find the sweet spot. For risks, where the probabilities are known, that can provide useful results - think cost-benefit analysis. In radical uncertainty...see above. But decision makers are tempted to build a model anyway. Smith said, "You pretend the simulation reflects reality in some way, and you walk away from decision making as if you have solved the problem." In his hand-drawn graphic, this is falling off the "cliff of subjectivity" into the "sea of self-delusion".
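
As an illustration of the OCF pattern - and of why it breaks down when the probabilities themselves are guesses - here is a toy sketch in Python; the flood-defence scenario and every number in it are invented for the purpose, and nothing here comes from CRUISSE's own work.

    # Toy sketch of the "optimum choice framework" pattern: build a model,
    # feed in an assumed probability, sweep a control variable, pick the
    # sweet spot. Scenario and numbers are invented for illustration.
    def expected_loss(flood_probability: float, defence_height_m: float) -> float:
        """Construction cost plus expected flood damage, in arbitrary units."""
        construction_cost = 2.0 * defence_height_m
        damage_if_flooded = 100.0 * max(0.0, 1.0 - defence_height_m / 5.0) ** 2
        return construction_cost + flood_probability * damage_if_flooded

    candidates = [h / 2 for h in range(11)]   # defence heights 0.0 to 5.0 metres

    # With a known probability, the sweep gives a defensible answer...
    best = min(candidates, key=lambda h: expected_loss(0.1, h))
    print(f"'Optimal' defence height at p=0.1: {best} m")

    # ...but when the probability itself is a guess, the "optimum" swings wildly:
    # the cliff of subjectivity.
    for p in (0.01, 0.1, 0.5):
        best = min(candidates, key=lambda h: expected_loss(p, h))
        print(f"assumed p={p}: best height {best} m, "
              f"expected loss {expected_loss(p, best):.1f}")

The point of the sketch is only that the machinery happily produces a precise-looking answer whatever probability you feed it.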

Uncertainty can come from anywhere. Kris de Meyer is studying what happens if the UK's entire national electrical grid crashes. Fun fact: it would take seven days to come back up. *That* is not uncertain. Nor are the consequences: nothing functioning, dark streets, no heat, no water after a few hours for anyone dependent on pumping. Soon, no phones unless you still have copper wire. You'll need a battery- or solar-powered radio to hear the national emergency broadcast.

The uncertainty is this: how would 65 million modern people react in an unprecedented situation where all the essentials of life are disrupted? And, the key question for the policy makers funding the project, what should government say? *Don't* fill your bathtub with water so no one else has any? *Don't* go to the hospital, which has its own generators, to charge your phone?

"It's a difficult question because of the intention-behavior gap," de Meyer said. De Meyer is studying this via "playable theater", an effort that starts with a story premise that groups can discuss - in this case, stories of people who lived through the blackout. He is conducting trials for this and other similar projects around the country.

In another project, Catherine Tilley is investigating the claim that machines will take all our jobs. Tilley finds two dominant narratives. In one, jobs will change rather than disappear, and automation will bring more of them, along with enhanced productivity and new wealth. In the other, we will be retired...or unemployed. The numbers in these predictions are very large, but conflicting, so they can't all be right. What do we plan for education and industrial policy? What investments do we make? Should we prepare for mass unemployment, and if so, how?

Tilley identified two common assumptions: tasks that can be automated will be; automation will be used to replace human labor. But interviews with ten senior managers who had made decisions about automation found otherwise. Tl;dr: sectoral, national, and local contexts matter, and the global estimates are highly uncertain. Everyone agrees education is a partial solution - "but for others, not for themselves".

Here's the thing: machines are models. They live in model land. Our future depends on escaping.


Illustrations: David Tuckett and Lenny Smith.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.