" /> net.wars: May 2017 Archives


May 26, 2017

Hate week

When, say, ten years from now, someone leaks the set of rules by which self-driving cars make decisions about how to behave in physical-world human encounters of the third kind, I suspect they're going to look like a mess of contradictions and bizarre value judgements. I doubt they'll look anything like MIT's Moral Machine experiment, which aims to study the ethical principles we might like such cars to follow. Instead, they will be absurd-seeming patches of responses to mistakes, and will much more closely resemble Richard Stallman's contract rider. They will be the self-driving car equivalent of "Please don't buy me a parrot" or "Please just leave me alone when I cross streets", and, if exposed in leaked documents for public study without explanation of the difficulties that generated them, people will marvel, "How messed-up is that?"

The tangle of leaked guides and training manuals for Facebook moderators that the Guardian has been studying this week is a case in point. How could it be otherwise for a site with 1.94 billion users in some 150 countries, each with its own norms and legal standards, all of which depend on context and must filter through Facebook's own internal motivators? Images of child abuse are illegal almost everywhere, but Napalm Girl, the 1972 nude photo of nine-year-old Kim Phuc, is a news photograph whose deletion sparks international outrage and therefore restoration. Holocaust denial is illegal in 14 countries; according to Facebook's presentation, it geoblocks such content in just the four countries that pursue legal action: Germany, France, Israel, and Austria. In the US, of course, it's legal, and "Congress shall make no law...".

Automation will require deriving general principles from this piece-by-piece process - a grammar of acceptable human behavior - so that it can be applied across an infinite number of unforeseeable contexts. Like the Google raters Annalee Newitz profiled for Ars Technica a few weeks ago, most of Facebook's content moderators are subcontracted. As the Guardian reports, they have nothing like the training or support that the Internet Watch Foundation provides to its staff. Plus, their job is much more complicated: the IWF's remit is specific to child abuse images and focuses on whether or not the material that's been reported to it is illegal. That is a much narrower question than whether something is "extremist", yet the work sounds just as traumatic. "Every day, every minute, that's what you see. Heads being cut off," one moderator tells the Guardian.

Wired has found The Moderators, a short documentary by Adrian Chen and Ciaran Cassidy that shows how the world's estimated 150,000 social media content moderators are trained. The filmed group are young, Indian, mostly male, and in their first jobs. The video warns that the images we see them consider are disturbing; but isn't it also disturbing that lower-paid laborers in the Global South are being tasked with removing images considered too upsetting for Westerners?

A couple of weeks ago, in a report on online hate crime, the Home Affairs Select Committee lambasted social media companies for "not doing more". Among its recommendations was that social media companies should publish regular reports detailing the number of staff, the number of complaints, and how these were resolved. As Alec Muffett responded, how many humans is a 4,000-node computer cluster? In a separate posting, Muffett, a former Facebook engineer, tried to explain the sheer mathematical scale of the problem. It's yuge. Bigly.

A related question is whether Facebook - or any commercial company - should be the one to solve it on its own, and what kind of collateral damage "solving it" might inflict. The HASC report argued that if social media companies don't proactively remove hateful content then they should be forced to pay for police to do it for them. Try saying that about newspapers: "If newspapers don't do more to curb hateful speech they will have to pay police to do it for them." Viewed that way, it's a lot easier to parse this into: if you do not censor yourselves, the government will do it for you. This is a threat very like the one leveled at Internet Service Providers in the UK in 1996, when the big threat in town was thought to be Usenet. The result was the creation of the IWF, which occupies a grey area between government and private company. The IWF is not particularly transparent or accountable, but it does have a check on its work in the form of the police, who are the arbiters of whether a piece of content is illegal - and, eventually, the courts, if someone is brought to trial. Most of Facebook's decisions about its daily billions of pieces of content have no such check: most of what's at issue here falls in the grey areas child protection experts insist should not exist.

I am no fan of Facebook, and assume that its values as a company are designed to serve primarily the interests of its shareholders and advertisers, but I can find some sympathy for it in facing these conundrums. When Monika Bickert, head of global policy management, writes about the difficulty of balancing safety and freedom, she sounds reasonable, as Jon Fingas writes at Engadget. This is a genuinely hard problem: they are trying to parse human nature.

Illustrations: Mark Zuckerberg

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 19, 2017

Policy questions for 2022

For last weekend's OpenTech I had recklessly suggested imagining the Daily Mail headlines of the future. When I came to actually plan the talk, a broader scope seemed wiser, and after some limbering up with possible near-future headlines, which you can read on my slides, posted here, I wound up considering what might be genuine policy questions five years from now. As a way of assessing whether I had any credentials as a futurist, I referred the audience to a piece I wrote for Salon as 1998 opened, Top Ten New Jobs for 2002. (There was also a Daily Telegraph version.)

To score the jobs I proposed there: two we do for ourselves - people reviewer (or at least researcher) and real-time biographer. Large companies and the very wealthy have citizenship brokers (aka "tax advisors"), data obfuscators are better known as reputation managers, and embedded advertising managers are what a certain number of editors have been forced to become when the job hasn't been automated entirely. Copyright protection officers, digital actors' guild representatives, and computer therapists have yet to arise, but for "human virtual servants" we have myriad versions of mechanical Turk (see, for example, Annalee Newitz's Ars Technica piece on Google raters). I haven't actually heard anyone describe themselves as an "electronic image consultant", but I can't believe they don't exist. So: not too bad a score, really.

The rest of the limbering-up portion proposed some near-future headlines, and considered extracts from the actual Class of 2020 Mindset list (issued in 2016), plus some thoughts about the 2035 Mindset list (babies born in 2017) and the 2026 list (today's nine-year-olds). You can read these for yourselves.

The policy questions all have current inspirations. These days, randomly wandering phone-neverwhered pedestrians are a constant menace, and it seems to be getting worse - in London you even see cyclists in traffic, earphones in, texting (while a similarly-equipped oblivious pedestrian vaguely strays in front of them). Road safety for this wantonly deaf-and-blind generation is an obvious conundrum, only partially solvable by initiatives like that of the German city of Augsburg, where they've embedded lights in the pavement so phone-starers will notice them.

Various options occur: repurposing disused tube tunnels as segregated walkways, for example, or building an elaborate network of sensors that an app on the phone follows automatically. My favorite suggestion from a pre-conference conversation: pneumatic tubes! This is a way-underused technology.

Video ads for malware on TV are with us, at least for the YouTube generation: to its product trials and technical support the shadow malware business infrastructure has added polished marketing campaigns complete with video ads on YouTube. Cyber crime is the fastest-growing industry in the world, I was told at a security meeting recently. Given the UK's imminent need for new sources of economic growth...

The WannaCry attack has since given new weight to the question of how long manufacturers should be required to issue security patches, because software is forever. Columbia University professor Steve Bellovin more thoughtfully asks: who pays? As he writes, until we find a different answer, we all do. An audience member suggested requiring "supported-until" declarations on new hardware and software. This won't help when vendors go out of business, and won't make consumers patch big-ticket items like refrigerators and cars, but it would help us make slightly more informed decisions, especially regarding content restricted by digital rights management.

The IoT at-home health monitoring requirement in return for receiving NHS benefits seems a logical extension of then-Prime Minister David Cameron's 2011 statement that all patients should contribute their data for research; I believe he later said sharing data should be a required return for receiving NHS benefits. Deaf access to video calling seems like a no-brainer, particularly for those whose first language is signing.

An audience member suggested we may need a law to prevent the appearance in ads of synthetic versions of dead relatives. Of course you hope advertisers won't have that level of bad taste, but Facebook did mark a friend's 50th birthday with an ad for funeral arrangements featuring a bereaved female who looked disturbingly like her daughter.

Further suggestions were more along the lines of the headlines I originally promised:

- Large internet company creates its own military force. Seems all too possible.

- Alexa wins First Amendment rights. Two months ago, Arkansas police sought to compel a defendant in a murder case to grant access to the data collected by the Alexa in his home. Amazon tried to claim Alexa's replies were protected by the First Amendment, but withdrew when the suspect agreed to hand over the data. Google has also tried to claim First Amendment protection for its search results. So: not too far-fetched.

- Replacing 999 (UK emergency services; 911 in the US) requires all phone microphones to be kept on all the time. All too imaginable: I worry that within my lifetime it will become suspicious if you do not have data collection devices in your home that can be secretly accessed and reviewed by police at any time.

- The last person on a permanent employment contract retires.

Sadly, this week's election manifestos tread the same old ground. Don't they know we have a different generation's future to imagine?

Illustrations: Presenting at OpenTech (photo by Hadley Beeman); Future, 1000 meters.


May 12, 2017


Before there was the internet there were commercial information services and conferencing systems. Since access to things like news wires, technical support, and strangers across the world who shared your weird obsession with balloon animals was scarce, they could charge rather noticeable amounts of money for hourly access. On services like CompuServe, AOL, and Prodigy, discussion areas were owned by independents who split the revenue their users generated by spending time in their forums with the host service. So far, so reasonably fair.

What kept these forums from drowning in a sea of flames, abuse, bullying, and other bad behavior was not what today's politicians may think. It was not that everyone was real-world identified because they all had to pay by credit card. It was not that the early adopters were a higher class of people because they were wealthier. And it was not because so many were business people who really needed access to stock quotes, technical support, and balloon animal news. It was because forum owners could trade free access to their forums for help with system administration. Volunteer SysOps moderated discussions, defused fights, issued warnings and bans for bad behavior, cleaned out inappropriate postings, and curated files.

Then came the internet with its monthly subscription fees for however much data you used and its absence of technical controls to stop people from putting up their own content, and business models changed. Forum owners saw their revenues plummet. The value to volunteers of their free access did likewise. Forum participation thinned. AOL embraced advertising, dumping the niche sites whose obsessive loyal followings had paid such handsome access fees in favour of mainstream content that aggregated the mass audiences advertisers pay for. Why have balloon animals when you can have the cast of Friends?

Tl;dr: a crucial part of building those successful businesses was volunteer humans.

I remember this every time a site shuts down its comment board because of the volume of crap. This week, at Ars Technica, writer and activist Annalee Newitz found a new angle with a piece about Google's raters. Newitz finds that these folks are paid an hourly rate somewhat above minimum wage, though they lack health insurance and are dependent on being logged in when tasks arrive.

The immediate reason for her story was that while Google is talking about deploying thousands of raters to help fix YouTube's problem with advertisers and extremist videos, this group's hours are being cut. The exact goals are murky, but the main driver is apparently to avoid loading their actual employer, to which Google subcontracts this part of its operation, with a benefits burden that company can't afford. Much of the story is a messy tale of America's broken healthcare system. However, in researching these workers' lives, Newitz uncovers Sarah Roberts, a researcher at UCLA who has spent five years traveling the world to study raters' work. What has she found? "Actually their AIs are people in the Philippines."

So again: under-recognized humans are the technology industry's equivalent of TV's funny friend. In 2003, on a visit to Microsoft Research, I was struck by the fact that although the company was promoting its smart home, right outside it was a campus run entirely by human receptionists who controlled access, dispensed information, and filled their open hours with small but useful administrative projects.

This pattern is everywhere. Uber's self-driving cars need human monitors to intervene approximately once every 0.8 miles. Google's Waymo cars perform better - but even so, they require human aid at the far more dangerous rate of once every 5,000 miles. Plus the raters: on Google, obviously, but also Facebook and myriad other sites.

The goal for these companies is rather obviously that the human assistance should act as training wheels for automation, which - returning to Newitz's piece - is a lot easier to do if they're not covered by employment laws that make them hard to lay off. There is an old folk song about this: Keep That Wheel a-Turning.

In the pre-computer world, your seriousness about a particular effort could be judged by the number of humans you deployed to work on it. In the present game, the perfect system (at least for technology companies and their financiers) would require no human input at all, preferably while generating large freighter-loads of cash. WhatsApp got close: when Facebook bought it, it had 55 employees to 420 million users worldwide.

Human moderation is more effective - and likely to remain so for the foreseeable future - but cannot scale to manage 1.2 billion Facebook users. Automation is imperfect, but scalable and immune to post-rating trauma. Which is why, Alec Muffett points out, the Home Affairs Select Committee's May 1 report's outraged complaint that Google, Facebook, and Twitter deploy insufficient numbers to counteract online hate speech is a sign that the committee has not fully grasped the situation. "How many 'staff' will equate to a four-thousand-node cluster of computers running machine-learning / artificial intelligence software?" he asks.

It's a good question, and one we're going to have to answer in the interests of having the necessary conversations about social media and responsibility. As Roberts says, the discussion is "incomplete" without an understanding of the part humans play in these systems.

Illustrations: HAL, from Stanley Kubrick's 2001: A Space Odyssey; Annalee Newitz; Sarah Roberts (UCLA).


May 5, 2017

Pre-existing conditions

Friends from countries with national health services - that is, any developed country other than the US - often ask in puzzlement why so many Americans oppose something that the rest of the developed world regards as a human right. Ill-health comes to us all, whether because we hedonistically drank, smoked, drugged, sugared, salted, fatted, and unsafely fucked our way to it, or despite our living in drink-free, smoke-free, drug-free, vegetable-filled, celibate austerity. There is, as the late film critic Roger Ebert wrote, a pre-existing condition we all have. It's called life.

A new round of questioning will likely follow yesterday's House vote to repeal the Affordable Care Act (aka "Obamacare") and replace it with the American Health Care Act. The bill's provisions, Mother Jones reports, include defunding Planned Parenthood, cutting Medicaid funding by 25%, allowing insurance companies to raise premiums for pre-existing conditions, and denying coverage for maternity care and mental health. (Gotta love the logic of denying both contraception and maternity care.) The margin was only four votes, and the bill hasn't been scored, so its path through the Senate may stall.

In the meantime, what do you make of a country where people call care for the elderly abusive socialism and totalitarianism?

It looks increasingly like the best chance at national healthcare was blown in the early 1970s. The plan Nixon announced in 1972 was, as many have pointed out, very like the ACA President Barack Obama pushed through in 2010. Reportedly, long-serving Senator Edward Kennedy (D-MA) was prepared to cut a deal with Nixon to make it happen, but ran afoul of labor leaders demanding a single-payer system. Nearly 50 years later, that hope has receded: Nixon was then seen as deeply conservative; today we call Obama a liberal. Nixon's personal history of a childhood in poverty and the loss of two brothers to tuberculosis was the key. By 1980, it was on to Ronald Reagan, who derided universal healthcare as socialized medicine, despite passing the law requiring hospitals to treat all patients needing emergency care, whether or not they could pay.

Tarring healthcare with "socialism" was superb marketing in a country that deeply feared communists. Wikipedia pins "socialized medicine" as a term of disparagement to 1945, when President Harry S. Truman proposed a universal healthcare plan, right when the House Un-American Activities Committee was peering everywhere for communists, most notoriously in Hollywood. As Ebert wrote in 2009, "socialist" served as shorthand to shut down all discussion - Godwin's Law of American medicine. Ebert also marveled at the marketing prowess of death panels (he called them a "lie" and a "meme"; today we'd yell, "Fake news!").

However, Ebert also said something that ought to give anyone pause: "I had group health insurance plans through my unions at both jobs. They were good plans. But during the course [of] four major surgeries--no, make that five--I maxed out one, and so much for that policy. I'm approaching the cap on the second. Most policies aren't unlimited, you know. Luckily, I now qualify for Medicare."

If someone as prominent and successful as Roger Ebert cannot make it in the American healthcare system, you know it's utterly broken. This is why so many British people stepped up during the 2009 debates to say, We love the NHS.

So: why are they like this? "Socialism is bad" is clearly one deep-rooted reason. Related is calling healthcare a "benefit" and tacking it to employment. An American with a good job is "taking care of themselves", an idea the folksinger Bill Steele lampooned in his song Please Take Care of Me. Paid healthcare is aspirational, something anyone can qualify for out of merit and hard work, a cleanly devised class system that can pretend to be no such thing. It is of course profoundly destructive: it means that American workers, fearing the loss of health insurance, cannot afford to stand up to their employers, and it effectively turns the American middle class into peasants. It makes starting your own business once you have children hugely risky unless you're already wealthy. See also I Owe My Soul to the Company Store.

The same individualist element that helped immigrants survive while populating the country works against Americans here by making collectivism seem like imposing an unfair burden on people who think everyone ought to be able to manage their own lives. This ties neatly into some of the more extreme religious attitudes you come across, which suggest that ill-health is some god's way of meting out punishment the sick person undoubtedly did something to deserve. The best response to that came from late-night comedian Jimmy Kimmel, whose newborn son has a heart condition.

Yet the lack of universal access to healthcare is far more dangerous now than it was 50 years ago. The spread of cheap air travel links us all together in a physical analogue to the internet. Bacteria and viruses don't care who they infect. Even in New Zealand.

Illustrations: March 2017 White House meeting to replace the ACA (aka Obamacare); Asclepius, god of medicine; Roger Ebert.
