Archives

May 19, 2017

Policy questions for 2022

For last weekend's OpenTech I had recklessly suggested imagining the Daily Mail headlines of the future. When I actually came to plan the talk, a broader scope seemed wiser, and after some limbering up with possible near-future headlines, which you can read on my slides, posted here, I wound up considering what might be genuine policy questions five years from now. As a way of assessing whether I had any credentials as a futurist, I referred the audience to a piece I wrote for Salon as 1998 opened, Top Ten New Jobs for 2002. (There was also a Daily Telegraph version.)

Scoring the jobs I proposed there: two we now do for ourselves - people reviewer (or at least researcher) and real-time biographer. Large companies and the very wealthy have citizenship brokers (aka "tax advisors"), data obfuscators are better known as reputation managers, and embedded advertising managers are what a certain number of editors have been forced to become when the job hasn't been automated entirely. Copyright protection officers, digital actors' guild representatives, and computer therapists have yet to arise, but for "human virtual servants" we have myriad versions of the mechanical Turk (see, for example, Annalee Newitz's Ars Technica piece on Google raters). I haven't actually heard anyone describe themselves as an "electronic image consultant", but I can't believe they don't exist. So: not too bad a score, really.

The rest of the limbering-up portion proposed some near-future headlines, and considered extracts from the actual Class of 2020 Mindset list (issued in 2016), plus some thoughts about the 2035 Mindset list (babies born in 2017) and the 2026 list (today's nine-year-olds). You can read these for yourselves.

The policy questions all have current inspirations. These days, randomly wandering phone-neverwhered pedestrians are a constant menace, and it seems to be getting worse - in London you even see cyclists in traffic, earphones in, texting (while a similarly equipped oblivious pedestrian vaguely strays in front of them). Road safety for this wantonly deaf-and-blind generation is an obvious conundrum, only partially solvable by initiatives like that of the German city of Augsburg, which has embedded lights in the pavement so phone-starers will notice them.

Various options occur: repurposing disused tube tunnels as segregated walkways, for example, or building an elaborate network of sensors that an app on the phone follows automatically. My favorite suggestion from a pre-conference conversation: pneumatic tubes! This is a way-underused technology.

Video ads for malware on TV are with us, at least for the YouTube generation: to product trials and technical support, the shadow malware business infrastructure has added polished marketing campaigns, complete with video ads on YouTube. Cyber crime is the fastest-growing industry in the world, I was told at a security meeting recently. Given the UK's imminent need for new sources of economic growth...

The WannaCry attack has since given new weight to the question of how long manufacturers should be required to issue security patches, because software is forever. Columbia University professor Steve Bellovin more thoughtfully asks: who pays? As he writes, until we find a different answer, we all do. An audience member suggested requiring "supported-until" declarations on new hardware and software. This won't help when vendors go out of business, and it won't make consumers patch big-ticket items like refrigerators and cars, but it would help us make slightly more informed decisions, especially regarding content restricted by digital rights management.

The IoT at-home health monitoring requirement in return for receiving NHS benefits seems a logical extension of then-Prime Minister David Cameron's 2011 statement that all patients should contribute their data for research; I believe he later said sharing data should be a required return for receiving NHS benefits. Deaf access to video calling seems like a no-brainer, particularly for those whose first language is signing.

An audience member suggested we may need a law to prevent the appearance in ads of synthetic versions of dead relatives. Of course you hope advertisers won't have that level of bad taste, but Facebook did mark a friend's 50th birthday with an ad for funeral arrangements featuring a bereaved female who looked disturbingly like her daughter.

Further suggestions were more along the lines of the headlines I originally promised:

- Large internet company creates its own military force. Seems all too possible.

- Alexa wins First Amendment rights. Two months ago, Arkansas police sought access to the data collected by the Amazon Echo in a murder defendant's home. Amazon tried to claim Alexa's replies were protected by the First Amendment, but withdrew its objection when the defendant agreed to let the data be handed over. Google has also tried to claim First Amendment protection for its search results. So: not too far-fetched.

- Replacing 999 (UK emergency services; 911 in the US) requires all phone microphones to be kept on all the time. All too imaginable: I worry that within my lifetime it will become suspicious if you do not have data collection devices in your home that can be secretly accessed and reviewed by police at any time.

- The last person on a permanent employment contract retires.

Sadly, this week's election manifestos tread the same old ground. Don't they know we have a different generation's future to imagine?


Illustrations: Presenting at OpenTech (photo by Hadley Beeman); Future, 1000 meters (Autobahn sign).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 12, 2017

Intervention

Before there was the internet there were commercial information services and conferencing systems. Since access to things like news wires, technical support, and strangers across the world who shared your weird obsession with balloon animals was scarce, they could charge rather noticeable amounts of money for hourly access. On services like CompuServe, AOL, and Prodigy, discussion areas were owned by independents, who split the revenue their users generated by spending time in their forums with the host service. So far, so reasonably fair.

What kept these forums from drowning in a sea of flames, abuse, bullying, and other bad behavior was not what today's politicians may think. It was not that everyone was real-world identified because they all had to pay by credit card. It was not that the people were a higher class because they were wealthier early adopters. And it was not because so many were business people who really needed access to stock quotes, technical support, and balloon animal news. It was because forum owners could trade free access to their forums for help with system administration. Volunteer SysOps moderated discussions, defused fights, issued warnings and bans for bad behavior, cleaned out inappropriate postings, and curated files.

Then came the internet, with its flat monthly subscription fees no matter how much data you used and its absence of technical controls to stop people from putting up their own content, and business models changed. Forum owners saw their revenues plummet. The value to volunteers of their free access did likewise. Forum participation thinned. AOL embraced advertising, dumping the niche sites whose obsessively loyal followings had paid such handsome access fees in favour of mainstream content that aggregated the mass audiences advertisers pay for. Why have balloon animals when you can have the cast of Friends?

Tl;dr: a crucial part of building those successful businesses was volunteer humans.

I remember this every time a site shuts down its comment board because of the volume of crap. This week, at Ars Technica, writer and activist Annalee Newitz found a new angle with a piece about Google's raters. Newitz finds that these folks are paid an hourly rate somewhat above minimum wage, though they lack health insurance and are dependent on being logged in when tasks arrive.

The immediate reason for her story was that while Google is talking about deploying thousands of raters to help fix YouTube's problem with advertisers and extremist videos, this group's hours are being cut. The exact goals are murky, but the main driver is apparently to avoid loading their actual employer, to which Google subcontracts this part of its operation, with a benefits burden that company can't afford. Much of the story is a messy tale of America's broken healthcare system. However, in researching these workers' lives, Newitz uncovers Sarah Roberts, a researcher at UCLA who has been traveling the world to study raters' work for five years. What has she found? "Actually their AIs are people in the Philippines."

So again: under-recognized humans are the technology industry's equivalent of TV's funny friend. In 2003, on a visit to Microsoft Research, I was struck by the fact that although the company was promoting its smart home, right outside it was a campus run entirely by human receptionists who controlled access, dispensed information, and filled their open hours with small but useful administrative projects.

This pattern is everywhere. Uber's self-driving cars need human monitors to intervene approximately once every 0.8 miles. Google Waymo's cars perform better - but even so, they require human aid once every 5,000 miles, a rarity that arguably makes each handover more dangerous, because the monitoring human has long since stopped paying close attention. Plus the raters: at Google, obviously, but also at Facebook and myriad other sites.

The goal for these companies is rather obviously that the human assistance should act as training wheels for automation, which - returning to Newitz's piece - is a lot easier to arrange if those humans are not covered by employment laws that make them hard to lay off. There is an old folk song about this: Keep That Wheel a-Turning.

In the pre-computer world, your seriousness about a particular effort could be judged by the number of humans you deployed to work on it. In the present game, the perfect system (at least for technology companies and their financiers) would require no human input at all, preferably while generating large freighter-loads of cash. WhatsApp got close: when Facebook bought it, it had 55 employees to 420 million users worldwide.

Human moderation is more effective - and likely to remain so for the foreseeable future - but it cannot scale to manage 1.2 billion Facebook users. Automation is imperfect, but scalable and immune to post-rating trauma. Which is why, Alec Muffett points out, the outraged complaint in the Home Affairs Select Committee's May 1 report that Google, Facebook, and Twitter deploy insufficient numbers of people to counteract online hate speech is a sign that the committee has not fully grasped the situation. "How many 'staff' will equate to a four-thousand-node cluster of computers running machine-learning / artificial intelligence software?" he asks.

It's a good question, and one we're going to have to answer in the interests of having the necessary conversations about social media and responsibility. As Roberts says, the discussion is "incomplete" without an understanding of the part humans play in these systems.

Illustrations: HAL, from Stanley Kubrick's 2001: A Space Odyssey; Annalee Newitz; Sarah Roberts (UCLA).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.