net.wars: June 2015 Archives


June 26, 2015

The rich sardine

As if it's not bad enough that the sidewalks are filled with people moving at a snail's pace - because as long as they can stare at their phones why should they care how long it takes them to get anywhere? - for the last couple of years there's been a noticeable trend toward making ordinary websites look like apps. Navigational aids have been replaced by the little block of lines, headlines and standfirsts have shortened to be replaced by big pictures, and the whole effect for someone sitting at a desk with three 24-inch monitors is rather like the 1954 Roger Price droodle "The Rich Sardine". What a waste...

Still, while some websites are becoming increasingly non-functional - and yes, I'm looking at the tennis tour's awful 2015 versions of Wimbledon, the French Open, and the ATP tour - one could be smug about one thing: blocking obtrusive ads and locking out scripts is a lot easier on the desktop. That is, of course, exactly why publishers were so quick to leap on the app bandwagon. At last! A way to make some money on digital!

If there is one advantage Apple has, it's that it does not depend on advertising. Apple, like Amazon, eBay, and Microsoft, ultimately thrives on real people putting down real money to buy things. Google, Facebook, Twitter, and the rest have far fewer expenses (no messy factories to build, or distribution logistics to manage), but there's a certain fragility in depending solely on advertising. Sooner or later, the need to increase revenues collides with your audience's unwillingness to consume ads, and that way lies an arms race. The Do Not Track initiative foundered on just such conflicts.

A week or two back, Nieman Labs broke the story that Apple would include ad-blocking in the next version of mobile Safari. The Economist recently noted how popular ad-blocking is. For Apple, this is a fantastic way to sell many more phones. For publishers who thought Apple was providing a revenue path, this is a body blow that goes to prove, as the Reuters chief said in the Guardian, that it's essential to retain ownership of your audience. He warns against being pwned by Facebook's Instant Articles, but you could easily make the same case regarding Google's search engine, as Spanish publishers recently learned. It's easy to imagine some bright soul at Apple realizing they could make a deal where publishers pay a premium to get their ads through the native block, much like mobile operators want sponsored data.

Writing in the New York Times, Andrew Lih predicts the death of Wikipedia. It's a little unfortunate that he uses the still-alive WELL and blogging to make his case. True, many who used to write screeds in personal blogs happily confine themselves to 140 characters on Twitter, but blogs are everywhere, it's just that they're no longer a novelty. Things fade into invisibility for a bit after the novelty explosion subsides, and then they find their true market. The dot-com bust of 2000-2001 is an example.

Another is the web itself: in 2010 Wired proclaimed the web dead. At the time, I thought it was a twist on the New York restaurant joke: "Nobody goes there any more. It's too crowded." How could they possibly be serious? Four years later, Wired recanted. To their apparent surprise, apps, instead of supplanting the web, drove millions more people to it. A few years from now, the people who think blogging is dead will be looking around in surprise at all the blogs everywhere. Wikipedia may well be the same: why shouldn't it adopt new technology to enable editing and embed its content everywhere in new ways?

Compare the moment in Transparent (Season 1, episode 3, "Rollin") when the father asks who's taking the encyclopedias: "No one, Dad. Nobody in the world wants those."

Plenty has been written about Wikipedia's internal culture and the ferocity of its edit wars; the page detailing its "lamest edit wars" is both hilarious and sad (and you thought Jennifer Aniston was just American). The organization itself knows that attracting a greater diversity of editors requires toning down some of the hostilities. It's possible that five years from now Wikipedia will be atrophying, turning into an ordinary organization with paid editors and a dwindling volunteer corps. But anyone who's been to Wikimania recently knows the passion that still animates hundreds of people, especially in countries where content is at an earlier stage of being built. It's also possible that even if its technology looks dated to mobile-trained eyes Wikipedia will be bigger then than it is now, and what appears today to be incipient death is just that same old intermediate novelty-worn-off stage. Maybe greatness still awaits.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 19, 2015

Indirect line

Seven months ago, the Samaritans put a public foot wrong with Radar, an effort to leverage Twitter's social graphs to help people identify friends who might need help. The program was widely criticized and rapidly dropped.

They haven't stopped mulling over options, though: these are people who don't ignore someone passed out on a sidewalk thinking, "probably just drunk". So what should they do? That's the question they put this week to a group including privacy advocates, social workers, and others.

Online can be a difficult environment in which to find the right balance between outreach and interference. Some years ago, I saw someone post a farewell suicide note on a bulletin board. The rules of that particular section limited postings to news items only, with no comments allowed except for a single summary of received email replies by the original poster. In this case, people commented: they complained about being confronted by this depressing farewell over breakfast, one or two thought someone should do something, and finally several invoked the no-comments rule. Soon afterwards, a moderator arrived to vape the whole thing. As a friend later observed, these reactions suggested that posting to this particular site was a good way of showing you weren't just crying for help. Help did arrive, however: a couple of readers who knew the poster personally got him to a hospital. Today, someone in similar distress might also get rescued - but meanwhile his mobile phone would explode with trolls shouting "Jump!"

Ah, someone said, but the trolls themselves may be in terrible distress. "Trolls need love, too." They may indeed; but experience suggests that sometimes life is just too short.

The basis of Radar was that the large number of people using social media could be used to spot the relatively small number of individuals among their friends who need help. From the Samaritans' numbers, in 2013 there were 6,233 suicides in the UK, an overall average of 19 per 100,000 for men and 5.1 per 100,000 for women. The male suicide rate, which had been dropping steadily, began climbing again in 2008, and is now at its highest since 2001. The most at-risk group is men aged 45 to 59: 25.1 per 100,000, the highest it's been since 1981. Lower social class, economic deprivation, and mental health issues are all risk factors. The suicide rate is as low as 3 per 100,000 in parts of Surrey, as high as 30 per 100,000 in Merseyside.

Of the organization's 5 million contacts per year, 85% are by phone, around 200,000 are by email, a few are face to face. Texting, currently at 380,000, is growing rapidly here as elsewhere. Many categories of desperate people such as teens and victims of domestic violence have little privacy or agency to make voice calls, and many younger people are more comfortable with text.

If you grant the premise - that the Samaritans should change with the times and that social media are an opportunity to reach people who might otherwise die of isolation in the middle of crowds - then the question becomes, what should it be? It's tempting, but probably wrong (at least at the present state of the art), to think that turning computers loose on the firehose will work. Yes, a classifier, such as appears in research at Cardiff University in work by Jonathan Scourfield and Pete Burnap, might find evidence of suicidal intent, but it's far more likely to find people unhelpfully joking about suicide. Given that we're talking of billions of messages, false positive rates seem an obvious key issue here, as is the possibility that people in existential distress may in fact be less likely to post than to close themselves off. A system that studies contents of tweets rather than patterns of metadata will not see silence.
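The base-rate arithmetic behind that worry is easy to sketch. All the numbers below are illustrative assumptions, not Samaritans or Cardiff figures; the point is only the shape of the result when the genuine signal is rare:

```python
# Back-of-envelope: why false positives dominate when scanning a huge
# message stream for a rare signal. Every number here is an invented
# assumption for illustration.

def screening_outcomes(n_messages, prevalence, sensitivity, false_positive_rate):
    """Return (true_positives, false_positives) for a classifier run over
    n_messages, where `prevalence` is the fraction of messages genuinely
    signalling distress."""
    genuine = n_messages * prevalence
    true_positives = genuine * sensitivity
    false_positives = (n_messages - genuine) * false_positive_rate
    return true_positives, false_positives

# Suppose 1 in 100,000 messages is genuine, the classifier catches 90%
# of those, and wrongly flags just 1% of everything else.
tp, fp = screening_outcomes(1_000_000_000, 1e-5, 0.9, 0.01)
print(f"true positives:  {tp:,.0f}")   # 9,000
print(f"false positives: {fp:,.0f}")   # ~10 million
```

Even with a flatteringly accurate classifier, over a thousand false alarms arrive for every genuine flag - and that's before counting the people who never posted at all.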

Finding and helping those who need it ought, on the face of it, to be a simpler problem than the one the security services face in trying to find terrorists. The security services need to locate a precise individual to stop a specific plan to attack a particular target. At any given time there will be far fewer of these active, and identification has to be precise. The Samaritans don't want to miss anyone, but for them success can be measured in people helped, with less emphasis needed on exactly who those people are. Even if they miss some, they've done well. Yet nothing is ever simple. A billion people are on Facebook, so using Facebook chats to support people seems like a no-brainer - except for privacy, since among the Samaritans' principles are anonymity and the promise that records won't be kept.

For the Samaritans' purposes the best form of outreach is likely indirect: give access to tools and choices that anyone can benefit from, whether they're in immediate distress or not. Democratise the training given to volunteers by using social media, apps, the web, to help as many people as possible to understand how to recognise and support friends in crisis. The obvious model here might be CPR: many people who are not medical personnel have taken the training so that in a life-or-death emergency they can keep someone alive until medical staff can reach them - and many public areas have defibrillators for the same reason. As has been the norm in the past, let people find you, and be there to be found.


June 12, 2015

The reasonable woman

Law cases frequently turn on interpreting the "reasonable expectation of privacy". At last week's Privacy Law Scholars, Pepperdine professor Victoria Schwartz asked an interesting and seemingly obvious question that apparently is new: how does that reasonable expectation change if you stick "woman" in the middle? As in, "a reasonable woman's expectation of privacy".

Organized by UC Berkeley's Chris Hoofnagle and George Washington University's Daniel Solove, Privacy Law Scholars is more workshop than conference. Lawyers, mostly academics, have fun here by submitting a draft paper, sitting quietly while a peer summarizes and critiques it, and then answering comments and questions from a whole group of peers, who discuss it, tease its ideas apart, and suggest new angles. This year's event featured some 80 papers presented in concurrent tracks over eight sessions; everyone is expected to have read at least the papers whose sessions they will attend. Lawyers love it: the thing is growing like Defcon, and October will see the first European offshoot, in Amsterdam. The overall idea is to improve the quality of privacy law scholarship. Which, you know: good.

As a case study, Schwartz, whose analysis of court rulings is still a work in progress, used drug testing. While the intrusiveness of having to pee under observation is evenly distributed between genders, which is why she chose it, the *informational* privacy implications are different: a woman's urinalysis can show she's pregnant. Or, as in Ferguson v. Charleston, pregnancy can provide the reason for drug tests whose results can be demanded by police (though this case is less uniquely applicable, since men also might have occasion to give a hospital samples for unrelated medical reasons).

Yet this aspect doesn't register in court judgments where drug testing rules have been challenged. In Ferguson, the Supreme Court ruled that handing such tests over to the police constituted an illegal search under the Fourth Amendment. In dissent, however, Justice Antonin Scalia wrote, "No good deed goes unpunished", and argued that turning over the test results to police did not constitute unreasonable search because it was done as part of a larger goal to get these women to seek treatment and protect the health of their fetuses. In another case, National Treasury Employees Union v. Von Raab, which used auditory but not visual monitoring to ensure the integrity of samples, SCOTUS upheld the tests in two of the three categories. Scalia again dissented, calling conducting the urinalysis test while a same-sex monitor "remains close at hand to listen for the normal sounds" "particularly destructive of privacy and offensive to personal dignity." So it's more invasive to be monitored while peeing than to have the results turned over to the authorities? This led to a discussion of the notion that how some judges rule in cases may depend on whether and how they identify with the people in the case.

One of the points made in the discussion was that there are many cases where the law is privacy-invasive but not thought of along those lines, the cited example being mandatory ultrasound laws, put in place to delay or deter women seeking abortions. Other case law has obvious privacy implications: many states have ruled that "upskirt" photographs are protected by the First Amendment, effectively ruling that a woman's private parts may not be private if she's in public wearing a skirt. Some examples: Massachusetts; Washington, DC; Texas.

A secondary point arrived via a mention of rape apps. It makes me enormously sad that such a thing needs to exist, and especially that young women feel they need these, but apparently they do. Some of these involve GPS tracking: essentially, the women who use them are trading their privacy for feeling safer. Along those lines, we haven't come as long a way (baby) as one might wish: many, many strictures on women's behavior derived from the notion that they needed to be protected, and many divisions that persist between men and women continue to derive from it. And, as someone pointed out in the session, women historically had no privacy: that's why Virginia Woolf longed for A Room of One's Own.

Although the particular conversation ignited by Schwartz's paper did revolve a great deal around women - there was at least one self-identified feminist from each of the last four decades - the box she's opened is larger than that. As another conference participant said, "Poor people, black people, elderly, disabled..." Anyone, in fact, who is not "neutral", where "neutral" typically means "normal" to whoever is speaking. Understanding how broad an array "normal" covers is hard, and involves being willing to let go of preconceived notions, as this week's please, girls, don't cry moment also showed. There's a good analogy here to accents: I speak unaccented English; you have an accent; he talks funny.

As soon as you add *any* modifier to "reasonable expectation", replacing the generic white, male stick figure in your head with a more specific real person, you have moved "privacy" from the abstract to the personal. This is something privacy advocates struggle with all the time, and it's badly needed, whatever that replacement looks like.


June 5, 2015

The end

At the Royal Society's scientific meeting on machine learning on May 22, it was notable how many things the assembled scientists thought machine learning was going to bring to an end: the end of today's computer processor architectures (Simon Knowles, CTO of XMOS, and Oxford researcher Simon Benjamin); the end of the primacy of the program (also Knowles); the end of programming (Christopher Bishop); the end of the Royal Society (Nick Bostrom), and, quite possibly the end of humanity (also Bostrom).

The last idea, which we'll call "terminator theory", was out of sync with the generally optimistic tone, which held that machine learning has the potential to...well, all the usual economic and social transformations that tend to be predicted for all technological breakthroughs. Still: you have to like an AI conference that talks more about what *we* will achieve in building and partnering with AIs rather than what AI can achieve with or without us and whether they will like us afterwards.

Nick Jennings stressed the value of partnering experienced humans with machine learning systems to allocate resources in crisis situations. Bishop's work on recommendation and matching systems - movies and subscribers, game players with each other based on constantly updating skill levels - is meant to serve us, if only to get us to consume entertainment.

Bostrom, who runs the Future of Humanity Institute at Oxford and therefore specializes in identifying existential threats, was the only one to seem worried that sufficiently advanced AIs - "superintelligences" - might prefer to run things to suit themselves. The only other speaker who came close was Demis Hassabis from (now Google) DeepMind, who said his company has two goals: "Crack intelligence. Then use it to solve everything else." Modest dreams are no fun.

Note that no one (not even Bostrom, or at least, not clearly) was talking about creating *consciousness*. This was a scientific meeting on machine learning to outline where we are, what happens next, what's needed, and how to use the results to date. Where we are is not "artificial general intelligence" like Hassabis wants to build, even though DeepMind has some algorithms that have impressively taught themselves superhuman expertise at a series of old computer games. A former child chess prodigy himself, Hassabis watched IBM's Deep Blue beat Garry Kasparov at chess in 1996 and was more impressed by Kasparov, who managed to be competitive but could also speak three languages, drive cars, and tie shoelaces. Deep Blue "had to be reprogrammed to play noughts and crosses." (For Americans: tic tac toe.)

Still: Hassabis's games-player, given the chance to play long enough (overnight, hundreds of games) has found strategies its programmers didn't know about. Geoff Hinton geoffhinton.jpgand others he referenced have created systems that can reliably recognize the subjects of photographs and apply captions. We all know about voice recognition and driverless cars - to the point that, to borrow a quip I believe is from Bostrom colleague Anders Sandberg, we no longer think of those functions as "AI" because they work (mostly).

Hinton, in showing off a system that can create captions for photographs with surprising reliability, however, noted the next question. The machine looks at a picture of a toddler asleep cuddling a stuffed animal and describes it as: "child holding stuffed animal". This machine is good at objects, but not so much at relationships. So: when it says "holding" is it because it sees the relationship between the child's posture and the animal, or does its inner language model pick out "child" and "stuffed animal" and probabilistically predict that "holding" is the correct intervening word? In other words, is it interpreting or guessing? This - and other questions of how much a machine can be said to "think" - are problems for us, not them.
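The "guessing" path is easy to illustrate. This toy sketch is mine, not the actual captioning system: a model that has only co-occurrence statistics can emit "holding" without consulting the image at all. The training captions are invented:

```python
# A toy illustration of the interpreting-vs-guessing question: a model
# trained only on word statistics can "predict" the right verb with no
# reference to the picture. Captions below are hypothetical.
from collections import Counter

training_captions = [
    "child holding stuffed animal",
    "child holding toy",
    "child hugging stuffed animal",
    "woman holding umbrella",
    "man holding phone",
]

# Count which verb appears between subject and object in the corpus.
verb_counts = Counter(caption.split()[1] for caption in training_captions)

def guess_verb():
    # The purely statistical path: return the likeliest intervening
    # verb, ignoring the image entirely.
    return verb_counts.most_common(1)[0][0]

print(guess_verb())  # "holding" - from word frequencies alone
```

If this is all that's happening, the caption is right for the wrong reason; the hard part, as Hinton noted, is telling the two paths apart from the outside.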

The big winner in all this is Bayes' theorem, which is everywhere because every one of these systems has to deal with uncertainty. This is the difference between trying to program a machine to handle every conceivable eventuality (which beyond the simplest situations is infeasible) and giving a machine a set of rules for handling uncertainty, which is what Bayes is all about.
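The rule itself is one line: the probability of a hypothesis given the evidence, P(H|E), equals P(E|H) x P(H) / P(E). A minimal sketch, with invented numbers, of the kind of update such a system makes:

```python
# A minimal Bayesian update: P(H|E) = P(E|H) * P(H) / P(E).
# All probabilities below are invented for illustration.

def bayes_update(prior, likelihood, evidence_prob):
    """Posterior probability of hypothesis H after observing evidence E."""
    return likelihood * prior / evidence_prob

# A machine starts out believing there's a 1% chance the object ahead
# is a cyclist.
prior = 0.01
# Its sensor fires for 80% of cyclists...
p_signal_given_cyclist = 0.8
# ...and for 5% of everything else.
p_signal_given_other = 0.05
# Total probability of seeing the signal at all:
p_signal = p_signal_given_cyclist * prior + p_signal_given_other * (1 - prior)

posterior = bayes_update(prior, p_signal_given_cyclist, p_signal)
print(f"{posterior:.3f}")  # 0.139: one noisy reading shifts 1% to ~14%
```

Each new reading repeats the same update with the last posterior as the new prior, which is how a machine copes with rules that are never certain.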

About a month ago, NPR's Planet Money ran a face-off between a robot and a human in three areas: folding laundry (human won), therapy for those recovering from trauma and depression (robot won), and radio news reporting. Planet Money awarded the robot a win for reporting: the robot took two minutes and change to create a perfectly accurate and correctly highlighted report from financial data; the human, chosen for his speed, took seven and something. Yet there's no question that if you're a radio station wishing to keep your car-bound listeners during commute time, you'd run the human-written version: it was colorful and entertaining as well as accurate. It told you things humans, as opposed to high-speed traders, would enjoy hearing even if they didn't care that much about the quarterly results of that specific restaurant chain.

That aside, the most interesting conclusion of the program: the earlier in life a skill can be learned by a human, the harder it is to teach a robot. Put all these things together, and the logical conclusion is obvious: this means the end of shoelaces. Note: that's already started...
