" /> net.wars: September 2018 Archives


September 27, 2018

We know where you should live

In the memorable panel "We Know Where You Will Live" at the 1996 Computers, Freedom, and Privacy conference, the science fiction writer Pat Cadigan startled everyone, including fellow panelists Vernor Vinge, Tom Maddox, and Bruce Sterling, by suggesting that some time in the future insurance companies would levy premiums for "risk purchases" - beer, junk foods - in supermarkets in real time.

Cadigan may have been proved right sooner than she expected. Last week, John Hancock, a 156-year-old US insurance company, announced it would discontinue underwriting traditional life insurance policies. Instead, in future all its policies will be "interactive"; that is, they will come with the "Vitality" program, under which customers supply data collected by their wearable fitness trackers or smartphones. John Hancock promotes the program, which it says is already used by 8 million customers in 18 countries, as providing discounts - a sort of reward for "living healthy". In the company's depiction, everyone wins: you get lower premiums and a healthier life, and John Hancock gets your data, enabling it to make more accurate risk assessments and increase its efficiency.

Even then, Cadigan was not the only one with the idea that insurance companies would exploit the Internet and the greater availability of data. A couple of years later, a smart and prescient friend suggested that we might soon be seeing insurance companies offer discounts for mounting a camera on the hood of your car so they could mine the footage to determine blame when accidents occurred. This was long before smartphones and GoPros, but the idea of small, portable cameras logging everything goes back at least to 1945, when Vannevar Bush wrote As We May Think, an essay that imagined something a lot like the web, if you make allowances for storing the whole thing on microfilm.

This "interactive" initiative is clearly a close relative of all these ideas, and is very much the kind of thing University of Maryland professor Frank Pasquale had in mind when writing his book The Black Box Society. John Hancock may argue that customers know what data they're providing, so it's not all that black a box, but the reality is that you only know what you upload. Just like when you download your data from Facebook, you do not know what other data the company matches it with, what else is (wrongly or rightly) in your profile, or how long the company will keep penalizing you for the month you went bonkers and ate four pounds of candy corn. Surely it's only a short step to scanning your shopping cart or your restaurant meal with your smartphone to get back an assessment of how your planned consumption will be reflected in your insurance premium. And from there, to automated warnings, and...look, if I wanted my mother lecturing me in my ear I wouldn't have left home at 17.

There has been some confusion about how much choice John Hancock's customers have about providing their data. The company's announcement is vague about this. However, it does make some specific claims: Vitality policy holders so far have been found to live 13-21 years longer than the rest of the insured population; generate 30% lower hospitalization costs; take nearly twice as many steps as the average American; and "engage with" the program 576 times a year.

John Hancock doesn't mention it, but there are some obvious caveats about these figures. First of all, the program began in 2015. How does the company have data showing its users live so much longer? Doesn't that suggest that these users were living longer *before* they adopted the program? Which leads to the second point: the segment of the population that has wearable fitness trackers and smartphones tends to be more affluent (which tends to favor better health already) and more focused on their health to begin with (ditto). I can see why an insurance company would like me to "engage with" its program twice a day, but I can't see why I would want to. Insurance companies are not my *friends*.

At the 2017 Computers, Privacy, and Data Protection conference, one of the better panels discussed the future of the insurance industry in the big data era. For the insurance industry to make sense, it requires an element of uncertainty: insurance is about pooling risk. For individuals, it's a way of managing the financial cost of catastrophes. Continuously feeding our data into insurance companies so they can more precisely quantify the risk we pose to their bottom line will eventually mean a simple equation: being able to get insurance at a reasonable rate is a pretty good indicator you're unlikely to need it. The result, taken far enough, will be to undermine the whole idea of insurance: if everything is known, there is no risk, so what's the point? Betting on a sure thing is cheating in insurance just as surely as it is in gambling. In the panel, both Katja De Vries and Mireille Hildebrandt noted the sinister side of insurance companies acting as "nudgers" to improve our behavior for their benefit.
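
To make the arithmetic concrete, here is a toy sketch in Python - invented numbers, not any insurer's actual pricing model - of how increasingly precise risk information dissolves the pool:

```python
# Toy model (not any insurer's actual pricing): compare one pooled
# premium with premiums set from perfect individual risk information.

PAYOUT = 100_000  # cost of paying out a single claim

# Invented annual claim probabilities for a three-person pool
risks = {"low": 0.001, "medium": 0.01, "high": 0.1}

# Pooled: everyone pays the pool's average expected cost
pooled_premium = sum(p * PAYOUT for p in risks.values()) / len(risks)

# Perfect information: each person pays exactly their own expected cost
individual = {name: p * PAYOUT for name, p in risks.items()}

print(f"pooled premium: ${pooled_premium:,.0f} for everyone")
for name, premium in individual.items():
    print(f"{name}-risk premium: ${premium:,.0f}")

# Pooled, everyone pays $3,700. Individually priced, the low-risk
# customer pays $100 and the high-risk customer $10,000: each is now
# simply pre-paying their own expected losses, and nothing is shared.
```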

So, less "We know where you will live" and more "We know where and how you *should* live."


Illustrations: Pat Cadigan (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 21, 2018

Facts are screwed

vlad-enemies-impaled.gif"Fake news uses the best means of the time," Paul Bernal said at last week's gikii conference, an annual mingling of law, pop culture, and technology. Among his examples of old media turned to propaganda purposes: hand-printed woodcut leaflets, street singers, plays, and pamphlets stuck in cracks in buildings. The big difference today is data mining, profiling, targeting, and the real-time ability to see what works and improve it.

Bernal's most interesting point, however, is that like a magician's plausible diversion, the surface fantasy story may stand in front of an earlier fake news story that is never questioned. His primary example was Vlad the Impaler, the historical figure who is thought to have inspired Dracula. Vlad's fame as a vicious and profligate killer derives from those woodcut leaflets. Bernal suggests the reasons: a) Vlad had many enemies who wrote against him, some of it true, most of it false; b) most of the stories were published ten to twenty years after he died; and c) there was a whole complicated thing about the rights to Transylvanian territory.

"Today, people can see through the vampire to the historical figure, but not past that," he said.

His main point was that governments' focus on content to defeat fake news is relatively useless. A more effective approach would have us stop getting our news from Facebook. Easy for me personally, but hard to turn into public policy.

Soon afterwards, Judith Rauhofer outlined a related problem: because Russian bots are aimed at exacerbating existing divisions, almost anyone can fall for one of the fake messages. Spurred on by a message from the Tumblr powers that be advising that she had shared a small number of messages that were traced to now-closed Russian accounts, Rauhofer investigated. In all, she had shared 18 posts - and these had been reblogged 2.7 million times, and are still being recirculated. The focus on paid ads means there is relatively little research on organic and viral sharing of influential political messages. Yet these reach vastly bigger audiences and are far more trusted, especially because people believe they are not being influenced by them.

In the particular case Rauhofer studied, "There are a lot of minority groups under attack in the US, the UK, Germany, and so on. If they all united in their voting behavior and political activity they would have a chance, but if they're fighting each other that's unlikely to happen." Divide and conquer, in other words, works as well as it ever has.

The worst part of the whole thing, she said, is that looking over those 18 posts, she would absolutely share them again and for the same reason: she agreed with them.

Rauhofer's conclusion was that the combination of prioritization - that is, the ordering of what you see according to what the site believes you're interested in - and targeting form "a fail-safe way of creating an environment where we are set against each other."
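
It takes remarkably little machinery to produce that environment. A minimal sketch of "prioritization" in Python (invented posts and scores; not any platform's actual ranking code):

```python
# Toy sketch of engagement-based prioritization. The posts and scores
# are invented; real systems predict engagement per user from profiles.

posts = [
    {"text": "local bake sale on Saturday", "predicted_engagement": 0.02},
    {"text": "outrageous thing THEY did today", "predicted_engagement": 0.31},
    {"text": "cute hamster video", "predicted_engagement": 0.12},
]

# The feed simply shows the highest-scoring items first.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in feed:
    print(f"{post['predicted_engagement']:.2f}  {post['text']}")

# If divisive material reliably provokes reactions, it reliably rises
# to the top - no conspiracy required, just optimizing for engagement.
```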

So in Bernal's example, an obvious fantasy masks an equally untrue - or at least wildly exaggerated - story, while in Rauhofer's the things you actually believe can be turned into weapons of mass division. Solving either requires much more nuance and, as we've argued here before, many more disciplines than are currently being deployed.

Andrea Matwyshyn offered five mini-fables as a way of illustrating five problems to consider when designing AI - or, as she put it, five stories of "future AI failure". These were:

- "AI inside" a product can mean sophisticated machine learning algorithms or a simple regression analysis; you cannot tell from the outside what is real and what's just hype, and the specifics of design matter. When Google's algorithm tagged black people as "gorillas", the company "fixed" the algorithm by removing "gorilla" from its list of possible labels. The algorithm itself wasn't improved.

- "Pseudo-AI" has humans doing the work of bots. Lots of historical examples for this one, most notably the mechanical Turk; Matwyshyn chose the fake autonomaton the Digesting Duck.

- Decisions that bring short-term wins may also bring long-term losses in the form of unintended negative consequences that haven't been thought through. Among Matwyshyn's examples were a number of cases where human interaction changed the analysis, such as the failure of Google Flu Trends and Microsoft's Tay bot.

- Minute variations or errors in implementation or deployment can produce very different results than intended. Matwyshyn's prime example was a pair of electronic hamsters she thought could be set up to repeat each other's words to form a recursive loop. Perhaps responding to harmonics less audible to humans, they instead screeched unintelligibly at each other. "I thought it was a controlled experiment," she said, "and it wasn't."

- There will always be system vulnerabilities and unforeseen attacks. Her example was squirrels that eat power lines, but the backhoe is the traditional example.
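
As promised above, a minimal sketch of the first fable's kind of "fix" (invented labels and scores; not Google's actual code) shows why removing a label is not the same as improving the algorithm:

```python
# Hypothetical sketch of a post-hoc "fix": the offending label is
# filtered out of the output, but the classifier underneath is
# unchanged and still makes exactly the same mistake.

BLOCKED_LABELS = {"gorilla"}  # removed from the output, not the model

def classify(image):
    # Stand-in for a real image classifier returning (label, confidence)
    # pairs; hard-coded here purely for illustration.
    return [("gorilla", 0.91), ("person", 0.72), ("outdoors", 0.55)]

def classify_with_fix(image):
    # The "fix": silently drop blocked labels after the fact.
    return [(label, score) for label, score in classify(image)
            if label not in BLOCKED_LABELS]

print(classify_with_fix(None))  # [('person', 0.72), ('outdoors', 0.55)]
```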

To prevent these situations, Matwyshyn emphasized disclosure about code; verification, in the form of third-party audits; substantiation, in the form of evidence to back up the claims that are made; anticipation, that is, liability and good corporate governance; and remediation, again a function of good corporate governance.

"Fail well," she concluded. Words for our time.


Illustrations: Woodcut of Vlad, with impaled enemies.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 14, 2018

Hide by default

Last week, defenddigitalme, a group that campaigns for children's data privacy and other digital rights, and Sonia Livingstone's group at the London School of Economics assembled a discussion of the Information Commissioner's Office's consultation on age-appropriate design for information society services, which is open for submissions until September 19. The eventual code will be used by the Information Commissioner when she considers regulatory action, may be used as evidence in court, and is intended to guide website design. It must take into account both the child-related provisions of the General Data Protection Regulation and the United Nations Convention on the Rights of the Child.

There are some baseline principles: data minimization, comprehensible terms and conditions and privacy policies. The last is a design question: since most adults either can't understand or can't bear to read terms and conditions and privacy policies, what hope of making them comprehensible to children? The summer's crop of GDPR notices is not a good sign.

There are other practical questions: when is a child not a child any more? Do age bands make sense when the capabilities of one eight-year-old may be very different from those of another? Capacity might be a better approach - but would we want Instagram making these assessments? Also, while we talk most about the data aggregated by commercial companies, government and schools collect much more, including biometrics.

Most important, what is the threat model? What you implement and how is very different if you're trying to protect children's spaces from ingress by abusers than if you're trying to protect children from commercial data aggregation or content deemed harmful. Lacking a threat model, "freedom", "privacy", and "security" are abstract concepts with no practical meaning.

There is no formal threat model, as the Yes, Minister episode The Challenge (series 3, episode 2) would predict: too close to "failure standards". The lack is particularly dangerous here, because "protecting children" means such different things to different people.

The other significant gap is research. We've commented here before on the stratification of social media demographics: you can practically carbon-date someone by the medium they prefer. This poses a particular problem for academics, in that research from just five years ago is barely relevant. What children know about data collection has markedly changed, and the services du jour have different affordances. Against that, new devices have greater spying capabilities, and, the Norwegian Consumer Council finds (PDF), Silicon Valley pays top-class psychologists to deceive us with dark patterns.

Seeking to fill the research gap are Sonia Livingstone and Mariya Stoilova. In their preliminary work, they are finding that children generally care deeply about their privacy and the data they share, but often have little agency and think primarily in interpersonal terms. The Cambridge Analytica scandal has helped inform them about the corporate aggregation that's taking place, but they may, through familiarity, come to trust people such as their favorite YouTubers and constantly available things like Alexa in ways the adults around them dislike. The focus on Internet safety has left many thinking that's what privacy means. In real-world safety, younger children are typically more at risk than older ones; online, the situation is often reversed because older children are less supervised, explore further, and take more risks.

The breath of passionate fresh air in all this is Beeban Kidron, an independent - that is, appointed - member of the House of Lords who first came to my attention by saying intelligent and measured things during the post-referendum debate on Brexit. She refuses to accept the idea that oh, well, that's the Internet, there's nothing we can do. However, she *also* genuinely seems to want to find solutions that preserve the Internet's benefits and incorporate the often-overlooked child's right to develop and make mistakes. But she wants services to incorporate the idea of childhood: if all users are equal, then children are treated as adults, a "category error". Why should children have to be resilient against systemic abuse and indifference?

Kidron, who is a filmmaker, began by doing her native form of research: in 2013 she made the full-length documentary InRealLife, which studied a number of teens using the Internet. While the film concludes on a positive note, many of the stories depressingly confirm some parents' worst fears. Even so, it's a fine piece of work because it's clear she was able to gain the trust of even the most alienated of the young people she profiles.

Kidron's 5Rights framework proposes five essential rights children should have: remove, know, safety and support, informed and conscious use, and digital literacy. To implement these, she proposes that the industry should reverse its current pattern of defaults, which, as is widely known, 95% of users never change (while 98% never read terms and conditions). Companies know this, and keep resetting the defaults in their favor. Why shouldn't it be "hide by default"?
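
What hide-by-default might look like is almost embarrassingly simple. A hypothetical sketch (invented setting names; no particular platform's API):

```python
# Hypothetical sketch of "hide by default": the code is trivial; the
# policy question is simply which way the booleans point before the
# user ever touches them. Setting names are invented.

from dataclasses import dataclass

@dataclass
class AccountDefaults:
    profile_public: bool = False       # hidden until the user opts in
    location_sharing: bool = False
    personalized_ads: bool = False
    searchable_by_email: bool = False

# A new child's account starts fully hidden; anything made visible is
# a deliberate, reversible choice rather than a buried opt-out.
new_account = AccountDefaults()
print(new_account)
```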

This approach sparked ideas. A light that tells a child they're being tracked or recorded so they can check who's doing it? Collective redress is essential: what 12-year-old can bring their own court case?

The industry will almost certainly resist. Giving children the transparency and tools with which to protect themselves, resetting the defaults to "hide"...aren't these things adults want, too?


Illustrations: Beeban Kidron (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 7, 2018

Watching brief

Amazon-error-message-usopen-2018.png"Hope the TV goes out at the same time," the local cable company advised me regarding outages when they supplied my Internet service circa 2001. "Because then so many people complain that it gets fixed right away."

Amazon is discovering the need to follow this principle. As the Guardian reported last week, this year's US Open tennis is one of Amazon Prime's first forays into live sports streaming, and tennis fans are unhappy.

"Please leave tennis alone," says one of the more polite user reviews.

It seems like only yesterday that being able to watch grainy, stuttering video in a corner of one's computer screen was like a miracle (and an experience no one would ever want to repeat unless they had to). Now, streaming is so well established that people complain about the quality, the (lack of) features, and even the camera angles. People! Only ten years ago you'd have been *grateful*!

A friend, seeing the Guardian's story, emailed: "Are you seeing this?" Well, yes. Most of it. On my desktop machine the picture looks fine to me, but it's a 24-inch monitor, not a giant HD TV, and as long as I can pick out the ball consistently, who cares whether it's 1080p? However, on two Windows laptops both audio and video stutter badly. That was a clue: my Linux-based desktop has a settings advisory: "HD TV Not Available - Why?" It transpires that because Linux machines lack the copy protection built into HDMI, Amazon doesn't send HD. I'm guessing that the smaller amount of data means smoother reception and a better experience, even if the resolution is lower. That said, even on the Linux machines the stream fails regularly. Reload window, click play.
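
The trade-off works roughly like this toy sketch (invented numbers and tiers; not Amazon's actual player logic): the player offers the highest-bitrate stream the device is both allowed and able to sustain.

```python
# Toy sketch of stream selection. Tiers and numbers are invented;
# real players adapt continuously rather than choosing once.

RENDITIONS = [  # (label, megabits per second, requires copy protection?)
    ("1080p", 8.0, True),
    ("720p", 4.0, True),
    ("480p", 1.5, False),
]

def pick_rendition(measured_mbps, hdcp_available):
    for label, mbps, needs_hdcp in RENDITIONS:
        if needs_hdcp and not hdcp_available:
            continue  # e.g. a Linux desktop without HDCP never sees HD
        if mbps <= measured_mbps:
            return label
    return RENDITIONS[-1][0]  # fall back to the lowest rung

# No copy protection plus a modest connection gets the 480p stream -
# far less data to move, hence often smoother playback than struggling HD.
print(pick_rendition(measured_mbps=3.0, hdcp_available=False))  # 480p
```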

The camera angle is indeed annoying, but for that you have to blame the USTA and the new Armstrong stadium design. There's only one set of cameras, and the footage is distributed by the host broadcaster to everyone else. Whine to Amazon all you want, but all the company can do is forward the complaints.

One reason tennis fans are so picky is that the tennis tours adopted streaming years ago, as did Eurosport, as a way of reaching widely dispersed fans: tennis is a global minority sport. So they are experienced, and they have expectations. On the ATP (men's) tour's own site, TennisTV, if you're getting a stuttering picture you can throttle the bitrate; the scores and schedule are ready to hand; and you can pause a match and resume it later or step back to the beginning or any point in between. Replays are available very soon after a match ends. On Amazon, there's an icon to click to replay the last ten seconds, but you can't pause and resume, and you can only go back about half an hour. Lest you think that's trivial: US Open night sessions, which generally feature the most popular matches, start at 7pm New York time - and therefore midnight in the UK.

In general, it's clear that Amazon hasn't really thought through the realities of the way fans embrace the US Open as an *event*. Instead, it treats live matches, replays, and highlights compilations as separate "items". The replays Amazon began posting after a couple of days are particularly well-hidden, in that they're not flagged from either the highlights page or the live page and they're called "match of the day". When I did find them, they refused to play.

I would probably have been more annoyed about all this if UK coverage of the US Open hadn't been so frequently frustrating in the past (I remember "watching" the 1990 men's final by watching the Teletext scores update, and the frustrations of finding live matches when Sky scattered them across four premium channels). Watching the US Open in Britain is like boarding a plane for a long flight in economy: you don't ask if you're going to be uncomfortable. Instead, you assemble a toolkit and then ask which components you're going to need to make it as tolerable as possible within the constraints. So: I know where the Internet hides recordings of recently played matches and free streams. The US Open site has the scores and schedule of who's playing where. All streams bomb out at exactly the wrong moment. Unlike the USTA, though, Amazon took only a day or two to respond to viewer complaints by labeling the streams with who was playing. I *have* liked hearing some different commentators for a change. But I do not want to be a Prime subscriber.

Amazon will likely get better at this over the next four years of its five-year, $40 million contract and the course of its £50 million, five-year contract to show the ATP Tour. Nonetheless, sports are almost the only programming viewers are guaranteed to want to watch in real time, and fans, broadcasters, and the sports themselves are unlikely to be well-served in the long term by a company that uses live sports as a loss-leader - like below-cost pricing on milk in a grocery store - to build platform loyalty and subscribers for its delivery service. Sports are a strategy for the company, not its business. Book publishers welcomed Amazon, too, once.

Illustrations: Amazon error message.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.