" /> net.wars: April 2018 Archives


April 26, 2018

Game of thrones

"If a sport could have been invented with the possibility of corruption in mind, that sport would be tennis," wrote Richard Ings, the former Administrator of Rules for the Association of Tennis Professionals, in 2005 in a report not published until now.

This is a tennis story - but it's also a story about what happens when new technology meets a porous, populous, easily socially engineered, global system with great economic inequality that can be hacked to produce large sums of money. In other words, it's arguably a prototype for any number of cybersecurity stories.

A newly published independent panel report (PDF) finds a "tsunami" of corruption at the lower levels of tennis in the form of match-fixing and gambling, exactly as Ings predicted. This should surprise no one who's been paying attention. The extreme disparity between the money at the highly visible upper levels and the desperate scratching for the equivalent of worms for everyone else clearly sets up motives. Less clear until this publication were the incentives contributed by the game's structure and the tours' decision to expand live scoring to hundreds of tiny events.

Here's how tennis really works. The players - even those who never pass the first round - at the US Open or Wimbledon are the cream of the cream of the cream, generally ranked in the top 150 or so of the millions globally who play the game. Four times a year at the majors - the Australian Open, Roland Garros, the US Open, and Wimbledon - these pros have pretty good paydays. The rest of the year, they rack up frequent flyer miles, hotel bills, coaches' and trainers' salaries, and the many other expenses that go into maintaining an itinerant business.

As Michael Mewshaw reported as long ago as 1983 in his book Short Circuit, "tanking" - deliberately losing - is a tour staple despite rules requiring "best efforts". People tank for many reasons: bad mood, fatigue, frustration, weather, niggling injuries, better money on offer elsewhere. But also, as Ings wrote, some matches simply have no significance, in part because, as Daily Tennis editor Robert Waltz has often pointed out, the ranking system does not penalize losses and pushes players to overplay, likely contributing to the escalating injury rate.

Between the Association of Tennis Professionals (the men's tour) and the Women's Tennis Association, there are more than 3,000 players with at least one ranking point. The report counted 336 men and 253 women who actually break even. Beyond them, the International Tennis Federation counted 8,874 male and 4,862 female professional tennis players in 2013, of whom 3,896 men and 2,212 women earned no prize money.

So, do the math: you're ranked in the low 800s, your shoulder hurts, your year-to-date prize money is $555, and you're playing three rounds of qualifying to earn entry to a 32-player main draw event whose total prize money is $25,000. Tournaments at that level are not required to provide housing or hospitality (food) and you're charged a $40 entry fee. You're a young player gaining match practice and points hoping to move up with all possible speed, or a player rebuilding your ranking after injury, or an aging player on the verge of having to find other employment. And someone who has previously befriended you as a fan offers you $50,000 to throw the match. Not greed, the report writes: desperation.
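To make the arithmetic concrete, here's a minimal sketch in Python. Only the $25,000 purse, the $40 entry fee, the $555 year-to-date figure, and the $50,000 offer come from the scenario above; the winner's share and the weekly expenses are illustrative assumptions, not tour data.

```python
# Back-of-envelope economics of a $25,000 event, per the scenario above.
# The payout split and expense figures are illustrative assumptions only.

total_purse = 25_000      # whole event, all rounds, all players
entry_fee = 40            # charged to the player
ytd_prize_money = 555     # the player's season to date
fix_offer = 50_000        # what the friendly "fan" is offering

winner_share = 0.15       # assumed slice of the purse for the champion
weekly_expenses = 1_200   # assumed flights, hotel, coach, restringing

best_legit_week = total_purse * winner_share - entry_fee - weekly_expenses
print(f"Win the entire event, net: ${best_legit_week:,.0f}")   # ~$2,510
print(f"Throw one match:           ${fix_offer:,.0f}")         # $50,000
print(f"Prize money all year:      ${ytd_prize_money:,.0f}")   # $555
```

Even on these made-up numbers, a perfect week nets a small fraction of the single offer; that asymmetry is what the report is describing.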

In some cases, losing in the final round of qualifying doesn't matter because there's already an open slot for a lucky loser. No one but you has to know. No one really *can* know if the crucial shot you missed was deliberate or not, and you're in the main draw anyway and will get that prize money and ranking points. Accommodating the request may not hurt you, but, the report argues, each of those individual decisions is a cancer eating away at the integrity of the game.

Ings foresaw all this in 2005, when he wrote his report (PDF), which appears as Appendix 11 of the new offering. On Twitter, Ings called it his proudest achievement in his time in sports.

Obviously, smartphones and the internet - especially mobile banking - play a crucial role. Between 2001 and 2014 Australian sports betting quintupled. Live scores and online betting companies facilitate remote in-play betting on matches most people don't know exist. And, the report finds, the restrictions on gambling in many countries have not helped; when people can't gamble legally they do so illegally, facilitated by online options. Provided with these technologies, corrupt bettors can bet on any sub-unit of a match: a point, a game, a set. A player who won't throw a match might still throw a point or a set. Bettors can also cheat by leveraging the delay between the second they see a point end on court and the time the umpire pushes the button to send the score to live services across the world. In some cases, the Guardian reported in 2016, corrupt umpires help out by extending that delay.

The fixes needed for all this are like the ones suggested for any other cybersecurity problem. Disrupt the enabling technology; educate the users; and root out the bad guys. The harder - but more necessary - things are fixing the incentives, because doing so requires today's winners to restructure a game that's currently highly profitable for them. Thirteen years on, will they do it?


Illustrations: Tennis ball (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 20, 2018

Deception

"Why are robots different?" 2018 co-chair Mark Lemley asked repeatedly at this year's We Robot. We used to ask this in the late 1990s when trying to decide whether a new internet development was worth covering. "Would this be a story if it were about telephones?" Tom Standage and Ben Rooney frequently asked at the Daily Telegraph.

The obvious answer is physical risk and our perception of danger. The idea that autonomously moving objects may be dangerous is deeply biologically hard-wired. A plant can't kill you if you don't go near it. Or, as Bill Smart put it at the first We Robot in 2012, "My iPad can't stab me in my bed." Autonomous movement fools us into thinking things are smarter than they are.

It is probably not much consolation to the driver of the crashed autopiloting Tesla or his bereaved family that his predicament was predicted two years ago at We Robot 2016. In a paper, Madeleine Elish called humans in these partnerships "Moral Crumple Zones" because, she argued, in a human-machine partnership the human takes all the pressure, like the crumple zone in a car.

Today, Tesla is fulfilling her prophecy by blaming the driver for not getting his hands onto the steering wheel fast enough when commanded. (Other prior art on this: Dexter Palmer's brilliant 2016 book Version Control.)

As Ian Kerr pointed out, the user's instructions are self-contradictory. The marketing brochure uses the metaphors "autopilot" and "autosteer" to seduce buyers into envisioning a ride of relaxed luxury while the car does all the work. But the legal documents and user manual supplied with the car tell you that you can't rely on the car to change lanes, and you must keep your hands on the wheel at all times. A computer ingesting this would start smoking.

Granted, no marketer wants to say, "This car will drive itself in a limited fashion, as long as you watch the road and keep your hands on the steering wheel." The average consumer reading that says, "Um...you mean I have to drive it?"

The human as moral crumple zone also appears in analyses of the Arizona Uber crash. Even-handedly, Brad Templeton points plenty of blame at Uber and its decisions: the car's LIDAR should have spotted the pedestrian crossing the road in time to stop safely. He then writes, "Clearly there is a problem with the safety driver. She is not doing her job. She may face legal problems. She will certainly be fired." And yet humans are notoriously bad at the job required of her: monitor a machine. Safety drivers are typically deployed in pairs to split the work - but also to keep each other attentive.

The larger We Robot discussion was partly about public perception of risk, based on a paper (PDF) by Aaron Mannes that discussed how easy it is to derail public trust in a company or new technology when statistically less-significant incidents spark emotional public outrage. Self-driving cars may in fact be safer overall than human drivers despite the fatal crash in Arizona; among the other examples Mannes mentioned were Three Mile Island, which made the public much more wary of nuclear power, and the Ford Pinto, which spent the 1970s occasionally catching fire.

Mannes suggested that if you have that trust relationship you may be able to survive your crisis. Without it, you're trying to win the public over on "Frankenfoods".

So much was funnier and more light-hearted seven years ago, as a long-time attendee pointed out; the discussions have darkened steadily year by year as theory has become practice and we can no longer think the problems are as far away as the Singularity.

In San Francisco, delivery robots cause sidewalk congestion and make some homeless people feel surveilled; in Chicago and Durham we risk embedding automated unfairness into criminal justice; the egregious extent of internet surveillance has become clear; and the world has seen its first self-driving car road deaths. The last several years have been full of fear about the loss of jobs; now the more imminent dragons are becoming clearer. Do you feel comfortable in public spaces when there's a mobile unit pointing some of its nine cameras at you?

Karen Levy finds that truckers are less upset about losing their jobs than about automation invading their cabs, ostensibly for their safety. Sensors, cameras, and wearables that monitor them for wakefulness, heart health, and other parameters are painful and enraging to this group, who chose their job for its autonomy.

Today's drivers have the skills to step in; tomorrow's won't. Today's doctors are used to doing their own diagnostics; tomorrow's may not be. A paper by Michael Froomkin, Ian Kerr, and Joëlle Pineau (PDF) argues that automation may mean not only deskilling humans (doctors) but also a frozen knowledge base. Many hope that mining historical patient data will expose patterns that enable more accurate diagnostics and treatments. If the machines take over, where will the new approaches come from?

Worse, behind all that is sophisticated data manipulation for which today's internet is providing the prototype. When, as Woody Hartzog suggested, Rocco, your Alexa-equipped Roomba, rolls up to you, fakes a bum wheel, and says, "Daddy, buy me an upgrade or I'll die", will you have the heartlessness to say no?

Illustrations: Pepper and handler at We Robot 2016.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


April 14, 2018

Late, noisy, and wrong

"All sensors are terrible," Bill Smart and Cindy Grimm explained as part of a pre-conference workshop at this year's We Robot. Smart, an engineer at Oregon State with prior history here, loves to explain why robots and AI aren't as smart as people think. "Just a fancy hammer," he said the first year.

Thursday's target was broad: the reality of sensors, algorithms, and machine learning.

One of his slides read:


  • It's all just math and physics.

  • There is no intelligence.

  • It's just a computer program.

  • Sensors turn physics into numbers.

That last one is the crucial bit, and it struck me as surprising only because, in all the years I've read about and glibly mentioned sensors and how many of them there are in our phones, they've never really been explained to me. I'm not an electrical engineering student, so like most of us, I wave the words around. Of course I know that digital means numbers, and that computers do calculations with numbers rather than fuzzy things like light and sound, and that therefore the camera in my phone (which is a sensor) is storing values describing light levels rather than photographing light the way analogue film did. But I don't - or didn't until Thursday - really know what sensors measure. For most purposes, it's OK that my understanding is...let's call it abstract. But it does make it easy to overestimate what the technology can do now and how soon it will be able to fulfil the fantasies of mad scientists.

Smart's point is that when you start talking about what AI can do - whether or not you're using my aspirational intelligence recasting of the term - you'd better have some grasp of what it really is. It means the difference between a blob on the horizon that can be safely ignored and a woman pushing a bicycle across a roadway in front of an oncoming LIDAR-equipped Uber self-driving car.

So he begins with this: "All sensors are terrible." We don't use better ones because either such a thing does not exist or because they're too expensive. They are all "noisy, late, and wrong" and "you can never measure what you want to."

What we want to measure are things like pressure, light, and movement, and because we imagine machines as analogues of ourselves, we want them to feel the pressure, see the light, and understand the movement. However, what sensors can measure is electrical current. So we are always "measuring indirectly through assumptions and physics". This is the point AI Weirdness makes too, more visually, by showing what happens when you apply a touch of surrealism to the pictures you feed through machine learning.

He described what a sensor does this way: "They send a ping of energy into the world. It interacts, and comes back." In the case of LIDAR - he used a group of humans to enact this - a laser pulse is sent out, and the time it takes to return is counted as a number of oscillations of a crystal. This has an obvious implication: you can't measure anything shorter than one oscillation.
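As a rough sketch of that arithmetic (the clock rate below is an assumed figure for illustration, not one from the talk): distance is the round-trip time, counted in whole ticks of the crystal, times the speed of light, halved. One tick sets the floor on resolution.

```python
# Toy time-of-flight calculation: the round trip is counted in whole
# oscillations of a crystal, so measured distance comes quantized in
# tick-sized steps. The 100 MHz clock rate is an assumption.

C = 299_792_458.0    # speed of light, m/s
CLOCK_HZ = 100e6     # assumed counting crystal: 100 MHz

def distance_from_ticks(ticks: int) -> float:
    round_trip_seconds = ticks / CLOCK_HZ
    return C * round_trip_seconds / 2.0  # pulse goes out and back

print(distance_from_ticks(1))   # ~1.5 m: nothing finer is measurable
print(distance_from_ticks(67))  # ~100 m: a target down the road
```

At the assumed rate, anything closer than about a metre and a half collapses into a single tick - the "you can never measure what you want to" problem in miniature.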

Grimm explained that a "time of flight" sensor like that is what cameras - going back to old Kodaks - use to auto-focus. Smartphones are pretty good at detecting a cluster of pixels that looks like a face and using that to focus on. But now imagine the same sensor in a knee-high robot on a sidewalk, trying to detect legs. In an art installation Smart and Grimm did, they found it doesn't work in Portland...because all those hipsters wear black jeans, and black fabric absorbs the pulse instead of reflecting it back.

So there are all sorts of these artefacts, and we will keep tripping over them because most of us don't really know what we're talking about. With image recognition, the important thing to remember is that the sensor is detecting pixel values, not things - and a consequence of that is that we don't necessarily know *what* the system has actually decided is important and we can't guarantee what it might be recognizing. So turn machine learning loose on a batch of photos of Audis, and if they all happen to be photographed at the same angle the system won't recognize an Audi photographed at a different one. Teach a self-driving car all the roads in San Francisco and it still won't know anything about driving in Portland.

That circumscription is important. Train a machine learning system on a set of photos of Abraham Lincoln and a zebra fish, and you get a system that can't imagine it might be looking at a cat. The computer - which, remember, is working with an array of numbers - looks at the numbers in the array and, based on what it has identified as significant in previous runs, makes the call based on what's closest. It's numbers in, numbers out, and we can't guarantee what it's "recognizing".
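Here is a minimal sketch of that "numbers in, numbers out" behaviour, with random arrays standing in for real training images. A toy nearest-match classifier given only two classes has no way to answer "cat", whatever you show it.

```python
# Toy nearest-match classifier over flattened pixel arrays. Trained only
# on "lincoln" and "zebra_fish", it must answer one of those two labels,
# even for a cat photo. Random arrays stand in for real images.
import numpy as np

rng = np.random.default_rng(0)
centroids = {
    "lincoln":    rng.random(64 * 64),  # stand-in for averaged Lincoln pixels
    "zebra_fish": rng.random(64 * 64),  # stand-in for averaged fish pixels
}

def classify(pixels: np.ndarray) -> str:
    # Whichever known class is numerically closest wins; there is no
    # "none of the above" option.
    return min(centroids, key=lambda k: np.linalg.norm(pixels - centroids[k]))

cat_photo = rng.random(64 * 64)  # the system has never seen a cat
print(classify(cat_photo))       # prints "lincoln" or "zebra_fish"
```

Real systems are vastly more sophisticated, but the closed-world property is the same: the output is always one of the labels the system was given.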

A linguistic change would help make all this salient. LIDAR does not "see" the roadway in front of the car that's carrying it. Google's software does not "translate" language. Software does not "recognize" images. The machine does not think, and it has no gender.

So when Mark Zuckerberg tells Congress that AI will fix everything, consider those arrays of numbers that may interpret a clutch of pixels as Abraham Lincoln when what's there is a zebra fish...and conclude he's talking out of his ass.


Illustrations: Bill Smart at We Robot 2016.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


April 6, 2018

Leverage

Well, what's 37 million or 2 billion scraped accounts more or less among friends? The exploding hairball of the Facebook/Cambridge Analytica scandal keeps getting bigger. And, as Rana Dasgupta writes in the Guardian, we are complaining now because it's happening to us, but we did not notice when these techniques were tried out first in third-world countries. Dasgupta has much to say about how nation-states will have to adapt to these conditions.

Given that we will probably never pin down every detail of how much data and where it went, it's safest to assume that all of us have been compromised in some way. The smug "I've never used Facebook" population should remember that they almost certainly exist in the dataset, by either reference (your sister posts pictures of "my brother's birthday") or inference (like deducing the existence, size, and orbit of an unseen planet based on its gravitational pull on already-known objects).

Downloading our archives tells us far less than people recognize. My own archive had no real surprises (my account dates to 2007, but I post little and adblock the hell out of everything). The shock many people have experienced of seeing years of messages and photographs laid out in front of them, plus the SMS messages and call records that Facebook shouldn't have been retaining in the first place, hides the fact that these archives are a very limited picture of what Facebook knows about us. They show us nothing about information posted about us by others, photos others have posted and tagged, or comments made in response to things we've posted.

The "me-ness" of the way Facebook and other social media present themselves was called out by Christian Fuchs in launching his book Digital Demagogue: Authoritarian Capitalism in the Age of Trump and Twitter. "Twitter is a me-centred medium. 'Social media' is the wrong term, because it's actually anti-social, Me media. It's all about individual profiles, accumulating reputation, followers, likes, and so on."

Saying that, however, plays into Facebook's own public mythology about itself. Facebook's actual and most significant holdings about us are far more extensive, and the company derives its real power from the complex social graphs it has built and the insights that can be gleaned from them. None of that is clear from the long list of friends. Even more significant is how Facebook matches up user profiles to other public records and social media services and with other brokers' datasets - but the archives give us no sense of that either. Facebook's knowledge of you is also greatly enhanced - as is its ability to lock you in as a user - if you, like many people, have opted to use Facebook credentials to log into third-party sites. Undoing that is about as easy and as much fun as undoing all your direct debit payments in order to move your bank account.

Facebook and the other tech companies are only the beginning. There are a few people out there trying to suggest Google is better, but Zeynep Tufekci discovered it had gone on retaining her YouTube history even though she had withdrawn permission to do so. As Tufekci then writes, if a person with a technical background whose job it is to study such things could fail to protect her data, how could others hope to do so?

But what about publishers and the others dependent on that same ecosystem? As Doc Searls writes, the investigative outrage on display in many media outlets glosses over the fact that they, too, are compromised. Third party trackers, social media buttons, Google analytics, and so on all deliver up readers to advertisers in increasing detail, feeding the business plans of thousands of companies all aimed at improving precision and targeting.

And why stop with publishers? At least they have the defense of needing to make a living. Government sites, libraries, and other public services do the same thing, without that justification. The Richmond Council website shows no ads - but it still uses Google Analytics, which means sending a steady stream of user data Google's way. Eventbrite, which everyone now uses for event sign-ups, is constantly exhorting me to post my attendance to Facebook. What benefit does Eventbrite get from my complying? It never says.

Meanwhile, every club, member organization, and creative endeavor begs its adherents to "like my page on Facebook" or "follow me on Twitter". While they see that as building audience and engagement, the reality is that they are acting as propagandists for those companies. When you try to argue against doing this, people will say they know, but then shrug helplessly and say they have to go where the audience is. If the audience is on Facebook, and it takes page likes to make Facebook highlight your existence, then what choice is there? Very few people are willing to contemplate the hard work of building community without shortcuts, and many seem to have come to believe that social media engagement as measured in ticks of approval is community, like Mark Zuckerberg tried to say last year.

For all these reasons, it's not enough to "fix Facebook". We must undo its leverage.


Illustrations: Facebook logo.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.