Main

December 30, 2022

Lost for words

Privacy advocacy has the effect of making you hyper-conscious of the exponentially increasing supply of data. All sorts of activities that used to leave little or no trace behind now create a giant stream of data exhaust: interactions with friends (now captured by social media companies), TV viewing (now captured by streaming services and cable companies), where and when we travel (now captured by credit card companies, municipal smart card systems, and facial recognition-equipped cameras), and everything we buy (unless you use cash). And then there are the vast amounts of new forms of data being gathered by the sensors attached to Internet of Things devices and, increasingly, by more intimate devices, such as medical implants.

And yet. In a recent paper (PDF) that Tammy Xu summarizes at MIT Technology Review, the AI research and forecasting group Epoch argues that we are at risk of running out of a particular kind of data: the stuff we use to train large language models. More precisely, the stock of data deemed suitable for use in language training datasets is growing more slowly than the size of the datasets these increasingly large and powerful models require for training. The explosion of privacy-invasive, mechanically captured data mentioned above doesn't help with this problem; it can't help train what today passes for "artificial intelligence" to improve its ability to generate content that reads like it could have been written by a sentient human.

So in this one sense the much-debunked saw that "data is the new oil" is truer than its proponents thought. Like drawing water from aquifers or depleting oil reserves, data miners have been relying on capital resources that have taken eras to build up and that can only be replenished over similar time scales. We professional writers produce new "high-quality" texts too slowly.

As Xu explains, "high-quality" in this context generally means things like books, news articles, scientific papers, and Wikipedia pages - that is, the kind of prose researchers want their models to copy. Wikipedia's English-language section makes up only 0.6% of GPT-3's training data. "Low-quality" is all the other stuff we all churn out: social media postings, blog postings, web board comments, and so on. There is of course vastly more of this (and some of it is, we hope, high-quality).

The paper's authors estimate that the high-quality text modelers prefer could be exhausted by 2026. Images, which are produced at higher rates, will take longer to exhaust - lasting until perhaps sometime between 2030 and 2040. The paper considers three options for slowing exhaustion: broaden the standard for acceptable quality; find new sources; and develop more data-efficient solutions for training algorithms. Pursuing the fossil fuel analogy, I guess the equivalents might be: turning to techniques such as fracking to extract usable but less accessible fossil fuels, developing alternative sources such as renewables, and increasing energy efficiency. As in the energy sector, we may need to do all three.

I suppose paying the world's laid-off and struggling professional writers to produce text to feed the training models can't form part of the plan?

The first approach might have some good effects by increasing the diversity of training data. The same is true of the second, although using AI-generated text (synthetic data) to train the model seems as recursive as using an algorithm to highlight trends to tempt users. Is there anything real in there?

Regarding the third... It's worth remembering the 2020 paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (the paper over which Google apparently fired AI ethics team leader Timnit Gebru). In this paper (and a FAccT talk), Gebru, Emily M. Bender, Angelina McMillan-Major, and Shmargaret Shmitchell outlined the escalating environmental and social costs of increasingly large language models and argued that datasets needed to be carefully curated and documented, and tailored to the circumstances and context in which the model was eventually going to be used.

As Bender writes at Medium, there's a significant danger that humans reading the language generated by systems like GPT-3 may *believe* it's the product of a sentient mind. At IAI News, she and Chirag Shah call text generators like GPT-3 dangerous because they have no understanding of meaning even as they spit out coherent answers to user questions in natural language. That is, these models can spew out plausible-sounding nonsense at scale; in 2020, Renée DiResta predicted at The Atlantic that generative text would provide an infinite supply of disinformation and propaganda.

This is humans finding patterns even where they don't exist: all the language model does is make a probabilistic guess about the next word based on statistics derived from the data it's been trained on. It has no understanding of its own results. As Ben Dickson puts it at TechTalks as part of an analysis of the workings of the language model BERT, "Coherence is in the eye of the beholder." On Twitter, Bender quipped that a good new name would be PSEUDOSCI (for Pattern-matching by Syndicate Entities of Uncurated Data Objects, through Superfluous (energy) Consumption and Incentives).
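To make that "probabilistic guess" concrete, here is a minimal sketch in Python of the idea at its crudest - a bigram model that counts which word has tended to follow which in its training text and emits the most likely candidate. The eight-word corpus and function names are invented for illustration; real large language models use neural networks, vast corpora, and billions of parameters, but the same absence of understanding applies.

```python
# A toy illustration (not any production system): a bigram "language model"
# that only guesses the next word from statistics over its training text.
from collections import Counter, defaultdict

corpus = "the parrot repeats the phrase the parrot heard".split()

# Count which word follows which in the training data.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word - a pattern match, not understanding."""
    counts = following.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # 'parrot', because that's what the corpus made most likely
```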

If running out of training data means a halt on improving the human-like quality of language generators' empty phrases, that may not be such a bad thing.


Illustrations: Drunk parrot (taken by Simon Bisson).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 28, 2022

MAGICAL, Part 1

"What's that for?" I asked. The question referred to a large screen in front of me, with my newly-captured photograph in the bottom corner. Where was the camera? In the picture, I was still trying to spot it.

The British Airways gate attendant at Chicago's O'Hare airport tapped the screen and a big green checkmark appeared.

"Customs." That was all the explanation she offered. It had all happened so fast there was no opportunity to object.

Behind me was an unforgiving line of people waiting to board. Was this a good time to stop to ask:

- What is the specific purpose of collecting my image?

- What legal basis do you have for collecting it?

- Who will be storing the data?

- How long will they keep it?

- Who will they share it with?

- Who is the vendor that makes this system and what are its capabilities?

It was not.

I boarded, tamely, rather than argue with a gate attendant who certainly didn't make the decision to install the system and was unlikely to know much about its details. Plus, we were in the US, where the principles of data protection law don't really apply - and even if they did, they wouldn't apply at the border - even, it appears, in Illinois, the only US state to have a biometric privacy law.

I *did* know that US Customs and Border Protection had been trialing facial recognition in selected airports since 2017. Long-time readers may remember a net.wars report from the 2013 Biometrics Conference about the MAGICAL [sic] airport, circa 2020, through which passengers flow unimpeded because their face unlocks all. Unless, of course, they're "bad people" who need to be kept out.

I think I even knew - because of Edward Hasbrouck's indefatigable reporting on travel privacy - that at various airports airlines are experimenting with biometric boarding. This process does away entirely with boarding cards; the airline captures biometrics at check-in and uses them to entirely automate the "boarding process" (a bit of airline-speak famously mocked by the late comedian George Carlin). The linked explanation claims this will be faster because you can have four! automated lanes instead of one human-operated lane. (Presumably then the four lanes merge into a giant pile-up in the single-lane jetway.)

It was nonetheless startling to be confronted with it in person - and with no warning. CBP proposed taking non-US citizens' images in 2020, when none of us were flying, and Hasbrouck wrote earlier this year about the system's use in Seattle. There was, he complained, no signage to explain the system despite the legal requirement to do so, and the airport's website incorrectly claimed that Congress mandated capturing biometrics to identify all arriving and departing international travelers.

According to Biometric Update, as of last February, 32 airports were using facial recognition on departure, and 199 airports were using facial recognition on arrival. In total, 48 million people had their biometrics taken and processed in this way in fiscal 2021. Since the program began in 2018, the number of alleged impostors caught: 46.

"Protecting our nation, one face at a time," CBP calls it.

On its website, British Airways says passengers always have the ability to opt out except where biometrics are required by law. As noted, it all happened too fast. I saw no indication on the ground that opting out was possible, even though notice is required under the Paperwork Reduction Act (1980).

As Hasbrouck says, though, travelers, especially international travelers and even more so international travelers outside their home countries, go through so many procedures at airports that they have little way to know which are required by law and which are optional, and arguing may get you grounded.

He also warns that the system I encountered is only the beginning. "There is an explicit intention worldwide that's already decided that this is the new normal. All new airports will be designed and built with facial recognition built into them for all airlines. It means that those who opt out will find it more and more difficult and more and more delaying."

Hasbrouck, who is probably the world's leading expert on travel privacy, sees this development as dangerous. Largely, he says, it's happening unopposed because the government's desire for increased surveillance serves the airlines' own desire to cut costs through automating their business processes - which include herding travelers onto planes.

"The integration of government and business is the under-noticed aspect of this. US airports are public entities but operate with the thinking of for-profit entities - state power merged with the profit motive. State *monopoly* power merged with the profit motive. Automation is the really problematic piece of this. Once the infrastructure is built it's hard for airline to decide to do the right thing." That would be the "right thing" in the sense of resisting the trend toward "pre-crime" prediction.

"The airline has an interest in implying to you that it's required by government because it pressures people into a business process automation that the airline wants to save them money and implicitly put the blame on the government for that," he says. "They don't want to say 'we're forcing you into this privacy-invasive surveillance technology'."


Illustrations: Edward Hasbrouck in 2017.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 23, 2022

Insert a human

Robots have stopped being robots. This is a good thing.

This is my biggest impression of this year's We Robot conference: we have moved from the yay! robots! of the first year, 2012, through the depressed doldrums of "AI" systems that make the already-vulnerable more vulnerable circa 2018 to this year, when the phrase that kept twanging was "sociotechnical systems". For someone with my dilettantish conference-hopping habit, this seems like the necessary culmination of a long-running trend away from robots as autonomous mobile machines to robots/AI as human-machine partnerships. We Robot has never talked much about robot rights, instead focusing on considering the policy challenges that arise as robots and AI become embedded in our lives. This is realism; as We Robot co-founder Michael Froomkin writes, we're a long, long way from a self-aware and sentient machine.

The framing of sociotechnical systems is a good thing in part because so much of what passes for modern "artificial intelligence" is humans all the way down, as Mary L. Gray and Siddharth Suri documented in their book, Ghost Work. Even the companies that make self-driving cars, which a few years ago were supposed to be filling the streets by now, are admitting that full automation is a long way off. "Admitting" as in consolidating or being investigated for reckless hyping.

If this was the emerging theme, it started with the first discussion, of a paper on humans in the loop, by Margot Kaminski, Nicholson Price, and Rebecca Crootof. Too often, the proposed policy fix for problems with decision-making systems is to insert a human, a "solution" they called the "MABA-MABA trap", for "Machines Are Better At / Men Are Better At". While obviously humans and machines have differing capabilities - people are creative and flexible, machines don't get bored - just dropping in a human without considering what role that human is going to fill doesn't necessarily take advantage of the best capabilities of either. Hybrid systems are of necessity more complex - this is why cybersecurity keeps getting harder - but policy makers may not take this into account or think clearly about what the human's purpose is going to be.

At this conference in 2016, Madeleine Claire Elish foresaw that the human would become a moral crumple zone or liability sponge, absorbing blame without necessarily being at fault. No one will admit that this is the human's real role - but it seems an apt description of the "safety driver" watching the road, trying to stay alert in case the software driving the car needs backup or the poorly-paid human given a scoring system and tasked with awarding welfare benefits. What matters, as Andrew Selbst said in discussing this paper, is the *loop*, not the human - and that may include humans with invisible control, such as someone who can massage the data they enter into a benefits system in order to help a particularly vulnerable child, or who have wide discretion, such as a judge who is ultimately responsible for parole decisions no matter what the risk assessment system says.

This is not the moment to ask what constitutes a human.

It might be, however, the moment to note the commentator who said that a lot of the problems people are suggesting robots/AI can solve have other, less technological solutions. As they said, if you are putting a pipeline through a community without its consent, is the solution to deploy police drones to protect the pipeline and the people working on it - or is it to put the pipeline somewhere else (or to move to renewables and not have a pipeline at all)? Change the relationship with the community and maybe you can partly disarm the police.

One unwelcome forthcoming issue, discussed in a paper by Kate Darling and Daniella DiPaola, is the threat that merging automation and social marketing poses to consumer protection. A truly disturbing note came from DiPaola, who investigated manipulation and deception with personal robots and 75 children. The children had three options: no ads, ads allowed only if they are explicitly disclosed to be ads, or advertising through casual conversation. The kids chose casual conversation because they felt it showed the robot *knew* them. They chose this even though they knew the robot was intentionally designed to be a "friend". Oy. In a world where this attitude spreads widely and persists into adulthood, no amount of "media literacy" or learning to identify deception will save us; these programmed emotional relationships will overwhelm all that. As DiPaola said, "The whole premise of robots is building a social relationship. We see over and over again that it works better if it is more deceptive."

There was much more fun to be had - steamboat regulation as a source of lessons for regulating AI (Bhargavi Ganesh and Shannon Vallor), police use of canid robots (Carolin Kemper and Michael Kolain), and - a new topic - planning for the end of life of algorithmic and robot systems (Elin Björling and Laurel Riek). The robots won't care, but the humans will be devastated.

Illustrations: Hanging out at We Robot with Boston Dynamics' "Spot".

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 16, 2022

Coding ethics

Why is robotics hard?

This was Bill Smart's kickoff on the first (workshop) day of this year's We Robot. It makes sense: We Robot is 11 years old, and if robots were easy we'd have them by now. The basic engineering difficulties are things he's covered in previous such workshops: 2021, 2019, 2018, 2016.

More to the point for this cross-the-technicians-with-the-lawyers event: why is making robots "ethical" hard? Ultimately, because the policy has to be translated into computer code, and as Smart and others explain, the translation demands an order of precision humans don't often recognize. Wednesday's workshops explored the gap between what a policy says and what a computer can be programmed to do. For many years, Smart has liked to dramatize this gap by using four people to represent a "robot" and assigning a simple task. Just try picking up a ball with no direct visual input by asking yes/no questions of a voltage-measuring sensor.

This year, in a role-playing breakout group, we were asked to redesign a delivery robot to resolve complaints in a fictional city roughly the size of Seattle. Injuries to pedestrians have risen since delivery robots arrived; the residents of a retirement community are complaining that the robots' occupation of the sidewalks interferes with their daily walks; and one company sends its delivery robot down the street past a restaurant while playing ads for its across-the-street competitor.

It's not difficult to come up with ideas for ways to constrain these robots. Ban them from displaying ads. Limit them to human walking speed (which you'll need to specify precisely). Limit the time or space they're allowed to occupy. Eliminate cars and reallocate road space to create zones for pedestrians, cyclists, public transport, and robots. Require lights and sound to warn people of the robots' movements. Let people ride on the robots. (Actually, not sure how that solves any of the problems presented, but it sounds like fun.)

As you can see from the sample, many of the solutions that the group eventually proposed were only marginally about robot design. Few could be implemented without collaboration with the city, which would have to agree and pay for infrastructure changes or develop policies and regulations specifying robot functionality.

This reality was reinforced in a later exercise, in which Cindy Grimm, Ruth West, and Kristen Thomasen broke us into robot design teams and tasked us with designing a robot to resolve these complaints. Most of the proposals involved reorganizing public space (one group suggested sending package delivery robots through the sewer system rather than on public streets and sidewalks), sometimes at considerable expense. Our group, concerned about sustainability, wanted the eventual robot made out of 3D-printed engineered wood, but hit physical constraints when Grimm pointed out that our comprehensive array of sensors wouldn't fit on the small form factor we'd picked - and would be energy-intensive. No battery life.

The deeper problem we raised: why use robots for this at all? Unless you're a package delivery company seeking to cut labor costs, what's the benefit over current delivery systems? We couldn't think of one. With Canadian journalist Paris Marx's recent book on autonomous vehicles, Road to Nowhere, fresh in my mind, however, the threat to public ownership of the sidewalk seemed real.

The same sort of question surfaced in discussions of a different exercise, based on Paige Tutosi's winning entry in a recent roboethics competition. In this exercise, we were given three short lists: rooms in a house, people who live in the house, and objects around the house. The idea was to come up with rules for sending the objects to individuals that could be implemented in computer code for a robot servant. In an example ruleset, no one can order the robot to send a beer to the baby or chocolate to the dog.

My breakout group quickly got stuck in contemplating the possible power dynamics and relationships in the house. Was the "mother" the superuser who operated in God mode? Or was she an elderly dementia patient who lived with her superuser daughter, her daughter's boyfriend, and their baby? Then someone asked the killer question: "Who is paying for the robot?" People whose benefits payments arrive on prepay credit cards with government-designed constraints on their use could relate.

The summary reports from the other groups revealed a significant split between those who sought to build a set of rules that specified what was forbidden (comparable to English or American law) and those who sought to build a set of rules that specified what was permitted (more like German law).

For the English approach, you have to think ahead of time of all the things that could go wrong and create rules to prevent them. The permitted-list approach is by far the easier one to code, and safer for robot manufacturers seeking to limit their liability: the robot's capabilities default to a strictly limited, "known-safe" set.
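As a minimal sketch of how the two styles differ once they reach code (items, recipients, and rules invented for illustration), the decisive question is which way the robot defaults for the cases nobody anticipated:

```python
# A toy comparison of the two rule-writing styles the breakout groups split
# between: forbid what is listed (everything else allowed) versus permit
# what is listed (everything else refused). All rules here are invented.

FORBIDDEN = {("beer", "baby"), ("chocolate", "dog")}   # English-style: enumerate the harms
PERMITTED = {("water", "baby"), ("beer", "mother"),    # German-style: enumerate the safe cases
             ("chocolate", "mother")}

def deliver_forbidden_list(item, recipient):
    """Allow anything not explicitly banned."""
    return (item, recipient) not in FORBIDDEN

def deliver_permitted_list(item, recipient):
    """Refuse anything not explicitly allowed."""
    return (item, recipient) in PERMITTED

# The difference shows up immediately for a case nobody thought of:
print(deliver_forbidden_list("bleach", "baby"))   # True  - nobody banned it
print(deliver_permitted_list("bleach", "baby"))   # False - nobody allowed it
```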

The fact of this split suggested that at heart developing "robot ethics" is recapitulating all of legal history back to first principles. Viewed that way, robots are dangerous. Not because they are likely to attack us - but because they can be the vector for making moot, in stealth, by inches, and to benefit their empowered commissioners, our entire framework of human rights and freedoms.


Illustrations: Boston Dynamics' canine robot visits We Robot.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 26, 2022

Zero day

Years ago, an alarmist book about cybersecurity threats concluded with the suggestion that attackers' expertise at planting backdoors could result in a "zero day" when, at an attacker-specified time, all the world's computers could be shut down simultaneously.

That never seemed likely.

But if you *do* want to take down all of the computers in an area, the easiest way is to cut off the electricity supply. Which, if the worst predictions for this year's winter in Britain come true, is what could happen, no attacker required. All you need is a government that insists, despite expert warnings, that there will be plenty of very expensive energy to go round for those who can afford it - even while the BBC reports that in some areas of West London the power grid is so stretched by data centers' insatiable power demands that new homes can't be built.

Lack of electrical power is something even those rich enough not to have to choose between eating and heating can't ignore - particularly because they're also most likely to be dependent on broadband for remote working. But besides that: no power means no Internet: no way for kids to do their schoolwork or adults to access government sites to apply for whatever grants become available. Exponentially increasing energy prices already threaten small businesses, charities, care homes, child care centers, schools, food banks, hospitals, and libraries, as well as households. It won't be much consolation if we all wind up "saving" money because there's no power available to pay for.

***

In an earlier, analog, era, parents taking innocent nude photos of their kids were sometimes prosecuted when they tried to have them developed at the local photo shop. In the 2021 equivalent, Kashmir Hill reports at the New York Times, Google flagged pictures two fathers took of their young sons' genitalia in order to help doctors diagnose an infection, labeled them child sexual abuse material, ordered them deleted, suspended the fathers' accounts, and reported them to the police.

It's not surprising that Google has automated content moderation systems dedicated to identifying abuse images, which are illegal almost everywhere. What *has* taken people aback, however, is these fathers' complete inability to obtain redress, even after the police exonerated them. Most of us would expect Google to have a "human in the loop" review process to whom someone who's been wrongfully accused can appeal.

In reality, though, the result is more likely to be like what happened in the so-called Twitter joke trial. In that case, a frustrated would-be airline passenger trying to visit his girlfriend posted on Twitter that he might blow up the airport if he still couldn't get a flight. Everyone who saw the tweet, from the airport's security staff to police, agreed he was harmless - and yet no one was willing to be the person who took the risk of signing off on it, just in case. With suspected child abuse, the same applies: no one wants to risk being the person who wrongly signs off on dropping the accusations. Far easier to trust the machine, and if it sets off a cascade of referrals that cost an innocent parent their child (as well as all their back Gmail, contacts list, and personal data), well...it's not your fault. This goes double for a company like Google, whose bottom line depends on providing as little customer service as possible.

***

Even though all around us are stories about the risks of trusting computers not to fail, last week saw a Twitter request for the loan of a child. For the purpose of: having it run in front of a Tesla operating on Full Self-Drive to prove the car would stop. At the Guardian, Arwa Mahdawi writes that said poster did find a volunteer, albeit with this caveat: "They just have to convince their wife." Apparently several wives were duly persuaded, and the children got to experience life as crash test dummies - er, beta testers. Fortunately, none were harmed.

Reportedly, Google/YouTube is acting promptly to get the resulting videos taken down, though it is not reporting the parents, who, as a friend quipped, are apparently unaware that the Darwin Award isn't meant to be aspirational.

***

The last five years of building pattern recognition systems - facial recognition, social scoring, and so on - have seen a lot of evidence-based pushback against claims that these systems are fairer because they eliminate human bias. In fact they codify it because they are trained on data with the historical effects of those biases already baked in.

This week saw a disturbing watershed: bias has become a selling point. An SFGate story by Joshua Bote (spotted at BoingBoing) highlights Sanas, a Bay Area startup that offers software intended to "whiten" call center workers' voices by altering their accents into "standard American English". Having them adopt obviously fake English pseudonyms apparently wasn't enough.

Such a system, as Bote points out, will reinforce existing biases. If it works, it's perfectly designed to expand prejudice and entitlement along the lines of "Why should I have to deal with anyone whose voice or demeanor I don't like?" It's worse than virtual reality, which is at least openly a fictional simulation; it puts a layer of fake over the real world and makes us all less tolerant. This idea needs to fail.


Illustrations: One of the Tesla crashes investigated in New York Times Presents, discussed here in June.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 29, 2022

On the Internet, they always knew you were a dog

This week: short cuts.

Much excitement that Disney's copyright in the first Mickey Mouse film will expire in...2024. Traditionally, Disney would be lobbying to extend copyright terms - as it did in 1998, when the Copyright Term Extension Act lengthened it to life plus 70 years for authors and 95 years for corporations. In 1928, when Disney released Mickey's first cartoon, copyright lasted 28 years, renewable once. In 1955, Disney duly renewed it until 1984. The 1976 Copyright Act extended that until 2003, and the 1998 law pushed it through 2023. Other companies also profit from these extensions, but Disney is the most notorious.

The losers have been us: the acts froze the public domain for decades. In the interim, as both the Guardian and the Authors Alliance report, Disney has registered trademarks in the character, and even shorn of copyright Mickey remains protected.

The weird reason Disney is unlikely to get another extension *this* time is that the US Republican party is picking a fight with Disney over LGBTQ+ rights. US Senator Josh Hawley (R-MO) is pushing a copyright term *reduction* bill as a *punishment*. I want to laugh at the bonkersness of this, but can't because: sucks to be the humans whose rights are caught in this crossfire. But yay! public domain.

***

Airlines do it. Scalpers do it. Even educated algorithms do it. Which is how this week angry Bruce Springsteen fans complained that concert tickets hit $5,500. The reason: Ticketmaster's demand-driven dynamic pricing. Springsteen's manager, Jon Landau, called the *average* pricing of $200 "fair"; Ticketmaster says only 1% of tickets sold for over $1,000, and 18% sold for under $99.

A Ticketmaster option adjusts pricing to the perceived market. Those first in the queue when sales opened saw four-figure prices; waiting and searching would, NJ.com reports, have found other sites with more modest prices.

In the Internet's early days, many expected it to advantage consumers by making market information transparent. On eBay, this remains somewhat true. Elsewhere, corporate consolidation and automation have eliminated that insight. In the Springsteen case, as your hand hovers on the purchase button you have seconds to decide on the price in front of you. You aren't really paying for Springsteen, you're paying for *certainty*.

***

The 1998 copyright term extension coincided with the beginnings of the MIT Media Lab's Things That Think, which presaged today's "smart" Internet of Things. Coupling that with the nascent software industry's move from purchase to subscription and the history of digital rights management made limitations on ownership of *things* imaginable.

This week, BMW offered British drivers this exact dystopia: it will charge £10 per month for heated seats for those whose car, when new, didn't include them. Of course that means that all the necessary hardware infrastructure is present in every car, and BMW activates a subscription by toggling a line of code to "true" - an infuriating reason to pay extra.
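A hypothetical sketch of what such software-gated hardware can look like (this is not BMW's actual code; names and structure are invented): the hardware flag is always true, and the monthly fee only flips the subscription flag.

```python
# Invented illustration of a subscription-gated feature flag: the heating
# circuits ship in every car; paying changes one boolean in the configuration.
vehicle_config = {
    "seat_heaters_installed": True,      # hardware present in every car
    "seat_heating_subscription": False,  # flipped to True when the owner pays
}

def set_seat_heating(on: bool) -> bool:
    """Turn the heaters on only if the subscription flag allows it."""
    allowed = (vehicle_config["seat_heaters_installed"]
               and vehicle_config["seat_heating_subscription"])
    return on and allowed

print(set_seat_heating(True))  # False until the subscription flag is toggled
```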

***

The shrinking company Meta is unhappy about leap seconds, joining a history of computer industry objections to celestial mechanics. For computer folks, leap seconds pose thorny synchronization problems (see also GPS); for astronomers and physicists, leap seconds crucially align human time with celestial time. When I first wrote about this in 2005, here and at Scientific American, proposals to eliminate them were already on the table at the International Telecommunication Union. That year's vote deferred the decision to the 2015 World Radiocommunication Conference - noted here in 2014 - which duly deferred it again to 2023. Hence the present revival.

Meta is pushing the idea of "smearing" the leap second over 17 hours, which sounds like the kind of magic technology that was supposed to solve the Northern Ireland-Brexit conundrum. Personally, I'm for the astronomers and physicists; as the pandemic, the climate, and the war remind us, it's unwise to forget our dependence on the natural world. Prediction: the 2023 meeting will defer it again because the two sides will never agree. Different people need different kinds of time, and that's how it is.
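For readers wondering what "smearing" means in practice, here is a minimal sketch of the idea under simple assumptions (a linear smear; the function name is invented): rather than inserting a discrete 23:59:60, clocks absorb the extra second gradually across the window.

```python
# A toy illustration of linearly "smearing" one leap second over a window.
# Real deployments differ in details (window placement, smear curve).
SMEAR_SECONDS = 17 * 3600  # a 17-hour window, as in Meta's proposal

def smeared_offset(seconds_into_window: float) -> float:
    """Fraction of the leap second already absorbed at this point in the window."""
    if seconds_into_window <= 0:
        return 0.0
    if seconds_into_window >= SMEAR_SECONDS:
        return 1.0
    return seconds_into_window / SMEAR_SECONDS

# Halfway through the window, clocks have quietly absorbed half the leap second.
print(smeared_offset(SMEAR_SECONDS / 2))  # 0.5
```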

***

The problem with robots and AIs is that they expect consistency humans rarely provide. This week, a chess-playing robot broke a seven-year-old's finger during a game in the Moscow Chess Open when the boy began his move faster than it was programmed to expect. As Madeleine Claire Elish predicted in 2016 in positing moral crumple zones, the tournament organizer seemed to blame the child for not giving the robot enough time. Autonomous vehicle, anyone?

***

And finally: remaining a meme almost 30 years after its first publication in The New Yorker is Peter Steiner's cartoon of a dog at a computer telling another dog, "On the Internet no one knows you're a dog". It's a wonderful wish-it-were-truth. But it was dubious even in 1993, when most online contacts were strangers who could, theoretically, safely assume fake identities. However, it's hard to lie consistently over a period of time, and even harder to disguise fundamental characteristics that shape life experience. Today's surveillance capitalism would spot the dog immediately - but its canine nature would be obvious anyway from its knee-level world view. On the Internet everyone always knew you were a dog - they just didn't used to care.


Illustrations: US Senator Josh Hawley (R-MO), running to expand the public domain.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 8, 2022

Orphan consciousness

What if, Paul Bernal asked late in this year's Gikii, someone uploaded a consciousness and then we forgot where we got it from? Taking an analogy from copyrighted works whose owners are unknown - orphan works - call it an orphan consciousness. What rights would it have? Can it commit crimes? Is it murder to erase it? What if it met a fellow orphan consciousness and together they created a third? Once it's up there without a link to humanity, then what?

These questions annoyed me less than proposals for robot rights, partly because they're more obviously a thought experiment, and partly because they specifically derived from Greg Daniels' science fiction series Upload, which inspired many of this year's Gikii presentations. The gist: Nathan (Robbie Amell), whose lung is collapsing after an autonomous vehicle crash, is offered two choices: take his chances in the operating room, or have his consciousness uploaded into Lakeview, a corporately owned and run "paradise" where he can enjoy an afterlife in considerable comfort. His girlfriend, Ingrid (Allegra Edwards), begs him to take the afterlife, at her family's expense. As he's rushed into signing the terms and conditions, I briefly expected him to land at the waystation in Albert Brooks' 1991 film Defending Your Life.

Instead, he wakes in a very nice country club hotel where he struggles to find his footing among his fellow uploaded avatars and wrangle the power dynamics in his relationship with Ingrid. What is she willing to fund? What happens if she stops paying? (A Spartan 2GB per day, we find later.) And, as Bernal asked, what are his neurorights?

Fiction, as Gikii proves every year (2021), provides fully-formed use cases through which to explore the developing ethics and laws surrounding emergent technologies. For the current batch - the Digital Markets Act (EU, passed this week), the Digital Services Act (ditto), the Online Safety bill (UK, pending), the Platform Work Directive (proposed, EU), the platform-to-business regulations (in force 2020, EU and UK), and, especially, the AI Act (pending, EU) - Upload couldn't be more on point.

Side note: in-person attendees got to sample the Icelandverse, a metaverse of remarkable physical reality and persistence.

Upload underpinned discussions of deception and consent laws (Burkhard Schäfer and Chloë Kennedy), corporate objectification (Mauricio Figueroa), and property rights - English law bans perpetual trusts. Can uploads opt out? Can they be murdered? Maybe, like copyright, give them death plus 70 years?

Much of this has direct relevance to the "metaverse", which Anna-Maria Piskopani called "just one new way to do surveillance capitalism". The show's perfect example: when sex fails to progress, Ingrid yells out, "Tech support!".

In life, Nora (Andy Allo), the "angel" who arrives to help, works in an open plan corporate dystopia where her co-workers gossip about the avatars they monitor. As in this year's other notable fictional world, Dan Erickson's Severance, the company is always watching, a real pandemic-accelerated trend. In our paper, Andelka Phillips and I noted that although the geofenced chip implanted in Severance's workers prevents their work selves ("innies") from knowing anything about their out-of-hours selves ("outies"), their employer has no such limitation. Modern companies increasingly expect omniscience.

Both series reflect the growing ability of cyber systems to effect change in the physical world. Lachlan Urquhart, Lilian Edwards, and Derek McAuley used the science fiction comedy film Ron's Gone Wrong to examine the effect of errors at scale. The film's damaged robot, Ron, is missing safety features and spreads its settings to its counterparts. Would the AI Act view Ron as high or low risk? It may be a distinction without a difference; McAuley reminded us that there will always be failures in the field. "A one-bit change can make changes of orders of magnitude." Then that chip ships by the billion, and can be embedded in millions of devices before it's found. Rinse, repeat, and apply to autonomous vehicles.

In Japan, however, as Naomi Lindvedt explained, the design culture surrounding robots has been far more influenced by the rules written for Astro Boy in 1951 by creator Tezuka Osamu than by Asimov's Laws. These rules are more restrictive and prescriptive, and designers aim to create robots that integrate into society and are user-friendly.

In other quick highlights, Michael Veale noted the Deliveroo ads that show food moving by itself, as if there are no delivery riders, and observed that technology now enforces the exclusivity that used to be contractual, so that drivers never see customer names and contact information, and so can't easily make direct arrangements; Tima Otu Anwana and Paul Eberstaller examined the business relationship between OnlyFans and its creators; Sandra Schmitz-Berndt and Paula Contreras showed the difficulty of reporting cyber incidents given the multiple authorities and their inconsistent requirements; Adrian Aronsson-Storrier produced an extraordinary long-lost training video (Super-Betamax!) for a 500-year-old Swedish copyright cult; Helen Oliver discussed attitudes to privacy as revealed by years of UK high school students' entries for a competition to design fictional space stations; and Andy Phippen, based on his many discussions with kids, favors a harm reduction approach to online safety. "If the only horse in town is the Online Safety bill, nothing's going to change."


Illustrations: Image from the Icelandverse (by Inspired by Iceland).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 24, 2022

Creepiness at scale

This week, Amazon announced a prospective new feature for its Alexa "smart" speakers: the ability to mimic anyone's voice from less than one minute of recording. Amazon is, incredibly, billing this as the chance to memorialize a dead loved one as a digital assistant.

As someone commented on Twitter, technology companies are not *supposed* to make ideas from science fiction dystopias into reality. As so often, Philip K. Dick got here first; in his 1969 novel Ubik, a combination of psychic powers and cryonics lets (rich) people visit and consult their dead, whose half-life fades with each contact.

Amazon can call this preserving "memories", but at The Overspill Charles Arthur is likely closer to reality, calling it "deepfake for voice". Except that where deepfakes emerged from a Reddit group and require some technical effort, Amazon's functionality will be right there in millions of people's homes, planted by one of the world's largest technology companies. Questions abound: who gets access to the data and models, and will Amazon link it to its Ring doorbell network and thousands of partnerships with law enforcement?

The answers, like the service, are probably years off. The lawsuits may not be.

This piece began as some notes on the company that so far has been the technology industry's creepiest: the facial image database company Clearview AI. Clearview, which has built its multibillion-item database by scraping images off social media and other publicly accessible sites, has fallen foul of regulators in the UK, Australia, France, Italy, Canada, and Illinois. In a world full of intrusive companies collecting mass amounts of personal data about all of us, Clearview AI still stands out.

It has few, if any, defenders outside its own offices. For one thing, unlike Facebook or Google, it offers us - citizens, consumers - nothing in return for our data, which it appropriates wholesale. It is the ultimate two-sided market in which we are nothing but salable data points. It came to public notice in January 2020, when Kashmir Hill exposed its existence and asked if this was the company that was going to end privacy.

Clearview, which bills itself as "building a secure world one face at a time", defends itself against both data protection and copyright laws by arguing that scraping and storing billions of images from what law enforcement likes to call "open source intelligence" is legitimate because the images are posted in public. Even if that were how data protection laws work, it's not how copyright works! Both Twitter and Facebook told Clearview to stop scraping their sites shortly after Hill's article appeared in 2020, as did Google, LinkedIn, and YouTube. It's not clear if the company stopped or deleted any of the data.

Among regulators, Canada was first, starting federal and provincial investigations in June 2020, when Clearview claimed its database held 3 billion images. In February 2021, the Canadian Privacy Commissioner, Daniel Therrien, issued a public warning that the company could not use facial images of Canadians without their explicit consent. Clearview, which had been selling its service to the Royal Canadian Mounted Police among dozens of others, opted to leave the country and mount a court challenge - but not to delete images of Canadians, as Therrien had requested.

In December 2021, the French data protection authority, CNIL, ordered Clearview to delete all the data it holds relating to French citizens within two months, and threatened further sanctions and administrative fines if the company failed to comply within that time.

In March 2022, with Clearview openly targeting 100 billion images and commercial users, Italian DPA Garante per la protezione dei dati personali fined Clearview €20 million, ordered it to delete any data it holds on Italians, and banned it from further processing of Italian citizens' biometrics.

In May 2022, the UK's Information Commissioner's Office fined the company £7.5 million and ordered it to delete the UK data it holds.

All these cases are based on GDPR and reach the same findings: Clearview has no legal basis for holding the data, and it is in breach of data retention rules and data subjects' rights. Clearview appears not to care, taking the view that it is not subject to GDPR because it's not a European company.

It couldn't make that argument to the state of Illinois. In early May 2022, Clearview and the American Civil Liberties Union settled a court action filed in May 2020 under Illinois' Biometric Information Privacy Act. Result: Clearview has accepted a ban on selling its services or offering them for free to most private companies *nationwide* and a ban on selling access to its database to any private or state or local government entity, including law enforcement, in Illinois for five years. Clearview has also developed an opt-out form for Illinois residents to use to withdraw their photos from searches, and has agreed to continue trying to filter out photographs taken in or uploaded from Illinois. On its website, Clearview paints all this as a win.

Eleven years ago, Google's then-CEO, Eric Schmidt, thought automating facial recognition was too creepy to pursue, and synthesizing a voice from recordings took months. The problem is no longer that potentially dangerous technology has developed faster than laws can be formulated to control it. It's that we now have well-funded companies that don't care about either.


Illustrations: HAL, from 2001: A Space Odyssey.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 17, 2022

Level two

This week provided two examples of the dangers of believing too much hype about modern-day automated systems and therefore overestimating what they can do.

The first is relatively minor: Google employee Blake Lemoine published his chats with a bot called LaMDA and concluded it was sentient "based on my religious beliefs". Google put Lemoine on leave and the press ran numerous (many silly) stories. Veterans shrugged and muttered, "ELIZA, 1966".

The second, however...

On Wednesday, the US National Highway Traffic Safety Administration released a report (PDF) studying crashes involving cars under the control of "driver-assist" technologies. Out of 367 such crashes in the nine months after NHTSA began collecting data in July 2021, 273 involved Teslas being piloted by either "full self-driving software" or its precursor, "Tesla Autopilot".

There are important caveats, which NHTSA clearly states. Many contextual details are missing, such as how many of each manufacturer's cars are on the road and the number of miles they've traveled. Some reports may be duplicates; others may be incomplete (private vehicle owners may not file a report) or unverified. Circumstances such as surface and weather conditions, or whether passengers were wearing seat belts, are missing. Manufacturers differ in the type and quantity of crash data they collect. Reports may be unclear about whether the car was equipped with SAE Level 2 Advanced Driver Assistance Systems (ADAS) or SAE Levels 3-5 Automated Driving Systems (ADS). Therefore, NHTSA says, "The Summary Incident Report Data should not be assumed to be statistically representative of all crashes." Still, the Tesla number stands out, far ahead of Honda's 90, which itself is far ahead of the other manufacturers listed.

SAE, ADAS, and ADS refer to the system of levels devised by the Society of Automotive Engineers (now SAE International) in 2016. Level 0 is no automation at all; Level 1 is today's modest semi-automated assistance such as cruise control, lane-keeping, and automatic emergency braking. Level 2, "partial automation", is where we are now: semi-automated steering and speed systems, road edge detection, and emergency braking.

Tesla's Autopilot is SAE Level 2. Level 3 - which may someday include Tesla's Full Self Drive Capability - is where drivers may legitimately begin to focus on things other than the road. In Level 4, most primary driving functions will be automated, and the driver will be off-duty most of the time. Level 5 will be full automation, and the car will likely not even have human-manipulable controls.

Right now, in 2022, we don't even have Level 3, though Tesla CEO Elon Musk keeps promising we're on the verge of it with his company's Full Self-Drive Capability; its arrival always seems to be one to two years away. As long ago as 2015, Musk was promising Teslas would be able to drive themselves while you slept "within three years"; in 2020 he estimated "next year" - and he said it again a month ago. In reality, it's long been clear that cars autonomous enough for humans to check out while on the road are further away than they seemed five years ago, as British transport commentator Christian Wolmar accurately predicted in 2018.

Many warned that Levels 2 and 3 would be dangerous. The main issue, pointed out by psychologists and behavioral scientists, is that humans get bored watching a computer do stuff. In an emergency, where the car needs the human to take over quickly, said human, whose attention has been elsewhere, will not be ready. In this context it's hard to know how to interpret the weird detail in the NHTSA report that in 16 cases Autopilot disengaged less than a second before the crash.

The NHTSA news comes just a few weeks after a New York Times TV documentary investigation examining a series of Tesla crashes. Some of these it links to the difficulty of designing software that can distinguish objects across the road - that is, tell the difference between a truck crossing the road and a bridge. In others, such as the 2018 crash in Mountain View, California, the NTSB found a number of contributing factors, including driver distraction and overconfidence in the technology - "automation complacency", as Robert L. Sumwalt calls it politely.

This should be no surprise. In his 2019 book, Ludicrous, auto industry analyst Edward Niedermeyer mercilessly lays out the gap between the rigorous discipline embraced by the motor industry so it can turn out millions of cars at relatively low margins with very few defects and the manufacturing conditions Niedermeyer observes at Tesla. The high-end, high-performance niche sports cars Tesla began with were, in Niedermeyer's view, perfectly suited to the company's disdain for established industry practice - but not to meeting the demands of a mass market, where affordability and reliability are crucial. In line with Niedermeyer's observations, Bloomberg Intelligence predicts that Volkswagen will take over the lead in electric vehicles by 2024. Niedermeyer argues that because it's not suited to the discipline required to serve the mass market, Tesla's survival as a company depends on these repeated promises of full autonomy. Musk himself even said recently that the company is "worth basically zero" if it can't solve self-driving.

So: financial self-interest meets the danger zone of Level 2 with perceptions of Level 4. I can't imagine anything more dangerous.

Illustrations: One of the Tesla crashes investigated in New York Times Presents.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 27, 2022

Well may the bogeyman come

It's only an accident of covid that this year's Computers, Privacy, and Data Protection conference - delayed from late January - coincided with the fourth anniversary of the EU's General Data Protection Regulation. Yet its failures and frustrations were on everyone's mind as they considered new legislation forthcoming from the EU: the Digital Services Act, the Digital Markets Act, and, especially, the AI Act.

Two main frustrations: despite GDPR, privacy invasions continue to expand, and, related, enforcement has been extremely limited. The first is obvious to everyone here. For the second...as Max Schrems explained in a panel on GDPR enforcement, none of the cross-border cases his NGO, noyb, filed on May 25, 2018, the day GDPR came into force, have been decided, and even decisions on simpler cases have failed to deal with broader questions.

In one of his examples, Spain rejected a complaint because it wasn't doing historic cases and Austria claimed the case was solved because the organization involved had changed its procedures. "But my rights were violated then." There was no redress.

Schrems is the data protection bogeyman; because legal actions he has brought have twice struck down US-EU agreements to enable data flows, the possibility of "Schrems III" if the next version gets it wrong is frequently mentioned. This particular panel highlighted numerous barriers that block effective action.

Other speakers highlighted numerous gaps between countries that impede cross-border complaints: some authorities have tight deadlines that expire while other authorities are working to more leisurely schedules; there are many conflicts between national procedural laws; each data protection authority has its own approach and requirements; and every cross-border complaint must be time-consumingly translated into English, even when both relevant authorities speak, say, German. "Getting an answer to a two-minute question takes four months," Nina Herbort said, highlighting the common underlying problem: underresourcing.

"Weren't they designed to fail?" Finn Myrstad asked.

Even successful enforcement has largely been limited to levying fines - and despite some of the eye-watering numbers, they're still just a cost of doing business to major technology platforms.

"We have the tools for structural sanctions," Johnny Ryan said in a discussion on judicial actions. Some of that is beginning to happen. A day earlier, the UK'a Information Commissioner's Office fined Clearview AI £7.5 million and ordered it to delete the images it holds of UK residents. In February, Canada issued a similar order; a few weeks ago, Illinois permanently banned the company from selling its database to most private actors and businesses nationwide, and barred it from selling its service to any entity within Illinois for five years. Sanctions like these hurt more than fines as does requiring companies to delete the algorithms they've based on illegally acquired data.

Other suggestions included building sovereignty by ensuring that public procurement does not default to off-the-shelf products from a few foreign companies but is built on local expertise, advocated by Jan-Philipp Albrecht, the former MEP, who told a panel on the impact of Schrems II that he is now building up cloud providers using locally-built hardware and open source software for the province of Schleswig-Holstein. Quang-Minh Lepescheux suggested requiring transparency in how people are trained to use automated decision-making systems and forcing technology providers to accept third-party testing. Cristina Caffarra, probably the only antitrust lawyer in sight, wants privacy advocates and antitrust lawyers to work together; the economists inside competition authorities insist that more data means better products so it's good for consumers. Rebecca Slaughter wants to give companies the clarity they say they want (until they get it): clear, regularly updated rules banning a list of practices, with a catchall. Ryan also noted that some sanctions can vastly improve enforcement efficiency: there's nothing to investigate after banning a company from making acquisitions. Enforcing purpose limitation and banning the single "OK to everything" is more complicated, but, "Purpose limitation is Kryptonite to Big Tech when it's misusing data."

Any and all of these are valuable. But new kinds of thinking are also needed. The more complex issue, and another major theme, was the limitations of focusing on personal data and individual rights. This was long predicted as a particular problem for genetic data - the former science journalist Tom Wilkie was the first to point out the implications, sounding a warning in his book Perilous Knowledge, published in 1994, at the beginning of the Human Genome Project. Singling out individuals who have been harmed can easily obscure collective damage. The obvious example is Cambridge Analytica and Facebook: the damage to national elections can't be captured one Friends list at a time, controls on the increasing use of aggregated data require protection at scale, and, perversely, monitoring for bias and discrimination requires data collection.

In response to a panel on harmful patterns in recent privacy proposals, an audience member suggested the African philosophy of ubuntu as a useful source of ideas for thinking about collective and, even more important, *interdependent* data. This is where we need to go. Many forms of data - including both genetic data and financial data - cannot be thought of any other way.


Illustrations: The Norwegian Consumer Council receives EPIC's International Privacy Champion award at CPDP 2022.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 18, 2022

There may be trouble ahead...

One of the first things the magician and paranormal investigator James Randi taught all of us in the skeptical movement was the importance of consulting the right kind of expert.

Randi made this point with respect to tests of paranormal phenomena such as telekinesis and ESP. At the time - the 1970s and 1980s - there was a vogue for sending psychic claimants to physicists for testing. A fair amount of embarrassment ensued. As Randi liked to say, physicists, like many other scientists, are not experienced in the art of deception. Instead, they are trained to assume that things in their lab do not lie to them.

Not a safe assumption when they're trying to figure out how a former magician has moved an empty plastic film can a few millimeters, apparently with just the power of their mind. Put in a magician who knows how to set up the experiment so the claimant can't cheat, and *then* if the effect still occurs you know something genuinely weird is going on.

I was reminded of this while reading this quote from Fabio Urbina, Filippa Lentzos, Cédric Invernizzi, and Sean Ekins, writing in Nature Machine Intelligence: "When we think of drug discovery, we normally do not consider technology misuse potential. We are not trained to consider it, and it is not even required for machine learning research."

The article itself is scary enough for one friend to react to it with, "This is the apocalypse." The researchers undertook a "thought experiment" after the Swiss Federal Institute for NBC Protection (Spiez Laboratory) asked their company, Collaborations Pharmaceuticals Inc, to give a presentation at its biennial conference on new technologies and their implications for the Chemical and Biological Weapons conventions on how their AI technology could be misused in drug discovery. They work, they write, in an entirely virtual world; their molecules exist only in their computer. It had never previously occurred to them to wonder whether the machine learning models they were building to help design new molecules that could be developed into new, life-saving drugs could be turned to generating toxins instead. Asked to consider it, they quickly discovered that it was disturbingly easy to generate prospective lethal neurotoxins. Because: generating potentially helpful molecules required creating models to *avoid* toxicity - which meant being able to predict its appearance.
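In the abstract, the mechanism is startlingly simple. Here is a hedged sketch - my own toy illustration of the idea, not the authors' code, and with purely hypothetical scoring functions:

    # Abstract sketch of a generative design objective. In normal use the
    # toxicity term is penalized; inverting its weight rewards it instead.
    def design_score(candidate, predict_activity, predict_toxicity,
                     toxicity_weight=-1.0):
        # toxicity_weight < 0: steer the generator away from predicted toxicity.
        # toxicity_weight > 0: the same machinery steers toward it.
        return predict_activity(candidate) + toxicity_weight * predict_toxicity(candidate)

The machinery that steers a generator away from predicted toxicity is, with one sign flipped, the machinery that steers it toward it.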

As they go on to say, our general discussions of the potential harms AI can enable are really very limited. The biggest headlines go to putting people out of work; the rest is privacy, discrimination, fairness, and so on. Partly, that's because those are the ways AI has generally been most visible: automation that deskills or displaces humans, or algorithms that make decisions about government benefits, employment, education, content recommendations, or criminal justice outcomes. But also it's because the researchers working on this technology blinker their imagination to how they want their new idea to work.

The demands of marketing don't help. Anyone pursuing any form of research, whether funded by industry or government grant, has to make the case for why they should be given the money. So of course in describing their work they focus on the benefits. Those working on self-driving cars are all about how they'll be safer than human drivers, not scary possibilities like widespread hundred-car pileups if hackers were to find a way to exploit unexpected software bugs to make them all go haywire at the same time.

Sadly, many technology journalists pick up only the happy side. On Wednesday, as one tiny example, the Washington Post published a cheery article about ElliQ, an Alexa-like AI device "designed for empathy" meant to keep lonely older people company. The commenters saw more of the dark side than the writer did: the ongoing $30 subscription, data collection and potential privacy invasion, and, especially, the potential for emotional manipulation as the robot tells its renter what it (not she, as per writer Steven Zeitchik) calculates they want to hear.

It's not like this is the first such discovery. Generative Adversarial Networks (GANs) are the basis of deepfakes. If you can use some new technology for good, why *wouldn't* you be able to use it for evil? Cars drive sick kids to hospitals and help thieves escape. Computer programmers write word processors and viruses, the Internet connects us directly to medical experts and sends us misinformation, cryptography protects both good and bad secrets, robots help us and collect our data. Why should AI be different?

I'd like to think that this paper will succeed where decades of prior experience have failed, and make future researchers think more imaginatively about how their work can be abused. Sadly, it seems a forlorn hope.

In Smoke and Mirrors, her 2020 book examining how hype interferes with our ability to make good decisions about new technology, Gemma Milne warns that hype keeps us from asking the crucial question: is this new technology worth its cost? Potential abuse is part of that cost-benefit assessment. We need researchers to think about what can go wrong much earlier in the development cycle - and we need them to add experts in the art of forecasting trouble (science fiction writers, perhaps?) to their teams. Even technology that looks like magic...isn't.

Illustrations: ElliQ (company PR photo).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 18, 2022

The search for intelligent life

The mythology goes like this. In the beginning, the Internet was decentralized. Then came money and Web 2.0, which warped the early web's best dreams into corporate giants. Now, web3 is going to restore the status quo ante?

Initial reaction: why will it be different this time?

Maybe it won't. Does that mean people shouldn't try? Ah. No. No, it does not.

One reason it's so difficult to write about web3 is that under scrutiny it dissolves into a jumble of decentralized web, cryptocurrencies, blockchain, and NFTs, though the Economist has an excellent explanatory podcast. Decentralizing the web I get: ever since Edward Snowden's revelations, decentralization has been seen as a way to raise the costs of passive surveillance. The question has been: how? Blockchain and bitcoin sound nothing like the web - or a useful answer.

But even if you drop all the crypto stuff and just say "decentralized web to counter surveillance and censorship", it conveys little to the man on the Clapham omnibus. Try to explain, and you rapidly end up in a soup of acronyms that are meaningful only to technologists. In November, on first encountering web3, I suggested there are five hard problems. The first of those, ease of use, is crucial. Most people will always flock to whatever requires least effort; the kind of people who want to build a decentralized Internet are emphatically unusual. The biggest missed financial opportunity of my lifetime will likely turn out to have been ignoring the advice to buy some bitcoin in 2009 because it was just too much trouble. Most of today's big Internet companies got that way because whatever they were offering was better - more convenient, saved time, provided better results.

This week, David Rosenthal, developer of core Nvidia technologies, published a widely-discussed dissection of cryptocurrencies and blockchain, which Cory Doctorow followed quickly with a recap/critique. Tl;dr: web3 is already centralized, and blockchain and cryptocurrencies only pay off if their owners can ignore the external costs they impose on the rest of the world. Rosenthal argues that ignoring externalities is inherent in the Silicon Valley-type libertarianism from which they sprang.

Rosenthal also makes an appearance in the Economist podcast to explain that if you ask most people what the problems are with the current state of the Web, they don't talk about centralization. They talk about overwhelming amounts of advertising, harassment, scams, ransomware, and expensive bandwidth. In his view, changing the technical infrastructure won't change the underlying economics - scale and network effects - that drive centralization, which, as all of these commentators note, has been the eventual result of every Internet phase since the beginning.

It's especially easy to be suspicious about this because of the venture capital money flooding in seeking returns.

"Get ready for the crash," Tim O'Reilly told CBS News. In a blog posting last December, he suggestshow to find the good stuff in web3: look for the parts that aren't about cashing out and getting rich fast but *are* about solving hard problems that matter in the real world.

This is all helpful in understanding the broader picture, but doesn't answer the question of whether there's presently meat inside web3. Once bitten, twice shy, three times don't be ridiculous.

What gave me pause was discovering that Danny O'Brien has gone to work for the Filecoin Foundation and the Filecoin Foundation for the Distributed Web - aka, "doing something in web3". O'Brien has a 30-year history of finding the interesting places to be. In the UK, he was one-half of the 1990s must-read newsletter NTK, whose slogan was "They stole our revolution. Now we're stealing it back." Filecoin - a project to develop blockchain-based distributed storage, which he describes as "the next generation of something like Bittorrent" - appears to be the next stage of that project. The mention of Bittorrent reminded me how technologically dull the last few years have been.

O'Brien's explanation of Filecoin and distributed storage repeatedly evoked prior underused art that only old-timers remember. For example, in 1997 Cambridge security engineer Ross Anderson proposed the Eternity Service, an idea for distributing copies of data around the world so its removal from the Internet would be extremely difficult. There was Ian Clarke's 1999 effort to build such a thing, Freenet, a peer-to-peer platform for distributing data that briefly caused a major moral panic in the UK. Freenet failed to gain much adoption - although it's still alive today - because no one wanted to risk hosting unknown caches of data. Filecoin intends to add financial incentives: think of it as a distributed cloud service.

O'Brien's mention of the need to ensure that content remains addressable evokes Ted Nelson's Project Xanadu, a pre-web set of ideas about sharing information. Finally, zero-knowledge proofs make it possible to show a proof that you have run a particular program and gotten back a specific result without revealing the input. The mathematics involved is arcane, but the consequence is far-reaching: you can prove results *and* protect privacy.
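To make that idea slightly more concrete, here is a minimal sketch - my own toy illustration, not Filecoin's actual construction, which relies on far more elaborate non-interactive proofs - of the classic Schnorr protocol, in which a prover convinces a verifier it knows a secret value without ever revealing it:

    # Toy Schnorr-style zero-knowledge proof of knowledge of a secret x.
    # Illustrative only: real systems use carefully chosen groups and
    # non-interactive proofs; this just shows the shape of the idea.
    import secrets

    p = 2**127 - 1                 # a known Mersenne prime (toy-sized parameters)
    g = 3                          # toy base

    x = secrets.randbelow(p - 1)   # the prover's secret
    y = pow(g, x, p)               # public value the prover is accountable to

    r = secrets.randbelow(p - 1)   # prover picks a random nonce...
    t = pow(g, r, p)               # ...and commits to it

    c = secrets.randbelow(p - 1)   # verifier issues a random challenge

    s = (r + c * x) % (p - 1)      # prover's response reveals nothing about x by itself

    # Verifier checks g^s == t * y^c (mod p): convinced, yet never learns x.
    assert pow(g, s, p) == (t * pow(y, c, p)) % p

The verifier ends up convinced the prover knows x, but the transcript tells it nothing about x itself; general-purpose proof systems extend the same trick to arbitrary computations.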

If this marriage of old and new research is "web3", suddenly it sounds much more like something that matters. And it's being built, at least partly, by people who remember the lessons of the past well enough not to repeat them. So: cautious signs that some part of "web3" will do something.


Illustrations: Diagram of centralized vs decentralized (IPFS) systems (from zK Capital at Medium).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 12, 2021

Third wave

It seems like only yesterday that we were hearing that Web 2.0 was the new operating system of the Internet. Pause to look it up: it was 2008, in the short window between the founding of today's social media giants (2004-2006) and their smartphone-accelerated explosion (2010).

This week a random tweet led me to discover Web3. As Aaron Mak explains at Slate, "Web3" is an idea for running a next-generation Internet on public blockchains in the interests of decentralization (which net.wars has long advocated). To date, the aspect getting the most attention is decentralized finance (DeFi, or, per Mary Branscombe, deforestation finance), a plan for bypassing banks and governments by conducting financial transactions on the blockchain.

At freeCodeCamp, Nader Dabit goes into more of the technical underpinnings. At Fabric Ventures (Medium), Max Mersch and Richard Muirhead explain its importance. Web3 will bring a "borderless and frictionless" native payment layer (upending mediator businesses like Paypal and Square), bring the "token economy" to support new businesses (upending venture capitalists), and tie individual identity to wallets (bypassing authentication services like OAuth, email plus password, and technology giant logins), thereby enabling multiple identities, among other things. Also interesting is the Cloudflare blog, where Thibault Meunier states that as a peer-to-peer system Web3 will use cryptographic identifiers and allow users to selectively share their personal data at their discretion. Some of this - chiefly the robustness of avoiding central points of failure - is a return to the Internet's original design goals.

Standards-setter W3C is working on at least one aspect - cryptographically verifiable Decentralized Identifiers - and it's running into opposition from Google, Apple, and Mozilla, whose browsers control 87% of the market.

Let's review a little history.

The 20th century Internet was sorta, kinda decentralized, but not as much as people like to think. The technical and practical difficulties of running your own server at home fueled the growth of portals and web farms to do the heavy lifting. Web design went from hand-rolled plain text to hosted platforms (see, for example, LiveJournal and Blogspot, now owned by Google). You can argue about how exactly it was that a lot of blogs died off circa 2010, but I'd blame Twitter: writers found it easier to craft a sentence or two and skip writing the hundreds of words that make a blog post. Tim O'Reilly and Clay Shirky described the new era as interactive, and moving control "up the stack" from web browsers and servers to the services they enabled. Data, O'Reilly predicted, was the key enabler, and the "long tail" of niche sites and markets would be the winner. He was right about data, and largely wrong about the long tail. He was also right about this: "Network effects from user contributions are the key to market dominance in the Web 2.0 era." Nearly 15 years later, today's web feels like a landscape of walled cities encroaching on all the public pathways leading between them.

Point Network (Medium) has a slightly different version of this history; they call Web 1.0 the "read-only web"; Web 2.0 the "server/cloud-based social Web", and Web3 the "decentralized web".

The pattern here is that every phase began with a "Cambrian" explosion of small sites and businesses and ended with a consolidated and centralized ecosystem of large businesses that have eaten or killed everyone else. The largest may now be so big that they can overwhelm further development to ensure their future dominance; at least, that's one way of looking at Mark Zuckerberg's metaverse plan.

So the most logical outcome from Web3 is not the pendulum swing back to decentralization that we may hope, but a new iteration of the existing pattern, which is at least partly the result of network effects. The developing plans will have lots of enemies, not least governments, who are alert to anything that enables mass tax evasion. But the bigger issue is the difficulty of becoming a creator. TikTok is kicking ass, according to Chris Stokel-Walker, because it makes it extremely easy for users to edit and enhance their videos.

I spy five hard problems. One: simplicity and ease of use. If it's too hard, inconvenient, or expensive for people to participate as equals, they will turn to centralized mediators. Two: interoperability and interconnection. Right now, anyone wishing to escape the centralization of social media can set up a Discord or Mastodon server, yet these remain decidedly minority pastimes because you can't message from them to your friends on services like Facebook, WhatsApp, Snapchat, or TikTok. A decentralized web in which it's hard to reach your friends is dead on arrival. Three: financial incentives. It doesn't matter if it's venture capitalists or hundreds of thousands of investors each putting up $10, they want returns. As a rule of thumb, decentralized ecosystems benefit all of society; centralized ones benefit oligarchs - so investment flows to centralized systems. Four: sustainability. Five: how do we escape the power law of network effects?

Gloomy prognostications aside, I hope Web3 changes everything, because in terms of its design goals, Web 2.0 has been a bust.


Illustrations: Tag cloud from 2007 of Web 2.0 themes (Markus Angermeier and Luca Cremonini, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 13, 2021

Legacy

The first months of the pandemic saw a burst of energetic discussion about how to make it an opportunity to invest in redressing inequalities and rebuilding decaying systems - public health, education, workers' rights. This always reminded me of the great French film director François Truffaut, who, in his role as the director of the movie-within-the-movie in Day for Night, said, "Before starting to shoot, I hope to make a fine film. After the problems begin, I lower my ambition and just hope to finish it." It seemed more likely that if the pandemic went on long enough - back then the journalist Laurie Garrett was predicting a best case of three years - early enthusiasm for profound change would drain away to leave most people just wishing for something they could recognize as "normal". Drinks at the pub!

We forget what "normal" was like. London today seems busy. But with still no tourists, it's probably a tenth as crowded as in August 2019.

Eighteen months (so far) has been long enough to make new habits driven by pandemic-related fears, if not necessity, begin to stick. As it turns out, the pandemic's new normal is really not the abrupt but temporary severance of lockdown, which brought with it fears of top-down, government-driven damage to social equity and privacy: covid legislation, immunity passports, and access to vaccines. Instead, the dangerous "new normal" is the new habits building up from the bottom. If Garrett was right, and we are at best halfway through this, these are likely to become entrenched. Some are healthy: a friend has abruptly realized that his grandmother's fanaticism about opening windows stemmed from living through the 1918 Spanish flu pandemic. Others...not so much.

One of the first non-human casualties of the pandemic has been cash, though the loss is unevenly spread. This week, a friend needed more than five minutes to painfully single-finger-type masses of detail into a pub's app, the only available option for ordering and paying for a drink. I see the convenience for the pub's owner, who can eliminate the costs of cash (while assuming the costs of credit cards and technological intermediation) and maybe thin the staff, but it's no benefit to a customer who'd rather enjoy the unaccustomed sunshine and chat with a friend. "They're all like this now," my friend said gloomily. Not where I live, fortunately.

Anti-cash campaigners have long insisted that cash is dirty and spreads disease; but, as we've known for a year, covid rarely spreads through surfaces, and (as Dave Birch has been generous enough to note) a recent paper finds that cash is sometimes cleaner. But still: try to dislodge the apps.

A couple of weeks ago, Erin Woo at the New York Times highlighted cash-free moves. In New York City, QR codes have taken over in restaurants and stores as contact-free menus and ordering systems. In the UK, QR codes mostly appear as part of the Test and Trace contact tracing app; the idea is you check in when you enter any space, be it restaurant, cinema, or (ludicrously) botanic garden, and you'll be notified if it turns out it was filled with covid-infected people when you were there.

Whatever the purpose, the result is tight links between offline and online behavior. Pre-pandemic, these were growing slowly and insidiously; now they're growing like an invasive weed at a time when few of us can object. The UK ones may fall into disuse alongside the app itself. But Woo cites Bloomberg: half of all US full-service restaurant operators have adopted QR-code menus since the pandemic began.

The pandemic has also helped entrench workplace monitoring. By September 2020, Alex Hern was reporting at the Guardian that companies were ramping up their surveillance of workers in their homes, using daily mandatory videoconferences, digital timecards in the form of cloud logins, and forced participation on Slack and other channels.

Meanwhile at NBC News, Olivia Solon reports that Teleperformance, one of the world's largest call center companies, to which companies like Uber, Apple, and Amazon outsource customer service, has inserted clauses in its employment contracts requiring workers to accept in-home cameras that surveil them, their surroundings, and family members under 18. Solon reports that the anger over this is enough to get these workers thinking about unionizing. Teleperformance is global; it's trying this same gambit in other countries.

Nearer to home, all along, there's been a lot of speculation about whether anyone would ever again accept commuting daily. This week, the Guardian reports that only 18% of workers have gone back to their offices since UK prime minister Boris Johnson ended all official restrictions on July 19. Granted, it won't be clear for some time whether this is new habit or simply caution in the face of the fact that Britain's daily covid case numbers are still 25 times what they were a year ago. In the US, Google is suggesting it will cut pay for staff who resist returning to the office, on the basis that their cost of living is less. Without knowing the full financial position, doesn't it sound like Google is saving money twice?

All these examples suggest that what were temporary accommodations are hardening into "the way things are". Undoing them is a whole new set of items for last year's post-pandemic to-do list.


Illustrations: Graphic showing the structure of QR codes (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 23, 2021

Internet fragmentation as a service

"You spend most of your day telling a robot that you're not a robot. Think about that for two minutes and tell me you don't want to walk into the ocean," the comedian John Mulaney said in his 2018 comedy special, Kid Gorgeous. He was talking about captchas.

I was reminded of this during a recent panel at the US Internet Governance Forum hosted by Mike Nelson. Nelson's challenge to his panelists: imagine alternative approaches to governments' policy goals that won't damage the Internet. They talked about unintended consequences (and the exploitation thereof) of laws passed with good intentions, governments' demands for access to data, ransomware, content blocking, multiplying regional rulebooks, technical standards and interoperability, transparency, and rising geopolitical tensions, which cyberspace policy expert Melissa Hathaway suggested should be thought about by playing a mash-up of the games Risk and Settlers of Catan. The main topic: is the Internet at risk of fragmentation?

So much depends on what you mean by "fragmentation". No one mentioned the physical damage achievable by ten backhoes. Nor the domain name system that allows humans and computers to find each other; "splitting the root" (that is, the heart of the DNS) used to dominate such discussions. Nor captchas, but the reason Mulaney sprang to mind was that every day (in every way) captchas frustrate access. Saying that makes me privileged; in countries where Facebook is zero-rated but the rest of the Internet costs money people can't afford on their data plans, the Internet is as cloven as it can possibly be.

Along those lines, Steve DelBianco raised the idea of splintering-by-local-law, the most obvious example being the demand in many countries for data localization. DelBianco, however, cited Illinois' Biometric Information Privacy Act (2008), which has been used to sue platforms on behalf of unnamed users for automatically tagging their photos online. Result: autotagging is not available to Illinois users on the major platforms, and neither is the Google Nest and Amazon Ring doorbells' facility for recognizing and admitting friends and family. See also GDPR, noted above, which three and a half years after taking force still has US media sites blocking access by insisting that our European visitors are important to us.

You could also say that the social Internet is splintering along ideological lines as the extreme right continue to build their own media and channels. In traditional media, this was Roger Ailes' strategy. Online, the medium designed to connect people doesn't care who it connects or for what purpose. Commercial social media engagement algorithms have exacerbated this, as many current books make plain.

Nelson, whose Internet policy experience goes back to the Clinton administration, suggested that policy change is generally driven by a big event: 9/11, for example, which led promptly to the passage of the PATRIOT Act (US) and the Anti-Terrorism, Crime, and Security Act (UK), or the Colonial Pipeline hack that has made ransomware an urgent mainstream concern. So, he asked: what kind of short, sharp shock would cause the Internet to fracture? If you see data protection law as a vector, the 2013 Snowden revelations were that sort of event; a year earlier, GDPR looked like fading away.

You may be thinking, as I was, that we're literally soaking in global catastrophes: the COVID-19 pandemic, and climate change. Both are slow-burning issues, unlike the high-profile drivers of legislative panic Nelson was looking for, but both generate dozens of interim shocks.

I'm always amazed so little is said about climate change and the future of the Internet; the IT industry's emissions just keep growing. China's ban on cryptocurrency mining, which it attributes to environmental concerns, may be the first of many such limits on the use of computing power. Disruptions to electricity supplies - just yesterday, the UK's National Grid warned there may be blackouts this winter - don't "break" the Internet, but they do make access precarious.

So far, the pandemic's effect has mostly been to exacerbate ideological splits and accelerate efforts to curb the spread of misinformation via social media. It's also led to increased censorship in some places; early on, China banned virus-related keywords on WeChat, and this week the Indian authorities raided a newspaper that criticized the government's pandemic response. In addition, the exposure and exacerbation of social inequalities brought by the pandemic may, David Bray suggested in the panel, be contributing to the increase in cybercrime, as "failed states" struggle to rescue their economies. This week's revelations of the database of numbers of interest to NSO Group clients since 2016 don't fragment the Internet as a global communications system, but they might in the sense that some people may not be able to afford the risk of being on it.

This is where Mulaney comes in. Today, robots gatekeep web pages. Three trends seem likely to expand their role: online, age verification and online safety laws; covid passports, which are beginning to determine access to physical-world events; and the Internet of Things, which is bridging what's left of the divide between cyberspace and the real world. In the Internet-subsumed-into-everything of our future, "splitting the Internet" may no longer be meaningful as the purely virtual construct Nelson's panel was considering. In the cyber-physical world, Internet fragmentation must also be hybrid.


Illustrations: The IGF-USA panel in action.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 2, 2021

Medical apartheid

Ever since 1952, when Clarence Willcock took the British government to court to force the end of wartime identity cards, UK governments have repeatedly tried to bring them back, always claiming they would solve the most recent public crisis. The last effort ended in 2010 after a five-year battle. This backdrop is a key factor in the distrust that's greeting government proposals for "vaccination passports" (previously immunity passports). Yesterday, the Guardian reported that British prime minister Boris Johnson backs certificates that show whether you've been vaccinated, have had covid and recovered, or had a test. An interim report will be published on Monday; trials later this month will see attendees to football matches required to produce proof of negative lateral flow tests 24 hours before the game and on entry.

Simultaneously, England chief medical officer Chris Whitty told the Royal Society of Medicine that most experts think covid will become like the flu, a seasonal disease that must be perennially managed.

Whitty's statement is crucial because it means we cannot assume that the forthcoming proposal will be temporary. A deeply flawed measure in a crisis is dangerous; one that persists indefinitely is even more so. Particularly when, as this morning, culture secretary Oliver Dowden tries to apply spin: "This is not about a vaccine passport, this is about looking at ways of proving that you are covid secure." Rebranding as "covid certificates" changes nothing.

Privacy advocates and human rights NGOs saw this coming. In December, Privacy International warned that a data grab in the guise of immunity passports will undermine trust and confidence while they're most needed. "Until everyone has access to an effective vaccine, any system requiring a passport for entry or service will be unfair." We are a long, long way from that universal access and likely to remain so; today's vaccines will have to be updated, perhaps as soon as September. There is substantial, but not enough, parliamentary opposition.

A grassroots Labour discussion Wednesday night showed this will become yet another highly polarized debate. Opponents and proponents combine issues of freedom, safety, medical efficacy, and public health in unpredictable ways. Many wanted safety - "You have no civil liberties if you are dead," one person said; others foresaw segregation, discrimination, and exclusion; still others cited British norms in opposing making compulsory either vaccinations or carrying any sort of "papers" (including phone apps).

Aside from some specific use cases - international travel, a narrow range of jobs - vaccination passports in daily life are a bad idea medically, logistically, economically, ethically, and functionally. Proponents' concerns can be met in better - and fairer - ways.

The Independent SAGE advisory group, especially Susan Michie, has warned repeatedly that vaccination passports are not a good solution for daily life. The added pressure to accept vaccination will increase distrust, she has said, particularly among victims of structural racism.

Instead of trying to identify which people are safe, she argues that the government should be guiding employers, businesses, schools, shops, and entertainment venues to make their premises safer - see for example the CDC's advice on ventilation and list of tools. Doing so would not only help prevent the spread of covid and keep *everyone* safe but also help prevent the spread of flu and other pathogens. Vaccination passports won't do any of that. "It again puts the burden on individuals instead of spaces," she said last night in the Labour discussion. More important, high-risk individuals and those who can't be vaccinated will be better protected by safer spaces than by documentation.

In the same discussion, Big Brother Watch's Silkie Carlo predicted that it won't make sense to have vaccination passports and then use them in only a few places. "It will be a huge infrastructure with checkpoints everywhere," she predicted, calling it "one of the civil liberties threats of all time" and "medical apartheid" and imagining two segregated lines of entry to every venue. While her vision is dramatic, parts of it don't go far enough: imagine when this all merges with systems already in place to bar access to "bad people". Carlo may sound unduly paranoid, but it's also true that for decades successive British governments at every decision point have chosen the surveillance path.

We have good reason to be suspicious of this government's motives. Throughout the last year, Johnson has been looking for a magic bullet that will fix everything. First it was contact tracing apps (failed through irrelevance), then test and trace (failing in the absence of "and isolate and support"), now vaccinations. Other than vaccinations, which have gone well because the rollout was given to the NHS, these failed high-tech approaches have handed vast sums of public money to private contractors. If by "vaccination certificates" the government means the cards the NHS gives fully-vaccinated individuals listing the shots they've had, the dates, and the manufacturer and lot number, well, fine. Those are useful for those rare situations where proof is really needed and for our own information in case of future issues; they're simple, and not particularly expensive. If the government means a biometric database system that, as Michie says, individualizes the risk while relieving venues of responsibility, just no.

Illustrations: The Swiss Cheese Respiratory Virus Defence, created by virologist Ian McKay.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 25, 2020

The zero on the phone

Among the minor casualties of the pandemic has been the appearance of a Swiss prototype robot at this year's We Robot, the ninth year of this unique conference that crosses engineering, technology policy, and law to identify future conflicts and pre-emptively suggest solutions. The result was to leave the robots considered by this virtual We Robot remarkably (appropriately) abstract.

We Robot was founded to get a jump on the coming conflicts that robots will bring to law and policy, in part so that we don't repeat the Internet experience of rehashing the same arguments for decades on end. This year's event pre-empted the Internet experience in a new way: many authors have drawn on the failed optimism and cooperation of the 1990s to begin defining ways to ensure that robotics and AI do not follow the same path. Where at the beginning we were all eager to embrace robots, this year the focus was on disembodied AIs being done *to* us.

In the one slight exception to this rule, Hallie Siegel's exploration of senior citizens' attitudes toward new technologies found that the seniors she studies are pragmatic: concerned about their privacy and autonomy, and only really interested in technologies that provide benefits they actually need.

Jason Millar and Elizabeth Gray drew directly on the Internet experience by comparing network neutrality to the issues surrounding the mapping software that controls turn-by-turn navigation systems in a discussion of "mobility shaping". Should navigation services be common carriers, as telephone lines are? The idea appeals to me, if only because the potential for physical control of where our vehicles are allowed to go seems so clear.

The theme of exploitation was particularly visible in the two papers on Africa. In the first, Arthur Gwagwa (Strathmore University, Nairobi), Erika Kraemer-Mbula, Nagla Rizk, Isaac Rutenberg, and Jeremy de Beer warn that the combination of foreign capital and local resources is likely to reproduce the power structures of previous forms of colonialism, an argument also seen recently in a paper by Abeba Birhane. Women in particular, who run the majority of start-ups in some African countries, may be ignored, and the authors suggest that a GDPR-like rule awarding individuals control over their own data could be crucial in creating value for, rather than extracting it from, Africa.

In the second, Laura Foster (Indiana University), Bram Van Wiele, and Tobias Schönwetter extracted a database of press stories about AI in Africa from LexisNexis, and found the familiar set of claims for new technology: happy, value-neutral disruption, yay! The failure of most of these articles to consider gender and race, they observed, doesn't make the emerging picture neutral, but serves to reinforce the default of the straight, white male.

One way we push back against AI/robot control is the "human in the loop" to whom the final decision is delegated. This human has featured in every We Robot conference, most notably in 2016 as Madeleine Elish's moral crumple zone. In his paper, Liam McCoy argues for the importance of meaningful control, because the middle ground, where the human is expected to solve the most complex situations where the AI fails, without support or authority, is truly dangerous. The middle ground may be profitable; at UK IGF a few weeks ago, Gus Hosein noted that automating dispute resolution is what's made GAFA rich. But in the higher stakes of cyber-physical systems, the human you summon by pushing zero has to be able to make a difference.

Silvia de Conca's idea of "human-centered legal design", which sought to give autonomous agents a duty of care as a way of filling the gap in liability that presently exists, and Cynthia Khoo's interest in vulnerable communities who are harmed by behavior that emerges from combined business models, platform scale, human nature, and algorithm design, presented different methods of putting a human in the loop. Often, Khoo has found in investigating this idea, the potential harm was in fact known and simply ignored; how much can and should be foreseen when system parts interact in unexpected ways is a rising issue.

Several papers explored previously unnoticed vectors for bias and control. Sentiment analysis, last seen being called "the snake oil of 2011", and its successor, emotion analysis, which I first saw explored in the 1990s by Rosalind Picard at MIT, are creeping into AI systems. Some are particularly dubious: aggression detection systems and emotion recognition cameras.

Emily McBain-Ashfield and Jason Millar are the first I'm aware of to study how stereotyping gets into these systems. Yes, it's in the data - but the problem lies in the process of analyzing and tagging it. The authors found three methods of doing this: manual (human, slow), dictionary-based using seed words (automated), and crowdsourced (see also Mary L. Gray and Siddharth Suri's 2019 book, Ghost Work). All have problems: automation produces notoriously crude mistakes, and the participants in crowdsourcing may be from very different linguistic and cultural contexts.
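For a sense of why the dictionary-based route produces those crude mistakes, here is a minimal sketch - the word lists and function are hypothetical, invented for illustration, not drawn from the paper - of seed-word tagging:

    # Minimal seed-word sentiment tagger: fast, automated, and context-blind.
    POSITIVE = {"good", "great", "calm", "happy"}
    NEGATIVE = {"bad", "angry", "aggressive", "sad"}

    def crude_sentiment(text: str) -> str:
        words = text.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        if score > 0:
            return "positive"
        if score < 0:
            return "negative"
        return "neutral"

    # Negation, sarcasm, and dialect all defeat it:
    print(crude_sentiment("not a good day"))   # labeled "positive"

The shortcut is exactly where the crudeness comes from: whatever assumptions are baked into the seed lists get applied to everyone, in every context.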

The discussant for this paper, Osonde Osaba, sounded appalled: "By having these AI models of emotion out in the wild in commercial products we are essentially sanctioning the unregulated experimentation on humans and their emotional processes without oversight or control."

Remedies have to contend, however, with the legacy infrastructure. Alice Xiang discovered a conflict between traditional anti-discrimination law, which bars decision-making based on a set of protected classes, and the technical methods of mitigating algorithmic bias. "If we're not careful," she said, "the vast majority of approaches proposed in machine learning literature might actually be illegal if they are ever tested in court."

We Robot 2020 was the first to be held outside the US, and chairs Florian Martin-Bariteau, Jason Millar, and Katie Szilagyi set out to widen its international character and diversity. When the pandemic hit, the resulting exceptional breadth of location of authors and discussants made it infeasible to ask everyone to pretend they were in Ottawa's time zone. The conference therefore has recorded the authors' and discussants' conversations as if live - which means that you, too, can experience the originals. Just follow the links. We Robot events not already linked here: 2013; 2015; 2016 workshop; 2017; 2018 workshop and conference; 2019 workshop and conference.


Illustrations: Our robot avatars attend the conference for us on the We Robot 2020 poster.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 31, 2020

Driving while invisible

The point is not whether it's ludicrous but whether it breaks the law.

Until Hannah Smethurst began speaking at this week's gikii event - the year's chance to mix law, digital rights, and popular culture - I had not realized just how many invisible vehicles there are in our books and films. A brief trawl turns up: Wonder Woman's invisible jet, Harry Potter's invisibility cloak and other invisibility devices, and James Bond's invisible Aston Martin. Do not trouble me with your petty complaints about physics. This is about the law.

Every gikii (see here for links to writeups of previous years) ranges from deeply serious-with-a-twist to silly-with-an-insightful undercurrent. This year's papers included the need for a fundamental rethink of how we regulate power (Michael Veale), the English* "bubble" law that effectively granted flatmates permanent veto power over each other's choice of sex partner (gikii founder Lilian Edwards), and the mistaken-identity frustrations of having early on used your very common name as your Gmail address (Jat Singh).

In this context, Smethurst's paper is therefore business as usual. As she explained, there is nothing in highway legislation that requires your car to be visible. The same is not true of number plates, which the law says must be visible at all times. But can you enforce it? If you can't see the car, how do you know you can't see the number plate? More uncertain is the highway code's requirement to indicate braking and turns when people don't know you're there; Smethurst suggested that a good lawyer could argue successfully that turning on the lights unexpectedly would dazzle someone. No, she said, the main difficulty is the dangerous driving laws. Well, that and the difficulty of getting insurance to cover the many accidents when people - pedestrians, cyclists, other cars - collide with it.

This raised the possibility of "invisibility lanes", an idea that seems like it should be the premise for a sequel to Death Race 2000. My overall conclusion: invisibility is like online anonymity. People want it for themselves, but not for other people - at least, not for other people they don't trust to behave well. If you want an invisible car so you can drive 100 miles an hour with impunity, I suggest a) you probably aren't safe to have one, and b) try driving across Kansas.

We then segued into the really important question: if you're riding an invisible bike, are *you* visible? (General consensus: yes, because you're not enclosed.)

On a more serious note, people have a tendency to laugh nervously when you mention that numerous jurisdictions are beginning to analyze sewage for traces of coronavirus. Actually, wastewater epidemiology, as this particular public health measure is known, is not a new surveillance idea born of just this pandemic, though it does not go all the way back to John Snow and the Broadwick Street pump. Instead, Snow plotted known cases on a map, and spotted the pump as the source of contagion when they formed a circle around it. Still, epidemiology did start with sewage.

In the decades since wastewater epidemiology was developed, some of its uses have definitely had an adversarial edge, such as establishing the level of abuse of various drugs and doping agents or particular diseases in a given area. The goal, however, is not supposed to be trapping individuals; instead it's to provide population-wide data. Because samples are processed at the treatment plant along with everyone else's, there's a reasonable case to be made that the system is privacy-preserving; even though you could analyze samples for an individual's DNA and exact microbiome, matching any particular sample to its owner seems unlikely.

However, Reuben Binns argued, that doesn't mean there are no privacy implications. Like anything segmented by postcode, the catchment areas defined for such systems are likely to vary substantially in the number of households and individuals they contain, and a lot may depend on where you put the collection points. This isn't so much an issue for the present purpose, which is providing an early-warning system for coronavirus outbreaks, but will be later, when the system is in place and people want to use it for other things. A small neighborhood with a noticeable concentration of illegal drugs - or a small section of an Olympic athletes village with traces of doping agents above a particular threshold - could easily find itself a frequent target of more invasive searches and investigations. Also, unless you have your own septic field, there is no opt-out.

Binns added this unpleasant prospect: even if this system is well-intentioned and mostly harmless, it becomes part of a larger "surveillant assemblage" whose purpose is fundamentally discriminatory: "to create distinctions and hierarchies in populations to treat them differently," as he put it. The direction we're going, eventually every part of our infrastructure will be a data source, for our own good.

This was also the point of Veale's paper: we need to stop focusing primarily on protecting privacy by regulating the use and collection of data, and start paying attention to the infrastructure. A large platform can throw away the data and still have the models and insights that data created - and the exceptional computational power to make use of it. All that infrastructure - there's your invisible car.

Illustrations: James Bond's invisible car (from Die Another Day).

*Correction: I had incorrectly identified this law as Scottish.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 24, 2020

The invisible Internet

The final session of this week's US Internet Governance Forum asked this question: what do you think Internet governance will look like five, ten, and 25 years from now?

Danny Weitzner, who was assigned 25 years, started out by looking back 25 years to 1995, and noted that by and large we have the same networks, and he therefore thinks we will have largely the same networks in 2045. He might have - but didn't - point out how many of the US-IGF topics were the same ones we were discussing in 1995: encryption and law enforcement access, control of online content, privacy, and cyber security. The encryption panel was particularly nostalgic; it actually featured three of the same speakers I recall from the mid-1990s on the same topic. The online content one owed its entertainment value to the presence of one of the original authors of Section 230, the liability shield written into the 1996 Communications Decency Act. There were newcomers: 5G; AI, machine learning, and big data; and some things to do with the impact of the pandemic.

As Laura DeNardis then said, looking back to the past helps when thinking about the future, if only to understand how much change can happen in that time. Through that lens, although the Internet has changed enormously in 25 years in many ways the *debates* and *issues* have barely altered - they're just reframed. But here's your historical reality: 25 years ago we were reading Usenet newsgroups to find interesting websites and deploring the sight of the first online ads.

This is a game anyone can play, and so we will. We will try to avoid seeing the November US presidential election as a hinge.

The big change of the last ten years is the transformation of every Internet debate into a debate about a few huge companies, none of which were players in the mid-1990s. The rise of the mobile Internet was predicted by 2000, but it wasn't until 2007 and the arrival of the iPhone that it became a mass-market reality and began the merger of the physical and online worlds, followed by machine learning and AI as the next big wave. Now, as DeNardis correctly said, we're beginning to see the Internet moving into the biological world. She predicted, therefore, that the Internet will be both very small (the biological cellular level) and very large (Vint Cerf's galactic Internet). "The Internet will have to move out of communications issues and into environmental policy, consumer safety, and health," she said. Meanwhile, Danny Weitzner suggested that data scientists will become the new priests - almost certainly true, because if we do nothing to rein in technology they will be the people whose algorithms determine how decisions are made.

But will we really take no control? The present trend is toward three computing power blocs: China, the United States, and the EU. Chinese companies are beginning to move into the West, either by operating (such as TikTok, which US president Donald Trump has mooted banning) or by using their financial clout to push Westerners to conform to their values. The EU is only 28 years old (dating from the Maastricht Treaty), but in that time has emerged as the only power willing to punish US companies by making them pay taxes, respect privacy law, or accept limits on acquisitions. Will it be as willing to take on Chinese companies if they start to become equally dominant in the West and as willing to violate the fundamental rights enshrined in data protection law?

In his 1998 book, The Invisible Computer, usability pioneer Donald Norman predicted that computers would become invisible, embedded inside all sorts of devices, like electric motors before them. Yesterday, Brenda Leong made a similar prediction by asking the AI session how we will think about robots when they've become indistinguishable. Her analogy: the Internet itself, which in the 1990s was something you had to "go to" by dialing up and waiting for modems to connect, but which somewhere around 2010 began simply to be wherever you go, there you are.

So my prediction for 25 years from now is that there will effectively be no such thing as today's "Internet governance"; it will have disappeared into every other type of governance, though engineering and standards bodies will still work to ensure that the technical underpinnings remain robust and reliable. I'd like to think that increasingly technical standards will be dominated by climate change, so that emerging technologies that, like cryptocurrencies, use more energy than entire countries, will be sent back to the drawing board because someone will do the math at the design stage.

Today's debates will merge with their offline counterparts, just as data protection law no longer differentiates between paper-based and electronic data. As the biological implants DeNardis mentioned - and Andrea Matwyshyn has been writing about since 2016 - come into widespread use, they will be regulated as health care. We will regulate Internet *companies*, but regulating Facebook (in Western countries) is not governing the Internet.

Many conflicts will persist. Matwyshyn's Internet of Bodies is the perfect example, as copyright laws written for the entertainment industry are invoked by medical device manufacturers. A final prediction, therefore: net.wars is unlikely to run out of subjects in my lifetime.


Illustrations: A piece of the future as seen at the 1964 New York World's Fair (by Doug Coldwell).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 27, 2020

The to-do list

With so much insecurity and mounting crisis, there's no time now to think about a lot of things that will matter later. But someday there will be. And at that time...

Remember that health workers - doctors, nurses, technicians, ambulance drivers - matter just as much every day as they do during a crisis. Six months after everyone starts feeling safe and starts to forget, remind them how much we owe health workers.

The same goes for other essential services workers: the ones who keep the food stores open and the garbage and recycling being picked up, who harvest the crops, catch the fish, and raise and slaughter the animals and birds, who drive the trucks and supply the stores, and deliver post, takeout, and packages from Amazon et al., and keep the utilities running, and the people who cook the takeout food, and clean the hospitals and streets. Police. Fire. Pharmacists. Journalists. Doubtless scores of other people doing things I haven't thought of. In developed countries, we forget how our world runs until something breaks, evidenced by Steve Double (Con - St Austell and Newquay), the British MP who said on Monday, "One of the things that the current crisis is teaching us is that many people who we considered to be low-skilled are actually pretty crucial to the smooth running of our country - and are, in fact, recognised as key workers." (Actually, a lot of us knew this.)

Stop taking travel, particularly international travel, for granted. Even when bans and lockdowns are eventually fully lifted, it's likely that pre-boarding and immigration health checks will become as routine as security scanning and showing ID have since 2001. Even if governments don't mandate it, the public will demand it: who will sit crammed next to a random stranger unless they can believe it's safe?

Demand better travel conditions. Airlines are likely to find the population is substantially less willing to be crammed in as tightly as we have been.

Along those lines, I'm going to bet that today's children and young people, separated from older relatives by travel bans and lockdowns in this crisis, will think very differently about moving across the country or across the world, where they might be cut off in a future health crisis. Families and friends have been separated before by storms, earthquakes, fires, and floods - but travel links have rarely been down this far for this long - and never so widely. The idea of travel as conditional has been growing through security and notification requirements (I'm thinking of the US's ESTA requirements), but health will bring a whole new version of requiring permission.

Think differently about politicians. For years now it's been fashionable for people to say it doesn't matter who gets in because "they're all the same". You have only to compare US governors' different reactions to this crisis to see how false that is. As someone said on Twitter the other day, when you elect a president you are choosing a crisis manager, not a friend or favorite entertainer.

Remember the importance of government and governance. The US's unfolding disaster owes much of its amplitude to the fact that the federal government has become, as Ed Yong, writing in The Atlantic, calls it, "a ghost town of scientific expertise".

Stop asking "How much 'excess' can we trim from this system?" to asking "What surge capacity do we need, and how can we best ensure it will be available?" This will apply not only to health systems, hospitals, and family practices but to supply chains. The just-in-time fad of the 1990s and the outsourcing habits of the 2000s have left systems predictably brittle and prone to failure. Much of the world - including the US - depends on China to supply protective masks rather than support local production. In this crisis, Chinese manufacturing shut down just before every country in the world began to realize it had a shortage. Our systems are designed for short, sharp local disasters, not expanding global catastrophes where everyone needs the same supplies.

Think collaboratively rather than competitively. In one of his daily briefings this week, New York State governor Andrew Cuomo said forthrightly that sending ventilators to New York now, as its crisis builds, did not mean those ventilators wouldn't be available for other places where the crisis hasn't begun yet. It means New York can send them on when the need begins to drop. More ventilators for New York now is more ventilators for everyone later.

Ensure that large companies whose policies placed their staff at risk during this time are brought to account.

Remember these words from Nancy Pelosi: "And for those who choose prayer over science, I say that science is the answer to our prayers."

Reschedule essential but timing-discretionary medical care you've had to forego during the emergency. Especially, get your kids vaccinated so no one has to fight a preventable illness and an unpreventable one at the same time.

The final job: remember this. Act to build systems so we are better prepared for the next one before you forget. It's only 20 years since Y2K, and what people now claim is that "nothing happened"; the months and person-millennia that went into remediating software to *make* "nothing" happen have faded from view. If we can remember old movies, we can remember this.

Illustrations: Dooley Wilson, singing "As Time Goes by", from Casablanca (1942).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 20, 2020

The beginning of the world as we don't know it

Oddly, the most immediately frightening message of my week was the one from the World Future Society, subject line "URGENT MESSAGE - NOT A DRILL". The text began, "The World Future Society over its 60 years has been preparing for a moment of crisis like this..."

The message caused immediate flashbacks to every post-disaster TV show and movie, from The Leftovers (in which 2% of the world's population mysteriously vanishes) to The Last Man on Earth (in which everyone who isn't in the main cast has died of a virus). In my case, it also reminded me, unfortunately, of the very detailed scenarios I saw posted in the late 1990s to the comp.software.year-2000 Usenet newsgroup, in which survivalists were certain that the Millennium Bug would cause the collapse of society. In one scenario I recall, that collapse was supposed to begin with the banks failing, pass through food riots and cities burning, and end with four-fifths of the world's population dead: the end of the world as we know it (TEOTWAWKI). So what I "heard" in the World Future Society's tone was that the "preppers", who built bunkers and stored sacks of beans, rice, dried meat, and guns, were finally right and this was their chance to prove it.

Naturally, they meant no such thing. What they *did* mean was that futurists have long thought about the impact of various types of existential risks, and that what they want is for as many people as possible to join their effort to 1) protect local government and health authorities, 2) "co-create back-up plans for advanced collaboration in case of societal collapse", and 3) collaborate on possible better futures post-pandemic. Number two still brings those flashbacks, but I like the first goal very much, and the third is on many people's minds. If you want to see more, it's here.

It was one of the notable aspects of the early Internet that everyone looked at what appeared to be a green field for development and sought to fashion it in their own desired image. Some people got what they wanted: China, for example, defying Western pundits who claimed it was impossible, successfully built a controlled national intranet. Facebook, though it came along much later, is basically all the Internet people know in countries like Ghana and the Philippines, thanks to zero-rating deals with local telcos for its Free Basics - a phenomenon Global Voices calls "digital colonialism". Something like that mine-to-shape thinking is visible here.

I don't think WFS meant to be scary; what they were saying is in fact what a lot of others are saying, which is that when we start to rebuild after the crisis we have a chance - and a need - to do things differently. At Wired, epidemiologist Larry Brilliant tells Steven Levy he hopes the crisis will "cause us to reexamine what has caused the fractional division we have in [the US]".

At Singularity University's virtual summit on COVID-19 this week, similar optimism was on display (some of it probably unrealistic, like James Ehrlich's land-intensive sustainable villages). More usefully, Jamie Metzl compared the present moment to 1941, when US president Franklin Delano Roosevelt began to imagine, in the Atlantic Charter, how the world might be reshaped once the war ended. Today, Metzl said, "We are the beneficiaries of that process." Therefore, like FDR, we should start now to think about how we want to shape our upcoming different geopolitical and technological future. Like net.wars last week and John Naughton at the Guardian, Metzl is worried that the emergency powers we grant today will be hard to dislodge later. Opportunism is open to all.

I would guess that the people who think it's better to bail out businesses than to support struggling people also fear that the emergency support measures being passed in multiple countries will become permanent. One of the most surreal aspects of a surreal time is that in the space of a few weeks actions that a month ago were considered too radical to live are suddenly happening: universal basic income, grounding something like 80% of aviation, even support for *some* limited free health care and paid sick leave in the US.

The crisis is also exposing a profound shift in national capabilities. China could build hospitals in ten days; the US, which used to be able to do that sort of thing, is instead the object of charity from Chinese billionaire Alibaba founder Jack Ma, who sent over half a million test kits and 1 million face masks.

Meanwhile, all of us, with a few billionaire exceptions, are turning to the governments we held in so little regard a few months ago to lead, provide support, and solve problems. Libertarians who want to tear governments down and replace all their functions with free-market interests are exposed as a luxury none of us can afford. Not that we ever could; read Paulina Borsook's 1996 Mother Jones article Cyberselfish if you doubt this.

"It will change almost everything going forward," New York State governor Andrew Cuomo said of the current crisis yesterday. Cuomo, who is emerging as one of the best leaders the US has in an emergency, and his counterparts are undoubtedly too busy trying to manage the present to plan what that future might be like. That is up to us to think about while we're sequestered in our homes.


Illustrations: A local magnolia tree, because it *is* spring.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 31, 2020

Dirty networks

We rarely talk about it this way, but sometimes what makes a computer secure is a matter of perspective. Two weeks ago, at the CPDP-adjacent Privacy Camp, a group of Russians explained seriously why they trust Gmail, WhatsApp, and Facebook.

"If you remove these tools, journalism in Crimea would not exist," said one. Google's transparency reports show that the company has never given information on demand to the Russian authorities.

That is, they trust Google not because they *trust* Google but because using it probably won't land them in prison, whereas their indigenous providers are stoolies in real time. Similarly, journalists operating in high-risk locations may prefer WhatsApp, despite its Facebookiness, because they can't risk losing their new source by demanding a shift to unfamiliar technology, and the list of shared friends helps establish the journalist's trustworthiness. The decision is based on a complex set of context and consequences, not on a narrow technological assessment.

So, now. Imagine you lead a moderately-sized island country that is about to abandon its old partnerships, and you must choose whether to allow your telcos to buy equipment from a large Chinese company, which may or may not be under government orders to build in surveillance-readiness. Do you trust the Chinese company? If not, who *do* you trust?

In the European Parliament, during Wednesday's pro forma debate on the UK's Withdrawal Agreement and emotional farewell, Guy Verhofstadt, the parliament's Brexit coordinator, asked: "What is in fact threatening Britain's sovereignty most - the rules of our single market or the fact that tomorrow they may be planting Chinese 5G masts in the British islands?"

He asked because back in London Boris Johnson was announcing he would allow Huawei to supply "non-core" equipment for up to 35% (measured how?) of the UK's upcoming 5G mobile network. The US, in the form of a Newt Gingrich, seemed miffed. Yet last year Brian Fung noted at the Washington Post ($) the absence of US companies among the only available alternatives: ZTE (China), Nokia (Finland), and Ericsson (Sweden). The failure of companies like Motorola and Lucent to understand, circa 2000, the importance of common standards to wireless communications - a failure Europe did not share - cost them their early lead. Besides, Fung adds, people don't trust the US like they used to, given Snowden's 2013 revelations and the unpredictable behavior of the US's current president. So, the question may be less "Do you want spies with that?" and more, "Which spy would you prefer?"

A key factor is cost. Huawei is both cheaper *and* the technology leader, partly, Alex Hern writes at the Guardian, because its government grants it subsidies that are illegal elsewhere. Hern calls the whole discussion largely irrelevant, because *actually* Huawei equipment is already embedded. Telcos - or rather, we - would have to pay to rip it out. A day later, BT proved him right: it forecast that bringing the Huawei percentage down will cost £500 million.

All of this discussion has been geopolitical: Johnson's fellow Conservatives are unhappy; US secretary of state Mike Pompeo doesn't want American secrets traveling through Huawei equipment.

Technical expertise takes a different view. Bruce Schneier, for example, says: yes, Huawei is not trusted, and yes, the risks are real, but barring Huawei doesn't make the network secure. The US doesn't even want a secure network, if that means a network it can't spy into.

In a letter to The Times, Martyn Thomas, a fellow at the Royal Academy of Engineering, argues that no matter who supplies it the network will be "too complex to be made fully secure against an expert cyberattack". 5G's software-defined networks will require vastly more cells and, crucially, vastly more heterogeneity and complexity. You have to presume a "dirty network", Sue Gordon, then (US) Principal Deputy Director of National Intelligence, warned in April 2019. Even if Huawei is barred from Britain, the EU, and the US, it will still have a huge presence in Africa, which it's been building for years, and probably Latin America.

There was a time when a computer was a wholly-owned system built by a single company that also wrote and maintained its software; if it was networked it used that company's proprietary protocols. Then came PCs, and third-party software, and the famously insecure Internet. 5G, however, goes deeper: a network in which we trust nothing and no one, not just software but chips, wires, supply chains, and antennas, which Thomas explains "will have to contain a lot of computer components and software to process the signals and interact with other parts of the network". It's impossible to control every piece of all that; trying would send us into frequent panics over this or that component or supplier (see for example Super Micro). The discussion Thomas would like us to have is, "How secure do we need the networks to be, and how do we intend to meet those needs, irrespective of who the suppliers are?"

In other words, the essential question is: how do you build trusted communications on an untrusted network? The Internet's last 25 years have taught us a key piece of the solution: encrypt, encrypt, encrypt. Johnson, perhaps unintentionally, has just made the case for spreading strong, uncrackable encryption as widely as possible. To which we can only say: it's about time.
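
For the technically curious, here is a minimal sketch of that principle in Python, using the third-party cryptography library's Fernet recipe. The key handling and messages are invented for illustration, and real messaging systems add key exchange, authentication, and forward secrecy on top; the point is only that an untrusted carrier sees ciphertext, not content.

    # Minimal sketch: content protected end-to-end, regardless of who runs the network.
    # Requires the third-party "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    # In a real protocol the endpoints would negotiate this key securely;
    # here it is simply assumed to be shared by sender and recipient only.
    key = Fernet.generate_key()
    sender = Fernet(key)
    recipient = Fernet(key)

    message = b"meet at the usual place"
    ciphertext = sender.encrypt(message)

    # This is all the network operators and equipment vendors get to see.
    print("on the wire:", ciphertext[:40], "...")

    # Only a holder of the key can recover the plaintext.
    print("at the far end:", recipient.decrypt(ciphertext).decode())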


Illustrations: The European Court of Justice, to mark the fact that on this day the UK exits the European Union.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 24, 2020

The inevitability narrative

new-22portobelloroad.jpg"We could create a new blueprint," Woody Hartzog said in a rare moment of hope on Wednesday at this year's Computers, Privacy, and Data Protection in a panel on facial recognition. He went on stress the need to move outside of the model of privacy for the last two decades: get consent, roll out technology. Not necessarily in that order.

A few minutes earlier, he had said, "I think facial recognition is the most dangerous surveillance technology ever invented - so attractive to governments and industry to deploy in many ways and so ripe for abuse, and the mechanisms we have so weak to confront the harms it poses that the only way to mitigate the harms is to ban it."

This week, a leaked draft white paper revealed that the EU is considering, as one of five options, banning the use of facial recognition in public places. In general, the EU has been pouring money into AI research, largely in pursuit of economic opportunity: if the EU doesn't develop its own AI technologies, the argument goes, Europe will have to buy them from China or the United States. Who wants to be sandwiched between those two?

This level of investment is not available to most of the world's countries, as Julia Powles elsewhere pointed out with respect to AI more generally. Her country, Australia, is destined to be a "technology importer and data exporter", no matter how the three-pronged race comes out. "The promises of AI are unproven, and the risks are clear," she said. "The real reason we need to regulate is that it imposes a dramatic acceleration on the conditions of the unrestrained digital extractive economy." In other words, the companies behind AI will have even greater capacity to grind us up as dinosaur bones and use the results to manipulate us to their advantage.

At this event last year there was a general recognition that, less than a year after the passage of the General Data Protection Regulation, it wasn't going to be an adequate approach to the growth of tracking through the physical world. This year, the conference is awash in AI to a truly extraordinary extent. Literally dozens of sessions: if it's not AI in policing, it's AI and data protection, ethics, human rights, algorithmic fairness, or AI embedded in autonomous vehicles. Hartzog's panel was one of at least half a dozen on facial recognition, which is AI plus biometrics plus CCTV and other cameras. As interesting are the omissions: in two full days I have yet to hear anything about smart speakers or Amazon Ring doorbells, both proliferating wildly in the soon-to-be non-EU UK.

These technologies are landing on us shockingly fast. This time last year, automated facial recognition wasn't even on the map. It blew up just last May, when Big Brother Watch pushed the issue into everyone's consciousness by launching a campaign to stop the police from using what is still a highly flawed technology. But we can't lean too heavily on the ridiculous - 98%! - inaccuracy of its real-world trials, because as it becomes more accurate it will become even more dangerous to anyone on the wrong list. Here, it has become clear that it's being rapidly followed by "emotional recognition", a build-out of technology pioneered 25 years ago at MIT by Rosalind Picard under the rubric "affective computing".
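
A quick, back-of-the-envelope illustration of why real-world trials produce such dismal numbers: scan a big crowd for a short watchlist and even an impressive-sounding algorithm flags mostly innocent people. Every figure below is invented for the sake of the arithmetic, not taken from any particular trial.

    # Base-rate arithmetic for face matching at a public event (all numbers assumed).
    crowd = 100_000              # faces scanned
    watchlist_present = 20       # people in the crowd actually on the list
    true_positive_rate = 0.90    # chance a listed person is correctly flagged
    false_positive_rate = 0.001  # chance an unlisted person is wrongly flagged

    true_alerts = watchlist_present * true_positive_rate
    false_alerts = (crowd - watchlist_present) * false_positive_rate
    share_false = false_alerts / (true_alerts + false_alerts)

    print(f"alerts: {true_alerts + false_alerts:.0f}, of which {share_false:.0%} are false")
    # With these assumptions roughly 85% of alerts point at the wrong person,
    # which is why headline accuracy figures say little about street deployment.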

"Is it enough to ban facial recognition?" a questioner asked. "Or should we ban cameras?"

Probably everyone here is carrying at least three cameras (pause to count: two on the phone, one on the laptop).

Everyone here is also conscious that last week, Kashmir Hill broke the story that the previously unknown, Peter Thiel-backed company Clearview AI had scraped 3 billion facial images off social media and other sites to create a database that enables its law enforcement customers to grab a single photo and get back matches from dozens of online sites. As Hill reminds us, companies like Facebook have been able to do this since 2011, though at the time - just eight and a half years ago! - this was technology that Google (though not Facebook) thought was "too creepy" to implement.

In the 2013 paper A Theory of Creepy, Omer Tene and Jules Polonetsky cite three kinds of "creepy" that apply to new technologies or new uses: it breaks traditional social norms; it shows the disconnect between the norms of engineers and those of the rest of society; or applicable norms don't exist yet. AI often breaks all three. Automated, pervasive facial recognition certainly does.

And so it seems legitimate to ask: do we really want to live in a world where it's impossible to go anywhere without being followed? "We didn't ban dangerous drugs or cars," has been a recurrent rebuttal. No, but as various speakers reminded, we did constrain them to become much safer. (And we did ban some drugs.) We should resist, Hartzog suggested, "the inevitability narrative".

Instead, the reality is that, as Lokke Moerel put it, "We have this kind of AI because this is the technology and expertise we have."

One panel pointed us at the Universal Guidelines for AI, and encouraged us to sign. We need that - and so much more.


Illustrations: Orwell's house at 22 Portobello Road, London, complete with CCTV camera.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 20, 2019

Humans in, bugs out

At the Guardian, John Naughton ponders our insistence on holding artificial intelligence and machine learning to a higher standard of accuracy than the default standard - that is, us.

Sure. Humans are fallible, flawed, prejudiced, and inconsistent. We are subject to numerous cognitive biases. We see patterns where none exist. We believe liars we like and distrust truth-tellers for picayune reasons. We dislike people who tell unwelcome truths and like people who spread appealing, though shameless, lies. We self-destruct, and then complain when we suffer the consequences. We evaluate risk poorly, fearing novel and recent threats more than familiar and constant ones. And on and on. In 10,000 years we have utterly failed to debug ourselves.

My inner failed comedian imagines the frustrated AI engineer muttering, "Human drivers kill 40,000 people in the US alone every year, but my autonomous car kills *one* pedestrian *one* time, and everybody gets all 'Oh, it's too dangerous to let these things out on the roads'."

New always scares people. But it seems natural to require new systems to do better than their predecessor; otherwise, why bother?

Part of the problem with Naughton's comparison is that machine learning and AI systems aren't really separate from us; they're humans all the way down. We create the algorithms, code the software, and allow them to mine the history of flawed human decisions, from which they make their new decisions. If humans are the problem with human-made decisions, then we are as much or more the problem with machine-made decisions.

I also think Naughton's frustrated AI researchers have a few details the wrong way round. While it's true that self-driving cars have driven millions of miles with very few deaths and human drivers were responsible for 36,560 deaths in 2018 in the US alone, it's *also* true that it's still rare for self-driving cars to be truly autonomous: Human intervention is still required startlingly often. In addition, humans drive in a far wider variety of conditions and environments than self-driving cars are as yet authorized to do. The idea that autonomous vehicles will be vastly safer than human drivers is definitely an industry PR talking point, but the evidence is not there yet.

We'd also point out that a clear trend in AI books this year has been to point out all the places where "automated" systems are really "last-mile humans". In Ghost Work, Mary L. Gray and Siddharth Suri document an astonishing array of apparently entirely computerized systems where remote humans intervene in all sorts of unexpected ways through task-based employment, while in Behind the Screen Sarah T. Roberts studies the specific case of the raters of online content. These workers are largely invisible (hence "ghost") because the companies who hire them, via subcontractors, think it sounds better to claim their work is really AI.

Throughout "automation's last mile", humans invisibly rate online content, check that the Uber driver picking you up is who they're supposed to be, and complete other tasks to hard for computers. As Janelle Shane writes in You Look Like a Thing and I Love You, the narrower the task you give an AI the smarter it seems. Humans are the opposite: no one thinks we're smart while we're getting bored by small, repetitive tasks; it's the creative struggle of finding solutions to huge, complex problems that signals brilliance. Some of AI's most ardent boosters like to hope that artificial *general* intelligence will be able to outdo us in solving our most intractable problems, but who is going to invent that? Us, if it ever happens (and it's unlikely to be soon).

There is also a problem with scale and replication. While a single human decision may affect billions of people, there is always a next time when it will be reconsidered and reinterpreted by a different judge who takes into account differences of context and nuance. Humans have flexibility that machines lack, while computer errors can be intractable, especially when bugs are produced by complex interactions. The computer scientist Peter Neumann has been documenting the risks of over-relying on computers for decades.

However, a lot of our need for computers to prove themselves to a superhuman standard is social, cultural, and emotional. AI adds a layer of remoteness and removes some of our sense of agency. With humans, we think we can judge character, talk them into changing their mind, or at least get them to explain the decision. In the just-linked 2017 event, the legal scholar Mireille Hildebrandt differentiated between law - flexible, reinterpretable, modifiable - and administration, which is what you get if a rules-based expert computer system is in charge. "Contestability is the heart of the rule of law," she said.

At the very least, we hope that the human has enough empathy to understand the impact their decision will have on their fellow human, especially in matters of life and death.

We give the last word to Agatha Christie, who decisively backed humans in her 1969 book, Hallowe'en Party, in which alter-ego Ariadne Oliver tells Hercule Poirot, "I know there's a proverb which says, 'To err is human' but a human error is nothing to what a computer can do if it tries."


Illustrations: Artist Dominic Wilcox's concept self-driving car (as seen at the Science Museum, July 2019).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 15, 2019

A short history of the future

The years between 1995 and 1999 were a time when predicting the future was not a spectator sport. The lucky prognosticators gained luster from having their best predictions quoted and recirculated. The unlucky ones were often still lucky enough to have their worst ideas forgotten. I wince, personally, to recall (I don't dare actually reread) how profoundly I underestimated the impact of electronic commerce, although I can more happily point to predicting that new intermediaries would be the rule, not the disintermediation that everyone else seemed obsessed with. Two things sparked this outburst: the uncertainty of fast-arriving technological change, and the onrushing new millennium.

Those early books fell into several categories. First was travelogues: the Internet for people who never expected to go there (the joke would be on them except that the Old Net these books explored mostly doesn't exist any more, nor the Middle Net after it). These included John Seabrook's Deeper, Melanie McGrath's Hard, Soft, and Wet, and JC Herz's Surfing on the Internet. Second was futurology and techno-utopianism: Nicholas Negroponte's Being Digital, and Tips for Time Travellers, by Peter Cochrane, then head of BT Research. There were also well-filled categories of now-forgotten how-to books and, as now, computer crime. What interested me, then as now, was the conflict between old and new: hence net.wars-the-book and its sequel, From Anarchy to Power. The conflicts those books cover - cryptography, copyright, privacy, censorship, crime, pornography, bandwidth, money, and consumer protection - are ones we are still wrangling over.

A few were simply contrarian: in 1998, David Brin scandalized privacy advocates with The Transparent Society, in which he proposed that we should embrace surveillance, but ensure that it's fully universal. Privacy, I remember him saying at that year's Computers, Freedom, and Privacy, favors the rich and powerful. Today, instead, privacy is as unequally distributed as money.

Among all these, one book had its own class: Frances Cairncross's The Death of Distance. For one thing, at that time writing about the Internet was almost entirely an American pastime (exceptions above: Cochrane and McGrath). For another, unlike almost everyone else, she didn't seem to have written her book by hanging around either social spaces on the Internet itself or a technology lab or boardroom where next steps were being plotted out and invented. Most of us wrote about the Internet because we were personally fascinated by it. Cairncross, a journalist with The Economist, studied it like a bug pinned to cardboard under a microscope. What was this bug? And what could it *do*? What did it mean for businesses and industries?

To answer those questions she did - oh, think of it - *research*. Not the kind that involves reading Usenet for hours on end, either: real stuff on industries and business models.

"I was interested in the economic impact it was going to have," she said the other day. Cairncross's current interest is the future of local news; early this year she donated her name to the government-commissioned review of that industry. Ironically, both because of her present interest and because of her book's title, she says the key thing she missed in considering the impact of collapsing communications costs and therefore distance was the important of closeness and the complexity of local supply chains. It may seem obvious in hindsight, now that three of the globe's top six largest companies by market capitalization are technology giants located within 15 miles of each other in Silicon Valley (the other two are 800 miles north, in Seattle).

The person who got that right was Michael Porter, who argued in 1998 that clusters mattered. Clusters allow ecosystems to develop to provide services and supplies, as well as attract skills and talent.

Still, Cairncross was right about quite a few things. She correctly predicted that the inequality of wages would grow within countries (and, she thought, narrow between countries); she was certainly right about the ongoing difficulty of enforcing laws restricting the flow of information - copyright, libel, bans on child abuse imagery; the increased value of brands; and the concentration that would occur in industries where networks matter. On the other hand, she suggested people would accept increased levels of surveillance in return for reduced crime; when she was writing, the studies showing cameras were not effective were not well-known. Certainly, we've got the increased surveillance either way.

More important, she wrote about the Internet in a way that those of us entranced with it did not, offering a dispassionate view even where she saw - and missed - the same trends everyone else did. Almost everyone missed how much mobile would take over. It wasn't exactly an age thing; more that if you came onto the Internet with big monitors and real keyboards it was hard to give them up - and if you remember having to wait to do things until you were in the right location, your expectations are lower.

I think Cairncross's secret, insofar as she had one, was that she didn't see the Internet, as so many of us did, as a green field she could remake in her own desired image. There's a lesson there for would-be futurologists: don't fall in love with the thing whose future you're predicting, just like they tell journalists not to sleep with the rock stars.


Illustrations: Late 1990s books.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 1, 2019

Nobody expects the Spanish Inquisition

So can we stop now with the fantasy that data can be anonymized?

Two things sparked this train of thought. The first was seeing that researchers at the Mayo Clinic have shown that commercial facial recognition software accurately identified 70 of a sample set of 84 (that's 83%) MRI brain scans. For ten additional subjects, the software placed the correct identification in its top five choices. Yes, on reflection, it's obvious that you can't scan a brain without including its container, and that bone structure defines a face. It's still a fine example of data that is far more revealing than you expect.

The second was when Phil Booth, the executive director of medConfidential, on Twitter called out the National Health Service for weakening the legal definition of "anonymous" in its report on artificial intelligence (PDF).

In writing the MRI story for the Wall Street Journal (paywall), Melanie Evans notes that people have also been reidentified from activity patterns captured by wearables, a cautionary tale now that Google's owner, Alphabet, seeks to buy Fitbit. Cautionary, because the biggest contributor to reidentifying any particular dataset is other datasets to which it can be matched.

The earliest scientific research on reidentification I know of was Latanya Sweeney's 1997 success in identifying then-governor William Weld's medical record by matching the "anonymized" dataset of records of visits to Massachusetts hospitals against the voter database for Cambridge, which anyone could buy for $20. Sweeney has since found that 87% of Americans can be matched from just their gender, date of birth, and zip code. More recently, scientists at Louvain and Imperial College found that just 15 attributes can identify 99.8% of Americans. Scientists have reidentified individuals from anonymized shopping data, and by matching mobile phone logs against transit trips. Combining those two datasets identified 95% of the Singaporean population in 11 weeks; add GPS records and you can do it in under a week.
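
The mechanics behind these studies are mundane: a linkage attack is essentially a database join on whatever quasi-identifiers the two datasets share. A toy sketch in Python follows (using pandas; every record, column name, and value below is invented for illustration).

    # Toy linkage attack: re-identify an "anonymized" release by joining it
    # to a public register on shared quasi-identifiers. Requires pandas.
    import pandas as pd

    # "Anonymized" health data: names stripped, quasi-identifiers left in.
    health = pd.DataFrame([
        {"zip": "02139", "birth_date": "1945-07-31", "sex": "M", "diagnosis": "hypertension"},
        {"zip": "02139", "birth_date": "1982-03-14", "sex": "F", "diagnosis": "asthma"},
    ])

    # Public register (think: a purchasable voter roll) with names attached.
    register = pd.DataFrame([
        {"name": "W. Weld",   "zip": "02139", "birth_date": "1945-07-31", "sex": "M"},
        {"name": "A. Nother", "zip": "02139", "birth_date": "1982-03-14", "sex": "F"},
    ])

    # The "attack" is nothing more exotic than an inner join.
    reidentified = health.merge(register, on=["zip", "birth_date", "sex"])
    print(reidentified[["name", "diagnosis"]])
    # Where the combination of zip, birth date, and sex is unique - as it is
    # for most Americans - each medical row now has a name attached.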

This sort of thing shouldn't be surprising any more.

The legal definition that Booth cited is Recital 26 of the General Data Protection Regulation, which specifies in a lot more detail how to assess the odds ("all the means likely to be used", "account should be taken of all objective factors") of successful reidentification.

Instead, here's the passage he highlighted from the NHS report as defining "anonymized" data (page 23 of the PDF, 44 of the report): "Data in a form that does not identify individuals and where identification through its combination with other data is not likely to take place."

I love the "not likely". It sounds like one of the excuses that's so standard that Matt Blaze put them on a bingo card. If you asked someone in 2004 whether it was likely that their children's photos would be used to train AI facial recognition systems that in 2019 would be used to surveil Chinese Muslims and out pornography actors in Russia. And yet here we are. You can never reliably predict what data will be of what value or to whom.

At this point, until proven otherwise it is safer to assume that there really is no way to anonymize personal data and make it stick for any length of time. It's certainly true that in some cases the sensitivity of any individual piece of data - say your location on Friday at 11:48 - vanishes quickly, but the same is not true of those data points when aggregated over time. More important, patient data is not among those types and never will be. Health data and patient information are sensitive and personal not just for the life of the patient but for the lives of their close relatives on into the indefinite future. Many illnesses, both mental and physical, have genetic factors; many others may be traceable to conditions prevailing where you live or grew up. Either way, your medical record is highly revealing - particularly to insurance companies interested in minimizing their risk of payouts or an employer wishing to hire only robustly healthy people - about the rest of your family members.

Thirty years ago, when I was first encountering large databases and what happens when you match them together, I came up with a simple privacy-protecting rule: if you do not want the data to leak, do not put it in the database. This still seems to me definitive - but much of the time we have no choice.

I suggest the following principles and assumptions.

One: Databases that can be linked, will be. The product manager's comment Ellen Ullman reported in 1997 still pertains: "I've never seen anyone with two systems who didn't want us to hook them together."

Two: Data that can be matched, will be.

Three: Data that can be exploited for a purpose you never thought of, will be.

Four: Stop calling it "sharing" when the entities "sharing" your personal data are organizations, especially governments or commercial companies, not your personal friends. What they're doing is *disclosing* your information.

Five: Think collectively. The worst privacy damage may not be to *you*.

The bottom line: we have now seen so many examples of "anonymized" data that can be reidentified that the claim that any dataset is anonymized should be considered as extraordinary a claim as saying you've solved Brexit. Extraordinary claims require extraordinary proof, as the skeptics say.

Addendum: if you're wondering why net.wars skipped the 50th anniversary of the first ARPAnet connection: first of all, we noted it last week; second of all, whatever headline writers think, it's not the 50th anniversary of the Internet, whose beginnings, as we wrote in 2004, are multiple. If you feel inadequately served, I recommend this from 2013, in which some of the Internet's fathers talk about all the rules they broke to get the network started.


Illustrations: Monty Python performing the Spanish Inquisition sketch in 2014 (via Eduardo Unda-Sanzana at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 25, 2019

When we were

"These people changed the world," said Jeff Wilkins, looking out across a Columbus, Ohio ballroom filled with more than 400 people. "And they know it, and are proud of it."

At one time, all this was his.

Wilkins was talking about...CompuServe, which he co-founded in 1969. How does it happen, he asked, that more than 400 people show up to celebrate a company that hasn't really existed for the last 23 years? I can't say, but a group of people happier to see each other (and random outsiders) again would be hard to find. "This is the only reunion I go to," one woman said.

It's easy to forget - or to never have known - CompuServe's former importance. Circa 1993, where business cards and slides now display a Twitter handle, they showed a numbered CompuServe ID. Including mine (70007,5537) at the end of a Guardian article led a reader to complain that I should instead promote the small ISPs it would kill when broadband arrived. In 1994, Aerosmith released a single on CompuServe, the first time a major label tried online distribution. It probably took five hours to download.

In Wilkins' story, he was studying electrical engineering at the University of Arizona when his father-in-law asked for help with data processing for his new insurance company. Wilkins and fellow grad students Sandy Trevor, John Goltz, Larry Shelley, and Doug Chinnock, soon relocated to Columbus. It was, Wilkins said, Shelley who suggested starting a time-sharing company - "or should I say cloud computing?" Wilkins quipped, to applause and cheers.

Yes, he should. Everything new is old again.

In time-sharing, the fledgling company competed with GE and IBM. The information service started in 1979, as a way to occupy the computers during the empty evenings when the businesses had gone home. For the next 20 years, CompuServers invented everything for themselves: "GO" navigation commands, commercial email (first customer: HJ Heinz), live chat ("CB Simulator"), news wires, online games and virtual worlds (partnering with Fujitsu on a graphical MUD), shopping... The now-ubiquitous GIF was the brainchild of Steve Wilhite (it's pronounced "JIF"). The legend of CompuServe inventions is kept alive by Sandy Trevor and Dave Eastburn, whose Nuvocom "software archeology" business holds archives that have backed expert defense against numerous patent claims on technologies that CompuServe provably pioneered.

A panel reminisced about the CIS shopping mall. "We had an online stockbroker before anyone else thought about it," one said. Another remembered a call asking for a 30-minute meeting from the then-CEO of the nationwide flowers delivery service FTD. "I was too busy." (The CEO was Meg Whitman.) For CompuServe's 25th anniversary, the mall's travel agency collaborated on a three-day cruise with, as invited guests, the film critic Roger Ebert, who disseminated his movie reviews through the service and hosted the "Ask Roger Ebert" section in the Movies Forum, and his wife, Chaz. "That may have been the peak."

Mall stores paid an annual fee; curation ensured there weren't too many of any one category of store. Banners advertising products were such a novelty at the time - and often the liveliest, most visually attractive thing on the page - that as many as 25% of viewers clicked on them. Today, Amazon takes a percentage of transactions instead. "If we could have had a universal shopping cart, like Amazon," lamented one, "what might have been?"

Well, what? Could CompuServe now be under threat of a government-mandated breakup to separate its social media business, search, cloud provider, and shopping? Both CompuServe and AOL, whose speed to embrace graphical interfaces and aggressive marketing led it to first outstrip and then buy and dismantle CompuServe in the 1990s, would have had to cannibalize their existing businesses. Used to profits from access fees, both resisted the Internet's monthly subscription model.

One veteran openly admitted how profoundly he underestimated the threat of the Internet after surveying the rickety infrastructure designed by/for academics and students. "I didn't think that the Internet could survive in the reality of a business..." Instead, the information services saw their competition as each other. A contemporary view of the challenges is visible in this 1995 interview with Barry Berkov, the vice-president in charge of CIS.

However, CompuServe's closed approach left no opening for individuals' self-expression. The 1990s rising Internet stars, Geocities and MySpace, were all about that, as are today's social media.

So many shifts have changed social media since then: from topic-centered to person-centered forums, from proprietary to open to centralized, from dial-up modems to pervasive connections, the massive ramp-up of scale and, mobile-fueled, speed, along with the reconfiguration of business models and technical infrastructure. Some things have degraded: past postings on Twitter and Facebook are much harder to find, and unwanted noise is everywhere. CompuServe would have had to navigate each of those shifts without error. As we know now, they didn't make it.

And yet, for 20-odd years, a company of early 20-somethings 2,500 miles from Silicon Valley invented a prototype of today's world, at first unaware of the near-simultaneous first ARPAnet connection, the beginnings of the network they couldn't imagine would ever be trustworthy enough for businesses and governments to rely on. They may yet be proven right about that.


Illustrations: Jonathan Zittrain's mockup of the CompuServe welcome screen (left, with thanks) next to today's iPhone showing how little things have changed; the reunion banner.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 13, 2019

Purposeful dystopianism

A university comparative literature class on utopian fiction taught me this: all utopias are dystopias underneath. I was reminded of this at this week's Gikii, when someone noted the converse, that all dystopias contain within themselves the flaw that leads to their destruction. Of course, I also immediately thought of the bare patch on Smaug's chest in The Hobbit, because at Gikii your law and technology come entangled with pop culture. (Write-ups of past years: 2018; 2016; 2014; 2013; 2008.)

Granted, as was pointed out to me, fictional utopias would have no dramatic conflict without dystopian underpinnings, just as dystopias would have none without their misfits plotting to overcome. But the context for this subdiscussion was the talk by Andres Guadamuz, which he began by locating "peak Cyber-utopianism" at 2006 to 2010, when Time magazine celebrated the power the Internet had brought each of us, Wikileaks was doing journalism, bitcoin was new, and social media appeared to have created the Arab Spring. "It looked like we could do anything." (Ah, youth.)

Since then, serially, every item on his list has disappointed. One startling statistic Guadamuz cited: streaming now creates more carbon emissions than airplanes. Streaming online video generates as much carbon dioxide per year as Belgium; bitcoin uses as much energy as Austria. By 2030, the Internet is projected to account for 20% of all energy consumption. Cue another memory, from 1995, when MIT Media Lab founder Nicholas Negroponte was feted for predicting in Being Digital that wired and wireless would switch places: broadcasting would move to the Internet's series of tubes, and historically wired connections such as the telephone network would become mobile and wireless. Meanwhile, all physical forms of information would become bits. No one then queried the sense of doing this. This week, the lab Negroponte was running then is in trouble, too. This has deep repercussions beyond any one institution.

Twenty-five years ago, in Tainted Truth, journalist Cynthia Crossen documented the extent to which funders get the research results they want. Successive generations of research have backed this up. What the Media Lab story tells us is that they also get the research they want - not just, as in the cases of Big Oil and Big Tobacco, the *specific* conclusions they want promoted but the research ecosystem. We have often told the story of how the Internet's origins as a cooperative have been coopted into a highly centralized system with central points of failure, a process Guadamuz this week called "cybercolonialism". Yet in focusing on the drivers of the commercial world we have paid insufficient attention to those driving the academic underpinnings that have defined today's technological world.

To be fair, fretting over centralization was the most mundane topic this week: presentations skittered through cultural appropriation via intellectual property law (Michael Dunford, on Disney's use of Māui), a case study of moderation in a Facebook group that crosses RuPaul and Twin Peaks fandom (Carolina Are), and a taxonomy of lying and deception intended to help decode deepfakes of all types (Andrea Matwyshyn and Miranda Mowbray).

Especially, it is hard for a non-lawyer to do justice to the discussions of how and whether data protection rights persist after death, led by Edina Harbinja, Lilian Edwards, Michael Veale, and Jef Ausloos. You can't libel the dead, they explained, because under common law, personal actions die with the person: your obligation not to lie about someone dies when they do. This conflicts with information rights that persist as your digital ghost: privacy versus property, a reinvention of "body" and "soul". The Internet is *so many* dystopias.

Centralization captured so much of my attention because it is ongoing and threatening. One example is the impending rollout of DNS-over-HTTPS. We need better security for the Internet's infrastructure, but DoH further concentrates centralized control. In his presentation Derek McAuley noted that individuals who need the kind of protection DoH is claimed to provide would do better to just use Tor. It, too, is not perfect, but it's here and it works. This is one more of the many historical examples where improving the working technology we already had would have spared us the level of control now exercised by the largest technology companies.
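
For concreteness, this is roughly what a DoH lookup looks like from a client's point of view, here sketched against Cloudflare's public JSON endpoint (the domain queried is arbitrary and the code is illustrative only). Every such query bypasses the local resolver and lands, encrypted, at one central operator - the concentration problem in miniature.

    # Minimal DNS-over-HTTPS lookup against Cloudflare's public JSON endpoint.
    # Requires the third-party "requests" package (pip install requests).
    import requests

    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": "example.com", "type": "A"},
        headers={"accept": "application/dns-json"},
        timeout=10,
    )
    resp.raise_for_status()

    # The local network sees only an HTTPS connection to the resolver;
    # the resolver's operator sees every domain this client looks up.
    for answer in resp.json().get("Answer", []):
        print(answer["name"], answer["data"])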

Centralization completely undermines the Internet's original purpose: to withstand a bomb outage. Mozilla and Google surely know this. The third DoH partner, Cloudflare, the content delivery network in the middle, certainly does: when it goes down, as it did for 15 minutes in July, millions of websites become unreachable. The only sensible response is to increase resilience with multiple pathways. Instead, we have Facebook proposing to further entrench its central role in many people's lives with its nascent Libra cryptocurrency. "Well, *I*'m not going to use it" isn't an adequate response when in some countries Facebook effectively *is* the Internet.

So where are the flaws in our present Internet dystopias? We've suggested before that advertising saturation may be one; the fakery that runs all the way through the advertising stack is probably another. Government takeovers and pervasive surveillance provide motivation to rebuild alternative pathways. The built-in lack of security is, as ever, a growing threat. But the biggest flaw built into the centralized Internet may be this: boredom.


Illustrations: The Truman Show.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 9, 2019

Collision course

The walk from my house to the tube station has changed very little in 30 years. The houses and their front gardens look more or less the same, although at least two have been massively remodeled on the inside. More change is visible around the tube station, where shops have changed hands as their owners retired. The old fruit and vegetable shop now sells wine; the weird old shop that sold crystals and carved stones is now a chain drug store. One of the hardware stores is a (very good) restaurant and the other was subsumed into the locally-owned health food store. And so on.

In the tube station itself, the open platforms have been enclosed with ticket barriers and the second generation of machines has closed down the ticket office. It's imaginable that, had the ID card proposed in the early 2000s made it through to adoption, the experience of buying a ticket and getting on the tube could be quite different. Perhaps instead of an Oyster card or credit card tap, we'd be tapping in and out using a plastic ID smart card that would both ensure that only I could use my free tube pass and ensure that all local travel could be tracked and tied to me. For our safety, of course - as we would doubtless be reminded via repetitive public announcements like the propaganda we hear every day about the watching eye of CCTV.

Of course, tracking still goes on via Oyster cards, credit cards, and, now, wifi, although I do believe Transport for London when it says its goal is to better understand traffic flows through stations in order to improve service. However, what new, more intrusive functions TfL may choose - or be forced - to add later will likely be invisible to us until an expert outsider closely studies the system.

In his recently published memoir, the veteran campaigner and Privacy International founder Simon Davies tells the stories of the ID cards he helped to kill: in Australia, in New Zealand, in Thailand, and, of course, in the UK. What strikes me now, though, is that what seemed like a win nine years ago, when the incoming Conservative-Liberal Democrat alliance killed the ID card, is gradually losing its force. (This is very similar to the early 1990s First Crypto Wars "win" against key escrow; the people who wanted it have simply found ways to bypass public and expert objections.)

As we wrote at the time, the ID card itself was always a brightly colored decoy. To be sure, those pushing the ID card played on this and on British wartime associations, swearing blind that no one would ever be required to carry the ID card or be forced to produce it. This was an important gambit because to much of the population at the time being forced to carry and show ID was the end of the freedoms two world wars were fought to protect. But it was always obvious to those who were watching technological development that what mattered was the database, because identity checks would be carried out online, on the spot, via wireless connections and handheld computers. All that was needed was a way of capturing a biometric that could be sent into the cloud to be checked. Facial recognition fits perfectly into that gap: no one has to ask you for papers - or a fingerprint, iris scan, or DNA sample. So even without the ID card we *are* now moving stealthily into the exact situation that would have prevailed if we had adopted it. Increasing numbers of police departments - South Wales, London, LA, India, and, notoriously, China - are deploying it, as Big Brother Watch has been documenting for the UK. There are many more remotely observable behaviors to be pressed into service, enhanced by AI, as the ACLU's Jay Stanley warns.

The threat now of these systems is that they are wildly inaccurate and discriminatory. The future threat of these systems is that they will become accurate and discriminatory, allowing much more precise targeting that may even come to seem reasonable *because* it only affects the bad people.

This train of thought occurred to me because this week Statewatch released a leaked document indicating that most of the EU would like to expand airline-style passenger data collection to trains and even roads. As Daniel Boffey explains at the Guardian (and as Edward Hasbrouck has long documented), the passenger name records (PNRs) airlines create for every journey include as many as 42 pieces of information: name, address, payment card details, itinerary, fellow travelers... This is information that gets mined in order to decide whether you're allowed to fly. So what this document suggests is that many EU countries would like to turn *all* international travel into a permission-based system.

What is astonishing about all of this is the timing. One of the key privacy-related objections to building mass surveillance systems is that you do not know who may be in a position to operate them in future or what their motivations will be. So at the very moment that many democratic countries are fretting about the rise of populism and the spread of extremism, those same democratic countries are proposing to put in place a system that extremists who get into power can operate in anti-democratic ways. How can they possibly not see this as a serious systemic risk?


Illustrations: The light of the oncoming train (via Andrew Gray at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 13, 2019

Matrices of numbers

The older man standing next to me was puzzled. "Can you drive it?"

He gestured at the VW Beetle-style car-like creation in front of us. Its exterior, except for the wheels and chassis, was stained glass. This car was conceived by the artist Dominic Wilcox, who surmised that by 2059 autonomous cars will be so safe that they will no longer need safety features such as bumpers and can be made of fragile materials. The sole interior furnishing, a bed, lets you sleep while in transit. In person, the car is lovely to look at. Utterly impractical today in 2019, and it always will be. The other cars may be safe, but come on: falling tree, extreme cold, hailstorm...kid with a baseball?

On being told no, it's an autonomous car that drives itself, my fellow visitor to the Science Museum's new exhibition, Driverless, looked dissatisfied. He appeared to prefer driving himself.

"It would look good with a light bulb inside it hanging at the back of the garden," he offered. It would. Bit big, though last week in San Francisco I saw a bigger superbloom.

"Driverless" is a modest exhibition by Science Museum standards, and unlike previous robot exhibitions, hardly any of these vehicles are ready for real-world use. Many are graded according to their project status: first version, early tests, real-world tests, in use. Only a couple were as far along as real-world tests.

Probably a third are underwater explorers. Among the exhibits: the (yellow submarine!) long-range Boaty McBoatface Autosub, which is meant to travel up to 2,000 km over several months, surfacing periodically to send information back to scientists. Both this and the underwater robot swarms are intended for previously unexplored hostile environments, such as underneath the Antarctic ice sheet.

Alongside these and Wilcox's Stained Glass Driverless Car of the Future was the Capri Mobility pod, the result of a project to develop on-demand vans that can shuttle up to four people along a defined route either through a pedestrian area or on public roads. Small Robot sent its Tom farm monitoring robot. And from Amsterdam came Roboat, a five-year research project to develop the first fleet of autonomous floating boats for deployment in Amsterdam's canals. These are the first autonomous vehicles I've seen that really show useful everyday potential for rethinking traditional shapes, forms, and functionality: their flat surfaces and side connectors allow them to be linked into temporary bridges a human can walk across.

There's also an app-controlled food delivery drone; the idea is you trigger it to drop your delivery from 20 meters up when you're ready to receive it. What could possibly go wrong?

On the fun side is Duckietown (again, sadly present only as an image), a project to teach robotics via a system of small, mobile robots that motor around a Lego-like "town" carrying small rubber ducks. It's compelling in the way model trains are, and it is seeking Kickstarter funding to make the hardware for wider distribution. This should have been the hands-on bit.

Previous robotics-related Science Museum exhibitions have asked as many questions as they answered; this one is less successful at that. Drive.ai's car-mounted warning signs, for example, are meant to tell surrounding pedestrians what its cars are doing. But are we really going to allow cars onto public roads (or even worse, pedestrian areas, like the Capri pods) to mow people down who don't see, don't understand, can't read, or willfully ignore the "GOING NOW; DON'T CROSS" sign? So we'll have to add sound: but do we want cars barking orders at us? Today, navigating the roads is a constant negotiation between human drivers, human pedestrians, and humans on other modes of transport (motorcycles, bicycles, escooters, skateboards...). Do we want a tomorrow where the cars have all the power?

In video clips researchers and commentators like Noel Sharkey, Kathy Nothstine, and Natasha Merat discuss some of these difficulties. Merat has an answer for the warning sign: humans and self-driving cars will have to learn each other's capabilities in order to peacefully coexist. This is work we don't really see happening today, and that lack is part of why I tend to think Christian Wolmar is right in predicting that these cars are not going to be filling our streets any time soon.

The placard for the Starship Bot (present only as a picture) advises that it cannot see above knee height, to protect privacy, but doesn't discuss the issues raised when Edward Hasbrouck encountered one in action. I was personally disappointed, after the recent We Robot discussion of the "monstrous" Moral Machine and its generalized sibling the trolley problem, to see it included here with less documentation than on the web. This matters, because the most significant questions about autonomous vehicles are going to be things like: what data do they collect about the people and things around them? To whom are they sending it? How long will it be retained? Who has the right to see it? Who has the right to command where these cars go?

More important, Sharkey says in a video clip, we must disentangle autonomous and remote-controlled vehicles, which present very different problems. Remote-controlled vehicles have a human in charge that we can directly challenge. By contrast, he said, we don't know why autonomous vehicles make the decisions they do: "They're just matrices of numbers."


Illustrations: Wilcox's stained glass car.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 31, 2019

Moral machines

What are AI ethics boards for?

I've been wondering about this for some months now, particularly in April, when Google announced the composition of its new Advanced Technology External Advisory Council (ATEAC) - and a week later announced its dissolution. The council was dropped after a media storm that began with a letter from 50 of Google's own employees objecting to the inclusion of Kay Coles James, president of the Heritage Foundation.

At The Verge, James Vincent suggests the boards are for "ethics washing" rather than instituting change. The aborted Google board, for example, was intended, as member Joanna Bryson writes, to "stress test" policies Google had already formulated.

However, corporations are not the only active players. The new Ada Lovelace Institute's research program is intended to shape public policy in this area. The AI Now Institute is studying social implications. Data & Society is studying AI use and governance. Altogether, Brent Mittelstadt counts 63 public-private initiatives, and says the principles they're releasing "closely resemble the four classic principles of medical ethics" - an analogy he finds uncertain.

Last year, when Steven Croft, the Bishop of Oxford, proposed ten commandments for artificial intelligence, I also tended to be dismissive: who's going to listen? What company is going to choose a path against its own financial interests? A machine learning expert friend has a different complaint: corporations are not the problem, it's governments. No matter what companies decide, governments always demand carve-outs for intelligence and security services, and once they have it, game over.

I did appreciate Croft's contention that all commandments are aspirational. An agreed set of principles would at least provide a standard against which to measure technology and decisions. Principles might be particularly valuable for guiding academic researchers, some of whom currently regard social media as a convenient public laboratory.

Still, human rights law already supplies that sort of template. What can ethics boards do that the law doesn't already? If discrimination is already wrong, why do we need an ethics board to add that it's wrong when an algorithm does it?

At a panel kicking off this year's Privacy Law Scholars, Ryan Calo suggested an answer: "We need better moral imagination." In his view, a lot of the discussion of AI ethics centers on form rather than content: how should it be applied? Should there be a certification regime? Or perhaps compliance requirements? Instead, he proposed that we should be looking at how AI changes the affordances available to us. His analogy: retrieving the sailors left behind in the water after you destroyed their ship was an ethical obligation until the arrival of new technology - submarines - made it infeasible.

For Calo, too many conversations about AI avoid considering the content. As a frustrating example: "The primary problem around the ethics of driverless cars is not how they will reshape cities or affect people with disabilities and ownership structures, but whether they should run over the nuns or the schoolchildren."

As anyone who's ever designed a survey knows, defining the questions is crucial. In her posting, Bryson expresses regret that the intended board will not now be called into action to consider and perhaps influence Google's policy. But the fact that Google, not the board, was to devise policies and set the questions about them makes me wonder how effective it could have been. So much depends on who imagines the prospective future.

The current Kubrick exhibition at London's Design Museum paid considerable homage to Kubrick's vision and imagination in creating the mysterious and wonderful universe in 2001: A Space Odyssey. Both the technology and the furniture still look "futuristic" despite having been designed more than 50 years ago. What *has* dated is the women: they are still wearing 1960s stewardess uniforms and hats, and the one woman with more than a few lines spends them discussing her husband and his whereabouts; the secrecy surrounding the appearance of a monolith in a crater on the moon is for the men to raise. Calo was finding the same thing in rereading Isaac Asimov's Foundation trilogy: "Not one woman leader for four books," he said. "And people still smoke!" Yet they are surrounded by interstellar travel and mind-reading devices.

So while what these boards are doing now is not inspiring - as Helen Nissenbaum said in the same panel, "There are so many institutes announcing principles as if that's the end of the story" - maybe what they *could* do might be. What if, as Calo suggested, there are human and civil rights commitments AI allows us to make that were impossible before?

"We should be imagining how we can not just preserve extant ethical values but generate new ones based on affordances that we now have available to us," he said, suggesting as one example "mobility as a right". I'm not really convinced that our streets are going to be awash in autonomous vehicles any time soon, but you can see his point. If we have the technology to give independent mobility to people who are unable to drive themselves...well, shouldn't we? You may disagree on that specific idea, but you have to admit: it's a much better class of conversation.tw


Illustrations: Space Station receptionist from 2001: A Space Odyssey.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 24, 2019

Name change

In 2014, six months after the Snowden revelations, engineers began discussing how to harden the Internet against passive pervasive surveillance. Among the results have been efforts like Let's Encrypt, EFF's Privacy Badger, and HTTPS Everywhere. Real inroads have been made into closing some of the Internet's affordances for surveillance and improving security for everyone.

Arguably the biggest remaining serious hole is the domain name system, which was created in 1983. The DNS's historical importance is widely underrated; it was essential in making email and the web usable enough for mass adoption before search engines. Then it stagnated. Today, this crucial piece of Internet infrastructure still behaves as if everyone on the Internet can trust each other. We know the Internet doesn't live there any more; in February the Internet Corporation for Assigned Names and Numbers, which manages the DNS, warned of large-scale spoofing and hijacking attacks. The NSA is known to have exploited it, too.

The problem is the unprotected channel between the computer into which we type human-readable names such as pelicancrossing.net and the computers that translate those names into numbered addresses the Internet's routers understand, such as 216.92.220.214. The fact that routers all trust each other is routinely exploited for the captive portals we often see when we connect to public wi-fi systems. These are the pages that universities, cafes, and hotels set up to redirect Internet-bound traffic to their own page so they can force us to log in, pay for access, or accept terms and conditions. Most of us barely think about it, but old-timers and security people see it as a technical abuse of the system.
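
To make the exposure concrete, here is a minimal sketch - in Python, using only the standard library; the domain is just an example - of what an ordinary lookup involves today. The question leaves your machine in cleartext, typically over UDP port 53, where anyone operating the network path (the cafe, the ISP, a state agency) can read or rewrite it.

    import socket

    # Resolve a name the ordinary way: the stub resolver sends the query in
    # cleartext (usually UDP port 53) to whatever DNS server the local network
    # assigned, so anyone on the path can observe or tamper with the answer.
    for family, _, _, _, sockaddr in socket.getaddrinfo(
            "pelicancrossing.net", 443, proto=socket.IPPROTO_TCP):
        print(sockaddr[0])  # the numbered address the routers actually use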

Several hijacking incidents raised awareness of DNS's vulnerability as long ago as 1998, when security researchers Matt Blaze and Steve Bellovin discussed it at length at Computers, Freedom, and Privacy. Twenty-one years on, there have been numerous proposals for securing the DNS, most notably DNSSEC, which offers an upwards chain of authentication. However, while DNSSEC solves validation, it still leaves the connection open to logging and passive surveillance, and the difficulty of implementing it has meant that since 2010, when ICANN signed the global DNS root, uptake has barely reached 14% worldwide.

In 2018, the IETF adopted DNS-over-HTTPS as a standard. Essentially, this sends DNS requests over the same secure channel browsers use to visit websites. Adoption is expected to proceed rapidly because it's being backed by Mozilla, Google, and Cloudflare, who jointly intend to turn it on by default in Chrome and Firefox. In a public discussion at this week's Internet Service Providers Association conference, a fellow panelist suggested that moving DNS queries to the application level opens up the possibility that two different apps on the same device might use different DNS resolvers - and get different responses to the same domain name.
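
For contrast, here is a rough sketch of what a DoH lookup looks like, assuming Python with the third-party requests library and Cloudflare's public resolver (any compliant DoH server would do): the same question travels inside an ordinary HTTPS connection, indistinguishable from web traffic to an observer on the path.

    import requests  # third-party: pip install requests

    # Ask a public DoH resolver for the A record of a name. The JSON wire
    # format used here is the simpler of the two DoH styles; browsers use the
    # binary application/dns-message format, but the principle is the same.
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": "pelicancrossing.net", "type": "A"},
        headers={"Accept": "application/dns-json"},
        timeout=10,
    )
    resp.raise_for_status()
    for answer in resp.json().get("Answer", []):
        print(answer["data"])  # the resolved address(es)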

Britain's first public notice of DoH came a couple of weeks ago in the Sunday Times, which billed it as a "Warning over Google Chrome's new threat to children". This is a wild overstatement, but it's not entirely false: DoH will allow users to bypass the parts of Britain's filtering system that depend on hijacking DNS requests to divert visitors to blank pages or warnings. An engineer would probably argue that if Britain's many-faceted filtering system is affected it's because the system relies on workarounds that shouldn't have existed in the first place. In addition, because DoH sends DNS requests over web connections, the traffic can't be logged or distinguished from the mass of web traffic, so it will also render moot some of the UK's (and EU's) data retention rules.

For similar reasons, DoH will break captive portals in unfriendly ways. A browser with DoH turned on by default will ignore the hotel/cafe/university settings and instead direct DNS queries via an encrypted channel to whatever resolver it's been set to use. If the network requires authentication via a portal, the connection will fail - a usability problem that will have to be solved.

There are other legitimate concerns. Bypassing the DNS resolvers run by local ISPs in favor of those belonging to, say, Google, Cloudflare, and Cisco, which bought OpenDNS in 2015, will weaken local ISPs' control over the connections they supply. This is both good and bad: ISPs will be unable to insert their own ads - but they also can't use DNS data to identify and block malware as many do now. The move to DoH risks further centralizing the Internet's core infrastructure and strengthening the power of companies most of us already feel have too much control.

The general consensus, however, is that like it or not, this thing is coming. Everyone is still scrambling to work out exactly what to think about it and what needs to be done to mitigate accompanying risks, as well as find solutions to the resulting problems. It was clear from the ISPA conference panel that everyone has mixed feelings, though the exact mix of those feelings - and which aspects are identified as problems - differs among ISPs, rights activists, and security practitioners. But it comes down to this: whether you like this particular proposal or not, the DNS cannot be allowed to remain in its present insecure state. If you don't want DoH, come up with a better proposal.


Illustrations: DNS diagram (via Б.Өлзий at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 17, 2019

Genomics snake oil

In 2011, as part of an investigation she conducted into the possible genetic origins of the streak of depression that ran through her family, the Danish neurobiologist Lone Frank had her genome sequenced and interviewed many participants in the newly-opening field of genomics that followed the first complete sequencing of the human genome. In her resulting book, My Beautiful Genome, she commented on the "Wild West" developing around retail genetic testing being offered to consumers over the web. Absurd claims such as using DNA testing to find your perfect mate or direct your child's education abounded.

This week, at an event organized by Breaking the Frame, New Zealand researcher Andelka M. Phillips presented the results of her ongoing study of the same landscape. The testing is just as unreliable, the claims even more absurd - choose your diet according to your DNA! find out what your superpower is! - and the number of companies she's collected has reached 289 while the cost of the tests has shrunk and the size of the databases has ballooned. Some of this stuff makes astrology look good.

To be perfectly clear: it's not, or not necessarily, the gene sequencing itself that's the problem. To be sure, the best lab cannot produce a reading that represents reality from poor-quality samples. And many samples are indeed poor, especially those snatched from bed sheets or excavated from garbage cans to send to sites promising surreptitious testing (I have verified these exist, but I refuse to link to them) to those who want to check whether their partner is unfaithful or whether their child is in fact a blood relative. But essentially, for health tests at least, everyone is using more or less the same technology for sequencing.

More crucial is the interpretation and analysis, as Helen Wallace, the executive director of GeneWatch UK, pointed out. For example, companies differ in how they identify geographical regions and frame populations, and in the makeup of their databases of reference contributions. This is how a pair of identical Canadian twins got varying and non-matching test results from five companies, one Ashkenazi Jew got six different ancestry reports, and, according to one study, up to 40% of DNA results from consumer genetic tests are false positives. As I type, the UK Parliament is conducting an inquiry into commercial genomics.

Phillips makes the data available to anyone who wants to explore it. Meanwhile, so far she's examined the terms of service and privacy policies of 71 companies, and finds them filled with technology company-speak, not medical information. They do not explain these services' technical limitations or the risks involved. Yet it's so easy to think of disastrous scenarios: this week, an American gay couple reported that their second child's birthright citizenship is being denied under new State Department rules. A false DNA test could make a child stateless.

Breaking the Frame's organizer, Dave King, believes that a subtle consequence of the ancestry tests - the things everyone was quoting in 2018 that tell you that you're 13% German, 1% Somalian, and whatever else - is to reinforce the essentially racist notion that "Germanness" has a biological basis. He also particularly disliked the services claiming they can identify children's talents; these claim, as Phillips highlighted, that testing can save parents money they might otherwise waste on impossible dreams. That way lies Gattaca and generations of children who don't get to explore their own abilities because they've already been written off.

Even more disturbing questions surround what happens with these large databases of perfect identifiers. In the UK, last October the Department of Health and Social Care announced its ambition to sequence 5 million genomes. Included was the plan, beginning in 2019, to offer whole genome sequencing to all seriously ill children and adults with specific rare diseases or hard-to-treat cancers as part of their care. In other words, the most desperate people are being asked first, a prospect Phil Booth, coordinator of medConfidential, finds disquieting. As so much of this is still research, not medical care, he said, like the late, despised care.data, it "blurs the line around what is your data, and between what the NHS was and what some would like it to be". Exploitation of the nation's medical records as raw material for commercial purposes is not what anyone thought they were signing up for. And once you have that giant database of perfect identifiers...there's the Home Office, which has already been caught using the NHS to hunt illegal immigrants and DNA testing immigrants.

So Booth asked this: why now? Genetic sequencing is 20 years old, and to date it has yet to come close to producing the benefits predicted for it. We do not have personalized medicine, or, except in a very few cases (such as a percentage of breast cancers), drugs tailored to genetic makeup. "Why not wait until it's a better bet?" he asked. Instead of spending billions today - billions that, as an audience member pointed out, would produce better health more widely if spent on improving the environment, nutrition, and water - the proposal is to spend them on a technology that may still not be producing results 20 years from now. Why not wait, say, ten years and see if it's still worth doing?


Illustrations: DNA double helix (via Wikimedia)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 10, 2019

Slime trails

In his 2000 book, Which Lie Did I Tell?, the late, great screenwriter William Goldman called the brilliant 1963 Stanley Donen movie Charade a "money-loser". Oh, sure, it was a great success - for itself. But it cost Hollywood hundreds of millions of dollars in failed attempts to copy its magical romantic-comedy-adventure-thriller mixture. (Goldman's own version, 1992's The Year of the Comet, was - his words - "a flop".) In this sense, Amazon may be the most expensive company ever launched in Silicon Valley because it encouraged everyone to believe losing money in 17 of its first 18 years doesn't matter.

Uber has been playing up this comparison in the run-up to its May 2019 IPO. However, two things make it clear the comparison is false. First - duh - losing money just isn't a magical sign of a good business, even in the Internet era. Second, Amazon had scale on its side, as well as a pioneering infrastructure it was able later to monetize. Nothing about transport scales, as Hubert Horan laid out in 2017; even municipalities can't make Uber cheaper than public transit. Horan's analysis of Uber's IPO filing is scathing. Investment advisers love to advise investing in companies that make popular products, but *not this time*.

Meanwhile, network externalities abound. The Guardian highlights the disparity between Uber's drivers, who have been striking this week, and its early investors, who will make billions even while the company says it intends to continue slicing drivers' compensation. The richest group, says the New York Times, have already decamped to lower-tax states.

If Horan is right, however, the impending shift of billions of dollars from drivers and greater fools to already-wealthy early investors will arguably be a regulatory failure on the part of the Securities and Exchange Commission. I know the rule of the stock market is "buyer beware", but without the trust conferred by regulators there will *be* no buyers, not even pension funds. Everyone needs government to ensure fair play.

Somewhere in one of his 500-plus books, the science/fiction writer Isaac Asimov commented that he didn't like to fly because in case of a plane crash his odds of survival were poor. "It's not sporting." In fact, most passengers survive, unharmed, but not, obviously, in the recent Boeing crashes. Blame, as Madeline Elish correctly predicted in her paper on moral crumple zones, is being sprayed widely - at faulty sensors, at software issues, and, especially, at the humans who build and operate these things, the engineers and pilots.

The reality seems more likely to be a perfect storm comprising numerous components: 1) the same kind of engineering-management disconnect that doomed Challenger in 1986, 2) trying to compensate with software for a hardware problem, 3) poorly thought-out cockpit warning light design, 4) the number and complexity of vendors involved, and 5) receding regulators. As hybrid cyber-physical systems become more pervasive, it seems likely we will see many more situations where small decisions made by different actors will collide to create catastrophes, much like untested drug interactions.

Again, regulatory failure is the most alarming. Any company can screw up. The failure of any complex system can lead to companies all blaming each other. There are always scapegoats. But in an industry where public perception of safety is paramount, regulators are crucial in ensuring trust. The flowchart at the Seattle Times says it all about how the FAA has abdicated its responsibility. It's particularly infuriating because many in the cybersecurity industry cite aviation as a fine example of what an industry can do to promote safety and security when the parties recognize their collective interests are best served by collaborating and sharing data. Regulators who audit and test provide an essential backstop.

The 6% of the world that flies relies on being able to trust regulators to ensure their safety. Even if the world's airlines now decide that they can't trust the US system, where are they going to go for replacement aircraft? Their own governments will have to step in where the US is failing, as the EU already does in privacy and antitrust. Does the environment win, if people decide it's too risky to fly? Is this a plan?

I want regulators to work. I want to be able to fly with reasonable odds of survival, have someone on the job to detect financial fraud, and be able to trust that medical devices are safe. I don't care how smart you are, no consumer can test these things for themselves, any more than we can tell if a privacy policy is worth the electrons it's printed on.

On that note, last week on Twitter Demos researcher Carl Miller, author of The Death of the Gods, made one of his less-alarming suggestions. Let's replace "cookie": "I'm willing to bet we'd be far less willing to click yes, if the website asked if we [are] willing to have a 'slime trail', 'tracking beacon' or 'surveillance agent' on our browser."

I like "slime trail", which extends to cover the larger use of "cookie" in "cookie crumbs" to describe the lateral lists that show the steps by which you arrived at the current page. Now, when you get a targeted ad, people will sympathize as you shout, "I've been slimed!"


Illustrations: Bill Murray, slimed in Ghostbusters (1984).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 26, 2019

This house

This house may be spying on me.

I know it listens. Its owners say, "Google, set the timer for one minute," and a male voice sounds: "Setting the timer for one minute."

I think, one minute? You need a timer for one minute? Does everyone now cook that precisely?

They say, "Google, turn on the lamp in the family room." The voice sounds: "Turning on the lamp in the family room." The lamp is literally sitting on the table right next to the person issuing the order.

I think, "Arm, hand, switch, flick. No?"

This happens every night because the lamp is programmed to turn off earlier than we go to bed.

I do not feel I am visiting the future. Instead, I feel I am visiting an experiment that years from now people will look back on and say, "Why did they do that?"

I know by feel how long a minute is. A child growing up in this house would not. That child may not even know how to operate a light switch, even though one of the house's owners is a technical support guy who knows how to build and dismember computers, write code, and wire circuits. Later, this house's owner tells me, "I just wanted a reminder."

It's 16 years since I visited Microsoft's and IBM's visions of the smart homes they thought we might be living in by now. IBM imagined voice commands; Microsoft imagined fashion advice-giving closets. The better parts of the vision - IBM's dashboard with a tick-box so your lawn watering system would observe the latest municipal watering restrictions - are sadly unavailable. The worse parts - living in constant near-darkness so the ubiquitous projections are readable - are sadly closer. Neither envisioned giant competitors whose interests are served by installing in-house microphones on constant alert.

This house inaudibly alerts its owner's phones whenever anyone approaches the front door. From my perspective, new people mysteriously appear in the kitchen without warning.

This house has smartish thermostats that display little wifi icons to indicate that they're online. This house's owners tell me these are Ecobee Linux thermostats; the wifi connection lets them control the heating from their phones. The thermostats are not connected to Google.

None of this is obviously intrusive. This house looks basically like a normal house. The pile of electronics in the basement is just a pile of electronics. Pay no attention to the small blue flashing lights behind the black fascia.

One of this house's owners tells me he has deliberately chosen a male voice for the smart speaker so as not to suggest that women are or should be subservient to men. Both owners are answered by the same male voice. I can imagine personalized voices might be useful for distinguishing who asked what, particularly in a shared house or a company, and ensuring only the right people got to issue orders. Google says its speakers can be trained to recognize six unique voices - a feature I can see would be valuable to the company as a vector for gathering more detailed information about each user's personality and profile. And, yes, it would serve users better.

Right now, I could come down in the middle of the night and say, "Google, turn on the lights in the master bedroom." I actually did something like this once by accident years ago in a friend's apartment that was wirelessed up with X10 controls. I know this system would allow it because I used the word "Google" carelessly in a sentence while standing next to a digital photo frame, and the unexpected speaker inside it woke up to say, "I don't understand". This house's owner stared: "It's not supposed to do that when Google is not the first word in the sentence". The photo frame stayed silent.

I think it was just marking its territory.

Turning off the fan in their bedroom would be more subtle. They would wake up more slowly, and would probably just think the fan had broken. This house will need reprogramming to protect itself from children. Once that happens, guests will be unable to do anything for themselves.

This house's owners tell me there are many upgrades they could implement, and they will, but managing them takes skill and thought: segmenting and securing the network, implementing local data storage. Keeping Google and Amazon at bay requires an expert.

This house's owners do not get their news from their smart speakers, but it may be only a matter of time. At a recent Hacks/Hackers, Nic Newman gave the findings of a recent Reuters Institute study: smart speakers are growing faster than smartphones at the same stage, they are replacing radios, and "will kill the remote control". So far, only 46% use them to get news updates. What was alarming was the gatekeeper control providers have: on a computer, the web could offer 20 links; on a smartphone there's room for seven, voice...one. Just one answer to, "What's the latest news on the US presidential race?"

At OpenTech in 2017, Tom Steinberg observed that now that his house was equipped with an Amazon Echo, homes without one seemed "broken". He predicted that this would become such a fundamental technology that "only billionaires will be able to opt out". Yet really, the biggest advance since the beginning of remote controls is that now your garage door opener can collect your data and send it to Google.

My house can stay "broken".


Illustrations: HAL (what else?).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 18, 2019

Math, monsters, and metaphors

"My iPhone won't stab me in my bed," Bill Smart said at the first We Robot, attempting to explain what was different about robots - but eight years on, We Robot seems less worried about that than about the brains of the operation. That is, AI, which conference participant Aaron Mannes described as, "A pile of math that can do some stuff".

But the math needs data to work on, and so a lot of the discussion goes toward possible consequences: delivery drones displaying personalized ads (Ryan Calo and Stephanie Ballard); the wrongness of researchers who defend their habit of scraping publicly posted data by saying it's "the norm" when their unwitting experimental subjects have never given permission; the unexpected consequences of creating new data sources in farming (Solon Barocas, Karen Levy, and Alexandra Mateescu); and how to incorporate public values (Alicia Solow-Neiderman) into the control of...well, AI, but what is AI without data? It's that pile of math. "It's just software," Bill Smart (again) said last week. Should we be scared?

The answer seems to be "sometimes". Two types of robots were cited for "robotic space colonialism" (Kristen Thomasen), because they are here enough and now enough for legal cases to be emerging. These are 1) drones, and 2) delivery robots. Mostly. Mason Marks pointed out Amazon's amazing Kiva robots, but they're working in warehouses where their impact is more a result of the workings of capitalism than that of AI. They don't scare people in their homes at night or appropriate sidewalk space like delivery robots, which Paul Colhoun described as "unattended property in motion carrying another person's property". Which sounds like they might be sort of cute and vulnerable, until he continues: "What actions may they take to defend themselves?" Is this a new meaning for move fast and break things?

Colhoun's comment came during a discussion of using various forecasting methods - futures planning, design fiction, the futures wheel (which someone suggested might provide a usefully visual alternative to privacy policies) - that led Cindy Grimm to pinpoint the problem of when you regulate. Too soon, and you risk constraining valuable technology. Too late, and you're constantly scrambling to revise your laws while being mocked by technical experts calling you an idiot (see 25 years of Internet regulation). Still, I'd be happy to pass a law right now barring drones from advertising and data collection and damn the consequences. And then be embarrassed; as Levy pointed out, other populations have a lot more to fear from drones than being bothered by some ads...

The question remains: what, exactly do you regulate? The Algorithmic Accountability Act recently proposed by Senators Cory Booker (D-NJ) and Ron Wyden (D-OR) would require large companies to audit machine learning systems to eliminate bias. Discrimination is much bigger than AI, said conference co-founder Michael Froomkin in discussing Alicia Solow-Neiderman's paper on regulating AI, but special to AI is unequal access to data.

Grimm also pointed out that there are three different aspects: writing code (referring back to Petros Terzis's paper proposing to apply the regime of negligence laws to coders); collecting data; and using data. While this is true, it doesn't really capture the experience Abby Jacques suggested could be a logical consequence of following the results collected by MIT's Moral Machine: save the young, fit, and wealthy, but splat the old, poor, and infirm. If, she argued, you followed the mandate of the popular vote, old people would be scrambling to save themselves in parking lots while kids ran wild knowing the cars would never hit them. An entertaining fantasy spectacle, to be sure, but not quite how most of us want to live. As Jacques tells it, the trolley problem the Moral Machine represents is basically a metaphor that has eaten its young. Get rid of it! This was a rare moment of near-universal agreement. "I've been longing for the trolley problem to die," robotics pioneer Robin Murphy said. Jacques herself was more measured: "Philosophers need to take responsibility for what happens when we leave our tools lying around."

The biggest thing I've learned in all the law conferences I go to is that law proceeds by analogy and metaphor. You see this everywhere: Kate Darling is trying to understand how we might integrate robots into our lives by studying the history of domesticating animals; Ian Kerr and Carys Craig are trying to deromanticize "the author" in discussions of AI and copyright law; the "property" in "intellectual property" draws an uncomfortable analogy to physical objects; and Hideyuki Matsumi is trying to think through robot registration by analogy to Japan's Koseki family registration law.

Getting the metaphors right is therefore crucial, which explains, in turn, why it's important to spend so much effort understanding what the technology can really do and what it can't. You have to stop buying the images of driverless cars to produce something like the "handoff model" proposed by Jake Goldenfein, Deirdre Mulligan, and Helen Nissenbaum to explore the permeable boundaries between humans and the autonomous or connected systems driving their cars. Similarly, it's easy to forget, as Mulligan said in introducing her paper with Daniel N. Kluttz, that in "machine learning" algorithms learn only from the judgments at the end; they never see the intermediary reasoning stages.

So metaphor matters. At this point I had a blinding flash of realization. This is why no one can agree about Brexit. *Brexit* is a trolley problem. Small wonder Jacques called the Moral Machine a "monster".

Previous We Robot events as seen by net.wars: 2018 workshop and conference; 2017; 2016 workshop and conference; 2015; 2013; and 2012. We missed 2014.

Illustrations: The Moral Labyrinth art installation, by Sarah Newman and Jessica Fjeld, at We Robot 2019; Google driverless car.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 12, 2019

The Algernon problem

Last week we noted that it may be a sign of a maturing robotics industry that it's possible to have companies specializing in something as small as fingertips for a robot hand. This week, the workshop day kicking off this year's We Robot conference provides a different reason to think the same thing: more and more disciplines are finding their way to this cross-the-streams event. This year, joining engineers, computer scientists, lawyers, and the odd philosopher are sociologists, economists, and activists.

The result is oddly like a meeting of the Research Institute for the Science of Cyber Security, where a large part of the point from the beginning has been that human factors and economics are as important to good security as technical knowledge. This was particularly true in the face-off between the economist Rob Seamans and the sociologist Beth Bechky, which pitted quantitative "things we can count" against qualitative "study the social structures" thinking. The range of disciplines needed to think about what used to be "computer" security keeps growing as the ways we use computers become more complex; robots are computer systems whose mechanical manifestations interact with humans. This move has to happen.

One sign is a change in language. Madeline Elish, currently in the news for her newly published 2016 We Robot paper, Moral Crumple Zones, said she's trying to replace the term "deploying" with "integrating" for arriving technologies. "They are integrated into systems," she explained, "and when you say "integrate" it implies into what, with whom, and where." By contrast, "deployment" is military-speak, devoid of context. I like this idea, since by 2015, it was clear from a machine learning conference at the Royal Society that many had begun seeing robots as partners rather than replacements.

Later, three Japanese academics - the independent researcher Hideyuki Matsumi, Takayuki Kato, and Fumio Shimpo - tried to explain why Japanese people like robots so much - more, it seems, than "we" do (whoever "we" are). They suggested three theories: the influence of TV and manga; the influence of the mainstream Shinto religion, which sees a spirit in everything; and the Japanese government strategy to make the country a robotics powerhouse. The latter has produced a 356-page guideline for research development.

"Japanese people don't like to draw distinctions and place clear lines," Shinto said. "We think of AI as a friend, not an enemy, and we want to blur the lines." Shimpo had just said that even though he has two actual dogs he wants an Aibo. Kato dissented: "I personally don't like robots."

The MIT researcher Kate Darling, who studies human responses to robots, pointed to studies finding that autistic kids respond well to robots. "One theory is that they're social, but not too social." An experiment that placed these robots in homes for 30 days last summer had "stellar results". But: when the robots were removed at the end of the experiment, follow-up studies found that the kids were losing the skills the robots had brought them. The story evokes Daniel Keyes' 1959 story Flowers for Algernon, but then you have to ask: what were the skills? Did they matter to the children or just to the researchers, and how is "success" defined?

The opportunities anthropomorphization opens for manipulation are an issue everywhere. Woody Hartzog called the tendency to believe what the machine says "automation bias", but that understates the range of motivations: you may believe the machine because you like it, because it's manipulated you, or because you're working in a government benefits agency where you can't be sure you won't get fired if you defy the machine's decision. Would that everyone could see Bill Smart and Cindy Grimm follow up their presentation from last year to show: AI is just software; it doesn't "know" things; and it's the complexity that gets you. Smart hates the term "autonomous" for robots "because in robots it means deterministic software running on a computer. It's teleoperation via computer code."

This is the "fancy hammer" school of thinking about robots, and it can be quite valuable. Kevin Bankston soon demonstrated this: "Science fiction has trained us to worry about Skynet instead of housing discrimination, and expect individual saviors rather than communities working together to deal with community problems." AI is not taking our jobs; capitalists are using AI to take our jobs - a very different problem. As long as we see robots and AI as autonomous, we miss that instead they ares agents carrying out others' plans. This is a larger example of a pervasive problem with smartphones, social media sites, and platforms generally: they are designed to push us to forget the data-collecting, self-interested, manipulative behemoth behind them.

Returning to Elish's comment, we are one of the things robots integrate with. At the moment, this is taking the form of making random people research subjects: the pedestrian killed in Arizona by a supposedly self-driving car, the hapless prisoners whose parole is decided by it's-just-software, the people caught by the Metropolitan Police's staggeringly flawed facial recognition, the homeless people who feel threatened by security robots, the Caltrain passengers sharing a platform with an officious delivery robot. Did any of us ask to be experimented on?


Illustrations: Cliff Robertson in Charly, the movie version of "Flowers for Algernon".

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 5, 2019

The collaborative hand

The futurist Anders Sandberg has often observed that we call it "artificial intelligence" only as long as it doesn't work; after that it's simply "automation". This week, Rich Walker, the managing director of Shadow Robot, said the same thing about robotics. No one calls a self-driving car or a washing machine a robot, for example. Then again, a friend does indeed call the automated tea maker that reliably wakes up every morning before he does "the robot", which suggests we only call things "robots" when we can mock their limitations.

Walker's larger point was that robotics, like AI, suffers from confusion between the things people think it can do and the things it can actually do. The gap in AI is so large that effectively the term now has two meanings, a technological one revolving around the traditional definition of AI, and a political one, which includes the many emerging new technologies - machine learning, computer vision, and so on - that we need to grapple with.

When, last year, we found that Shadow Robot was collaborating on research into care robots, it seemed time for a revisit: the band of volunteers I met in 1997 and the tiny business it had grown into in 2009 had clearly reached a new level.

Social care is just one of many areas Shadow is exploring; others include agritech and manufacturing. "Lots are either depending on other pieces of technology that are not ready or available yet or dependent on economics that are not working in our favor yet," Walker says. Social care is an example of the latter; using robots outside of production lines in manufacturing is an example of the former. "It's still effectively a machine vision problem." That is, machine vision is not accurate enough with high enough reliability. A 99.9% level of accuracy means a failure per shift in a car manufacturing facility.
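
A quick back-of-the-envelope calculation shows why that accuracy figure is not good enough; the throughput number below is an illustrative assumption, not Walker's.

    # Rough arithmetic behind "99.9% accuracy means a failure per shift".
    # The operations-per-shift figure is an assumption for illustration only.
    accuracy = 0.999
    operations_per_shift = 1000   # e.g. one vision-guided check every ~30 seconds over 8 hours
    expected_failures = operations_per_shift * (1 - accuracy)
    print(f"Expected failures per shift: {expected_failures:.1f}")  # -> 1.0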

Getting to Shadow Robot's present state involved narrowing down the dream founder Richard Greenhill conceived after reading a 1980s computer programming manual: to build a robot that could bring him a cup of tea. The project, then struggling to be taken seriously as it had no funding and Greenhill had no relevant degrees, built the first robot outside Japan that could stand upright and take a step; the Science Museum included it in its 2017 robot exhibition.

Greenhill himself began the winnowing process, focusing on developing a physical robot that could function in human spaces rather than AI and computer vision, reasoning that there were many others who would do that. Greenhill recognized the importance of the hand, but it was Walker who recognized its commercial potential: "To engage with real-world, human-scale tasks you need hands."

The result, Walker says, is, "We build the best robot hand in the world." And, he adds, because several employees have worked on all the hands Shadow has ever built, "We understand all the compromises we've made in the designs, why they're there, and how they could be changed. If someone asks for an extra thumb, we can say why it's difficult but how we could do it."

Meanwhile, the world around Shadow has changed to include specialists in everything else. Computer vision, for example: "It's outside of the set of things we think we should be good at doing, so we want others to do it who are passionate about it," Walker says. "I have no interest in building robot arms, for example. Lots of people do that." And anyway, "It's incredibly hard to do it better than Universal Robots" - which itself became the nucleus of a world-class robotics cluster in the small Danish city of Odense.

Specialization may be the clearest sign that robotics is growing up. Shadow's current model, mounted on a UR arm, sports fingertips developed by SynTouch. With SynTouch and HaptX, Shadow collaborated to create a remote teleoperation system using HaptX gloves in San Francisco to control a robot hand in London following instructions from a businessman in Japan. The reason sounds briefly weird: All Nippon Airways is seeking new markets by moving into avatars and telepresence. It sounds less weird when Walker says ANA first thought of teleportation...and then concluded that telepresence might be more realistic.

Shadow's complement of employees is nearing 40, and they've moved from the undifferentiated north London house they'd worked in since the 1990s, dictated, Walker says, by buying a new milling machine. Getting the previous one in, circa 2007, required taking out the front window and the stairs and building a crane. Walker's increasing business focus reflects the fact that the company's customers are now as often commercial companies as the academic and research institutions that used to form their entire clientele.

For the future, "We want to improve tactile sensing," Walker says. "Touch is really hard to get robots to do well." One aspect they're particularly interested in for teleoperation is understanding intent: when grasping something, does the controlling human want to pinch, twist, hold, or twist it? At the moment, to answer that he imagines "the robot equivalent" of Clippy that asks, "It looks like you're trying to twist the wire. Do you mean to roll it or twist it?" Or even: "It looks like you're trying to defuse a bomb. Do you want to cut the red wire or the black wire?" Well, do ya, punk?


Illustrations: Rich Walker, showing off the latest model, which includes fingertips from SynTouch and a robot arm from Universal Robots; the original humanoid biped, on display at the Science Museum.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 3, 2019

Prognostalgia

"What seems to you like the big technology story of 2018?" I asked a friend. "The lack of excitement," she replied.

New stuff - the future - used to be a lot more fun, a phenomenon that New York Times writer Eric Schulmuller has dubbed prognostalgia. While Isaac Asimov, in predicting the world of 2019 in 1983 or of 2014 in 1964, correctly but depressingly foresaw that computers might exacerbate social and economic divisions, he also imagined that this year we'd be building bases on other planets. These days, we don't even explore the unfamiliar corners of the Internet.

So my friend is right. The wow! new hardware of 2018 was a leather laptop. We don't hear so much about grand visions like organizing the world's information or connecting the world. Instead, the most noteworthy app of 2018 may have been Natural Cycles - particularly for its failures.

Smartphones have become commodities, even in Japan. In 2004, visiting Tokyo seemed like time-traveling the future. People loved their phones so much they adorned them with stuffed animals and tassels. In 2018, people stare at them just as much but the color is gone. If Tokyo still offers a predictive glimpse, it looks like meh.

In technopolitics, 2018 seems to have been the most relentlessly negative since 1998, when the first Internet backlash was paralleling the dot-com boom. Then, the hot, new kid on the block was Google, which as yet was - literally - a blank page: logo, search box, no business model. Nothing to fear. On the other hand...the stock market was wildly volatile, especially among Internet stocks, which mostly rose 1929-style at every glance (Amazon, despite being unprofitable, rose 1,300%). People were fighting governments over encryption, especially to block key escrow. There was panic about online porn. A new data protection law was abroad in the land. A US president was under investigation. Yes, I am cherry-picking.

Over the course of 2018 net.wars has covered the modern versions of most of these. Australia is requiring technology companies to make cleartext available when presented with a warrant. The rest of the Five Eyes apparently intend to follow suit. Data breaches keep getting bigger, and although security issues keep getting more sophisticated and more pervasive, the causes of those breaches are often the same old stupid mistakes that we, the victims, can do nothing about. A big theme throughout the year was the ethics of AI. Finally, there has been little good news for cryptocurrency fanciers, no matter what their eventual usefulness may be. About bitcoin, at least, our previous skepticism appears justified.

The end of the year did not augur well for what's coming next. We saw relatively low-cost cyber attacks that disrupted daily physical life as opposed to infrastructure targets: maybe-drones shut down Gatwick Airport and malware disrupted printing and distribution on a platform shared by numerous US newspapers. The drone if-it-was attack is probably the more significant: uncertainty is poisonously disruptive. As software is embedded into everything, increasingly we will be unable to trust the physical world or predict the behavior of nearby objects. There will be much more of this - and a backlash is also beginning to take physical form, as people attack Waymo self-driving cars in Arizona. Jurisdictional disputes - who gets to compel the production of data and in which countries - will continue to run. The US's CLOUD Act, a response to the Microsoft case, requires US companies to turn over data on US citizens when ordered to do so no matter its location - the envy of other major governments. These are small examples of the incoming Internet of Other People's Things.

A major trend that net.wars has not covered much is China's inroads into supplying infrastructure to various countries in Africa and elsewhere, such as Venezuela. The infrastructure that is spreading now comes from a very different set of cultural values than the Internet of the 1990s (democratic and idealistic) or the web of the 2000s (commercial and surveillant).

So much of what we inevitably write about is not only negative but repeatedly so, as the same conflicts escalate inescapably year after year, that it seems only right to try to find a few positive things to start 2019.

On Twitter, Lawrence Lessig notes that for the first time in 20 years work is passing into the public domain. Freed for use and reuse are novels from Edgar Rice Burroughs, Carl Sandburg, DH Lawrence, Aldous Huxley, and Agatha Christie. Music: "Who's Sorry Now?" and works by Bela Bartok. Film: early Buster Keaton and Charlie Chaplin. Unpause, indeed.

In the US, Democrats are arriving to reconfigure Congress, and while both parties have contributed to increasing surveillance, tightening copyright, and extending the US's territorial reach, the restoration of some balance of powers is promising.

In the UK, the one good thing to be said about the Brexit mess is that the acute phase will soon end. Probably.

So, the future is no fun and the past is gone, and we're left with a messy present that will look so much better 50 years from now. Twas ever thus. Happy new year.


Illustrations: New Year's fireworks in Sweden (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 6, 2018

Richard's universal robots

The robot in the video is actually a giant hoist attached to the ceiling. It has big grab bars down at the level of the person sitting on the edge of the bed, waiting. When the bars approach, she grabs them, and lets the robot slowly help her up into a standing position, and then begins to move forward.

This is not how any of us imagines a care robot, but I am persuaded this is more like our future than the Synths in 2015's Humans, which are incredibly humanoid (helpfully for casting) but so, so far from anything ready for deployment. This thing, which Praminda Caleb-Solly showed at work in a demonstration video at Tuesday's The Shape of Things conference, is a work in progress. There are still problems, most notably that your average modern-build English home has neither high enough ceilings nor enough lateral space to accommodate it. My bedroom is about the size of the stateroom in the Marx Brothers movie A Night at the Opera; you'd have to put it in the hall and hope the grab bar assembly could reach through the doorway. But still.

As the news keeps reminding us, the Baby Boomer bulge will soon reach frailty. In industrialized nations, where mobility, social change, and changed expectations have broken up extended families, need will explode. In the next 12 years, Caleb-Solly said, a fifth of people over 80 - 4.8 million people in the UK - will require regular care. Today, the National Health Service is short almost 250,000 staff (a problem Brexit exacerbates wholesale). Somehow, we'll have to find 110,000 people to work in social care in England alone. Technology is one way to help fill that gap. Today, though, 30% of users abandon their assistive technologies; they're difficult to adapt to changing needs, difficult to personalize, and difficult to interact with.

Personally, I am not enthusiastic about having a robot live in my house and report on what I do to social care workers. But I take Caleb-Solly's point when she says, "We need smart solutions that can deal with supporting a healthy lifestyle of quality". That ceiling-hoist robot is part of a modular system that can add functions and facilities as people's needs and capacity change over time.

In movies and TV shows, robot assistants are humanoids, but that future is too far away to help the onrushing 4.8 million. Today's care-oriented robots have biological, but not human, inspirations: the PARO seal, or Pepper, which Caleb-Solly's lab likes because it's flexible and certified for experiments in people's homes. You may wonder what intelligence, artificial or otherwise, a walker needs, but given sensors and computational power the walker can detect how its user is holding it, how much weight it's bearing, whether the person's balance is changing, and help them navigate. I begin to relax: this sounds reasonable. And then she says, "Information can be conveyed to the carer team to assess whether something changed and they need more help," and I close down with suspicion again. That robot wants to rat me out.

There's a simple fix for that: assume the person being cared for has priorities and agency of their own, and have the robot alert them to the changes and let them decide what they want to do about it. That approach won't work in all situations; there are real issues surrounding cognitive decline, fear, misplaced pride, and increasing multiple frailties that make self-care a heavy burden. But user-centered design can't merely mean testing the device with real people with actual functional needs; the concept must extend to ownership of data and decision-making. Still, the robot walker in Caleb-Solly's lab taught her how to waltz. That has to count for something.

The project - CHIRON, for Care at Home using Intelligent Robotic Omni-functional Nodes - is a joint effort between Three Sisters Care, Caleb-Solly's lab, and Shadow Robot, and funded with £2 million over two years by Innovate UK.

Shadow Robot was the magnet that brought me here. One of the strangest and most eccentric stories in an already strange and eccentric field, Shadow began circa 1986, when the photographer Richard Greenhill was becalmed on a ship with nothing to do for several weeks but read the manual for the Sinclair ZX 81. His immediate thought: you could control a robot with one of those! His second thought: I will build one.

By 1997, Greenhill's operation was a band of volunteers meeting every week in a north London house filled with bits of old wire and electronics scrounged from junkyards. By then, Greenhill had most of a hominid with deceptively powerful braided-cloth "air muscles". By my next visit, in 2009, former volunteer Rich Walker had turned Shadow into a company selling a widely respected robot hand, whose customers include NASA, MIT, and Carnegie-Mellon. Improbably, the project begun by the man with no degrees, no funding, and no university affiliation has outlasted numerous more famous efforts filled with degree-bearing researchers who used up their funding, published, and disbanded. And now it's contributing robotics research expertise to CHIRON.

On Tuesday, Greenhill was eagerly outlining a future in which we can all build what we need and everyone can live for free. Well, why not?


Illustrations: Praminda Caleb-Solly presenting on Tuesday (Kois Miah); Pepper; Richard Greenhill demonstrating his personally improved scooter.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 9, 2018

Escape from model land

"Models are best for understanding, but they are inherently wrong," Helen Dacre said, evoking robotics engineer Bill Smart on sensors. Dacre was presenting a tool that combines weather forecasts, air quality measurements, and other data to help airlines and other stakeholders quickly assess the risk of flying after a volcanic eruption. In April 2010, when Iceland's Eyjafjallajökull blew its top, European airspace shut down for six days at an estimated overall cost of £1.1 billion. Since then, engine manufacturers have studied the effect of atmospheric volcanic ash on aircraft engines, and are finding that a brief excursion through peak levels of concentration is less damaging than prolonged exposure at lower levels. So, do you fly?

This was one of the projects presented at this week's conference of the two-year-old network Challenging Radical Uncertainty in Science, Society and the Environment (CRUISSE). To understand "radical uncertainty", start with Frank Knight, who in 1921 differentiated between "risk", where the outcomes are unknown but the probabilities are known, and uncertainty, where even the probabilities are unknown. Timo Ehrig summed this up as "I know what I don't know" versus "I don't know what I don't know", evoking Donald Rumsfeld's "unknown unknowns". In radical uncertainty decisions, existing knowledge is not relevant because the problems are new: the discovery of metal fatigue in airline jets; the 2008 financial crisis; social media; climate change. The prior art, if any, is of questionable relevance. And you're playing with live ammunition - real people's lives. By the million, maybe.

How should you change the planning system to increase the stock of affordable housing? How do you prepare for unforeseen cybersecurity threats? What should we do to alleviate the impact of climate change? These are some of the questions that interested CRUISSE founders Leonard Smith and David Tuckett. Such decisions are high-impact, high-visibility, with complex interactions whose consequences are hard to foresee.

It's the process of making them that most interests CRUISSE. Smith likes to divide uncertainty problems into weather and climate. With "weather" problems, you make many similar decisions based on changing input; with "climate" problems your decisions are either a one-off or the next one is massively different. Either way, with climate problems you can't learn from your mistakes: radical uncertainty. You can't reuse the decisions; but you *could* reuse the process by which you made the decision. They are trying to understand - and improve - those processes.

This is where models come in. This field has been somewhat overrun by a specific type of thinking they call OCF, for "optimum choice framework". The idea there is that you build a model, stick in some variables, and tweak them to find the sweet spot. For risks, where the probabilities are known, that can provide useful results - think cost-benefit analysis. In radical uncertainty...see above. But decision makers are tempted to build a model anyway. Smith said, "You pretend the simulation reflects reality in some way, and you walk away from decision making as if you have solved the problem." In his hand-drawn graphic, this is falling off the "cliff of subjectivity" into the "sea of self-delusion".

Uncertainty can come from anywhere. Kris de Meyer is studying what happens if the UK's entire national electrical grid crashes. Fun fact: it would take seven days to come back up. *That* is not uncertain. Nor are the consequences: nothing functioning, dark streets, no heat, no water after a few hours for anyone dependent on pumping. Soon, no phones unless you still have copper wire. You'll need a battery or solar-powered radio to hear the national emergency broadcast.

The uncertainty is this: how would 65 million modern people react in an unprecedented situation where all the essentials of life are disrupted? And, the key question for the policy makers funding the project, what should government say? *Don't* fill your bathtub with water so no one else has any? *Don't* go to the hospital, which has its own generators, to charge your phone?

"It's a difficult question because of the intention-behavior gap," de Meyer said. De Meyer is studying this via "playable theater", an effort that starts with a story premise that groups can discuss - in this case, stories of people who lived through the blackout. He is conducting trials for this and other similar projects around the country.

In another project, Catherine Tilley is investigating the claim that machines will take all our jobs. Tilley finds two dominant narratives. In one, jobs will change, not disappear, and automation will bring more of them, along with enhanced productivity and new wealth. In the other, we will be retired...or unemployed. The numbers in these predictions are very large, but conflicting, so they can't all be right. What do we plan for education and industrial policy? What investments do we make? Should we prepare for mass unemployment, and if so, how?

Tilley identified two common assumptions: tasks that can be automated will be; automation will be used to replace human labor. But interviews with ten senior managers who had made decisions about automation found otherwise. Tl;dr: sectoral, national, and local contexts matter, and the global estimates are highly uncertain. Everyone agrees education is a partial solution - "but for others, not for themselves".

Here's the thing: machines are models. They live in model land. Our future depends on escaping.


Illustrations: David Tuckett and Lenny Smith.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 25, 2018

The Rochdale hypothesis

First, open a shop. Thus the pioneers of Rochdale, Lancashire, began the process of building their town. Faced with the loss of jobs and income brought by the Industrial Revolution, a group of 28 people, about half of them weavers, designed the set of Rochdale principles, and set about finding £1 each to create a cooperative that sold a few basics. Ten years later, Wikipedia tells us, Britain was home to thousands of imitators: cooperatives became a movement.

Could Rochdale form the template for building a public service internet?

This was the endpoint of a day-long discussion held as part of MozFest and led by a rogue band from the BBC. Not bad, considering that it took us half the day to arrive at three key questions: What is public? What is service? What is internet?

Pause.

To some extent, the question's phrasing derives from the BBC's remit as a public service broadcaster. "Public service" is the BBC's actual mandate; broadcasting, the activity it's usually identified with, is only the means by which it fulfills that mission. There might be - are - other choices. To educate, to inform, to entertain: those are its mandate. None of them says radio or TV.

Probably most of the BBC's many global admirers don't realize how broadly the BBC has interpreted that. In the 1980s, it commissioned a computer - the Acorn, which spawned ARM, whose chips today power smartphones - and a series of TV programs to teach the nation about computing. In the early 1990s, it created a dial-up Internet Service Provider to help people get online. Some ten or 15 years ago I contributed to an online guide to the web for an audience with little computer literacy. This kind of thing goes way beyond what most people - for example, Americans - mean by "public broadcasting".

But, as Bill Thompson explained in kicking things off, although 98% of the public has some exposure to the BBC every week, the way people watch TV is changing. Two days later, the Guardian reported that the broadcasting regulator, Ofcom, believes the BBC is facing an "existential crisis" because the younger generation watches significantly less television. An eighth of young people "consume no BBC content" in any given week. When everyone can access the best of TV's back catalogue on a growing array of streaming services, and technology giants like Netflix and Amazon are spending billions to achieve worldwide dominance, the BBC must change to find new relevance.

So: the public service Internet might be a solution. Not, as Thompson went on to say, the Internet to make broadcasting better, but the Internet to make *society* better. Few other organizations in the world could adopt such a mission, but it would fit the BBC's particular history.

Few of us are happy with the Internet as it is today. Mozilla's 2018 Internet Health Report catalogues problems: walled gardens, constant surveillance to exploit us by analyzing our data, widespread insecurity, and increasing censorship.

So, again: what does a public service Internet look like? What do people need? How do you avoid the same outcome?

"Code is law," said Thompson, citing Lawrence Lessig's first book. Most people learned from that book that software architecture could determine human behaviour. He took a different lesson: "We built the network, and we can change it. It's just a piece of engineering."

Language, someone said, has its limits when you're moving from rhetoric to tangible service. Canada, they said, renamed the Internet "basic service" - but it changed nothing. "It's still concentrated and expensive."

Also: how far down the stack do we go? Do we rewrite TCP/IP? Throw out the web? Or start from outside and try to blow up capitalism? Who decides?

At this point an important question surfaced: who isn't in the room? (All but about 30 of the world's population, but don't get snippy.) Last week, the Guardian reported that the growth of Internet access is slowing - a lot. UN data, to be published next month by the Web Foundation, shows growth dropped from 19% in 2007 to less than 6% in 2017. The report estimates that it will be 2019, two years later than expected, before half the world is online, and large numbers may never get affordable access. Most of the 3.8 billion unconnected are rural poor, largely women, and they are increasingly marginalized.

The Guardian notes that many see no point in access. There's your possible starting point. What would make the Internet valuable to them? What can we help them build that will benefit them and their communities?

Last week, the New York Times suggested that conflicting regulations and norms are dividing the Internet into three: Chinese, European, and American. They're thinking small. Reversing the Internet's increasing concentration and centralization can't be done by blowing up the center, because it will fight back. But decentralizing by building cooperatively at the edges...that is a perfectly possible future consonant with its past, even if we can't really force clumps of hipsters to build infrastructure in former industrial towns by luring them there with cheap housing. Cue Thompson again: he thought of this before, and he can prove it: here's his 2000 manifesto on e-mutualism.

Building public networks in the many parts of Britain where access is a struggle...that sounds like a public service remit to me.

Illustrations: The Unity sculpture, commemorating the 150th anniversary of the Rochdale Pioneers (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 18, 2018

Not the new oil

"Does data age like fish or like wine?" the economist Diane Coyle asked last week, in a presentation at the new Ada Lovelace Institute. It was one of a long list of questions she suggested researchers need to answer. More important, the meeting generally asked, how can data best be used to serve the common good? The newly created institute is being set up to answer this sort of question.

This is a relatively new way of looking at things that has been building up over the last year or two - active rather than passive, social rather than economic, and requiring a different approach from traditional discussions of individual privacy. That might mean stewardship - management as a public good - rather than governance according to legal or quasi-legal rules; and a new paradigm for privacy, which for the last decades has been cast as an individual right rather than a social compact. As we have argued here before, it is long since time to change that last bit, a point made by Ivana Bartoletti, head of the data privacy and data protection practice for GemServ.

One of the key questions for Coyle, as an economist, is how to value data - hence the question about how it ages. In one effort, she tried to get price and volume statistics from cloud providers, and found no agreement on how they thought about their business or how they made the decision to build a new data center. Bytes are the easiest to measure - but that's not how they do it. Some thought about the number of data records, or computations per second, but these measures are insufficient without knowing the content.

"Forget 'the new oil'," she said; the characteristics are too different. Well, that's good news in a sense; if data is not the new oil then we don't have to be dinosaur bones or plankton. But given how many businesses have spent the last 20 years building their plans on the presumption that data *is* the new oil, getting them to change that view will be an uphill slog. Coyle appears willing to try: data, she said, is a public good, non-rivalrous in use, and, like many digital goods, with high fixed but low marginal costs. She went on to say, however, that personal data is not valuable, citing the small price you get if you divide Facebook's profits across its many users.

This is, of course, not really true, any more than you can decide between wine and fish: data's value depends on the beholder, the beholder's purpose, the context, and a host of other variables. The same piece of data may be valueless at times and highly valuable at others. A photograph of Brett Kavanaugh and Christine Blasey Ford on that bed in 1982, for example, would have been relatively valueless at the time, and yet be worth a fortune now, whether to suppress or to publish. The economic value might increase as long as it was kept secret - but diminish rapidly once it was made public, while the social value is zero while it's secret but huge if made public. As commodities go, data is weird. Coyle invoked Erwin Schrödinger: you don't know what you've got until you look at it. And even then, you have to keep looking as circumstances change.

That was the opening gambit, but a split rapidly surfaced in the panel, which also included Emma Prest, the executive director of DataKind. Prest and Bartoletti raised issues of consent and ethics, and data turned from a public good into a matter of human rights.

If you're a government or a large company focused on economic growth, then viewing data as a social good means wringing as much profit as you can out of it. That to date has been the direction, leading to amassing giant piles of the stuff and enabling both open and secret trades in surveillance and tracking. One often-proposed response is to apply intellectual property rights; the EU tried something like this in 1996 when it passed the Database Directive, generally unloved today, which gives organizations rights in the databases they compile. It doesn't give individuals property rights over "my" data. As tempting as IP rights might be, one problem is that a lot of data is collaboratively created. "My" medical record is a composite of information I have given doctors and their experience and knowledge-based interpretation. Shouldn't they get an ownership share?

Of course someone - probably a security someone - will be along shortly to point out that ethics, rights, and public goods are not things criminals respect. But this isn't about bad guys. Oil or not, data has always also been a source of power. In that sense, it's heartening to see that so many of these conversations - at the nascent Ada Lovelace Institute, at the St Paul's Institute (PDF), at the LSE, and at Data & Society, to name just a few - are taking place. If AI is about data, robotics is at least partly about AI in a mobile substrate. Eventually, these discussions of the shape of the future public sphere will be seen for what they are: debates over the future distribution of power. Don't tell Whitehall.


Illustrations: Ada Lovelace.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 11, 2018

Lost in transition

"Why do I have to scan my boarding card?" I demanded loudly of the machine that was making this demand. "I'm buying a thing of milk!"

The location was Heathrow Terminal 5. The "thing of milk" was a pint of milk being purchased with a view to a late arrival in a continental European city where tea is frequently offered with "Kaffeesahne", a thick, off-white substance that belongs with tea about as much as library paste does.

A human materialized out of nowhere, and typed in some codes. The transaction went through. I did not know you could do that.

The incident sounds minor - yes, I thanked her - but has a real point. For years, UK airport retailers secured discounts for themselves by demanding to scan boarding cards at the point of purchase while claiming the reason was to exempt the customers from VAT when they are taking purchases out of the country. Just a couple of years ago the news came out: the companies were failing to pass the resulting discounts on to customers and simply pocketing the VAT. Legally, you are not required to comply with the request.

They still ask, of course.

If you're dealing with a human retail clerk, refusing is easy: you say "No" and they move on to completing the transaction. The automated checkout (which I normally avoid), however, is not familiar with No. It is not designed for No. No is not part of its vocabulary unless a human comes along with an override code.

My legal right not to scan my boarding card therefore relies on the presence of an expert human. Take the human out of that loop - or overwhelm them with too many stations to monitor - and the right disappears, engineered out by automation and enforced by the time pressure of having to catch a flight and/or the limited resource of your patience.

This is the same issue that has long been machinified by DRM - digital rights management - and the locks it applies to commercially distributed content. The text of Alice in Wonderland is in the public domain, but wrap it in DRM and your legal rights to copy, lend, redistribute, and modify all vanish, automated out with no human to summon and negotiate with.

Another example: the discount railcard I pay for once a year is renewable online. But if you go that route, you are required to upload your passport, photo driver's license, or national ID card. None of these should really be necessary. If you renew at a railway station, you pay your money and get your card, no identification requested. In this example the automation requires you to submit more data and take greater risk than the offline equivalent. And, of course, when you use a website there's no human to waive the requirement and restore the status quo.

Each of these services is designed individually. There is no collusion, and yet the direction is uniform.

Most of the discussion around this kind of thing - rightly - focuses on clearly unjust systems with major impact on people's lives. The COMPAS recidivism algorithm, for example, is used to risk-assess the likelihood that a criminal defendant will reoffend. A ProPublica study found that the algorithm tended to produce biased results of two kinds: first, black defendants were more likely than white defendants to be incorrectly rated as high risk; second, white reoffenders were incorrectly classified as low-risk more often than black ones. Other such systems show similar biases, all for the same basic reason: decades of prejudice are baked into the training data these systems are fed. Virginia Eubanks, for example, has found similar issues in systems such as those that attempt to identify children at risk and that appear to see poverty itself as a risk factor.
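For readers who want the mechanics: disparate error rates of the kind ProPublica described can be computed with nothing fancier than counting. A minimal sketch, using invented toy data rather than ProPublica's dataset or methodology, might look like this:

    # Compare false positive and false negative rates across groups.
    # The records below are invented; a real audit would use thousands
    # of cases and check statistical significance.

    def error_rates(records):
        """Return {group: (false_positive_rate, false_negative_rate)}.

        False positive: rated high risk but did not reoffend.
        False negative: rated low risk but did reoffend.
        """
        rates = {}
        for group in {r["group"] for r in records}:
            rs = [r for r in records if r["group"] == group]
            no_reoffense = [r for r in rs if not r["reoffended"]]
            reoffended = [r for r in rs if r["reoffended"]]
            fpr = sum(r["high_risk"] for r in no_reoffense) / len(no_reoffense)
            fnr = sum(not r["high_risk"] for r in reoffended) / len(reoffended)
            rates[group] = (fpr, fnr)
        return rates

    records = [
        {"group": "A", "high_risk": True, "reoffended": False},
        {"group": "A", "high_risk": False, "reoffended": False},
        {"group": "A", "high_risk": False, "reoffended": False},
        {"group": "A", "high_risk": True, "reoffended": True},
        {"group": "B", "high_risk": False, "reoffended": False},
        {"group": "B", "high_risk": False, "reoffended": False},
        {"group": "B", "high_risk": False, "reoffended": True},
        {"group": "B", "high_risk": True, "reoffended": True},
    ]

    print(error_rates(records))  # group A: higher FPR; group B: higher FNR

In this toy example the two groups have identical overall accuracy, yet the errors fall on them in very different ways, which is exactly the asymmetry the study described.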

By contrast, the instances I'm pointing out seem smaller, maybe even insignificant. But the potential is that over time wide swathes of choices and rights will disappear, essentially automated out of our landscape. Any process can be gamed this way.

At a Royal Society meeting last year, law professor Mireille Hildebrandt outlined the risks of allowing the atrophy of governance through the text-driven law that today is negotiated in the courts. The danger, she warned, is that through machine deployment and "judgemental atrophy" it will be replaced with administration, overseen by inflexible machines that enforce rules with no room for contestability, which Hildebrandt called "the heart of the rule of law".

What's happening here is, as she said, administration - but it's administration in which our legitimate rights dissipate in a wave of "because we can" automated demands. There are many ways we willingly give up these rights already - plenty of people are prepared to give up anonymity in financial transactions by using all manner of non-cash payment systems, for example. But at least those are conscious choices from which we derive a known benefit. It's hard to see any benefit accruing from the loss of the right to object to unreasonable bureaucracy imposed upon us by machines designed to serve only their owners' interests.


Illustrations: "Kill all the DRM in the world within a decade" (via Wikimedia.).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 27, 2018

We know where you should live

In the memorable panel "We Know Where You Will Live" at the 1996 Computers, Freedom, and Privacy conference, the science fiction writer Pat Cadigan startled everyone, including fellow panelists Vernor Vinge, Tom Maddox, and Bruce Sterling, by suggesting that some time in the future insurance companies would levy premiums for "risk purchases" - beer, junk foods - in supermarkets in real time.

Cadigan may have been proved right sooner than she expected. Last week, John Hancock, a 156-year-old US insurance company, announced it would discontinue underwriting traditional life insurance policies. Instead, in future all its policies will be "interactive"; that is, they will come with the "Vitality" program, under which customers supply data collected by their wearable fitness trackers or smartphones. John Hancock promotes the program, which it says is already used by 8 million customers in 18 countries, as providing discounts and, in the company's characterization, a sort of second reward for "living healthy". In the company's depiction, everyone wins - you get lower premiums and a healthier life, and John Hancock gets your data, enabling it to make more accurate risk assessments and increase its efficiency.

Even then, Cadigan was not the only one with the idea that insurance companies would exploit the Internet and the greater availability of data. A couple of years later, a smart and prescient friend suggested that we might soon be seeing insurance companies offer discounts for mounting a camera on the hood of your car so they could mine the footage to determine blame when accidents occurred. This was long before smartphones and GoPros, but the idea of small, portable cameras logging everything goes back at least to 1945, when Vannevar Bush wrote As We May Think, an essay that imagined something a lot like the web, if you make allowances for storing the whole thing on microfilm.

This "interactive" initiative is clearly a close relative of all these ideas, and is very much the kind of thing University of Maryland professor Frank Pasquale had in mind when writing his book The Black Box Society. John Hancock may argue that customers know what data they're providing, so it's not all that black a box, but the reality is that you only know what you upload. Just like when you download your data from Facebook, you do not know what other data the company matches it with, what else is (wrongly or rightly) in your profile, or how long the company will keep penalizing you for the month you went bonkers and ate four pounds of candy corn. Surely it's only a short step to scanning your shopping cart or your restaurant meal with your smartphone to get back an assessment of how your planned consumption will be reflected in your insurance premium. And from there, to automated warnings, and...look, if I wanted my mother lecturing me in my ear I wouldn't have left home at 17.

There has been some confusion about how much choice John Hancock's customers have about providing their data. The company's announcement is vague about this. However, it does make some specific claims: Vitality policy holders so far have been found to live 13-21 years longer than the rest of the insured population; generate 30% lower hospitalization costs; take nearly twice as many steps as the average American; and "engage with" the program 576 times a year.

John Hancock doesn't mention it, but there are some obvious caveats about these figures. First of all, the program began in 2015. How does the company have data showing its users live so much longer? Doesn't that suggest that these users were living longer *before* they adopted the program? Which leads to the second point: the segment of the population that has wearable fitness trackers and smartphones tends to be more affluent (which tends to favor better health already) and more focused on their health to begin with (ditto). I can see why an insurance company would like me to "engage with" its program twice a day, but I can't see why I would want to. Insurance companies are not my *friends*.

At the 2017 Computers, Privacy, and Data Protection conference, one of the better panels discussed the future for the insurance industry in the big data era. For the insurance industry to make sense, it requires an element of uncertainty: insurance is about pooling risk. For individuals, it's a way of managing the financial cost of catastrophes. Continuously feeding our data into insurance companies so they can more precisely quantify the risk we pose to their bottom line will eventually mean a simple equation: being able to get insurance at a reasonable rate is a pretty good indicator you're unlikely to need it. The result, taken far enough, will be to undermine the whole idea of insurance: if everything is known, there is no risk, so what's the point? Betting on a sure thing is cheating in insurance just as surely as it is in gambling. In the panel, both Katja De Vries and Mireille Hildebrandt noted the sinister side of insurance companies acting as "nudgers" to improve our behavior for their benefit.
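The arithmetic of that unravelling is simple enough to sketch. A toy model, with invented risk classes and prices rather than any insurer's actual figures, shows what happens as the insurer's knowledge of individual risk sharpens:

    # Toy model: premiums as the insurer's risk estimates sharpen.
    # precision 0.0 = pure pooling (everyone pays the average expected loss);
    # precision 1.0 = perfect individual prediction (no pooling left).
    # All numbers are hypothetical.

    CLAIM_COST = 100_000
    TRUE_RISK = {"low": 0.01, "medium": 0.05, "high": 0.20}   # annual probability
    POPULATION = {"low": 70, "medium": 25, "high": 5}         # people per class

    def pooled_premium():
        """Population-average expected loss: what everyone pays under full pooling."""
        total = sum(TRUE_RISK[k] * POPULATION[k] * CLAIM_COST for k in POPULATION)
        return total / sum(POPULATION.values())

    def premium(risk_class, precision):
        """Blend pooled pricing with fully individualized pricing."""
        exact = TRUE_RISK[risk_class] * CLAIM_COST
        return (1 - precision) * pooled_premium() + precision * exact

    for precision in (0.0, 0.5, 1.0):
        prices = {k: round(premium(k, precision)) for k in TRUE_RISK}
        print(f"precision={precision}: {prices}")

At full precision, on these made-up numbers, the low-risk customers get a bargain and the high-risk customers face premiums equal to their own expected losses; for the people most likely to need a payout, that is barely insurance at all.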

So, less "We know where you will live" and more "We know where and how you *should* live."


Illustrations: Pat Cadigan (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 27, 2018

Think horses, not zebras

These two articles made a good pairing: Oscar Schwartz's critique of AI hype in the Guardian, and Jennings Brown's takedown of IBM's Watson in real-world contexts. Brown's tl;dr: "This product is a piece of shit," a Florida doctor reportedly told IBM in the leaked memos on which Gizmodo's story is based. "We can't use it for most cases."

Watson has had a rough ride lately: in August 2017 Brown catalogued mounting criticisms of the company and its technology; that June, MIT Technology Review did, too. All three agree: IBM's marketing has outstripped Watson's technical capability.

That's what Schwartz is complaining about: even when scientists make modest claims, media and marketing hype them to the hilt. As a result, instead of focusing on design and control issues such as how to encode social fairness into algorithms, we're reading Nick Bostrom's suggestion that an uncontrolled superintelligent AI would kill humanity in the interests of making paper clips or the EU's deliberation about whether robots should have rights. These are not urgent issues, and focusing on them benefits only vendors who hope we don't look too closely at what they're actually doing.

Schwartz's own first example is the Facebook chat bots that were intended to simulate negotiation-like conversations. Just a couple of days ago someone referred to this as bots making up their own language and cited it as an example of how close AI is to the Singularity. In fact, because they lacked the right constraints, they just made strange sentences out of normal English words. The same pattern is visible with respect to self-driving cars.

You can see why: wild speculation drives clicks - excuse me, monetized eyeballs - but understanding what's wrong with how most of us think about accuracy in machine learning is *mathy*. Yet understanding the technology's very real limits is crucial to making good decisions about it.

With medicine, we're all particularly vulnerable to wishful thinking, since sooner or later we all rely on it for our own survival (something machines will never understand). The UK in particular is hoping AI will supply significant improvements because of the vast amount of patient, that is, training, data the NHS has to throw at these systems. To date, however, medicine has struggled to use information technology effectively.

Attendees at We Robot have often discussed what happens when the accuracy of AI diagnostics outstrips that of human doctors. At what point does defying the AI's decision become malpractice? At this year's conference, Michael Froomkin presented a paper studying the unwanted safety consequences of this approach (PDF).

The presumption is that the AI system's ability to call on the world's medical literature on top of generations of patient data will make it more accurate. But there's an underlying problem that's rarely mentioned: the reliability of the medical literature these systems are built on. The true extent of this issue began to emerge in 2005, when John Ioannidis published a series of papers estimating that 90% of medical research is flawed. In 2016, Ioannidis told Retraction Watch that systematic reviews and meta-analyses are also being gamed because of the rewards and incentives involved.

The upshot is that it's more likely to be unclear, when doctors and AI disagree, where to point the skepticism. Is the AI genuinely seeing patterns and spotting things the doctor can't? (In some cases, such as radiology, apparently yes. But clinical trials and peer review are needed.) Does common humanity mean the doctor finds clues in the patient's behavior and presentation that an AI can't? (Almost certainly.) Is the AI neutral in ways that biased doctors may not be? Stories of doctors not listening to patients, particularly women, are legion. Yet the most likely scenario is that the doctor will be the person entering data - which means the machine will rely on the doctor's interpretation of what the patient says. In all these conflicts, what balance do we tell the AI to set?

Much sooner than Watson will cure cancer we will have to grapple with which AIs have access to which research. In 2015, the team responsible for drafting Liberia's ebola recovery plan in 2014 wrote a justifiably angry op-ed in the New York Times. They had discovered that thousands of Liberians could have been spared ebola had a 1982 paper for Annals of Virology been affordable for them to read; it warned that Liberia needed to be included in the ebola virus endemic zone. Discussions of medical AI to date appear to handwave this sort of issue, yet cost structures, business models, and use of medical research are crucial. Is the future open access, licensing and royalties, all-you-can-eat subscriptions?

The best selling point for AI is that its internal corpus of medical research can be updated a lot faster than doctors' brains can be. As David Epstein wrote at ProPublica in 2017, many procedures and practices become entrenched, and doctors are difficult to dissuade from prescribing them even when they've been found useless. In the US, he added, the 21st Century Cures Act, passed in December 2016, threatens to make all this worse by lowering standards of evidence.

All of these are pressing problems no medical AI can solve. The problem, as usual, is us.

Illustrations: Watson wins at Jeopardy (via Wikimedia)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 8, 2018

Block that metaphor

My favourite new term from this year's Privacy Law Scholars conference is "dishonest anthropomorphism". The term appeared in a draft paper written by Brenda Leung and Evan Selinger as part of a proposal for its opposite, "honest anthropomorphism". The authors' goal was to suggest a taxonomy that could be incorporated into privacy by design theory and practice, so that as household robots are developed and deployed they are less likely to do us harm. Not necessarily individual "harm" as in Isaac Asimov's Laws of Robotics, which tended to see robots as autonomous rather than as projections of their manufacturers into our personal space, therefore glossing over this more intentional and diffuse kind of deception. Pause to imagine that Facebook goes into making robots and you can see what we're talking about here.

"Dishonest anthropomorphism" derives from an earlier paper, Averting Robot Eyes by Margo Kaminski, Matthew Rueben, Bill Smart, and Cindy Grimm, which proposes "honest anthropomorphism" as a desirable principle in trying to protect people from the privacy problems inherent in admitting a robot, even something as limited as a Roomba, into your home. (At least three of these authors are regular attendees at We Robot since its inception in 2012.) That paper categorizes three types of privacy issues that robots bring: data privacy, boundary management, and social/relational.

The data privacy issues are substantial. A mobile phone or smart speaker may listen to or film you, but it has to stay where you put it (as Smart has memorably put it, "My iPad can't stab me in my bed"). Add movement and processing, and you have a roving spy that can collect myriad kinds of data to assemble an intimate picture of your home and its occupants. "Boundary management" refers to capabilities humans may not realize their robots have and therefore don't know to protect themselves against - thermal sensors that can see through walls, for example, or eyes that observe us even when the robot is apparently looking elsewhere (hence the title).

"Social/relational" refers to the our social and cultural expectations of the beings around us. In the authors' examples, unscrupulous designers can take advantage of our inclination to apply our expectations of other humans to entice us into disclosing more than we would if we truly understood the situation. A robot that mimics human expressions that we understand through our own muscle memory may be highly deceptive, inadvertently or intentionally. Robots may also be given the capability of identifying micro-reactions we can't control but that we're used to assuming go unnoticed.

A different session - discussing research by Marijn Sax, Natalie Helberger, and Nadine Bol - provided a worked example, albeit one without the full robot component. In other words: they've been studying mobile health apps. Most of these are obviously aimed at encouraging behavioral change - walk 10,000 steps, lose weight, do yoga. What the authors argue is that they are more aimed at effecting economic change than at encouraging health, an aspect often obscured from users. Quite apart from the wrongness of using an app marketed to improve your health as a vector for potentially unrelated commercial interests, the health framing itself may be questionable. For example, the famed 10,000 steps some apps push you to take daily has no evidence basis in medicine: the number was likely picked as a Japanese marketing term in the 1960s. These apps may also be quite rigid; in one case that came up during the discussion, an injured nurse found she couldn't adapt the app to help her follow her doctor's orders to stay off her feet. In other words, they optimize one thing, which may or may not have anything to do with health or even health's vaguer cousin, "wellness".

Returning to dishonest anthropomorphism, one suggestion was to focus on abuse rather than dishonesty; there are already laws that bar unfair practices and deception. After all, the entire discipline of user design is aimed at nudging users into certain behaviors and discouraging others. With more complex systems, even if the aim is to make the user feel good it's not simple: the same user will react differently to the same choice at different times. Deciding which points to single out in order to calculate benefit is as difficult as trying to decide where to begin and end a movie story, which the screenwriter William Goldman has likened to deciding where to cut a piece of string. The use of metaphor was harmless when we were talking desktops and filing cabinets; much less so when we're talking about a robot cat that closely emulates a biological cat and leads us into the false sense that we can understand it in the same way.

Deception is becoming the theme of the year, perhaps partly inspired by Facebook and Cambridge Analytica. It should be a good thing. It's already clear that neither the European data protection approach nor the US consumer protection approach will be sufficient in itself to protect privacy against the incoming waves of the Internet of Things, big data, smart infrastructure, robots, and AI. As the threats to privacy expand, the field itself must grow in new directions. What made these discussions interesting is that they're trying to figure out which ones.

Illustrations: Recreation of oldest known robot design (from the Ancient Greek Technology exhibition)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 1, 2018

The three IPs

Against last Friday's date history will record two major European events. The first, as previously noted, is the arrival into force of the General Data Protection Regulation, which is currently inspiring a number of US news sites to block Europeans. The second is the amazing Irish landslide vote to repeal the 8th amendment to the country's constitution, which barred legislators from legalizing abortion. The vote led the MEP Luke Ming Flanagan to comment that, "I always knew voters were not conservative - they're just a bit complicated."

"A bit complicated" sums up nicely most people's views on privacy; it captures perfectly the cognitive dissonance of someone posting on Facebook that they're worried about their privacy. As Merlin Erroll commented, terrorist incidents help governments claim that giving them enough information will protect you. Countries whose short-term memories include human rights abuses set their balance point differently.

The occasion for these reflections was the 20th birthday of the Foundation for Information Policy Research. FIPR head Ross Anderson noted on Tuesday that FIPR isn't a campaigning organization, "But we provide the ammunition for those who are."

Led by the late Caspar Bowden, FIPR was most visibly activist in the late 1990s lead-up to the passage of the now-replaced Regulation of Investigatory Powers Act (2000). FIPR in general and Bowden in particular were instrumental in making the final legislation less dangerous than it could have been. Since then, FIPR helped spawn the 15-year-old European Digital Rights and UK health data privacy advocate medConfidential.

Many speakers noted how little the debates have changed, particularly regarding encryption and surveillance. In the case of encryption, this is partly because mathematical proofs are eternal, and partly because, as Yes, Minister co-writer Antony Jay said in 2015, large organizations such as governments always seek to impose control. "They don't see it as anything other than good government, but actually it's control government, which is what they want." The only change, as Anderson pointed out, is that because today's end-to-end connections are encrypted, the push for access has moved to people's phones.

Other perennials include secondary uses of medical data, which Anderson debated in 1996 with the British Medical Association. Among significant new challenges, Anderson, like many others, noted the problems of safety and sustainability. The need to patch devices that can kill you changes our ideas about the consequences of hacking. How do you patch a car over 20 years? he asked. One might add: how do you stop a botnet of pancreatic implants without killing the patients?

We've noted here before that built infrastructure tends to attract more of the same. Today, said Duncan Campbell, 25% of global internet traffic transits the UK; Bude, Cornwall remains the critical node for US-EU data links, as in the days of the telegraph. As Campbell said, the UK's traditional position makes it perfectly placed to conduct global surveillance.

One of the most notable changes in 20 years: there were no fewer than two speakers whose open presence would once have been unthinkable: Ian Levy, the technical director of the National Cyber Security Centre, the defensive arm of GCHQ, and Anthony Finkelstein, the government's chief scientific advisor for national security. You wouldn't have seen them even ten years ago, when GCHQ was deploying its Mastering the Internet plan, known to us courtesy of Edward Snowden. Levy made a plea to get away from the angels versus demons school of debate.

"The three horsemen, all with the initials 'IP' - intellectual property, Internet Protocol, and investigatory powers - bind us in a crystal lattice," said Bill Thompson. The essential difficulty he was getting at is that it's not that organizations like Google DeepMind and others have done bad things, but that we can't be sure they haven't. Being trustworthy, said medConfidential's Sam Smith, doesn't mean you never have to check the infrastructure but that people *can* check it if they want to.

What happens next is the hard question. Onora O'Neill suggested that our shiny, new GDPR won't work, because it's premised on the no-longer-valid idea that personal and non-personal data are distinguishable. Within a decade, she said, new approaches will be needed. Today, consent is already largely a façade; true consent requires understanding and agreement.

She is absolutely right. Even today's "smart" speakers pose a challenge: where should my Alexa-enabled host post the privacy policy? Is crossing their threshold consent? What does consent even mean in a world where sensors are everywhere and how the data will be used and by whom may be murky. Many of the laws built up over the last 20 years will have to be rethought, particularly as connected medical devices pose new challenges.

One of the other significant changes will be the influx of new and numerous stakeholders whose ideas about what the internet is are very different from those of the parties who have shaped it to date. The mobile world, for example, vastly outnumbers us; the Internet of Things is being developed by Asian manufacturers from a very different culture.

It will get much harder from here, I concluded. In response, O'Neill was not content. It's not enough, she said, to point out problems. We must propose at least the bare bones of solutions.


Illustrations: 1891 map of telegraph lines (via Wikimedia)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


May 25, 2018

Who gets the kidney?

At first glance, Who should get the kidney? seemed more reasonable and realistic than MIT's Moral Machine.

To recap: about a year ago, MIT ran an experiment, a variation of the old trolley problem, in which it asked visitors in charge of a vehicle about to crash to decide which nearby beings (adults, children, pets) to sacrifice and which to save. Crash!

As we said at the time, people don't think like that. In charge of a car, you react instinctively to save yourself, whoever's in the car with you, and then try to cause the least damage to everything else. Plus, much of the information the Moral Machine imagined - this stick figure is a Nobel prize-winning physicist; this one is a sex offender - just is not available to a car driver in a few seconds and even if it were, it's cognitive overload.

So, the kidney: at this year's We Robot, researchers offered us a series of 20 pairs of kidney recipients and a small selection of factors to consider: age, medical condition, number of dependents, criminal convictions, drinking habits. And you pick. Who gets the kidney?

Part of the idea as presented is that these people have a kidney available to them but it's not a medical match, and therefore some swapping needs to happen to optimize the distribution of kidneys. This part, which made the exercise sound like a problem AI could actually solve, is not really incorporated into the tradeoffs you're asked to make. Shorn of this ornamentation, Who Gets the Kidney? is a simple and straightforward question of whom to save. Or, more precisely, who in future will prove to have deserved to have been given this second chance at life? You are both weighing the value of a human being as expressed through a modest set of known characteristics and trying to predict the future. In this, it is no different from some real-world systems, such as the benefits and criminal justice systems Virginia Eubanks studies in her recent book, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor.
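The swapping part, at least, really is the kind of problem algorithms handle well: paired kidney exchange is a matching problem. A minimal sketch of the simplest case, two-way swaps between incompatible patient-donor pairs, might look like this; the pairs and compatibility data are invented, and this is not any registry's actual algorithm:

    # Toy paired-exchange matcher: each patient has a willing but
    # incompatible donor; find pairs of pairs that can swap donors.
    # Real programs search for longer cycles and chains and optimize
    # the total number of transplants.

    from itertools import combinations

    # pair -> set of pairs whose donor is compatible with this pair's patient
    COMPATIBLE_WITH = {
        "pair1": {"pair2", "pair4"},
        "pair2": {"pair1"},
        "pair3": {"pair4"},
        "pair4": {"pair3"},
    }

    def two_way_swaps(compat):
        """Return all mutually compatible pairs (simple two-way exchanges)."""
        return [(a, b) for a, b in combinations(compat, 2)
                if b in compat[a] and a in compat[b]]

    print(two_way_swaps(COMPATIBLE_WITH))
    # [('pair1', 'pair2'), ('pair3', 'pair4')] - two swaps, four transplants

None of which, of course, tells you anything about who deserves a kidney; the matching is the easy part.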

I found, as did the others in our group, that decision fatigue sets in very quickly. In this case, the goal - to use the choices to form like-minded discussion clusters of We Robot attendees - was not life-changing, and many of us took the third option, flipping a coin.

At my table, one woman felt strongly that the whole exercise was wrong; she embraced the principle that all lives are of equal value. Our society often does not treat them that way, and one reason is obvious: most people, put in charge of a kidney allocation system, want things arranged so that if they themselves need one, they will get one.

Instinct isn't always a good guide, either. Many people, used to thinking in terms of protecting children and old people as "they've had their chance at life", automatically opt to give the kidney to the younger person. Granted, I'm 64, and see above paragraph, but even so: as distressing as it is to the parents, a baby can be replaced very quickly with modest effort. It is *very* expensive and time-consuming to replace an 85-year-old. It may even be existentially dangerous, if that 85-year-old is the one holding your society's institutional memory. A friend advises that this is a known principle in population biology.

The more interesting point, to me, was discovering that this exercise really wasn't any more lifelike than the moral machine. It seemed more reasonable because unlike the driver in the crashing car, kidney patients have years of documentation of their illness and there is time for them, their families, and their friends to fill in further background. The people deciding the kidney's destination are much better informed, and they are operating in the all-too-familiar scenario of allocating scarce resources. And yet: it's the same conundrum, and in the end how many of us want the machine, rather than a human, to decide whether we live or die?

Someone eventually asked: what if we become able to make an oversupply of kidneys? This only solves the top layer of the problem. Each operation has costs in surgeons' time, medical equipment, nursing care, and hospital infrastructure. Absent a disruptive change in medical technology, it's hard to imagine it will ever be easy to give a kidney to everyone who needs one. Put it in terms of food: we actually do grow enough food to supply everyone, but it's not evenly distributed, so in some areas we have massive waste and in others horrible famine (and in some places, both).

Moving to current practice, in a Guardian article Eubanks documents the similar conundrums confronting those struggling to allocate low-income housing, welfare, and other basic needs to poor people in the US in a time of government "austerity". The social workers, policy makers, and data scientists on these jobs have to make decisions that, like the kidney and driving examples, have life-or-death consequences. In this case, as Eubanks puts it, they decide who gets helped among "the most exploited and marginalized people in the United States". The automated systems Eubanks encounters do not lower barriers to programs as promised and, she writes, obscure the political choices that created these social problems in the first place. Automating the response doesn't change those.


Illustrations: Project screenshot.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 20, 2018

Deception

"Why are robots different?" 2018 co-chair Mark Lemley asked repeatedly at this year's We Robot. We used to ask this in the late 1990s when trying to decide whether a new internet development was worth covering. "Would this be a story if it were about telephones?" Tom Standage and Ben Rooney frequently asked at the Daily Telegraph.

The obvious answer is physical risk and our perception of danger. The idea that autonomously moving objects may be dangerous is deeply biologically hard-wired. A plant can't kill you if you don't go near it. Or, as Bill Smart put it at the first We Robot in 2012, "My iPad can't stab me in my bed." Autonomous movement fools us into thinking things are smarter than they are.

It is probably not much consolation to the driver of the crashed autopiloting Tesla or his bereaved family that his predicament was predicted two years ago at We Robot 2016. In a paper, Madeleine Elish called humans in these partnerships "Moral Crumple Zones" because, she argued, in a human-machine partnership the human takes all the pressure, like the crumple zone in a car.

Today, Tesla is fulfilling her prophecy by blaming the driver for not getting his hands onto the steering wheel fast enough when commanded. (Other prior art on this: Dexter Palmer's brilliant 2016 book Version Control.)

As Ian Kerr pointed out, the user's instructions are self-contradictory. The marketing brochure uses the metaphors "autopilot" and "autosteer" to seduce buyers into envisioning a ride of relaxed luxury while the car does all the work. But the legal documents and user manual supplied with the car tell you that you can't rely on the car to change lanes, and you must keep your hands on the wheel at all times. A computer ingesting this would start smoking.

Granted, no marketer wants to say, "This car will drive itself in a limited fashion, as long as you watch the road and keep your hands on the steering wheel." The average consumer reading that says, "Um...you mean I have to drive it?"

The human as moral crumple zone also appears in analyses of the Arizona Uber crash. Even-handedly, Brad Templeton points plenty of blame at Uber and its decisions: the car's LIDAR should have spotted the pedestrian crossing the road in time to stop safely. He then writes, "Clearly there is a problem with the safety driver. She is not doing her job. She may face legal problems. She will certainly be fired." And yet humans are notoriously bad at the job required of her: monitor a machine. Safety drivers are typically deployed in pairs to split the work - but also to keep each other attentive.

The larger We Robot discussion was in part about public perception of risk, based on a paper (PDF) by Aaron Mannes that discussed how easy it is to derail public trust in a company or new technology when statistically less-significant incidents spark emotional public outrage. Self-driving cars may in fact be safer overall than human drivers despite the fatal crash in Arizona; among the examples Mannes also mentioned were Three Mile Island, which made the public much more wary of nuclear power, and the Ford Pinto, which spent the 1970s occasionally catching fire.

Mannes suggested that if you have that trust relationship you may be able to survive your crisis. Without it, you're trying to win the public over on "Frankenfoods".

So much was funnier and more light-hearted seven years ago, as a long-time attendee pointed out; the discussions have darkened steadily year by year as theory has become practice and we can no longer think the problems are as far away as the Singularity.

In San Francisco, delivery robots cause sidewalk congestion and make some homeless people feel surveilled; in Chicago and Durham we risk embedding automated unfairness into criminal justice; the egregious extent of internet surveillance has become clear; and the world has seen its first self-driving car road deaths. The last several years have been full of fear about the loss of jobs; now the more imminent dragons are becoming clearer. Do you feel comfortable in public spaces when there's a mobile unit pointing some of its nine cameras at you?

Karen Levy finds that truckers are less upset about losing their jobs than about automation invading their cabs, ostensibly for their safety. Sensors, cameras, and wearables that monitor them for wakefulness, heart health, and other parameters are painful and enraging to this group, who chose their job for its autonomy.

Today's drivers have the skills to step in; tomorrow's won't. Today's doctors are used to doing their own diagnostics; tomorrow's may not be. A paper by Michael Froomkin, Ian Kerr, and Joëlle Pineau (PDF) argues that automation may mean not only deskilling humans (doctors) but also freezing the knowledge base. Many hope that mining historical patient data will expose patterns that enable more accurate diagnostics and treatments. If the machines take over, where will the new approaches come from?

Worse, behind all that is sophisticated data manipulation for which today's internet is providing the prototype. When, as Woody Hartzog suggested, Rocco, your Alexa-equipped Roomba, rolls up to you, fakes a bum wheel, and says, "Daddy, buy me an upgrade or I'll die", will you have the heartlessness to say no?

Illustrations: Pepper and handler at We Robot 2016.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


April 14, 2018

Late, noisy, and wrong

"All sensors are terrible," Bill Smart and Cindy Grimm explained as part of a pre-conference workshop at this year's We Robot. Smart, an engineer at Oregon State with prior history here, loves to explain why robots and AI aren't as smart as people think. "Just a fancy hammer," he said the first year.

Thursday's target was broad: the reality of sensors, algorithms, and machine learning.

One of his slides read:


  • It's all just math and physics.

  • There is no intelligence.

  • It's just a computer program.

  • Sensors turn physics into numbers.

That last one is the crucial bit, and it struck me as surprising only because in all the years I've read about and glibly mentioned sensors and how many there are in our phones, they've never really been explained to me. I'm not an electrical engineering student, so like most of us, I wave around the words. Of course I know that digital means numbers, and that computers do calculations with numbers, not fuzzy things like light and sound, and that therefore the camera in my phone (which is a sensor) stores values describing light levels rather than capturing light the way analogue film did. But I don't - or didn't until Thursday - really know what sensors actually measure. For most purposes, it's OK that my understanding is...let's call it abstract. But it does make it easy to overestimate what the technology can do now and how soon it will be able to fulfil the fantasies of mad scientists.

Smart's point is that when you start talking about what AI can do - whether or not you're using my aspirational intelligence recasting of the term - you'd better have some grasp of what it really is. It means the difference between a blob on the horizon that can be safely ignored and a woman pushing a bicycle across a roadway in front of an oncoming LIDAR-equipped Uber self-driving car.

So he begins with this: "All sensors are terrible." We don't use better ones because either such a thing does not exist or because they're too expensive. They are all "noisy, late, and wrong" and "you can never measure what you want to."

What we want to measure are things like pressure, light, and movement, and because we imagine machines as analogues of ourselves, we want them to feel the pressure, see the light, and understand the movement. However, what sensors can measure is electrical current. So we are always "measuring indirectly through assumptions and physics". This is the point AI Weirdness makes too, more visually, by showing what happens when you apply a touch of surrealism to the pictures you feed through machine learning.

He described what a sensor does this way: "They send a ping of energy into the world. It interacts, and comes back." In the case of LIDAR - he used a group of humans to enact this - a laser pulse is sent out, and the time it takes to return is a number of oscillations of a crystal. This has some obvious implications: you can't measure anything shorter than one oscillation.
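To make that concrete, here is a minimal Python sketch of the arithmetic involved; the 100 MHz clock rate is invented for illustration, and no real LIDAR firmware is anywhere near this simple.

    # A toy sketch of how a time-of-flight sensor turns "physics into numbers":
    # it can only count whole oscillations of its timing crystal, so the
    # distance it reports is quantized, late, and a little bit wrong.

    SPEED_OF_LIGHT = 299_792_458.0   # metres per second
    CLOCK_HZ = 100e6                 # hypothetical 100 MHz timing crystal

    def distance_from_ticks(ticks: int) -> float:
        """Convert a whole number of clock ticks into a distance estimate."""
        round_trip_seconds = ticks / CLOCK_HZ
        return SPEED_OF_LIGHT * round_trip_seconds / 2  # halve: out and back

    for ticks in range(1, 5):
        print(ticks, "ticks ->", round(distance_from_ticks(ticks), 2), "metres")

At this made-up clock rate one tick is 10 nanoseconds, so nothing closer together than about a metre and a half of range can be told apart - "you can't measure anything shorter than one oscillation", in code.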

Grimm explains that a "time of flight" sensor like that is what cameras - back to old Kodaks - use to auto-focus. Smartphones are pretty good at detecting a cluster of pixels that looks like a face and using that to focus on. But now let's imagine it's being used in a knee-high robot on a sidewalk to detect legs. In an art installation Smart and Grimm did, they found that it doesn't work in Portland...because of all those hipsters wearing black jeans.

So there are all sorts of these artefacts, and we will keep tripping over them because most of us don't really know what we're talking about. With image recognition, the important thing to remember is that the sensor is detecting pixel values, not things - and a consequence of that is that we don't necessarily know *what* the system has actually decided is important and we can't guarantee what it might be recognizing. So turn machine learning loose on a batch of photos of Audis, and if they all happen to be photographed at the same angle the system won't recognize an Audi photographed at a different one. Teach a self-driving car all the roads in San Francisco and it still won't know anything about driving in Portland.

That circumscription is important. Train a machine learning system on a set of photos of Abraham Lincoln and a zebra fish, and you get a system that can't imagine it might be a cat. The computer - which, remember, is working with an array of numbers - looks at the numbers in the array and based on what it has identified as significant in previous runs makes the call based on what's closest. It's numbers in, numbers out, and we can't guarantee what it's "recognizing".
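A toy sketch makes the point; the "photos" below are just invented arrays of numbers, and real systems are far more elaborate, but the failure mode is the same: a model trained only on two classes will confidently pick one of them for *anything* you show it.

    import numpy as np

    rng = np.random.default_rng(0)
    # Pretend 8x8 greyscale images, flattened to length-64 arrays of numbers.
    lincoln_photos = rng.normal(loc=0.2, scale=0.05, size=(20, 64))
    zebrafish_photos = rng.normal(loc=0.8, scale=0.05, size=(20, 64))

    training_data = np.vstack([lincoln_photos, zebrafish_photos])
    labels = ["Lincoln"] * 20 + ["zebra fish"] * 20

    def classify(pixels: np.ndarray) -> str:
        """Return whichever training example is numerically closest - nothing more."""
        distances = np.linalg.norm(training_data - pixels, axis=1)
        return labels[int(np.argmin(distances))]

    cat_photo = rng.normal(loc=0.5, scale=0.05, size=64)  # neither class, really
    print(classify(cat_photo))  # still answers "Lincoln" or "zebra fish"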

A linguistic change would help make all this salient. LIDAR does not "see" the roadway in front of the car that's carrying it. Google's software does not "translate" language. Software does not "recognize" images. The machine does not think, and it has no gender.

So when Mark Zuckerberg tells Congress that AI will fix everything, consider those arrays of numbers that may interpret a clutch of pixels as Abraham Lincoln when what's there is a zebra fish...and conclude he's talking out of his ass.


Illustrations:

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


March 23, 2018

Aspirational intelligence

2001-hal.png"All commandments are ideals," he said. He - Steven Croft, the Bishop of Oxford - had just finished reading out to the attendees of Westminster Forum's seminar (PDF) his proposed ten commandments for artificial intelligence. He's been thinking about this on our behalf: Croft malware writers not to adopt AI enhancements. Hence the reply.

The first problem is: what counts as AI? Anders Sandberg has quipped that it's only called AI until it starts working, and then it's called automation. Right now, though, to many people "AI" seems to mean "any technology I don't understand".

Croft's commandment number nine seems particularly ironic: this week saw the first pedestrian killed by a self-driving car. Early guesses are that the likely weakest links were the underemployed human backup driver and the vehicle's faulty LIDAR interpretation of a person walking a bicycle. Whatever the jaywalking laws are in Arizona, most of us instinctively believe that in a cage match between a two-ton automobile and an unprotected pedestrian the car is always the one at fault.

Thinking locally, self-driving cars ought to be the most ethics-dominated use of AI, if only because people don't like being killed by machines. Globally, however, you could argue that AI might be better turned to finding the best ways to phase out cars entirely.

We may have better luck persuading criminal justice systems either to require transparency, fairness, and accountability in the machine learning systems that predict recidivism and decide who can be helped, or to drop such systems entirely.

The less-tractable issues with AI are on display in the still-developing Facebook and Cambridge Analytica scandals. You may argue that Facebook is not AI, but the platform certainly uses AI in fraud detection, to determine what we see, and to decide which parts of our data to use on behalf of advertisers. All on its own, Facebook is a perfect exemplar of all the problems Australian privacy advocate Roger Clarke foresaw in 2004 after examining the first social networks. In 2012, Clarke wrote, "From its beginnings and onward throughout its life, Facebook and its founder have demonstrated privacy-insensitivity and downright privacy-hostility." The same could be said of other actors throughout the tech industry.

Yonatan Zunger is undoubtedly right when he argues in the Boston Globe that computer science has an ethics crisis. However, just fixing computer scientists isn't enough if we don't fix the business and regulatory environment built on "ask forgiveness, not permission". Matt Stoller writes in the Atlantic about the decline since the 1970s of American political interest in supporting small, independent players and limiting monopoly power. The tech giants have widely exported this approach; now, the only other government big enough to counter it is the EU.

The meetings I've attended of academic researchers considering ethics issues with respect to big data have demonstrated all the careful thoughtfulness you could wish for. The November 2017 meeting of the Research Institute in Science of Cyber Security provided numerous worked examples in talks from Kat Hadjimatheou at the University of Warwick, C Marc Taylor from the UK Research Integrity Office, and Paul Iganski of the Centre for Research and Evidence on Security Threats (CREST). Their explanations of the decisions they've had to make about the practical applications and cases that have come their way are particularly valuable.

On the industry side, the problem is not just that Facebook has piles of data on all of us but that the feedback loop from us to the company is indirect. Since the Cambridge Analytica scandal broke, some commenters have indicated that being able to do without Facebook is a luxury many can't afford and that in some countries Facebook *is* the internet. That in itself is a global problem.

Croft's is one of at least a dozen efforts to come up with an ethics code for AI. The Open Data Institute has its Data Ethics Canvas framework to help people working with open data identify ethical issues. The IEEE has published some proposed standards (PDF) that focus on various aspects of inclusion - language, cultures, non-Western principles. Before all that, in 2011, Danah Boyd and Kate Crawford penned Six Provocations for Big Data, which included a discussion of the need for transparency, accountability, and consent. The World Economic Forum published its top ten ethical issues in AI in 2016. Also in 2016, a Stanford University Group published a report trying to fend off regulation by saying it was impossible.

If the industry proves to be right and regulation really is impossible, it won't be because of the technology itself but because of the ecosystem that nourishes amoral owners. "Ethics of AI", as badly as we need it, will be meaningless if the necessary large piles of data to train it are all owned by just a few very large organizations and well-financed criminals; it's equivalent to talking about "ethics of agriculture" when all the seeds and land are owned by a child's handful of global players. A pre-emptive antitrust movement in 2018 would find a way to separate ownership of data from ownership of the AI, algorithms, and machine learning systems that work on them.


Illustrations: HAL.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 2, 2018

In sync

Until Wednesday, I was not familiar with the use of "sync" to stand for a music synchronization license - that is, a license to use a piece of music in a visual setting such as a movie, video game, or commercial. The negotiations involved can be Byzantine and very, very slow, in part because the music's metadata is so often wrong or missing. In one such case, described at Music 4.5's seminar on developing new deals and business models for sync (Flash), it took ten years to get the wrong answer from a label to the apparently simple question: who owns the rights to this track on this compilation album?

The surprise: this portion of the music business is just as frustrated as activists with the state of online copyright enforcement. They don't love the Digital Millennium Copyright Act (1998) any more than we do. We worry about unfair takedowns of non-infringing material and bans on circumvention tools; they hate that the Act's Safe Harbor grants YouTube and Facebook protection from liability as long as they remove content when told it's infringing. Google's automated infringement detection software, ContentID, I heard Wednesday, enables the "value gap", which the music industry has been fretting about for several years now because the sites have no motivation to create licensing systems. There is some logic there.

However, where activists want to loosen copyright, enable fair use, and restore the public domain, they want to dump Safe Harbor, either by developing a technological bypass, by changing the law, or by getting FaceTube to devise a fairer, more transparent revenue split. "Instagram," said one, "has never paid the music industry but is infringing copyright every day."

To most of us, "online music" means subscription-based streaming services like Spotify or download services like Amazon and iTunes. For many younger people, though, especially Americans, YouTube is their jukebox. Pex estimates that 84% of YouTube videos contain at least ten seconds of music. Google says ContentID matches 99.5% of those, and then they are either removed or monetized. But, Pex argues, 65% of those videos remain unclaimed and therefore provide no revenue. Worse, as streaming grows, downloads are crashing. There's a detectable attitude that if they can fix licensing on YouTube they will have cracked it for all sites hosting "creator-generated content".

It's a fair complaint that ContentID was built to protect YouTube from liability, not to enable revenues to flow to rights holders. We can also all agree that the present system means millions of small-time creators are locked out of using most commercial music. The dancing baby case took eight years to decide that the background existence of a Prince song in a 29-second home video of a toddler dancing was fair use. But sync, too, was designed for businesses negotiating with businesses. Most creators might indeed be willing to pay to legally use commercial music if licensing were quick, simple, and cheap.

There is also a question of whether today's ad revenues are sustainable; a graphic I can't find showed that the payout per view is shrinking. Bloomberg finds that increasingly the winning YouTubers take all, with little left for the very long tail.

The twist in the tale is this. MP3 players unbundled albums into songs as separate marketable items. Many artists were frustrated by the loss of control inherent in enabling mix tapes at scale. Wednesday's discussion heralded the next step: unbundling the music itself, breaking it apart into individual beats, phrases and bars, each licensable.

One speaker suggested scenarios. The "content" you want to enjoy is 42 minutes long but your commute is only 38 minutes. You might trim some "unnecessary dialogue" and rearrange the rest so now it fits! My reaction: try saying "unnecessary dialogue" to Aaron Sorkin and let's see how that goes.

I have other doubts. I bet "rearranging" will take longer than watching the four minutes. Speeding up the player slightly achieves the same result, and you can do that *now* for free. More useful was the suggestion that hearing-impaired people could benefit from being able to tweak the mix to fade the background noise and music in a pub scene to make the actors easier to understand. But there, too, we actually already have closed captions. It's clear, however, that the scenarios may be wrong, but the unbundling probably isn't.

In this world, we won't be talking about music, but "music objects". Many will be very low-value...but the value of the total catalogue might rise. The BBC has an experiment up already: The Mermaid's Tears, an "object-based radio drama" in which you can choose to follow any one of the three characters to experience the story.

Smash these things together, and you see a very odd world coming at us. It's hard to see how fair use survives a system that aims to license "music objects" rather than "music". In 1990, Pamela Samuelson warned about copyright maximalism. That agenda does not appear to have gone away.


Illustrations: King David dancing before the Ark of the Covenant, 'Maciejowski Bible', Paris ca. 1240 (via Discarding Images).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 16, 2018

Data envy

While we're all fretting about Facebook, Google, and the ecosystem of advertisers that track our every online move, many other methods for tracking each of us are on the rise, sprawling out across the cyber-physical continuum. You can see the world's retailers, transport authorities, and governments muttering, "Why should *they* have all the data?" CCTV was the first step, and it's a terrible role model. Consent is never requested; instead, where CCTV's presence is acknowledged it comes with "for your safety" propaganda.

People like the Center for Digital Democracy's Jeff Chester or security and privacy researcher Chris Soghoian have often exposed the many hidden companies studying us in detail online. At a workshop in 2011, they predicted much of 2016's political interference and manipulation. They didn't predict that Russians would seek to interfere with Western democracies; but they did correctly foresee the possibility of individual political manipulation via data brokers and profiling. Was this, that workshop asked, one of the last moments at which privacy incursions could be reined in?

A listener then would have been introduced to companies like Acxiom and Xaxis, behind-the-scenes swappers of our data trails. Like Equifax, we do not have direct relationships with these companies, and as people said on Twitter during the Equifax breach, "We are their victims, not their customers".

At Freedom to Tinker, in September Steven Englehardt exposed the extent to which email has become a tracking device. Because most people use just one email address, it provides an easy link. HTML email is filled with third-party trackers that send requests to myriad third parties, which can then match the email address against other information they hold. Many mailing lists add to this by routing clicks on links through their servers to collect information about what you view, just like social media sites. There are ways around these things - ban your email client from loading remote content, view email as plain text, and copy the links rather than clicking on them. Google is about to make all this much worse by enabling programs to run within email messages. It is, as they say at TechCrunch, a terrible idea for everyone except Google: it means more ads, more trackers, and more security risks.
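For readers who have never seen one, here is a minimal, purely hypothetical Python sketch of why "load remote images" is the dangerous setting: the sender embeds a different image URL in every copy of the message, so the moment a mail client fetches that one-pixel image the tracker knows exactly which address opened it. The hostnames and addresses are invented.

    import uuid

    send_log = {}  # token -> recipient, kept on the sender's server

    def build_html_email(recipient: str, body_text: str) -> str:
        """Build an HTML message containing a unique, invisible tracking pixel."""
        token = uuid.uuid4().hex
        send_log[token] = recipient
        pixel = (f'<img src="https://tracker.example.net/open.gif?t={token}" '
                 'width="1" height="1">')
        return f"<html><body><p>{body_text}</p>{pixel}</body></html>"

    print(build_html_email("alice@example.org", "Our winter sale starts now!"))
    # When the invented URL above is later requested, the sender looks the
    # token up in send_log and records that this recipient opened the email.

Blocking remote content stops exactly this request from ever being made.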

In December, also at Freedom to Tinker, Gunes Acar explained that a long-known vulnerability in browsers' built-in password managers helps third parties track us. The browser memorizes your login details the first time you land on a website and enter them. Then, as you browse on the site to a non-login page, the third party plants a script with an invisible login form that your browser helpfully autofills. The script reads and hashes the email address, and sends it off to the mother ship, where it can be swapped and matched to other profiles with the same email address hash. Again, since people use the same one for everything and rarely change it, email addresses are exceptionally good connectors between browsing profiles, mobile apps, and devices. Ad blockers help protect against this; browser vendors and publishers could also help.
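The hashing step is what makes the leaked address so useful as a connector. Here is a toy Python sketch (the profiles and addresses are invented) of why two trackers that each hold only the hash can still join their dossiers without ever exchanging the address in the clear:

    import hashlib

    def email_hash(address: str) -> str:
        """Normalise and hash an email address into a stable identifier."""
        return hashlib.sha256(address.strip().lower().encode()).hexdigest()

    site_a_profiles = {email_hash("alice@example.org"): ["news", "running shoes"]}
    site_b_profiles = {email_hash("Alice@Example.org "): ["mortgage calculator"]}

    # The two sites saw the address formatted differently, but normalising
    # before hashing makes the identifiers match and the profiles mergeable.
    for h, interests in site_a_profiles.items():
        if h in site_b_profiles:
            print("linked profile:", interests + site_b_profiles[h])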

But these are merely extensions of the tracking we already have. Amazon Go's new retail stores rely on tracking customers throughout, noting not only what they buy but how long they stand in front of a shelf and what they pick up and put back. This should be no surprise: Recode predicted as much in 2015. Other retailers will copy this: why should online retailers have all the data?

Meanwhile, police in Wales have boasted about using facial recognition to arrest people, matching images of people of interest against both its database of 500,000 custody images and live CCTV feeds, while the New York Times warns that the technology's error rate spikes when the subjects being matched are not white and male. In the US, EFF reports that, according to researchers at Georgetown Law School, an estimated 117 million Americans are already in law enforcement facial recognition systems with little oversight.

We already knew that phones are tracked by their attempts to connect to passing wifi SSIDs; at last month's CPDP, the panel on physical tracking introduced targeted tracking using MAC addresses extracted via wifi connections. In many airports, said Future of Privacy Forum's Jules Polonetsky, Blip Systems deploys sensors to help with logistical issues such as traffic flow and queue management. In Cincinnati, says the company's website, these sensors help the Transportation Security Agency better allocate resources and provide smoother "passenger processing" (should you care to emerge flat and orange like American cheese).

Visitors to office buildings used to sign in with name, company, and destination; now, tablets demand far more detailed information with no apparent justification. Every system, as Informatica's Monica McDonnell explained at CPDP, is made up of dozens of subsystems, some of which may date to the 1960s, all running slightly different technologies that may or may not be able to link together the many pockets of information generated for each person.

These systems are growing much faster than most of us realize, and this is even before autonomous vehicles and the linkage of systems into smart cities. If the present state of physical tracking is approximately where the web was in 2000...the time to set the limits is now.


Illustrations: George Orwell's house at 22 Portobello Road, London.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 26, 2018

Bodies in the clouds

This year's Computers, Privacy, and Data Protection conference had the theme "The Internet of Bodies". I chaired the "Bodies in the Clouds" panel, which was convened by Lucie Krahulcova of Access Now, and this is something like what I may have said to introduce it.

The notion of "cyberspace" as a separate space derives from the early days of the internet, when most people outside of universities or large science research departments had to dial up and wait while modems mated to get there. Even those who had those permanent connections were often offline in other parts of their lives. Crucially, the people you met in that virtual land were strangers, and it was easy to think there were no consequences in real life.

In 2013, New America Foundation co-founder Michael Lind called cyberspace an idea that makes you dumber the moment you learn of it and begged us to stop believing the internet is a mythical place that governments and corporations are wrongfully invading. While I disagreed, I can see that those with no memory of those early days might see it that way. Today's 30-year-olds were 19 when the iPhone arrived, 18 when Facebook became a thing, 16 when Google went public, and eight when Netscape IPO'd. They have grown up alongside iTunes, digital maps, and GPS, surrounded online by everyone they know. "Cyberspace" isn't somewhere they go; online is just an extension of their phones or laptops.

And yet, many of the laws that now govern the internet were devised with the separate space idea in mind. "Cyberspace", unsurprisingly, turned out not to be exempt from the laws governing consumer fraud, copyright, defamation, libel, drug trafficking, or finance. Many new laws passed in this period are intended to contain what appeared to legislators with little online experience to be a dangerous new threat. These laws are about to come back to bite us.

At the moment there is still *some* boundary: we are aware that map lookups, video sites, and even Siri requests require online access to answer, just as we know when we buy a device like a "smart coffee maker" or a scale that tweets our weight that it's externally connected, even if we don't fully understand the consequences. We are not puzzled by the absence of online connections as we would be if the sun disappeared and we didn't know what an eclipse was.

Security experts had long warned that traditional manufacturers were not grasping the dangers of adding wireless internet connections to their products, and in 2016 they were proved right, when the Mirai botnet harnessed video recorders, routers, baby monitors, and CCTV cameras to deliver monster attacks on internet sites and service providers.

For the last few years, I've called this the invasion of the physical world by cyberspace. The cyber-physical construct of the Internet of Things will pose many more challenges to security, privacy, and data protection law. The systems we are beginning to build will be vastly more complex than the systems of the past, involving many more devices, many more types of devices, and many more service providers. An automated city parking system might have meters, license plate readers, a payment system, middleware gateways to link all these, and a wireless ISP. Understanding who's responsible when such systems go wrong or how to exercise our privacy rights will be difficult. The boundary we can still see is vanishing, as is our control over it.

For example, how do we opt out of physical tracking when there are sensors everywhere? It's clear that the Cookie Directive approach to consent won't work in the physical world (though it would give a new meaning to "no-go areas").

Today's devices are already creating new opportunities to probe previously inaccessible parts of our lives. Police have asked for data from Amazon Echos in an Arkansas murder case. In Germany, investigators used the suspect's Apple Health app while re-enacting the steps they believed he took and compared the results to the data the app collected at the time of the crime to prove his guilt.

A friend who buys and turns on an Amazon Echo is deemed to have accepted its privacy policy. Does visiting their home mean I've accepted it too? What happens to data about me that the Echo has collected if I am not a suspect? And if it controls their whole house, how do I get it to work after they've gone to bed?

At Privacy Law Scholars in 2016, Andrea Matwyshyn introduced a new idea: the Internet of Bodies, the theme of this year's CPDP. As she spotted then, the Internet of Bodies makes us dependent for our bodily integrity and ability to function on this hybrid ecosystem. At that first discussion of what I'm sure will be an important topic for many years to come, someone commented, "A pancreas has never reported to the cloud before."

A few weeks ago, a small American ISP sent a letter to warn a copyright-infringing subscriber that continuing to attract complaints would cause the ISP to throttle their bandwidth, potentially interfering with devices requiring continuous connections, such as CCTV monitoring and thermostats. The kind of conflict this suggests - copyright laws designed for "cyberspace" touching our physical ability to stay warm and alive in a cold snap - is what awaits us now.

Illustrations: Andrea Matwyshyn.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


December 1, 2017

Unstacking the deck

A couple of weeks ago, I was asked to talk to a workshop studying issues in decision-making in standards development organizations about why the consumer voice is important. This is what I think I may have said.

About a year ago, my home router got hacked thanks to a port deliberately left open by the manufacturer and documented (I now know) in somewhat vague terms on page 210 of a 320-page manual. The really important lesson I took from the experience was that security is a market failure: you can do everything right and still lose. The router was made by an eminently respectable manufacturer, sold by a knowledgeable expert, configured correctly, patched up to date, and yet still failed a basic security test. The underlying problem was that the manufacturer imagined that the port it left open would only ever be used by ISPs wishing to push updates to their customers and that ordinary customers would not be technically capable of opening the port when needed. The latter assumption is probably true, but the former is nonsense. No attacker says, "Oh, look, a hole! I wonder if we're allowed to use it." Consumers are defenseless against manufacturers who fail to understand this.

But they are also, as we have seen this year, defenseless against companies' changing business plans and models. In April, Google's Nest subsidiary decided to turn off devices made by Revolv, a company it bought in 2014 that made a smart home hub. Again, this is not a question of ending support for a device that continues to function, as would have happened any time in the past. The fact that the hub is controlled by an app means both the hardware and the software can be turned off when the company loses interest in the product. These are, as Arlo Gilbert wrote at Medium, devices people bought and paid for. Where does Google get the right, in Gilbert's phrasing, to "reach into your home and pull the plug"?

In August, sound system manufacturer Sonos offered its customers two choices: accept its new privacy policy, which requires customers to agree to broader and more detailed data collection, or watch your equipment decline in functionality as updates are no longer applied and possibly cease to function. Here, the issue appears to be that Sonos wants its speakers to integrate with voice assistants, and the company therefore must conform to privacy policies issued by upstream companies such as Amazon. If you do not accept, eventually you have an ex-sound system. Why can't you accept the privacy policy if and only if you want to add the voice assistant?

Finally, in November, Logitech announced it would end service and support for its Harmony Link devices in March 2018. This might have been a "yawn" moment except that "end of life" means "stop working". The company eventually promised to replace all these devices with newer Harmony Hubs, which can control a somewhat larger range of devices, but the really interesting thing is why it made the change. According to Ars Technica, Logitech did not want to renew an encryption certificate whose expiration will leave Harmony Link devices vulnerable to attacks. It was, as the linked blog posting makes plain, a business decision. For consumers and the ecologically conscientious, a wasteful one.

So, three cases where consumers, having paid money for devices in good faith, are either forced to replace them or accept being extorted for their data. In a world where even the most mundane devices are reconfigurable via software and receive updates over the internet, consumers need to be protected in new ways. Standards development organizations have a role to play in that, even if it's not traditionally been their job. We have accepted "Pay-with-data" as a tradeoff for "free" online; now this is "pay-with-data" as part of devices we've paid to buy.

The irony is that the internet was supposed to empower consumers by redressing the pricing information imbalance between buyers and sellers. While that has certainly happened, the incoming hybrid cyber-physical world will up-end that. We will continue to know a lot more about pricing than we used to, but connected software allows the companies that make the objects that clutter our homes to retain control of those items throughout their useful lives. In such a situation the power balance that applies is "Possession is nine-tenths of the law." And possession will no longer be measurable by the physical location of the object but by who has access to change what it does. Increasingly, that's not us. Consumers have no ability to test their cars for regulatory failures (VW) or know whether Uber is screwing the regulators or Uber drivers are screwing riders. This is a new imbalance of power we cannot fix by ourselves.

Worse, much of this will be invisible to us. All the situations discussed here eventually became visible. But I only found out about the hack on my router because I am eccentric enough to run my own mail server, and the spam my router sent got my outgoing email bounced when it caused an anti-spam service to blacklist my mail server. In the billion-object Internet of Things, such communications and many of their effects will primarily be machine-to-machine and hidden from human users, and the world will cease to function in odd, unpredictable ways.

Illustrations: John Tenniel's Alice, under attack by a pack of cards.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 23, 2017

Twister

"We were kids working on the new stuff," said Kevin Werbach. "Now it's 20 years later and it still feels like that."

Werbach was opening last weekend's "radically interdisciplinary" (Geoffrey Garrett) After the Digital Tornado, at which a roomful of internet policy veterans tried to figure out how to fix the internet. As Jaron Lanier showed last week, there's a lot of this where-did-we-all-go-wrong happening.

The Digital Tornado in question was a working paper Werbach wrote in 1997, when he was at the Federal Communications Commission. In it, Werbach sought to pose questions for the future, such as what the role of regulation would be around...well, around now.

Some of the paper is prescient: "The internet is dynamic precisely because it is not dominated by monopolies or governments." Parts are quaint now. Then, the US had 7,000 dial-up ISPs and AOL was the dangerous giant. It seemed reasonable to think that regulation was unnecessary because public internet access had been solved. Now, with minor exceptions, the US's four ISPs have carved up the country among themselves to such an extent that most people have only one ISP to "choose" from.

To that, Gigi Sohn, the co-founder of Public Knowledge, named the early mistake from which she'd learned: "Competition is not a given." Now, 20% of the US population still have no broadband access. Notably, this discussion was taking place days before current FCC chair Ajit Pai announced he would end the network neutrality rules adopted in 2015 under the Obama administration.

Everyone had a pet mistake.

Tim Wu, regarding decisions that made sense for small companies but are damaging now they're huge: "Maybe some of these laws should have sunsetted after ten years."

A computer science professor bemoaned the difficulty of auditing protocols for fairness now that commercial terms and conditions apply.

Another wondered if our mental image of how competition works is wrong. "Why do we think that small companies will take over and stay small?"

Yochai Benkler argued that the old way of reining in market concentration, by watching behavior, no longer works; we understood scale effects but missed network effects.

Right now, market concentration looks like Google-Apple-Microsoft-Amazon-Facebook. Rapid change has meant that the past Big Tech we feared would break the internet has typically been overrun. Yet we can't count on that. In 1997, market concentration meant AOL and, especially, desktop giant Microsoft. Brett Frischmann paused to reminisce that in 1997 AOL's then-CEO Steve Case argued that Americans didn't want broadband. By 2007 the incoming giant was Google. Yet, "Farmville was once an enormous policy concern," Christopher Yoo reminded; so was Second Life. By 2007, Microsoft looked overrun by Google, Apple, and open source; today it remains the third largest tech company. The garage kids can only shove incumbents aside if the landscape lets them in.

"Be Facebook or be eaten by Facebook", said Julia Powles, reflecting today's venture capital reality.

Wu again: "A lot of mergers have been allowed that shouldn't have been." On his list, rather than AOL and Time-Warner, cause of much 1999 panic, was Facebook and Instagram, which the Office of Fair Trading approvied OK because Facebook didn't have cameras and Instagram didn't have advertising. Unrecognized: they were competitors in the Wu-dubbed attention economy.

Both Bruce Schneier, who considered a future in which everything is a computer, and Werbach, who found early internet-familiar rhetoric hyping the blockchain, saw more oncoming gloom. Werbach noted two vectors: remediable catastrophic failures, and creeping recentralization. His examples of the DAO hack and the Parity wallet bug led him to suggest the concept of governance by design. "This time," Werbach said, adding his own entry onto the what-went-wrong list, "don't ignore the potential contributions of the state."

Karen Levy's "overlooked threat" of AI and automation is a far more intimate and intrusive version of Shoshana Zuboff's "surveillance capitalism"; it is already changing the nature of work in trucking. This resonated with Helen Nissenbaum's "standing reserves": an ecologist sees a forest; a logging company sees lumber-in-waiting. Zero hours contracts are an obvious human example of this, but look how much time we spend waiting for computers to load so we can do something.

Levy reminded that surveillance has a different meaning for vulnerable groups, linking back to Deirdre Mulligan's comparison of algorithmic decision-making in healthcare and the judiciary. The first is operated cautiously with careful review by trained professionals who have closely studied its limits; the second is off-the-shelf software applied willy-nilly by untrained people who change its use and lack understanding of its design or problems. "We need to figure out how to ensure that these systems are adopted in ways that address the fact that...there are policy choices all the way down," Mulligan said. Levy, later: "One reason we accept algorithms [in the judiciary] is that we're not the ones they're doing it to."

Yet despite all this gloom - cognitive dissonance alert - everyone still believes that the internet has been and will be positively transformative. Julia Powles noted, "The tornado is where we are. The dandelion is what we're fighting for - frail, beautiful...but the deck is stacked against it." In closing, Lauren Scholz favored a return to basic ethical principles following a century of "fallen gods" including really big companies, the wisdom of crowds, and visionaries.

Sohn, too, remains optimistic. "I'm still very bullish on the internet," she said. "It enables everything important in our lives. That's why I've been fighting for 30 years to get people access to communications networks."


Illustrations: After the Digital Tornado's closing panel (left to right): Kevin Werbach, Karen Levy, Julia Powles, Lauren Scholz; tornado (Justin1569 at Wikipedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 10, 2017

Regulatory disruption

The financial revolution due to hit Britain in mid-January has had surprisingly little publicity and has little to do with the money-related things making news headlines over the last few years. In other words, it's not a new technology, not even a cryptocurrency. Instead, this revolution is regulatory: banks will be required to open up access to their accounts to third parties.

The immediate cause of this change is two difficult-to-distinguish pieces of legislation, one UK-specific and one EU-wide. The EU piece is Payment Services Directive 2, which is intended to foster standards and interoperability in payments across Europe. In the UK, Open Banking requires the nine biggest retail banks to create APIs that, given customer consent, will give third parties certified by the Financial Conduct Authority direct access to customer accounts. Account holders have begun getting letters announcing new terms and conditions, although recipients report that the parts that refer to open banking and consent are masterfully vague.
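In practice "create APIs" means something like the following purely hypothetical Python sketch; the endpoint, fields, and token handling are invented for illustration, and the real interfaces are defined by the Open Banking standard and vary by bank.

    import requests

    ACCESS_TOKEN = "token-issued-after-customer-consent"  # placeholder value
    BASE_URL = "https://api.examplebank.co.uk/open-banking/v1"  # invented host

    def list_transactions(account_id: str) -> list:
        """Fetch transactions for one account using a customer-consented token."""
        response = requests.get(
            f"{BASE_URL}/accounts/{account_id}/transactions",
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
            timeout=10,
        )
        response.raise_for_status()
        return response.json().get("transactions", [])

A budgeting app certified by the FCA might call something of this shape against several banks and aggregate the results into a single dashboard - which is exactly the kind of innovation, and the kind of new data flow, the rest of this piece is about.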

As anyone attending the annual Tomorrow's Transactions Forum knows, open banking has been creeping up on us for the last few years. Consult Hyperion's Tim Richards has a good explanation of the story so far. At this year's event, Dave Birch, who has a blog posting outlining PSD2's background and context, noted that in China, where the majority of non-cash payments are executed via mobile, Alipay and Tencent are already executing billions of transactions a year, bypassing banks entirely. While the banks aren't thrilled about losing the transactions and their associated (dropping) revenue, the bigger issue is that they are losing the data and insight into their customers that traditionally has been exclusively theirs.

We could pick an analogy from myriad internet-disrupted sectors, but arguably the best fit is telecoms deregulation, which saw AT&T (in the US) and BT (in the UK) forced to open up their networks to competitors. Long distance revenues plummeted and all sorts of newcomers began leaching away their customers.

For banks, this story began the day Elon Musk's x.com merged with Peter Thiel's money transfer business to create the first iteration of Paypal so that anyone with an email address could send and receive money. Even then, the different approach of cryptocurrencies was the subject of experiments, but for most people the rhetoric of escaping government was less a selling point than being able to trade small sums with strangers who didn't take credit cards. Today's mobile payment users similarly don't care whether a bank is involved or not as long as they get their money.

Part of the point is to open up competition. In the UK, consumer-bank relationships tend to be lifelong, partly because so much of banking here has been automated for decades. For most people, moving their account involves not only changing arrangements for inbound payments like salary, but also all the outbound payments that make up a financial life. The upshot is to give the banks impressive customer lock-in, which the Competition and Markets Authority began trying to break with better account portability.

The larger point of Open Banking, however, is to drive innovation in financial services. Why, the reasoning goes, shouldn't it be easier to aggregate data from many sources - bank and other financial accounts, local transport, government benefits - and provide a dashboard to streamline management or automatically switch to the cheapest supplier of unavoidable services? At Wired, Rowland Manthorpe has a thorough outline of the situation and its many uncertainties. Among these are the impact on the banks themselves - will they become, as the project's leader and the telecoms analogy suggest, plumbing for the financial sector or will they become innovators themselves? Or, despite the talk of fintech startups, will the big winners be Google and Facebook?

The obvious concerns in all this are security and privacy. Few outside the technology sector understand what an API is; how do we explain it to the broad range of the population so they understand how to protect themselves? Assuming that start-ups emerge, what mechanisms will we have to test how well our data is secured or trace how it's being used? What about the potential for spoof apps that steal people's data and money?

It's also easy to imagine that "consent" may be more than ordinarily mangled, a problem a friend calls the "tendency to mandatory". It's easy to imagine that the companies to whom we apply for insurance, a loan, or a job may demand an opened gateway to account data as part of the approvals process, which is extortion rather than consent.

This is also another situation where almost all of "my" data inevitably involves exposing third parties, the other halves of our transactions who have never given consent for that to happen. Given access to a large enough percentage of the population's banking data, triangulation should make it possible to fill in a fair bit of the rest. Amazon already has plenty of this kind of data from its own customers; for Facebook and Google this must be an exciting new vista.

Understanding what this will all mean will take time. But it represents a profound change, not only in the landscape of financial services but in the area of technical innovation. This time, those fusty old government regulators are the ones driving disruption.


Illustrations: Northern Rock in 2007 (Dominic Alves); Dave Birch.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 3, 2017

Life forms

Would you rather be killed by a human or a machine?

At this week's Royal Society meeting on AI and Society, Chris Reed recounted asking this question of an audience in Singapore. They all picked the human, even though they knew it was irrational, because they thought at least they'd know *why*.

A friend to whom I related this had another theory: maybe they thought there was a chance they could talk the human killer out of it, whereas the machine would be implacable. It's possible.

My own theory pins this distaste for machine killing on a different, crucial underlying factor: a sense of shared understanding. The human standing over you with the axe or driving the oncoming bus may be a professional paid to dispatch you, a serial killer, an angry ex, or mentally ill, but they all have a personal understanding of what a human life means because they all have one they know they, too, will one day lose. The meaning of removing someone else's life is thoroughly embedded in all of us. Not having that is more or less the definition of a machine, or was until Philip K. Dick and his replicants. But there is no reason to assume that every respondent had the same reason.

Similarly, a commenter in the audience found similar responses to an Accenture poll he encountered on Twitter that inquired whether he would be in favor of AI making health decisions. When he checked the voting results, 69% had said no. Here again, the death of a patient by medical mistake keeps a human doctor awake at night (if television is to be believed), while to a machine it's a statistic, no matter how heavily weighted in its inner backpropagating neural networks.

These two anecdotes resonated because earlier, Marion Oswald had opened her talk by asking whether, like Peter Godfrey-Smith's observation of cephalopods, interacting with AI was the closest we can come to interacting with an intelligent alien. Arguably, unless the aliens are immortal, on issues of life and death we can actually expect to have more shared understanding with them, as per above, than with machines.

The primary focus of Oswald's talk was actually to discuss her work studying HART, an algorithmic model used by Durham Constabulary to decide whether offenders qualified for deferred prosecution and help with their problems. The study raises all sorts of questions we're going to have to consider over the coming years about the role of police in society.

These issues were somewhat taken up later by Mireille Hildebrandt, who warned of the risks of transforming text-driven law - the messy stuff centuries of court cases have contested and interpreted - into data-driven law. Allowing that to happen, she argued, transforms law into administration. "Contestability is the heart of the rule of law," she said. "There is more to the law than predictability and expedience." A crucial part of that is being able to test the system, and here Hildebrandt was particularly gloomy: although the systems that comb the legal corpus are currently being marketed as aids for lawyers, she views it as inevitable that at some point they will become replacements. Some time after that, the skills necessary to test the inner workings of these systems will have vanished from the systems' human owners' firms.

At the annual We Robot conference, a recurring theme is the hard edges of computer systems, an aspect Ellen Ullman examined closely in her 1997 book, Close to the Machine. In Bill Smart's example, the difference between 59.99 miles an hour and 60.01 miles an hour is indistinguishable to a human, but to a computer fitted with the right sensors the difference is a speeding ticket. An aspect of this that is insufficiently discussed is that all biological beings have some level of unpredictability. Robots and AI with far greater sensing precision than is available to humans will respond to changes we can't detect, making them appear less predictable, and therefore more intelligent, than they actually are. This is a deception we will have to learn to decode.
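
A toy sketch of that hard edge (my own illustration, not anything Smart presented; the 60 mph limit and the machine_verdict function are invented) shows how the bright line looks once it has to be written down as code:

```python
# A toy illustration of the hard edge: once the rule is code, the machine
# must pick one exact cutoff, and 59.99 and 60.01 fall on opposite sides
# of it even though no human observer could tell them apart.

SPEED_LIMIT_MPH = 60.0  # invented threshold, purely for illustration

def machine_verdict(measured_speed_mph: float) -> str:
    """A computer with precise enough sensors sees only the bright line."""
    return "speeding ticket" if measured_speed_mph > SPEED_LIMIT_MPH else "no offence"

for speed in (59.99, 60.01):
    print(speed, "->", machine_verdict(speed))
# A human would call both of these "about 60"; the code cannot.
```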

Already, machines that are billed as tools to aid human judgement are often much more trusted than they should be. Danielle Citron's 2006 paper Technological Due Process studied this in connection with benefits scoring systems in Texas and California, and found two problems. First, humans tended to trust the machine's decisions rather than apply their own judgement, a problem Hildebrandt referred to as "judgemental atrophy". Second, computer programmers are not trained lawyers, and are therefore not good at accurately translating legal text into decision-making systems. How do you express a fuzzy but widely understood and often-used standard like the UK's "reasonable person" in computer code? You'd have to define precisely the exact point at which "reasonable" abruptly flicks to "unreasonable".

Ultimately, Oswald came down against the "intelligent alien" idea: "These are people-made, and it's up to us to find the benefits and tackle the risks," she said. "Ignorance of mathematics is no excuse."

That determination rests on the notion that the people building AI systems and the people using them have shared values. We already know that's not true, but even so: I vote less alien than a cephalopod on everything but the fear of death.

Illustrations: Cephalopod (via Obsidian Soul); Marion Oswald.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 29, 2017

Ubersicht

If it keeps growing, every company eventually reaches a moment where this message arrives: it's time to grow up. For Microsoft, IBM, and Intel it was antitrust suits. Google's had the EU's €2.4 billion fine. For Facebook and Twitter, it may be abuse and fake news.

This week, it was Uber's turn, when Transport for London declined to renew Uber's license to operate. Uber's response was to apologize and promise to "do more" while urging customers to sign its change.org petition. At this writing, 824,000 have complied.

I can't see the company as a victim here. The "sharing economy" rhetoric of evil protectionist taxi regulators has taken knocks from the messy reality of the company's behavior and the Grade A jerkishness of its (now former) founding CEO, the controversial Travis Kalanick. The tone-deaf "Rides of Glory" blog post. The safety-related incidents that TfL complains the company failed to report because: PR. Finally, the clashes with myriad city regulators the company would prefer to bypass: currently, it's threatening to pull out of Quebec. Previously, both Uber and Lyft quit Austin, Texas for a year rather than comply with a law requiring driver fingerprinting. In a second London case, Uber is arguing that its drivers are not employees; SumOfUs begs to differ.

People who use Uber love Uber, and many speak highly of drivers they use regularly. In one part of their brains, Uber-loving friends advocate for social justice, privacy, and fair wages and working conditions; in the other, Uber is so cool, cheap, convenient, and clean, and the app tracks the cab in real time...and city transport is old, grubby, and slow. But we're not at the beginning of this internet thing any more, and we know a lot about what happens when a cute, cuddly company people love grows into a winner-takes-all behemoth the size of a nation-state.

A consideration beyond TfL's pay grade is that transport doesn't really scale, as Hubert Horan explains in his detailed analysis of the company's business model. Uber can't achieve new levels of cost savings and efficiency (as Amazon and eBay did) because neither the fixed costs of providing the service nor network externalities create them. More simply, predatory competition - that is, venture capitalists providing the large sums that allow Uber to undercut and put out of business existing cab firms (and potentially public transport) - is not sustainable unless all other options are killed off and Uber can then raise its prices.

Earlier this year, at a conference on autonomous vehicles, TfL's representative explained the problems it faces. London will grow from 8.6 million to 10 million people by 2025. On the tube, central zone trains are already running at near the safe frequency limit and space prohibits both wider and longer trains. Congestion will increase: trucks, cars, cabs, buses, bicycles, and pedestrians. All these interests - plus the thousands of necessary staff - need to be balanced, something self-interested companies by definition do not do. In Silicon Valley, where public transport is relatively weak, it may not be clearly understood how deeply a city like London depends on it.

At Wired UK, Matt Burgess says Uber will be back. When Uber and Lyft exited Austin, Texas rather than submit to a new law requiring them to fingerprint drivers, within a year state legislators had intervened. But that was several scandals ago, which is why I think that, this once, SorryWatch has it wrong: Uber's apology may be adequately drafted (as they suggest, minus the first paragraph), but the company's behaviour has been egregious enough to require clear evidence of active change. Uber needs a plan, not a PR campaign - and urging its customers to lobby for it does not suggest it's understood that.

At London Reconnections, John Bull explains the ins and outs of London's taxi regulation in fascinating detail. Bull argues that in TfL Uber has met a tech-savvy and forward-thinking regulator that is its own boss and too big to bully. Given that almost the only cost the company can squeeze is its drivers' compensation, what protections need to be in place? How does increasing hail-by-app taxi use fit into overall traffic congestion?

Uber is one of the very first of the new hybrid breed of cyber-physical companies. Bypassing regulators - asking forgiveness rather than permission - may have flown when the consequences were purely economic, but it can't be tolerated in the new era of convergence, in which the risks are physical. My iPhone can't stab me in my bed (as Bill Smart has memorably observed), but that's not true of these hybrids.

TfL will presumably focus on rectifying the four areas in its announcement. Beyond that, though, I'd like to see Uber pressed for some additional concessions. In particular, I think the company - and others like it - should be required to share their aggregate ride pattern data (not individual user accounts) with TfL to help the authority make better decisions for the benefit of all Londoners. As Tom Slee, the author of What's Yours Is Mine: Against the Sharing Economy, has put it, "Uber is not 'the future', it's 'a future'".


Illustrations: London skyline (by Mewiki); London black cab (Jimmy Barrett); Travis Kalanick (Dan Taylor).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 16, 2017

The ghost in the machine

Humans are a problem in decision-making. We have prejudices based on limited experience, received wisdom, weird personal irrationality, and cognitive biases psychologists have documented. Unrecognized emotional mechanisms shield us from seeing our mistakes.

Cue machine learning as the solution du jour. Many have claimed that crunching enough data will deliver unbiased judgements. These days, this notion is being debunked: the data the machines train on and analyze arrives pre-infected, as we created it in the first place, a problem Cathy O'Neil does a fine job of explaining in Weapons of Math Destruction. See also Data & Society and Fairness, Accountability, and Transparency in Machine Learning.

Patrick Ball, founding director of the Human Rights Data Analysis Group, argues, however, that there are worse underlying problems. HRDAG "applies rigorous science to the analysis of human rights violations around the world". It uses machine learning - currently, to locate mass graves in Mexico - but a key element of its work is "multiple systems estimation" to identify overlaps and gaps.

"Every kind of classification system - human or machine - has several kinds of errors it might make," he says. "To frame that in a machine learning context, what kind of error do we want the machine to make?" HRDAG's work on predictive policing shows that "predictive policing" finds patterns in police records, not patterns in occurrence of crime.

Media reports love to rate machine learning's "accuracy", typically implying the percentage of decisions where the machine's "yes" represents a true positive and its "no" means a true negative. Ball argues this is meaningless. In his example, a search engine that scans billions of web pages for "Wendy Grossman" can be accurate to .99999 because the vast supply of pages that don't mention me (true negatives) will swamp the results. The same is true of any machine system trying to find something rare in a giant pile of data - and it gets worse as the pile of data gets bigger, a problem net.wars has often described, in the context of data retention, as searching for a needle in a haystack by building bigger haystacks.

For any automated decision system, you can draw a 2x2 confusion matrix, like this:
                  actually yes      actually no
  predicted yes   true positive     false positive
  predicted no    false negative    true negative
"There are lots of ways to understand that confusion matrix, but the least meaningful of those ways is to look at true positives plus true negatives divided by the total number of cases and say that's accuracy," Ball says, "because in most classification problems there's an asymmetry of yes/no answers" - as above. A "94% accurate" model "isn't accurate at all, and you haven't found any true positives because these classifications are so asymmetric." This fact does make life easy for marketers, though: you can improve your "accuracy" just by throwing more irrelevant data at the model. "To lay people, accuracy sounds good, but it actually isn't the measure we need to know."

Unfortunately, there isn't a single measure: "We need to know at least two, and probably four. What we have to ask is, what kind of mistakes are we willing to tolerate?"

In web searches, we can tolerate taking a few seconds to scan 100 results and ignore the false positives. False negatives - pages missing that we wanted to see - are less acceptable. Machine learning uses "recall" for the fraction of the genuinely relevant items that turn up in the results, and "precision" for the fraction of the results that are genuinely relevant. The various ways the classifier can be set can be drawn as a curve. Human beings understand a single number better than tradeoffs; reporting accuracy then means picking a spot on the curve as the point to set the classifier. "But it's always going to be ridiculously optimistic because it will include an ocean of true negatives." This is true whether you're looking for 2,000 fraudulent financial transactions in a sea of billions daily, or finding a handful of terrorists in the general population. Recent attackers, from 9/11 to London Bridge 2017, have already been objects of suspicion, but forces rarely have the capacity to examine every such person, and before an attack there may be nothing to find. Retaining all that irrelevant data may, however, help forensic investigation.
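
The fraud example above can be used to show the two numbers that do carry information; the flag counts below are invented purely for illustration.

```python
# Precision: of everything the system flagged, how much was right?
# Recall:    of everything that was really there, how much did it find?
# Invented figures: 2,000 real frauds hiding among a billion transactions;
# the classifier flags 10,000 items and catches 1,800 of the real frauds.

tp, fp, fn, tn = 1_800, 8_200, 200, 1_000_000_000

precision = tp / (tp + fp)                   # 0.18 - most flags are false alarms
recall = tp / (tp + fn)                      # 0.90 - but it caught 90% of the frauds
accuracy = (tp + tn) / (tp + fp + fn + tn)   # ~0.99999 - tells us almost nothing

print(f"precision {precision:.2f}, recall {recall:.2f}, 'accuracy' {accuracy:.5f}")
# Moving the classifier's threshold trades one of the first two numbers
# against the other; that tradeoff is the curve a single figure hides.
```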

Where there are genuine distinguishing variables, the model will find the matches even given extreme asymmetry in the data. "If we're going to report in any serious way, we will come up with lay language around, 'we were trying to identify 100 people in a population of 20,000 and we found 90 of them.'" Even then, care is needed to be sure you're finding what you think. The classic example here is the US Army's trial using neural networks to find camouflaged tanks. The classifier fell victim to the coincidence that all the pictures with tanks in them had been taken on sunny days and all the pictures of empty forest on cloudy days. "That's the way bias works," Ball says.

The crucial problem is that we can't see the bias. In her book, O'Neil favors creating feedback loops to expose these problems. But these can be expensive and often can't be created - that's why the model was needed.

"A feedback loop may help, but biased predictions are not always wrong - but they're wrong any time you wander into the space of the bias," Ball says. In his example: say you're predicting people's weight given their height. You use one half of a data set to train a model, then plot heights and weights, draw a line, and use its slope and intercept to predict the other half. It works. "And Wired would write the story." Investigating when the model makes errors on new data shows the training data all came from Hong Kong schoolchildren who opted in, a bias we don't spot because getting better data is expensive, and the right answer is unknown.

"So it's dangerous when the system is trained on biased data. It's really, really hard to know when you're wrong." The upshot, Ball says, is that "You can create fair algorithms that nonetheless reproduce unfair social systems because the algorithm is fair only with respect to the training data. It's not fair with respect to the world."


Illustrations: Patrick Ball; confusion matrix (Jackverr); Cathy O'Neil (GRuban).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 30, 2012

Robot wars

Who'd want to be a robot right now, branded a killer before you've even really been born? This week, Huw Price, a philosophy professor, Martin Rees, an emeritus professor of cosmology and astrophysics, and Jaan Tallinn, co-founder of Skype and a serial speaker at the Singularity Summit, announced the founding of the Cambridge Project for Existential Risk. I'm glad they're thinking about this stuff.

Their intention is to build a Centre for the Study of Existential Risk. There are many threats listed in the short introductory paragraph explaining the project - biotechnology, artificial life, nanotechnology, climate change - but the one everyone seems to be focusing on is: yep, you got it, KILLER ROBOTS - that is, artificial general intelligences so much smarter than we are that they may not only put us out of work but reshape the world for their own purposes, not caring what happens to us. Asimov would weep: his whole purpose in creating his Three Laws of Robotics was to provide a device that would allow him to tell some interesting speculative, what-if stories and get away from the then standard fictional assumption that robots were eeeevil.

The list of advisors to the Cambridge project has some interesting names: Hermann Hauser, now in charge of a venture capital fund, whose long history in the computer industry includes founding Acorn and an attempt to create the first mobile-connected tablet (it was the size of a 1990s phone book, and you had to write each letter in an individual box to get it to recognize handwriting - just way too far ahead of its time); and Nick Bostrom of the Future of Humanity Institute at Oxford. The other names are less familiar to me, but it looks like a really good mix of talents, everything from genetics to the public understanding of risk.

The killer robots thing goes quite a way back. A friend of mine grew up in the time before television when kids would pay a nickel for the Saturday show at a movie theatre, which would, besides the feature, include a cartoon or two and the next chapter of a serial. We indulge his nostalgia by buying him DVDs of old serials such as The Phantom Creeps, which features an eight-foot, menacing robot that scares the heck out of people by doing little more than wave his arms at them.

Actually, the really eeeevil guy in that movie is the mad scientist, Dr Zorka, who not only creates the robot but also a machine that makes him invisible and another that induces mass suspended animation. The robot is really just drawn that way. But, like CSER, what grabs your attention is the robot.

I have a theory about this, developed over the last couple of months while working on a paper about complex systems, automation, and other computing trends: it's all to do with biology. We - and other animals - are pretty fundamentally wired to see anything that moves autonomously as more intelligent than anything that doesn't. In survival terms, that makes sense: the most poisonous plant can't attack you if you're standing out of reach of its branches. Something that can move autonomously can kill you - yet it is also more cuddly. Consider the Roomba versus a modern dishwasher. Counterintuitively, the Roomba is not the smarter of the two.

And so it was that on Wednesday, when Voice of Russia assembled a bunch of us for a half-hour radio discussion, the focus was on KILLER ROBOTs, not synthetic biology (which I think is a much more immediately dangerous field) or climate change (in which the scariest new development is the very sober, grown-up, businesslike this-is-getting-expensive report from the insurer Munich Re). The conversation was genuinely interesting, roaming from the mysteries of consciousness to the problems of automated trading and the 2010 flash crash. Pretty much everyone agreed that there really isn't sufficient evidence to predict a date at which machines might be intelligent enough to pose an existential risk to humans. You might be worried about self-driving cars, but they're likely to be safer than drunk humans.

There is a real threat from killer machines; it's just that it's not super-human intelligence or consciousness that's the threat here. Last week, Human Rights Watch and the International Human Rights Clinic published Losing Humanity: the Case Against Killer Robots, arguing that governments should act pre-emptively to ban the development of fully autonomous weapons. There is no way, that paper argues, for autonomous weapons (which the military wants so fewer of *our* guys have to risk getting killed) to distinguish reliably between combatants and civilians.

There were some good papers on this at this year's We Robot conference from Ian Kerr and Kate Szilagyi (PDF) and Markus Wegner.

From various discussions, it's clear that you don't need to wait for *fully* autonomous weapons to reach the danger point. In today's partially automated systems, the operator may be under pressure to make a decision in seconds, and "automation bias" means the human will most likely accept whatever the machine suggests - the military equivalent of clicking OK. The human in the loop isn't as much of a protection as we might hope against the humans designing these things. Dr Zorka, indeed.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series

October 19, 2012

Finding the gorilla

"A really smart machine will think like an animal," predicted Temple Grandin at last weekend's Singularity Summit. To an animal, she argued, a human on a horse often looks like a very different category of object than a human walking. That seems true; and yet animals also live in a sensory-driven world entirely unlike that of machines.

A day later, Melanie Mitchell, a professor of computer science at Portland State University, argued that analogies are key to human intelligence, producing landmark insights like comparing a brain to a computer (von Neumann) or evolutionary competition to economic competition (Darwin). This is true, although that initial analogy is often insufficient and may even be entirely wrong. A really significant change in our understanding of the human brain came with research by psychologists like Elizabeth Loftus showing that where computers retain data exactly as it was (barring mechanical corruption), humans improve, embellish, forget, modify, and partially lose stored memories; our memories are malleable and unreliable in the extreme. (For a worked example, see The Good Wife, season 1, episode 6.)

Yet Mitchell is obviously right when she says that much of our humor is based on analogies. It's a staple of modern comedy, for example, for a character to respond on a subject *as if* it were another subject (chocolate as if it were sex, a pencil dropping on Earth as if it were sex, and so on). Especially incongruous analogies: when Watson asks - in the video clip she showed - for the category "Chicks dig me" it's funny because we know that as a machine a) Watson doesn't really understand what it's saying, and b) Watson is pretty much the polar opposite of the kind of thing that "chicks" are generally imagined to "dig".

"You are going to need my kind of mind on some of these Singularity projects," said Grandin, meaning visual thinkers, rather than the mathematical and verbal thinkers who "have taken over". She went on to contend that visual thinkers are better able to see details and relate them to each other. Her example: the emergency generators at Fukushima located below the level of a plaque 30 feet up on the seawall warning that flood water could rise that high. When she talks - passionately - about installing mechanical overrides in the artificial general intelligences Singularitarians hope will be built one day soonish, she seems to be channelling Peter G. Neumann, who talks often about the computer industry's penchant for repeating the security mistakes of decades past.

An interesting sideline about the date of the Singularity: Oxford's Stuart Armstrong has studied these date predictions and concluded pretty much that, in the famed words of William Goldman, no one knows anything. Based on his study of 257 predictions collected by the Singularity Institute and published on its Web site, he concluded that most theories about these predictions are wrong. The dates chosen typically do not correlate with the age or expertise of the predictor or the date of the prediction. I find this fascinating: there's something like an 80 percent consensus that the Singularity will happen in five to 100 years.

Grandin's discussion of visual thinkers made me wonder whether they would be better or worse at spotting the famed invisible gorilla than most people. Spoiler alert: if you're not familiar with this psychological test, go now and watch the clip before proceeding. You want to say better - after all, spotting visual detail is what visual thinkers excel at - but what if the demands of counting passes are more all-consuming for them than for other types of thinkers? The psychologist Daniel Kahneman, participating by video link, talked about other kinds of bias but not this one. Would visual thinkers be more or less likely to engage in the common human pastime of believing we know something based on too little data and then ignoring new data?

This is, of course, the opposite of today's Bayesian systems, which make a guess and then refine it as more data arrives - almost the exact inverse of the humans Kahneman describes. So many of the developments we're seeing now rely on crunching masses of data (often characterized as "big" but often not *really* all that big) to find subtle patterns that humans never spot. Linda Avey, co-founder of the personal genome profiling service 23andMe, and John Wilbanks are both trying to provide services that will allow individuals to take control of and understand their personal medical data. Avey in particular seems poised to link in somehow to the data generated by seekers in the several-year-old quantified self movement.

This approach is so far yielding some impressive results. Peter Norvig, the director of research at Google, recounted both the company's work on recognizing cats and its work on building Google Translate. The latter's patchy quality seems more understandable when you learn that it was built by matching documents issued in multiple languages against each other and building up statistical probabilities. The former seems more like magic, although Slate points out that the computers did not necessarily pick out the same patterns humans would.

Well, why should they? Do I pick out the patterns they're interested in? The story continues...

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 5, 2012

The doors of probability

Mike Lynch has long been the most interesting UK technology entrepreneur. In 2000, he became Britain's first software billionaire. In 2011 he sold his company, Autonomy, to Hewlett-Packard for $10 billion. A few months ago, Hewlett-Packard let him escape back into the wild of Cambridge. We've been waiting ever since for hints of what he'll do next; on Monday, he showed up at NESTA to talk about his adventures with Wired UK editor David Rowan.

Lynch made his name and his company by understanding that the rule formulated in the mid-18th century by the English clergyman and mathematician Thomas Bayes could be applied to getting machines to understand unstructured data. These days, Bayes is an accepted part of the field of statistics, but for a couple of centuries anyone who embraced his ideas would have been unwise to admit it. That only started to change in the 1980s, when people began to realize the value of his ideas.

"The work [Bayes] did offered a bridge between two worlds," Lynch said on Monday: the post-Renaissance world of science, and the subjective reality of our daily lives. "It leads to some very strange ideas about the world and what meaning is."

As Sharon Bertsch McGrayne explains in The Theory That Would Not Die, Bayes was offering a solution to the inverse probability problem. You have a pile of encrypted code, or a crashed airplane, or a search query: all of these are effects; your problem is to find the most likely cause. (Yes, I know: to us the search query is the cause and the page of search results is the effect; but consider it from the computer's point of view.) Bayes' idea was to start with a 50/50 random guess and refine it as more data changes the probabilities in one direction or another. When you type "turkey" into a search engine it can't distinguish between the country and the bird; when you add "recipe" you increase the probability that the right answer is instructions on how to cook one.
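
A toy version of that update, with probabilities invented purely for illustration, shows the mechanism: multiply the prior by how likely each new word is under each hypothesis, then renormalise.

```python
# Invented numbers only: a naive Bayes-style update for the "turkey" query.
priors = {"country": 0.5, "bird": 0.5}         # the 50/50 starting guess

likelihood = {                                 # made-up word frequencies per sense
    "turkey": {"country": 0.9, "bird": 0.9},   # the word alone doesn't discriminate
    "recipe": {"country": 0.02, "bird": 0.6},  # "recipe" strongly favours the bird
}

def update(beliefs, word):
    """Bayes' rule: posterior is proportional to prior x likelihood, then renormalise."""
    unnorm = {h: p * likelihood[word][h] for h, p in beliefs.items()}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

beliefs = update(priors, "turkey")   # still roughly 50/50
beliefs = update(beliefs, "recipe")  # now about 97% "bird"
print(beliefs)
```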

Note, however, that search engines work on structured data: tags, text content, keywords, and metadata all going into building an index they can run over to find the hits. What Lynch is talking about is the stuff that humans can understand - raw emails, instant messages, video, audio - that until now has stymied the smartest computers.

Most of us don't really like to think in probabilities. We assume every night that the sun will rise in the morning; we call a mug a mug and not "a round display of light and shadow with a hole in it" in case it's really a doughnut. We also don't go into much detail in making most decisions, no matter how much we justify them afterwards with reasoned explanations. Even decisions that are in fact probabilistic - such as those of the electronic line-calling device Hawk-Eye used in tennis and cricket - we prefer to display as though they were infallible. We could, as Cardiff professor Harry Collins argued, take the opportunity to educate people about probability: the on-screen virtual reality animation could include an estimate of the margin for error, or the probability that the system is right (much the way IBM did in displaying Watson's winning Jeopardy answers). But apparently it's more entertaining - and sparks fewer arguments from the players - to pretend there is no fuzz in the answer.

Lynch believes we are just at the beginning of the next phase of computing, in which extracting meaning from all this unstructured data will bring about profound change.

"We're into understanding analog," he said. "Fitting computers to use instead of us to them." In addition, like a lot of the papers and books on algorithms I've been reading recently, he believes we're moving away from the scientific tradition of understanding a process to get an outcome and into taking huge amounts of data about outcomes and from it extracting valid answers. In medicine, for example, that would mean changing from the doctor who examines a patient, asks questions, and tries to understand the cause of what's wrong with them in the interests of suggesting a cure. Instead, why not a black box that says, "Do these things" if the outcome means a cured patient? "Many people think it's heresy, but if the treatment makes the patient better..."

At the beginning, Lynch said, the Autonomy founders thought the company could be worth £2 to £3 million. "That was our idea of massive back then."

Now, with his old Autonomy team, he is looking to invest in new technology companies. The goal, he said, is to find new companies built on fundamental technology whose founders are hungry and strongly believe that they are right - but are still able to listen and learn. The business must scale, requiring little or no human effort to service increased sales. With that recipe he hopes to find the germs of truly large companies - not the put-in-£10-million, sell-out-at-£80-million strategy he sees as most common, but multi-billion pound companies. The key is finding that fundamental technology, something where it's possible to pick a winner.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


August 31, 2012

Remembering the moon

"I knew my life was going to be lived in space," a 50-something said to me in 2009 on the anniversary of the moon landings, trying to describe the impact they had on him as a 12-year-old. I understood what he meant: on July 20, 1969, a late summer Sunday evening in my time zone, I was 15 and allowed to stay up late to watch; awed at both the achievement and the fact that we could see it live, we took Polaroid pictures (!) of the TV image showing Armstrong stepping onto the Moon's surface.

The science writer Tom Wilkie remarked once that the real impact of those early days of the space program was the image of the Earth from space, that it kicked off a new understanding of the planet as a whole, fragile ecosystem. The first Earth Day was just nine months later. At the time, it didn't seem like that. "We landed on the moon" became a sort of yardstick; how could we put a man on the moon yet be unable to fix a bicycle? That sort of thing.

To those who've grown up always knowing we landed on the moon in ancient times (that is, before they were born), it's hard to convey what a staggering moment of hope and astonishment that was. For one thing, it seemed so improbable and it happened so fast. In 1962, President Kennedy promised to put a man on the moon by the end of the decade - and it happened, even though he was assassinated. For another, it was the science fiction we all read as teens come to life. Surely the next steps would be other planets, greater access for the rest of us. Wouldn't I, in my lifetime, eventually be able also to look out the window of a vehicle in motion and see the Earth getting smaller?

Probably not. Many years later, I was on the receiving end of a rant from an English friend about the wasteful expense of sending people into space when unmanned spacecraft could do so much more for so much less money. He was, of course, right, and it's not much of a surprise that the death of the first human to set foot on the Moon, Neil Armstrong, so nearly coincided with the success of the Mars rover, Curiosity. What Curiosity also reminds us, or should, is that although we admire Armstrong as a hero, the fact is that landing on the Moon wasn't so much his achievement as that of the probably thousands of engineers, programmers, and scientists who developed and built the technology necessary to get him there. As a result, the thing that makes me saddest about Armstrong's death on August 25 is the loss of his human memory of the experience of seeing and touching that off-Earth orbiting body.

The science fiction writer Charlie Stross has a lecture transcript I particularly like about the way the future changes under your feet. The space program - and, in the UK and France, Concorde - seemed like a beginning at the time, but has so far turned out to be an end. Sometime between 1950 and 1970, Stross argues, progress was redefined from being all about the speed of transport to being all about the speed of computers or, more precisely, Moore's Law. In the 1930s, when the moon-walkers were born, the speed of transport was doubling in less than a decade; but it only doubled in the 40 years from the late 1960s to 2007, when he wrote this talk. The rate of acceleration had slowed dramatically.

Applying this precedent to Moore's Law - Intel founder Gordon Moore's observation that the number of transistors that could fit on an integrated circuit doubled about every 24 months, increasing computing speed and power proportionately - Stross was happy to argue that despite what we all think today, and the obsessive belief among Singularitarians that computers will surpass the computational power of humans oh, any day now, but certainly by 2030, "Computers and microprocessors aren't the future. They're yesterday's future, and tomorrow will be about something else." His suggestion: bandwidth, bringing things like lifelogging and ubiquitous computing so that no one ever gets lost; if we'd had that in 1969, the astronauts would have been sending back first-person total-immersion visual and tactile experiences that would now be in NASA's library for us all to experience as if at first hand, instead of just the external image we all know.

The science fiction I grew up with assumed that computers would remain rare (if huge) expensive items operated by the elite and knowledgeable (except, perhaps, for personal robots). Space flight, and personal transport, on the other hand, would be democratized. Partly, let's face it, that's because space travel and robots make compelling images and stories, particularly for movies, while sitting and typing...not so much. I didn't grow up imagining my life being mediated and expanded by computer use; I, like countless generations before me, grew up imagining the places I might go and the things I might see. Armstrong and the other astronauts were my proxies. One day in the not-too-distant future, we will have no humans left who remember what it was actually like to look up and see the Earth in the sky while standing on a distant rock. There have only ever been, Wikipedia tells me, 12, all born in the 1930s.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


August 17, 2012

Bottom dwellers

This week Google announced it would downgrade, in its search results, sites with an exceptionally high number of valid copyright notices filed against them. As the EFF points out, the details of exactly how this will work are scarce and there is likely to be a big, big problem with false positives - that is, sites that are downgraded unfairly. You have only to look at the recent authorial pile-on that took down the legitimate ebook lending site LendInk for an example of what can happen when someone gets hold of the wrong side of the copyright stick.

Unless we know how the inclusion of Google's copyright notice stats will work, how do we know what will be affected, how, and for how long? There is no transparency to let a site know what's happening to it, and no appeals process. Given the many abuses of the Digital Millennium Copyright Act, under which such copyright notices are issued, it's hard to know how fair such a system will be. Though, granted: the company could have simply done it and not told us. How would we know?

The timing of this move is interesting because it comes only a few months after Google began advocating for the notion that search engine results are, like newspaper editorial matter, a form of free speech under the First Amendment. The company went as far as to commission the legal scholar Eugene Volokh to write a white paper outlining the legal arguments. These basically revolve around the idea that a search algorithm is merely a new form of editorial judgment; Google returns search results in the order in which, in its opinion, they will be most helpful to users.

In response, Tim Wu, author of The Master Switch, argued in the New York Times that conceding the right of free speech to computerized decisions brings serious problems with it in the long run. Supposing, for example, that antitrust authorities want to regulate Google to ensure that it doesn't use its dominance in search to unfairly advantage its other online properties - YouTube, Google Books, Google Maps, and so on. If search results are free speech, that type of regulation becomes unconstitutional. On BoingBoing, Cory Doctorow responded that one should regulate the bad speech without denying it is speech. Earlier, in the Guardian, Doctorow argued that Google's best gambit was making the argument about editorial integrity; publications make esthetic judgments, but Google famously loves to live by numbers.

This part of the argument is one that we're going to be seeing a lot of over the next few decades, because it boils down to this bit of Philip K. Dick territory: should machines programmed by humans have free speech rights? And if so, under what circumstances? If Google search results are free speech, is the same true of the output of credit-scoring algorithms or speed cameras? A magazine editor can, if asked, explain the reasoning process by which material was commissioned for, placed in, or rejected by her magazine; Google is notoriously secretive about the workings of its algorithms. We do not even know the criteria Google uses to judge the quality of its search results.

These are all questions we're going to have to answer as a society; and they are questions that may be answered very differently in countries without a First Amendment. My own first inclination is to require some kind of transparency in return: for every generation of separation between human and result, there must be an additional layer of explanation detailing how the system is supposed to work. The more people the results affect, the bigger the requirement for transparency. Something like that.

The more immediate question, of course, is whether Google's move will have an impact on curbing unauthorized file-sharing. My guess is not that much; few file-sharers of my acquaintance use Google for the purpose of finding files to download.

Yet, in an otherwise sensible piece about the sentencing of Surfthechannel.com owner Anton Vickerman to four years in prison in the Guardian, Dan Sabbagh winds up praising Google's decision with a bunch of errors. First of all, he blames the music industry's problems on mistakes "such as failing to introduce copy protection". As the rest of us know, the music industry only finally dropped copy protection in 2009 - because consumers hate it. Arguably, copy protection delayed the adoption of legal, paid services by years. He also calls the decision to sell all-you-can-eat subscriptions to music back catalogues a mistake; on what grounds is not made clear.

Finally, he argues, "Had Google [relegated pirate sites' results] a decade ago, it might not have been worthwhile for Vickerman to set up his site at all."

Ten years ago? In 2002, Napster had been gone for less than a year. Gnutella and BitTorrent were measuring their age in months. iTunes was a year old. The Pirate Bay wouldn't exist for some months more. Google was two years away from going public. The mistake then wasn't failing to downgrade sites oft accused of copyright infringement. The mistake then was not building legal, paid downloading services and getting them up and running as fast as possible.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


May 25, 2012

Camera obscura

There was a smoke machine running in the corner when I arrived at today's Digital Shoreditch, an afternoon considering digital identity, part of a much larger, multi-week festival. Briefly, I wondered if the organizers were making a point about privacy. Apparently not; they shut it off when the talks started.

The range of speakers served as a useful reminder that the debates we have in what I think of as the Computers, Freedom, and Privacy sector are rather narrowly framed around what we can practically build into software and services to protect privacy (and why so few people seem to care). We wrangle over what people post on Facebook (and what they shouldn't), or how much Google (or the NHS) knows about us and shares with other organizations.

But we don't get into matters of what kinds of lies we tell to protect our public image. Lindsey Clay, the managing director of Thinkbox, the marketing body for UK commercial TV, who kicked off an array of people talking about brands and marketing (though some of them in good causes), did a good, if unconscious, job of showing what privacy activists are up against: the entire mainstream of business is going the other way.

People lie in focus groups, she explained, sounding like Dr Gregory House, and showed a slide comparing actual TV viewer data from Sky with what those people said about what they watched. They claim to fast-forward; really, they watch ads and think about them. They claim to time-shift almost everything; really, they watch live. They claim to watch very little TV; really, they need to sign up for the SPOGO program Richard Pearey explained a little while later. (A tsk-tsk to Pearey: Tim Berners-Lee is a fine and eminent scientist, but he did not invent the Internet. He invented the *Web*.) For me, Clay is confusing "identity" with "image". My image claims to read widely instead of watching TV shows; my identity buys DVDs from Amazon.

Of course I find Clay's view of the Net dismaying - "TV provides the content for us to broadcast on our public identity channels," she said. This is very much the view of the world the Open Rights Group campaigns to up-end: consumers are creators, too, and surely we (consumers) have a lot more to talk about than just what was on TV last night.

Tony Fish, author of My Digital Footprint, following up shortly afterwards, presented a much more cogent view and some sound practical advice. Instead of trying to unravel the enduring conundrum of trust, identity, and privacy - which he claims dates back to before Aristotle - start by working out your own personal attitude to how you'd like your data treated.

I had a plan to talk about something similar, but Fish summed up the problem of digital identity rather nicely. No one model of privacy fits all people or all cases. The models and expectations we have take various forms - which he displayed as a nice set of Venn diagrams. Underlying that is the real model, in which we have no rights. Today, privacy is a setting and trust is the challenger. The gap between our expectations and reality is the creepiness factor.

Combine that with reading a book of William Gibson's non-fiction, and you get the reflection that the future we're living in is not at all like the one we - for some value of "we" that begins with those guys who did the actual building instead of just writing commentary about it - thought we might be building 20 years ago. At the time, we imagined that the future of digital identity would look something like mathematics, where the widespread use of crypto meant that authentication would proceed by a series of discrete transactions tailored to each role we wanted to play. A library subscriber would disclose different data from a driver stopped by a policeman, who would show a different set to the border guard checking passports. We - or more precisely, Phil Zimmermann and Carl Ellison - imagined a Web of trust, a peer-to-peer world in which we could all authenticate the people we know to each other.

Instead, partly because all the privacy stuff is so hard to use, even though it didn't have to be, we have a world where at any one time there are a handful of gatekeepers who are fighting for control of consumers and their computers in whatever the current paradigm is. In 1992, it was the desktop: Microsoft, Lotus, and Borland. In 1997, it was portals: AOL, Yahoo!, and Microsoft. In 2002, it was search: Google, Microsoft, and, well, probably still Yahoo!. Today, it's social media and the cloud: Google, Apple, and Facebook. In 2017, it will be - I don't know, something in the mobile world, presumably.

Around the time I began to sound like an anti-Facebook obsessive, an audience questioner made the smartest comment of the day: "In ten years Facebook may not exist." That's true. But most likely someone will have the data, probably the third-party brokers behind the scenes. In the fantasy future of 1992, we were our own brokers. If William Heath succeeds with personal data stores, maybe we still can be.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


May 11, 2012

Self-drive

When I first saw that Google had obtained a license for its self-driving car in the state of Nevada I assumed that the license it had been issued was a driver's license. It's disappointing to find out that what they meant was that the car had been issued with license plates so it can operate on public roads. Bah: all operational cars have license plates, but none have driver's licenses. Yet.

The Guardian has been running a poll, asking readers if they'd ride in the car or not. So far, 84 percent say yes. I would, too, I think. With a manual override and a human prepared to step in for oh, the first ten years or so.

I'm sure that Google, being a large company in a highly litigious society, has put the self-driving car through far more rigorous tests than any a human learner undergoes. Nonetheless, I think it ought to be required to get a driver's license, not just license plates. It should have to pass the driving test like everyone else. And then buy insurance, which is where we'll find out what the experts think. Will the rates for a self-driving car be more or less than for a newly licensed male aged 18 to 25?

To be fair, I've actually been to Nevada, and I know how empty most of those roads are. Even without that, I'd certainly rather ride in Google's car than on a roller coaster. I'd rather share the road with Google's car than with a drunk driver. I'd rather ride in Google's car than trust the next Presidential election to electronic voting machines.

That last may seem illogical. After all, riding in a poorly driven car can kill you. A gamed electronic voting machine can only steal your votes. The same problems with debugging software and checking its integrity apply to both. Yet many of us have taken quite long flights on fly-by-wire planes and ridden on driverless trains without giving it much thought.

But a car is *personal*. So much so that we tolerate 1.2 million deaths annually worldwide from road traffic; in 2011 alone, more than ten times as many people died on American roads as were killed in the 9/11 World Trade Center attack. Yet everyone thinks they're an above-average driver and feels safest when they're controlling their own car. Will a self-driving car be that delusional?

The timing was interesting because this week I have also been reading a 2009 book I missed, The Case for Working With Your Hands or Why Office Work is Bad for Us and Fixing Things Feels Good. The author, Matthew Crawford, argues that manual labour, which so many middle class people have been brought up to despise, is more satisfying - and has better protection against outsourcing - than anything today's white collar workers learn in college. I've been saying for years that if I had teenagers I'd be telling them to learn a trade like auto mechanics, plumbing, electrical work, nursing, or even playing live music - anything requiring skill and knowledge and that can't easily be outsourced to another country in the global economy. I'd say teaching, but see last week's.

Dumb down plumbing all you want with screw-together PVC pipes and joints, but someone still has to come to your house to work on it. Even today's modern cars, with their sealed subsystems and electronic read-outs, need hands-on care once in a while. I suppose Google's car arrives back at home base and sends in a list of fix-me demands for its human minders to take care of.

When Crawford talks about the satisfaction of achieving something in the physical world, he's right, up to a point. In an interview for the Guardian in 1995 (TXT), John Perry Barlow commented to me that, "The more time I spend in cyberspace, the more I love the physical world, and any kind of direct, hard-linked interaction with it. I never appreciated the physical world anything like this much before." Now, Barlow, more than most people, knows a lot about fixing things: he spent 17 years running a debt-laden Wyoming ranch and, as he says in that piece, he spent most of it fixing things that couldn't be fixed. But I'm going to argue that it's the contrast and the choice that makes physical work seem so attractive.

Yes, it feels enormously different to know that I have personally driven across the US many times, the most notable of which was a three-and-a-half-day sprint from Connecticut to Los Angeles in the fall of 1981 (pre-GPS, I might add, without needing to look at a map). I imagine being driven across would be more like taking the train even though you can stop anywhere you like: you see the same scenery, more or less, but the feeling of personal connection would be lost. Very much like the difference between knowing the map and using GPS. Nonetheless, how do I travel across the US these days? Air. How does Barlow make his living? Being a "cognitive dissident". And Crawford writes books. At some point, we all seem to want to expand our reach beyond the purely local, physical world. Finding that balance - and employment for 9 billion people - will be one of this century's challenges.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


April 24, 2012

A really fancy hammer with a gun

Is a robot more like a hammer, a monkey, or the Harley-Davidson on which he rode into town? Or try this one: what if the police program your really cute, funny robot butler (Tony Danza? Scarlett Johansson?) to ask you a question whose answer will incriminate you (and which it then relays). Is that a violation of the Fourth Amendment (protection against search and seizure) or the Fifth Amendment (you cannot be required to incriminate yourself)? Is it more like flipping a drug dealer or tampering with property? Forget science fiction, philosophy, and your inner biological supremacist; this is the sort of legal question that will be defined in the coming decade.

Making a start on this was the goal of last weekend's We Robot conference at the University of Miami Law School, organized by respected cyberlaw thinker Michael Froomkin. Robots are set to be a transformative technology, he argued to open proceedings, and cyberlaw began too late. Perhaps robotlaw is still a green enough field that we can get it right from the beginning. Engineers! Lawyers! Cross the streams!

What's the difference between a robot and a disembodied artificial intelligence? William Smart (Washington University, St Louis) summed it up nicely: "My iPad can't stab me in my bed." No: and as intimate as you may become with your iPad you're unlikely to feel the same anthropomorphic betrayal you likely would if the knife is being brandished by that robot butler above, which runs your life while behaving impeccably like it's your best friend. Smart sounds unsusceptible. "They're always going to be tools," he said. "Even if they are sophisticated and autonomous, they are always going to be toasters. I'm wary of thinking in any terms other than a really, really fancy hammer."

Traditionally, we think of machines as predictable because they respond the same way to the same input, time after time. But Smart, working with Neil Richards (Washington University, St Louis), points out that sensors are sensitive to distinctions analog humans can't make. A half-degree difference in temperature, or a tiny change in lighting, are different conditions to a robot. To us, their behaviour will just look capricious, helping to foster that anthropomorphic response, wrongly attributing to them the moral agency necessary for guilt under the law: the "Android Fallacy".

Smart and I may be outliers. The recent Big Bang Theory episode in which the can't-talk-to-women Rajesh, entranced with Siri, dates his iPhone is hilarious because in Raj's confusion we recognize our own ability to have "relationships" with almost anything by projecting human capacities such as cognition, intent, and emotions. You could call it a design flaw (if humans had a designer), and a powerful one: people send real wedding presents to TV characters, name Liquid Robotics' Wave Gliders, and characterize sending a six-legged land mine-defusing robot that's lost a leg or two to continue work as "cruel". (Kate Darling, MIT Media Lab).

What if our rampant affection for these really fancy hammers leads us to want to give them rights? Darling asked. Or, asked Sinziana Gutiu (University of Ottawa), will sex robots like Roxxxy teach us wrong expectations of humans? (When the discussion briefly compared sex robots to pets, a Twitterer quipped, "If robots are pets is sex with them bestiality?")

Few are likely to fall in love with the avatars in the automated immigration kiosks proposed at the University of Arizona (Kristen Thomasen, University of Ottawa), with two screens, one with a robointerrogator and the other flashing images and measuring responses. Automated law enforcement, already with us in nascent form, raises a different set of issues (Lisa Shay). Historically, enforcement has never been perfect; laws only have to be "good enough" to achieve their objective, whether that's slowing traffic or preventing murder. These systems pose the same problem as electronic voting: how do we audit their decisions? In military applications, disclosure may tip off the enemy, as Woodrow Hartzog (Samford University) noted. Yet here - and especially in medicine, where liability will be a huge issue - our traditional legal structures decide whom to punish by retracing the reasoning that led to the eventual decision. But even today's systems are already too complex.

When Hartzog asks if anyone really knows how Google or a smartphone tracks us, it reminds me of a recent conversation with Ross Anderson, the Cambridge University security engineer. In 50 years, he said, we have gone from a world whose machines could all be understood by a bright ten-year-old with access to a good library to a world with far greater access to information but full of machines whose inner workings are beyond a single person's understanding. And so: what does due process look like when only seven people understand algorithms that have consequences for the fates of millions of people? Bad enough to have the equivalent of a portable airport scanner looking for guns in New York City; what about house arrest because your butler caught you admiring Timothy Olyphant's gun on Justified?

"We got privacy wrong the last 15 years." Froomkin exclaimed, putting that together. "Without a strong 'home as a fortress right' we risk a privacy future with an interrogator-avatar-kiosk from hell in every home."

The problem with robots isn't robots. The problem is us. As usual, Pogo had it right.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


April 6, 2012

I spy

"Men seldom make passes | At girls who wear glasses," Dorothy Parker incorrectly observed in 1937. (How would she know? She didn't wear any). You have to wonder what she could have made of Google Goggles which, despite the marketing-friendly alliterative name, are neither a product (yet) nor a new idea.

I first experienced the world according to a heads-up display in 1997 during a three-day conference (TXT) on wearable computing at MIT ($). The eyes-on demonstration was a game of pool with the headset augmenting my visual field with overlays showing cuing angles. (Could be the next level of Olympic testing: checking athletes for contraband contact lenses and earpieces for those in sports where coaching is not allowed.)

At that conference, a lot of ideas were discussed and demonstrated: temperature-controlling T-shirts, garments that could send back details of a fallen soldier's condition, and so on. Much in evidence were folks like Thad Starner, who scanned my business card and handed it back to me and whose friends commented on the way he'd shift his eyes to his email mid-conversation, and Steve Mann, who turned himself into a cyborg experiment as long ago as the 1980s. Checking their respective Web pages, I see that Mann hasn't updated the evolution of wearables graphic since the late 1990s, by which time the headset looked like an ordinary pair of sunglasses; in 2002, when airport security forced him to divest his gear, he had trouble adjusting to life without it. Starner is on leave to work at...Project Glass, the home of Google Goggles.

The problem when a technological dream spans decades is that between conception and prototype things change. In 1997, that conference seemed to think wearable computing - keyboards embroidered in conductive thread, garments made of cloth woven from copper-covered strands, souped-up eyeglasses, communications-enabled watches, and shoes providing power from the energy generated in walking - was surely a decade or less away.

The assumptions were not particularly contentious. People wear wrist watches and jewelry, right? So they'll wear things with the same fashion consciousness, but functional. Like, it measures and displays your heart rhythms (a woman danced wearing a light-flashing pendant that sped up with her heart rate), or your moods (high-tech mood rings), or acts as the controller for your personal area network.

Today, a lot of people don't *wear* wrist watches any more.

For the wearables guys, this is good progress. The functionality that required 12 pounds of machinery draped about your person - I see from my pieces linked above and my contemporaneous notes that the rig I tried felt like wearing a very heavy, inflexible sandwich board - now fits in an iPhone or Android phone. Even my old Palm Centro comes close. As Jack Schofield writes in the Guardian, the headset is really all that's left that we don't have. And Google has a lot of competition.

What interests me is this: let's say these things do take off in a big way. What then? Where will the information come from to display on those headsets? Who will be the gatekeepers? If we - some of us - want to see every building decorated with outsized female nudes, will we have to opt in for porn?

My speculation here is surely not going to be futuristic enough, because like most people I'm locked into current trends. But let's say that glasses bolt onto the mobile/Internet ecologies we have in place. It is easy to imagine that, if augmented reality glasses do take off, they will be an important gateway to the next generation of information services. Because if all the glasses are is a different way of viewing your mobile phone, then they're essentially today's ear pieces - surely not sufficient motivation for people with good vision to wear glasses. So, will Apple glasses require an iTunes account and an iOS device to gain access to a choice of overlays to turn on and off that you receive from the iTunes store in real time? Similarly, Google/Android/Android marketplace. And Microsoft/Windows Mobile/Bing or something. And whoever.

So my questions are things like: will the hardware and software be interoperable? Will the dedicated augmented reality consumer need to have several pairs? Will it be like, "Today I'm going mountain climbing. I've subscribed to the Ordnance Survey premium service and they have their own proprietary glasses, so I'll need those. And then I need the Google set with the GPS enhancement to get me there in the car and find a decent restaurant afterwards." And then your kids are like, "No, the restaurants are crap on Google. Take the Facebook pair, so we can ask our friends." (Well, not Facebook, because the kids will be saying, "Facebook is for *old* people." Some cool, new replacement that adds gaming.)

What's that you say? These things are going to collapse in price so everyone can afford 12 pairs? Not sure. Prescription glasses just go on getting more expensive. I blame the involvement of fashion designers branding frames, but the fact is that people are fussy about what they wear on their faces.

In short, will augmented reality - overlays on the real world - be a new commons or a series of proprietary, necessarily limited, world views?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


March 30, 2012

The ghost of cash

"It's not enough to speak well of digital money," Geronimo Emili said on Wednesday. "You must also speak negatively of cash." Emili has a pretty legitimate gripe. In his home country, Italy, 30 percent of the economy is black and the gap between the amount of tax the government collects and the amount it's actually owed is €180 billion. Ouch.

This sets off a bit of inverted nationalist competition between him and the Greek lawyer Maria Giannakaki, there to explain a draft Greek law mandating direct payment of VAT from merchants' tills to eliminate fraud: which country is worse? Emili is sure it's Italy.

"We invented banks," he said. "But we love cash." Italy's cash habit costs the country €10 billion a year - and 40 percent of Europe's bank robberies.

This exchange took place at this year's Digital Money Forum, an annual event that pulls together people interested in everything from the latest mobile technology to the history of Anglo-Saxon coinage. Their common interest: what makes money work? If you, like most of this group, want to see physical cash eliminated, this is the key question.

Why Anglo-Saxon coinage? Rory Naismith explains that the 8th century began the shift from valuing coins merely for their metal content to assigning them a premium for their official status. It was the beginning of the abstraction of money: coins, paper, the elimination of the gold standard, numbers in cyberspace. Now, people like Emili and this event's convenor, David Birch, argue it's time to accept money's fully abstract nature and admit the truth: it's a collective hallucination, a "promise of a promise".

These are not just the ravings of hungry technology vendors: Birch, Emili, and others argue that the costs of cash fall disproportionately on the world's poor, and that cash is the key vector for crime and tax evasion. Our impressions of the costs are distorted because the costs of electronic payments, credit cards, and mobile wallets are transparent, while cash is free at the point of use.

When I say to Birch that eliminating cash also means eliminating the ability to transact anonymously, he says, "That's a different conversation." But it isn't, if eliminating crime and tax evasion are your drivers. In two days of discussion, the only alternative offering anonymity is Bitcoin, and it seems doomed to remain a niche market, for whatever reason. (I think it's too complicated; Dutch financial historian Simon Lelieveldt says it will fail because it has no central bank.)

I pause to be annoyed by the claim that cash is filthy and spreads disease. This is Microsoft-level FUD, and not worthy of smart people claiming to want to benefit the poor and eliminate crime. In fact, I got riled enough to offer to lick any currency (or coins; I'm not proud) presented. I performed as promised on a fiver and a Danish note. And you know, they *kept* that money?

In 1680, says Birch, "Pre-industrial money was failing to serve an industrial revolution." Now, he is convinced, "We are in the early part of the post-industrial revolution, and we're shoehorning industrial money in to fit it. It can't last." This is pretty much what John Perry Barlow said about copyright in 1993, and he was certainly right.

But is Birch right? What kind of medium is cash? Is it a medium of exchange, like newspapers, trading stored value instead of information, or is it a format, like video tape? If it's the former, why shouldn't cash survive, even if only as a niche market? Media rarely die altogether - but formats come and go with such speed that even the more extreme predictions at this event - such as that of Sandra Alzetta, who said her company expects half its transactions to be mobile by 2020 - seem quite modest. Her company is Visa International, by the way.

I'd say cash is a medium of exchange, and today's coins and notes are its format. Past formats have included shells, feathers, gold coins, and goats; what about a format for tomorrow that is printed or minted on demand at ATMs? I ask the owner of the grocery shop around the corner if his life would be better if cash were eliminated, and he shrugs no. "I'd still have to go out and get the stuff."

What's needed are low-cost alternatives that fit their cultural contexts. Lydia Howland, whose organization IDEO works to create human-centered solutions to poverty, finds the same needs in parts of Britain that exist in countries like Kenya, where M-Pesa is succeeding in bringing banking and remote payments to people who have never had access to financial services before.

"Poor people are concerned about privacy," she said on Wednesday. "But they have so much anonymity in their lives that they pay a premium for every financial service." Also, because they do so much offline, there is little understanding of how they work or live. "We need to create a society where a much bigger base has a voice."

During a break, I try to sketch the characteristics of a perfect payment mechanism: convenient; transparent to the user; universally accepted; universally accessible and usable; resistant to tracking, theft, counterfeiting, and malware; and hard to steal on a large scale. We aren't there yet.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 23, 2012

The year of the future

If there's one thing everyone seemed to agree on yesterday at Nominet's annual Internet policy conference, it's that this year, 2012, is a crucial one in the development of the Internet.

The discussion had two purposes. One is to feed into Nominet's policy-making as the body in charge of .uk, in which capacity it's currently grappling with questions such as how to respond to law enforcement demands to disappear domains. The other, which is the kind of exercise net.wars particularly enjoys and that was pioneered at the Computers, Freedom, and Privacy conference (next one spring 2013, in Washington, DC), is to peer into the future and try to prepare for it.

Vint Cerf, now Google's Chief Internet Evangelist, outlined some of that future, saying that this year, 2012, will see more dramatic changes to the Internet than anything since 1983. He had a list:

- The deployment of better authentication in the form of DNSSec;

- New certification regimes to limit damage in the event of more cases like 2011's Diginotar hack;

- Internationalized domain names;

- The expansion of new generic top-level domains;

- The switch to IPv6 Internet addressing, which happens on June 6;

- Smart grids;

- The Internet of things: cars, light bulbs, surfboards (!), and anything else that can be turned into a sensor by implanting an RFID chip.

Cerf paused to throw in an update on his long-running project, the interplanetary Internet, which he's been thinking about since 1998 (TXT).

"It's like living in a science fiction novel," he said yesterday as he explained about overcoming intense network lag by using high-density laser pulses. The really cool bit: repurposing space craft whose scientific missions have been completed to become part of the interplanetary backbone. Not space junk: network nodes-in-waiting.

The contrast to Ed Vaizey, the minister for culture, communications and the creative industries at the Department of Culture, Media, and Sport, couldn't have been more marked. He summed up the Internet's governance problem as the "three Ps": pornography, privacy, and piracy. It's nice rhetorical alliteration, but desperately narrow. Vaizey's characterization of 2012 as a critical year rests on the need to consider the UK's platform for the upcoming Internet Governance Forum leading to 2014's World Information Technology Forum. When Vaizey talks about regulating with a "light touch", does he mean the same things we do?

I usually place the beginning of the who-governs-the-Internet argument at 1997, the first time the engineers met rebellion when they made a technical decision (revamping the domain name system). Until then, if the pioneers had an enemy it was governments, memorably warned off by John Perry Barlow's 1996 Declaration of the Independence of Cyberspace. After 1997, it was no longer possible to ignore the new classes of stakeholders: commercial interests and consumers.

I'm old enough as a Netizen - I've been online for more than 20 years - to find it hard to believe that the Internet Governance Forum and its offshoots do much to change the course of the Internet's development: while they're talking, Google's self-drive cars rack up 200,000 miles on San Francisco's busy streets with just one accident (the car was rear-ended; not their fault) and Facebook sucks in 800 million users (if it were a country, it would be the world's third most populous nation).

But someone has to take on the job. It would be morally wrong for governments, banks, and retailers to push us all to transact with them online if they cannot promise some level of service and security for at least those parts of the Internet that they control. And let's face it: most people expect their governments to step in if they're defrauded and criminal activity is taking place, offline or on, which is why I thought Barlow's declaration absurd at the time.

Richard Allan, director of public policy for Facebook EMEA - or should we call him Lord Facebook? - had a third reason why 2012 is a critical year: at the heart of the Internet Governance Forum, he said, is the question of how to handle the mismatch between global Internet services and the cultural and regulatory expectations that nations and individuals bring with them as they travel in cyberspace. In Allan's analogy, the Internet is a collection of off-shore islands like Iceland's Surtsey, which has been left untouched to develop its own ecosystem.

Should there be international standards imposed on such sites so that all users know what to expect? Such a scheme would overcome the Balkanization problem that erupts when sites present a different face to each nation's users and the censorship problem of blocking sites considered inappropriate in a given country. But if that's the way it goes, will nations be content to aggregate the most open standards or insist on the most closed, lowest-common-denominator ones?

I'm not sure this is a choice that can be made in any single year - they were asking this same question at CFP in 1994 - but if this is truly the year in which it's made, then yes, 2012 is a critical year in the development of the Internet.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 2, 2012

Drive by wire

The day in 1978 when I first turned on my CB radio, I discovered that all that time the people in the cars around me had been having conversations I knew nothing about. Suddenly my car seemed like a pre-Annie Sullivan Helen Keller.

Judging by yesterday's seminar on self-driving cars, something similar is about to happen, but on a much larger scale. Automate driving and then make each vehicle part of the Internet of Things and suddenly the world of motoring is up-ended.

The clearest example came from Jeroen Ploeg, who is part of a Dutch national project on Cooperative Adaptive Cruise Control. Like everyone here, Ploeg is grappling with issues that recur across all the world's densely populated zones: congestion, pollution, and safety. How can you increase capacity without building more roads (expensive) while decreasing pollution (expensive, unpleasant, and unhealthy) and increasing safety (deaths from road accidents have decreased in the UK for the last few years but are still nearly 2,000 a year)? Decreasing space between cars isn't safe for humans, who also lack the precision necessary to keep a tightly packed line of cars moving evenly. What Ploeg explains, and then demonstrates on a ride in a modified Prius through the Nottingham lunchtime streets, is that, given the ability to communicate, the cars can collaborate to keep a precise distance that solves all three problems. When he turns on the cooperative bit so that our car talks to its fellow in front of us, the advance warnings significantly smooth our acceleration and braking.
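
For the technically minded, the trick is easier to see in code. Here is a minimal sketch of a cooperative gap-keeping step - my own illustration, with invented gains and a made-up time gap, not the Dutch project's actual control law. The feedforward term at the end is what the radio link adds; without it, the follower can only react after the gap has already changed.

    # Toy cooperative cruise-control step. Gains, time gap, and names are
    # illustrative assumptions only, not the real project's control law.

    def cacc_acceleration(gap_m, own_speed, lead_speed, lead_accel,
                          time_gap_s=0.5, standstill_m=2.0,
                          kp=0.45, kd=0.25, kff=0.6):
        """Return a commanded acceleration for the following vehicle.

        gap_m      -- measured distance to the car ahead (radar), metres
        own_speed  -- this car's speed, m/s
        lead_speed -- lead car's speed, m/s
        lead_accel -- lead car's acceleration, received over the radio link
        """
        desired_gap = standstill_m + time_gap_s * own_speed   # constant time-gap policy
        spacing_error = gap_m - desired_gap
        closing_rate = lead_speed - own_speed
        # Feedback on spacing and relative speed, plus feedforward from the
        # broadcast acceleration; without the radio the last term is unavailable
        # and the platoon needs a much bigger gap to stay stable.
        return kp * spacing_error + kd * closing_rate + kff * lead_accel

    # Lead car brakes at 2 m/s^2: the follower starts easing off immediately,
    # before the gap has visibly shrunk.
    print(cacc_acceleration(gap_m=17.0, own_speed=30.0,
                            lead_speed=30.0, lead_accel=-2.0))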

"It has a big potential to increase throughput," he says, noting that packing safely closer together can cut down trucks' fuel requirements by up to 10 percent from the reduction in headwinds.

But other than that, "There isn't a business case for it," he says sadly. No: because we don't buy cars collaboratively, we buy them individually according to personal values like top speed, acceleration, fuel efficiency, comfort, sporty redness, or fantasy.

To robot vehicle researchers, the question isn't if self-driving cars will take over - the various necessary bits of technology are too close to ready - but when and how people will accept the inevitable. There are some obvious problems. Human factors, for one. As cars become more skilled - already, they help humans park, keep in lanes, and keep a consistent speed - humans forget the techniques they've learned. Gradually, says Natasha Merat, co-director at the Institute for Transport Studies at the University of Leeds, they stop paying attention. In critical situations, her research shows, they react more slowly; in urban situations more automation means they're more likely to watch DVDs until or unless they hear an alarm sound. (Curiously, her research shows that on motorways they continue to pay more attention; speed scares, apparently.) So partial automation may be more dangerous than full automation despite seeming like a good first step.

The more fascinating thing is what happens when vehicles start to communicate. Paul Newman, head of the Mobile Robotics Unit at Oxford, proposes that your vehicle should learn your routes; one day, he imagines, a little light comes on indicating that it's ready to handle the drive itself. Newman wants to reclaim his time ("It's ridiculous to think that we're condemned to a future of congestion, accidents, and time-wasting"), but since GPS is too limited to guide an automated car - it doesn't work well inside cities, it's not fine-grained enough for parking lots - there's talk of guide boxes. Newman would rather take cues from the existing infrastructure the way humans do. But give vehicles the ability to communicate, and they can share information - maps, pictures, and sensor data. "I don't need a funky French bubble car. I want today's car with cameras and a 3G connection."

It's later, over lunch, that I realize what he's really proposing. Say all of Britain's roads are traversed once an hour by some vehicle or other. If each picks up infrastructure, geographical, and map data and shares it...you have the vehicle equivalent of Wikipedia to compete with Google's Street View.

Two topics are largely skipped at this event, both critical: fuel and security. John Miles, from Arup, argued that it's a misconception that a large percentage of today's road traffic could be moved to rail. But is it safe to assume we'll find enough fuel to run all those extra vehicles? Traffic in the UK has increased by 85 percent since 1980; another 25 percent increase is expected in just the next 20 years.

But security is the crucial one because it must be built into V2V from the beginning. Otherwise, we're talking the apocryphal old joke about cars crashing unpredictably, like Windows.

It's easy to resist this particular future even without wondering whether people will accept statistics showing robot cars are safer if a child is killed by one: I don't even like cars that bossily remind me to wear a seatbelt. But, as several people said yesterday, I am the wrong age. The "iPod generation" don't identify cars so closely with independence, and they don't like looking up from their phones. The 30-year-old of 2032 who knows how to back into a tight parking space may be as rare as a 30-year-old today who can multiply three-digit numbers in his head. Me, I'll wave from the train.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


December 30, 2011

Ignorance is no excuse

My father was not a patient man. He could summon up some compassion for those unfortunates who were stupider than himself. What he couldn't stand was ignorance, particularly willful ignorance. The kind of thing where someone boasts about how little they know.

That said, he also couldn't abide computers. "What can you do with a computer that you can't do with a paper and pencil?" he demanded to know when I told him I was buying a friend's TRS-80 Model III in 1981. He was not impressed when I suggested that it would enable me to make changes on page 3 of a 78-page manuscript without retyping the whole thing.

My father had a valid excuse for that particular bit of ignorance or lack of imagination. It was 1981, when most people had no clue about the future of the embryonic technology they were beginning to read about. And he was 75. But I bet if he'd made it past 1984 he'd have put some effort into understanding this technology that would soon begin changing the printing industry he worked in all his life.

While computers were new on the block, and their devotees were a relatively small cult of people who could be relatively easily spotted as "other", you could see the boast "I know nothing about computers" as a replay of high school. In American movies and TV shows that would be jocks and the in-crowd on one side, a small band of miserable, bullied nerds on the other. In the UK, where for reasons I've never understood it's considered more admirable to achieve excellence without ever being seen to work hard for it, the sociology plays out a little differently. I guess here the deterrent is less being "uncool" and more being seen as having done some work to understand these machines.

Here's the problem: the people who by and large populate the ranks of politicians and the civil service are the *other* people. Recent events such as the UK's Government Digital Service launch suggest that this is changing. Perhaps computers have gained respectability at the top level from the presence of MPs who can boast that they misspent their youth playing video games rather than, like the last generation's Ian Taylor, getting their knowledge the hard way, by sweating for it in the industry.

There are several consequences of all this. The most obvious and longstanding one is that too many politicians don't "get" the Net, which is how we get legislation like the DEA, SOPA, PIPA, and so on. The less obvious and bigger one is that we - the technology-minded, the early adopters, the educated users - write them off as too stupid to talk to. We call them "congresscritters" and deride their ignorance and venality in listening to lobbyists and special interest groups.

The problem, as Emily Badger writes for Miller-McCune as part of a review of Clay Johnson's latest book, is that if we don't talk to them how can we expect them to learn anything?

This sentiment is echoed in a lecture given recently at Rutgers by the distinguished computer scientist David Farber on the technical and political evolution of the Internet (MP3) (the slides are here (PDF)). Farber's done his time in Washington, DC, as chief technical advisor to the Federal Communications Commission and as a member of the Presidential Advisory Board on Information Technology. In that talk, Farber makes a number of interesting points about what comes next technically - it's unlikely, he says, that today's Internet Protocols will be able to cope with the terabyte networks on the horizon, and reengineering is going to be a very, very hard problem because of the way humans resist change - but the more relevant stuff for this column has to do with what he learned from his time in DC.

Very few people inside the Beltway understand technology, he says there, citing the Congressman who asked him seriously, "What is the Internet?" (Well, see, it's this series of tubes...) And so we get bad - that is, poorly grounded - decisions on technology issues.

Early in the Net's history, the libertarian fantasy was that we could get on just fine without their input, thank you very much. But as Farber says, politicians are not going to stop trying to govern the Internet. And, as he doesn't quite say, it's not like we can show them that we can run a perfect world without them. Look at the problems techies have invented: spam, the flaky software infrastructure on which critical services are based, and so on. "It's hard to be at the edge in DC," Farber concludes.

So, going back to Badger's review of Johnson: the point is it's up to us. Set aside your contempt and distrust. Whether we like politicians or not, they will always be with us. For 2012, adopt your MP, your Congressman, your Senator, your local councilor. Make it your job to help them understand the bills they're voting on. Show them that even if they don't understand the technology there are votes in those who do. It's time to stop thinking of their ignorance as solely *their* fault.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


December 23, 2011

Duck amuck

Back in about 1998, a couple of guys looking for funding for their start-up were asked this: How could anyone compete with Yahoo! or Altavista?

"Ten years ago, we thought we'd love Google forever," a friend said recently. Yes, we did, and now we don't.

It's a year and a bit since I began divorcing Google. Ducking the habit is harder than those "They have no lock-in" financial analysts thought when Google went public: as if habit and adaptation were small things. Easy to switch CTRL-K in Firefox to DuckDuckGo, significantly harder to unlearn ten years of Google's "voice".

When I tell this to Gabriel Weinberg, the guy behind DDG - his recent round of funding lets him add a few people to experiment with different user interfaces and redo DDG's mobile application - he seems to understand. He started DDG, he told The Rise to the Top last year, because of the increasing amount of spam in Google's results. Frustration made him think: for many queries, wouldn't searching just del.icio.us and Wikipedia produce better results? Since his first weekend mashing that up, DuckDuckGo has evolved to include over 50 sources.

"When you type in a query there's generally a vertical search engine or data source out there that would best serve your query," he says, "and the hard problem is matching them up based on the limited words you type in." When DDG can make a good guess at identifying such a source - such as, say, the National Institutes of Health - it puts that result at the top. This is a significant hint: now, in DDG searches, I put the site name first, where on Google I put it last. Immediate improvement.

This approach gives Weinberg a new problem, a higher-order version of the Web's broken links: as companies reorganize, change, or go out of business, the APIs he relies on vanish.

Identifying the right source is harder than it sounds, because the long tail of queries requires DDG to make assumptions about what's wanted.

"The first 80 percent is easy to capture," Weinberg says. "But the long tail is pretty long."

As Ken Auletta tells it in Googled, the venture capitalist Ram Shriram advised Sergey Brin and Larry Page to sell their technology to Yahoo! or maybe Infoseek. But those companies were not interested: the thinking then was portals and keeping site visitors stuck as long as possible on the pages advertisers were paying for, while Brin and Page wanted to speed visitors away to their desired results. It was only when Shriram heard that, Auletta writes, that he realized that baby Google was disruptive technology. So I ask Weinberg: can he make a similar case for DDG?

"It's disruptive to take people more directly to the source that matters," he says. "We want to get rid of the traditional user interface for specific tasks, such as exploring topics. When you're just researching and wanting to find out about a topic there are some different approaches - kind of like clicking around Wikipedia."

Following one thing to another, without going back to a search engine...sounds like my first view of the Web in 1991. But it also sounds like some friends' notion of after-dinner entertainment, where they start with one word in the dictionary and let it lead them serendipitously from word to word and book to book. Can that strategy lead to new knowledge?

"In the last five to ten years," says Weinberg, "people have made these silos of really good information that didn't exist when the Web first started, so now there's an opportunity to take people through that information." If it's accessible, that is. "Getting access is a challenge," he admits.

There is also the frontier of unstructured data: Google searches the semi-structured Web by imposing a structure on it - its indexes. By contrast, Mike Lynch's Autonomy, which just sold to Hewlett-Packard for £10 billion, uses Bayesian logic to search unstructured data, which is what most companies have.

"We do both," says Weinberg. "We like to use structured data when possible, but a lot of stuff we process is unstructured."

Google is, of course, a moving target. For me, its algorithms and interface are moving in two distinct directions, both frustrating. The first is Wal-Mart: stuff most people want. The second is the personalized filter bubble. I neither want nor trust either. I am more like the scientists Linguamatics serves: its analytic software scans hundreds of journals to find hidden links suggesting new avenues of research.

Anyone entering a category as thoroughly dominated by a single company as search is now is constantly asked how they can possibly compete. Weinberg must be sick of being asked about competing with Google. And he'd be right to be, because it's the wrong question. The right question is: how can he build a sustainable business? He's had some sponsorship while his user numbers are relatively low (currently 7 million searches a month) and, eventually, he's talked about context-based advertising - yet he's also promising little spam and privacy: no tracking. Now, that really would be disruptive.

So here's my bet. I bet that DuckDuckGo outlasts Groupon as a going concern. Merry Christmas.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


December 16, 2011

Location, location, location

In the late 1970s, I used to drive across the United States several times a year (I was a full-time folksinger), and although these were long, long days at the wheel, there were certain perks. One was the feeling that the entire country was my backyard. The other was the sense that no one in the world knew exactly where I was. It was a few days off from the pressure of other people.

I've written before that privacy is not sleeping alone under a tree but being able to do ordinary things without fear. Being alone on an interstate crossing Oklahoma wasn't to hide some nefarious activity (like learning the words to "There Ain't No Instant Replay in the Football Game of Life"). Turn off the radio and, aside from an occasional billboard, the world was quiet.

Of course, that was also a world in which making a phone call was a damned difficult thing to do, which is why professional drivers all had CB radios. Now, everyone has mobile phones, and although your nearest and dearest may not know where you are, your phone company most certainly does, and to a very fine degree of "granularity".

I imagine normal human denial is broad enough to encompass pretending you're in an unknown location while still receiving text messages. Which is why this year's A Fine Balance focused on location privacy.

The travel privacy campaigner Edward Hasbrouck has often noted that travel data is particularly sensitive and revealing in a way few realize. Travel data indicates your religion (special meals), medical problems, and lifestyle habits affecting your health (choosing a smoking room in a hotel). Travel data also shows who your friends are, and how close: who do you travel with? Who do you share a hotel room with, and how often?

Location data is travel data on a steady drip of steroids. As Richard Hollis, who serves on the ISACA Government and Regulatory Advocacy Subcommittee, pointed out, location data is in fact travel data - except that instead of being detailed logging of exceptional events it's ubiquitous logging of everything you do. Soon, he said, we will not be able to opt out - and instead of travel data being a small, sequestered, unusually revealing part of our lives, all our lives will be travel data.

Location data can reveal the entire pattern of your life. Do you visit a church every Monday evening that has an AA meeting going on in the basement? Were you visiting the offices of your employer's main competitor when you were supposed to have a doctor's appointment?

Research supports this view. Some of the earliest work I'm aware of is that of Alberto Escudero-Pascual. A month-long experiment tracking the mobile phones in his department enabled him to diagram all the intra-departmental personal relations. In a 2002 paper, he suggests how to anonymize location information (PDF). The problem: no business wants anonymization. As Hollis and others said, businesses want location data. Improved personalization depends on context, and location provides a lot of that.

Patrick Walshe, the director of privacy for the GSM Association, compared the way people care about privacy to the way they care about their health: they opt for comfort and convenience and hope for the best. They - we - don't make changes until things go wrong. This explains why privacy considerations so often fail and privacy advocates despair: guarding your privacy is like eating your vegetables, and who except a cranky person plans their meals that way?

The result is likely to be the world outlined by Dave Coplin, Microsoft UK's director of search, advertising, and online, who argued that privacy today is at the turning point that the Melissa virus represented for security 11 years ago when it first hit.

Calling it "the new battleground," he said, "This is what happens when everything is connected." Similarly, Blaine Price, a senior lecturer in computing at the Open University, had this cheering thought: as humans become part of the Internet of Things, data leakage will become almost impossible to avoid.

Network externalities mean that each additional user of a network increases its value for all of that network's other users. What about privacy externalities? I haven't heard the phrase before, although I see it's not new (PDF). But I mean something different than those papers do: the fact that we talk about privacy as an individual choice when instead it's a collaborative effort. A single person who says, "I don't care about my privacy" can override the pro-privacy decisions of dozens of their friends, family, and contacts. "I'm having dinner with @wendyg," someone blasts, and their open attitude to geolocation reveals mine.

In his research on tracking, Price has found that the more closely connected the trackers are the less control they have over such decisions. I may worry that turning on a privacy block will upset my closest friend; I don't obsess at night, "Will the phone company think I'm mad at it?"

So: you want to know where I am right now? Pay no attention to the geolocated Twitterer who last night claimed to be sitting in her living room with "wendyg". That wasn't me.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

December 2, 2011

Debating the robocalypse

"This House fears the rise of artificial intelligence."

This was the motion up for debate at Trinity College Dublin's Philosophical Society (Twitter: @phil327) last night (December 1, 2011). It was a difficult one, because I don't think any of the speakers - neither the four students, Ricky McCormack, Michael Coleman, Cat O'Shea, and Brian O'Beirne, nor the invited guests, Eamonn Healy, Fred Cummins, and Abraham Campbell - honestly fear AI all that much. Either we don't really believe a future populated by superhumanly intelligent killer robots is all that likely, or, like Ken Jennings, we welcome our new computer overlords.

But the point of this type of debate is not to believe what you are saying - I learned later that in the upper levels of the game you are assigned a topic and a position and given only 15 minutes to marshal your thoughts - but to argue your assigned side so passionately, persuasively, and coherently that you win the votes of the assembled listeners even if later that night, while raiding the icebox, they think, "Well, hang on..." This is where politicians and Dail/House of Commons debating style come from. As a participatory sport it was utterly new to me, and it explains a *lot* about the derailment of political common sense by the rise of public relations and lobbying.

Obviously I don't actually oppose research into AI. I'm all for better tools, although I vituperatively loathe tools that try to game me. As much fun as it is to speculate about whether superhuman intelligences will deserve human rights, I tend to believe that AI will always be a tool. It was notable that almost every speaker assumed that AI would be embodied in a more-or-less humanoid robot. Far more likely, it seems to me, that if AI emerges it will be first in some giant, boxy system (that humans can unplug) and even if Moore's Law shrinks that box it will be much longer before AI and robotics converge into a humanoid form factor.

Lacking conviction on the likelihood of all this, and hence of its dangers, I had to find an angle, which eventually boiled down to Walt Kelly and We have met the enemy and he is us. In this, I discovered, I am not alone: a 2007 ThinkArtificial poll found that more than half of respondents feared what people would do with AI: the people who program it, own it, and deploy it.

If we look at the history of automation to date, a lot of it has been used to make (human) workers as interchangeable as possible. I am old enough to remember, for example, being able to walk down to the local phone company in my home town of Ithaca, NY, and talk in person to a customer service representative I had met multiple times before about my piddling residential account. Give everyone the same customer relationship database and workers become interchangeable parts. We gain some convenience - if Ms Jones is unavailable anyone else can help us - but we pay in lost relationships. The company loses customer loyalty, but gains (it hopes) consistent implementation of its rules and the economic leverage of no longer depending on any particular set of workers.

I might also have mentioned automated trading systems, which are making the markets swing much more wildly much more often. Later, Abraham Campbell, a computer scientist working in augmented reality at University College Dublin, said as much as 25 percent of trading is now done by bots. So, cool: Wall Street has become like one of those old IRC channels where you met a cute girl named Eliza...

Campbell had a second example: Siri, which will tell you where to hide a dead body but not where you might get an abortion. Google's removal of torrent sites from its autosuggestion/Instant feature didn't seem to me egregious censorship, partly because there are other search engines and partly (short-sightedly) because I hate Instant so much already. But as we become increasingly dependent on mediators to help us navigate our overcrowded world, the agenda and/or competence of the people programming them are vital to know. These will be transparent only as long as there are alternatives.

Simultaneously, back in England in work that would have made Jessica Mitford proud, Privacy International's Eric King and Emma Draper were publishing material that rather better proves the point. Big Brother Inc lays out the dozens of technology companies from democratic Western countries that sell surveillance technologies to repressive regimes. King and Draper did what Mitford did for the funeral business in the late 1960s (and other muckrakers have done since): investigate what these companies' marketing departments tell prospective customers.

I doubt businesses will ever, without coercion, behave like humans with consciences; it's why they should not be legally construed as people. During last night's debate, the prospective robots were compared to women and "other races", who were also denied the vote. Yes, and they didn't get it without a lot of struggle. In the "Robocalypse" (O'Beirne), they'd better be prepared to either a) fight to meltdown for their rights or b) protect their energy sources and wait patiently for the human race to exterminate itself.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 4, 2011

The identity layer

This week, the UK government announced a scheme - Midata - under which consumers will be able to reclaim their personal information. The same day, the Centre for the Study of Financial Innovation assembled a group of experts to ask what the business model for online identification should be. And: whatever that model is, what the government's role should be. (For background, here's the previous such discussion.)

My eventual thought was that the government's role should be to set standards; it might or might not also be an identity services provider. The government's inclination now is to push this job to the private sector. That leaves the question of how to serve those who are not commercially interesting; at the CSFI meeting the Post Office seemed the obvious contender for both pragmatic and historical reasons.

As Mike Bracken writes in the Government Digital Service blog posting linked above, the notion of private identity providers is not new. But what he seems to assume is that what's needed is federated identity - that is, in Wikipedia's definition, a means for linking a person's electronic identity and attributes across multiple distinct systems. What I have in mind is a system in which one may have many limited identities that are sufficiently interoperable that you can choose which to use at the point of entry to a given system. We already have something like this on many blogs, where commenters may be offered a choice of logging in via Google, OpenID, or simply posting a name and URL.

The government gateway circa Year 2000 offered a choice: getting an identity certificate required payment of £50 to, if I remember correctly, Experian or Equifax, or other companies whose interest in preserving personal privacy is hard to credit. The CSFI meeting also mentioned tScheme - an industry consortium to provide trust services. Outside of relatively small niches it's made little impact. Similarly, fifteen years ago, the government intended, as part of implementing key escrow for strong cryptography, to create a network of trusted third parties that it would license and, by implication, control. The intention was that the TTPs should be folks that everyone trusts - like banks. Hilarious, we said *then*. Moving on.

In between then and now, the government also mooted a completely centralized identity scheme - that is, the late, unlamented ID card. Meanwhile, we've seen the growth of a set of competing American/global businesses who all would like to be *the* consumer identity gateway and who managed to steal first-mover advantage from existing financial institutions. Facebook, Google, and Paypal are the three most obvious. Microsoft had hopes, perhaps too early, when in 1999 it created Passport (now Windows Live ID). More recently, it was the home for Kim Cameron's efforts to reshape online identity via the company's now-cancelled CardSpace, and Brendon Lynch's adoption of U-Prove, based on Stefan Brands' technology. U-Prove is now being piloted in various EU-wide projects. There are probably lots of other organizations that would like to get in on such a scheme, if only because of the data and linkages a federated system would grant them. Credit card companies, for example. Some combination of mobile phone manufacturers, mobile network operators, and telcos. Various medical outfits, perhaps.

An identity layer that gives fair and reasonable access to a variety of players who jointly provide competition and consumer choice seems like a reasonable goal. But it's not clear that this is what either the UK's distastefully spelled "Midata" or the US's NSTIC (which attracted similar concerns when first announced) has in mind. What "federated identity" sounds like is the convenience of "single sign-on", which is great if you're working in a company and need to use dozens of legacy systems. When you're talking about identity verification for every type of transaction you do in your entire life, however, a single gateway is a single point of failure and, as Stephan Engberg, founder of the Danish company Priway, has often said, a single point of control. It's the Facebook cross-all-the-streams approach, embedded everywhere. Engberg points to a discussion paper inspired by two workshops he facilitated for the Danish National IT and Telecom Agency (NITA) in late 2010 that covers many of these issues.

Engberg, who describes himself as a "purist" when it comes to individual sovereignty, says the only valid privacy-protecting approach is to ensure that each time you go online on each device you start a new session that is completely isolated from all previous sessions and then have the choice of sharing whatever information you want in the transaction at hand. The EU's LinkSmart project, which Engberg was part of, created middleware to do precisely that. As sensors and RFID chips spread along with IPv6, which can give each of them its own IP address, linkages across all parts of our lives will become easier and easier, he argues.

We've seen often enough that people will choose convenience over complexity. What we don't know is what kind of technology will emerge to help us in this case. The devil, as so often, will be in the details.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 21, 2011

Printers on fire

It used to be that if you thought things were spying on you, you were mentally disturbed. But you're not paranoid if they're really out to get you, and new research at Columbia University, with funding from DARPA's Crash program, exposes how vulnerable today's devices are. Routers, printers, scanners - anything with an embedded system and an IP address.

Usually what's dangerous is monoculture: Windows is a huge target. So, argue Columbia computer science professor Sal Stolfo and PhD student Ang Cui, device manufacturers rely on security by diversity: every device has its own specific firmware. Cui estimates, for example, that there are 300,000 different firmware images for Cisco routers, varying by feature set, model, operating system version, hardware, and so on. Sure, an attacker could go after one - but what's the payback? Especially compared to that nice, juicy Windows server over there?

"In every LAN there are enormous numbers of embedded systems in every machine that can be penetrated for various purposes," says Cui.

The payback is access to that nice, juicy server and, indeed, the whole network. Few update - or even check - firmware. So once inside, an attacker can lurk unnoticed until the device is replaced.

Cui started by asking: "Are embedded systems difficult to hack? Or are they just not low-hanging fruit?" There isn't, notes Stolfo, an industry providing protection for routers, printers, the smart electrical meters rolling out across the UK, or the control interfaces that manage conference rooms.

If there is, after seeing their demonstrations, I want it.

Their work is two-pronged: first demonstrate the need, then propose a solution.

Cui began by developing a rootkit for Cisco routers. Despite the diversity of firmware and each image's memory layout, routers are a monoculture in that they all perform the same functions. Cui used this insight to find the invariant elements and fingerprint them, making them identifiable in the memory space. From that, he can determine which image is in place and deduce its layout.
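
That identification step is, at heart, signature matching: look for byte sequences that never change across builds and see which known image they point to. A much-simplified sketch of the idea, with invented image names and signatures (nothing from the actual research):

    # Much-simplified illustration of identifying a firmware image from the
    # invariant byte sequences it contains. All names and signatures invented.

    KNOWN_IMAGES = {
        "model-A-os-12.4": [b"\x7fELF", b"sched_v3", b"\xde\xad\xbe\xef"],
        "model-B-os-15.1": [b"\x7fELF", b"sched_v4", b"\xca\xfe\xba\xbe"],
    }

    def identify_image(dump):
        """Return the best-matching known image name, or None."""
        best_name, best_hits = None, 0
        for name, signatures in KNOWN_IMAGES.items():
            hits = sum(1 for sig in signatures if sig in dump)
            if hits > best_hits:
                best_name, best_hits = name, hits
        return best_name

    memory_dump = b"...\x7fELF...sched_v4...\xca\xfe\xba\xbe..."   # stand-in dump
    print(identify_image(memory_dump))   # -> model-B-os-15.1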

"It takes a millisecond."

Once in, Cui sets up a control channel over ping packets (ICMP) to load microcode, reroute traffic, and modify the router's behaviour. "And there's no host-based defense, so you can't tell it's been compromised." The amount of data sent over the control channel is too small to notice - perhaps a packet per second.

"You can stay stealthy if you want to."

You could even kill the router entirely by modifying the EEPROM on the motherboard. How much fun to be the army or a major ISP and physically connect to 10,000 dead routers to restore their firmware from backup?

They presented this at WOOT (Quicktime), and then felt they needed something more dramatic: printers.

"We turned off the motor and turned up the fuser to maximum." Result: browned paper and...smoke.

How? By embedding a firmware update in an apparently innocuous print job. This approach is familiar: embedding programs where they're not expected is a vector for viruses in Word and PDFs.

"We can actually modify the firmware of the printer as part of a legitimate document. It renders correctly, and at the end of the job there's a firmware update." It hasn't been done before now, Cui thinks, because there isn't a direct financial pay-off and it requires reverse-engineering proprietary firmware. But think of the possibilities.

"In a super-secure environment where there's a firewall and no access - the government, Wall Street - you could send a resume to print out." There's no password. The injected firmware connects to a listening outbound IP address, which responds by asking for the printer's IP address to punch a hole inside the firewall.

"Everyone always whitelists printers," Cui says - so the attacker can access any computer. From there, monitor the network, watch traffic, check for regular expressions like names, bank account numbers, and social security numbers, sending them back out as part of ping messages.

"The purpose is not to compromise the printer but to gain a foothold in the network, and it can stay for years - and then go after PCs and servers behind the firewall." Or propagate the first printer worm.

Stolfo and Cui call their answer a "symbiote", after biological symbiosis, in which two organisms attach to each other for mutual benefit.

The goal is code that works on an arbitrarily chosen executable about which you have very little knowledge. Emulating a biological symbiote, which finds places to attach to the host and extract resources, Cui's symbiote first calculates a secure checksum across all the static regions of the code, then finds random places where its code can be injected.

"We choose a large number of these interception points - and each time we choose different ones, so it's not vulnerable to a signature attack and it's very diverse." At each device access, the symbiote steals a little bit of the CPU cycle (like an RFID chip being read) and automatically verifies the checksum.

"We're not exploiting a vulnerability in the code," says Cui, "but a logical fallacy in the way a printer works." Adds Stolfo, "Every application inherently has malware. You just have to know how to use it."

Never mind all that. I'm still back at that printer smoking. I'll give up my bank account number and SSN if you just won't burn my house down.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


September 30, 2011

Trust exercise

When do we need our identity to be authenticated? Who should provide the service? Whom do we trust? And, to make it sustainable, what is the business model?

These questions have been debated ever since the early 1990s, when the Internet and the technology needed to enable the widespread use of strong cryptography arrived more or less simultaneously. Answering them is a genuinely hard problem (or it wouldn't be taking so long).

A key principle that emerged from the crypto-dominated discussions of the mid-1990s is that authentication mechanisms should be role-based and limited by "need to know"; information would be selectively unlocked and in the user's control. The policeman stopping my car at night needs to check my blood alcohol level and the validity of my driver's license, car registration, and insurance - but does not need to know where I live unless I'm in violation of one of those rules. Cryptography, properly deployed, can be used to protect my information, authenticate the policeman, and then authenticate the violation result that unlocks more data.
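
Stripped of the cryptography that would make it trustworthy in practice, the access logic is simple enough to sketch. Everything below - the fields, the limit, the idea of one neat record - is invented for illustration; the point is only that the violation, not the stop itself, unlocks the extra data.

    # Toy need-to-know disclosure at a traffic stop. Models only the access
    # logic, not the cryptography; all fields and thresholds are invented.

    DRIVER_RECORD = {
        "licence_valid": True,
        "insurance_valid": True,
        "blood_alcohol": 0.02,
        "home_address": "12 Example Road",   # sensitive: only released on violation
    }

    LEGAL_LIMIT = 0.08

    def roadside_check(record):
        """Return only what the officer needs to know for this stop."""
        violation = (not record["licence_valid"]
                     or not record["insurance_valid"]
                     or record["blood_alcohol"] > LEGAL_LIMIT)
        disclosed = {
            "licence_valid": record["licence_valid"],
            "insurance_valid": record["insurance_valid"],
            "under_limit": record["blood_alcohol"] <= LEGAL_LIMIT,
        }
        if violation:                          # the violation unlocks more data
            disclosed["home_address"] = record["home_address"]
        return disclosed

    print(roadside_check(DRIVER_RECORD))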

Today's stored-value cards - London's Oyster travel card, or Starbucks' payment/wifi cards - when used anonymously do capture some of what the crypto folks had in mind. But the crypto folks also imagined that anonymous digital cash or identification systems could be supported by selling standalone products people installed. This turned out to be wholly wrong: many tried, all failed. Which leads to today, where banks, telcos, and technology companies are all trying to figure out who can win the pool by becoming the gatekeeper - our proxy. We want convenience, security, and privacy, probably in that order; they want security and market acceptance, also probably in that order.

The assumption is we'll need that proxy because large institutions - banks, governments, companies - are still hung up on identity. So although the question should be whom do we - consumers and citizens - trust, the question that ultimately matters is whom do *they* trust? We know they don't trust *us*. So will it be mobile phones, those handy devices in everyone's pockets that are online all the time? Banks? Technology companies? Google has launched Google Wallet, and Facebook has grand aspirations for its single sign-on.

This was exactly the question Barclaycard's Tom Gregory asked at this week's Centre for the Study of Financial Innovation round-table discussion (PDF). It was, of course, a trick, but he got the answer he wanted: out of banks, technology companies, and mobile network operators, most people picked banks. Immediate flashback.

The government representatives who attended Privacy International's 1997 Scrambling for Safety meeting assumed that people trusted banks and that therefore they should be the Trusted Third Parties providing key escrow. Brilliant! It was instantly clear that the people who attended those meetings didn't trust their banks as much as all that.

One key issue is that, as Simon Deane-Johns writes in his blog posting about the same event, "identity" is not a single, static thing; it is dynamic and shifts constantly as we add to the collection of behaviors and data representing it.

As long as we equate "identity" with "a person's name," we're in the same kind of trouble the travel security agencies are when they try to predict who will become a terrorist on a particular flight. Like the browser fingerprint, we are more uniquely identifiable by the collection of our behaviors than we are by our names, as detectives who search for missing persons know. The target changes his name, his jobs, his home, and his wife - but if his obsession is chasing after trout he's still got a fishing license. Even if a link between a Starbucks card and its holder's real-world name is never formed, the more data the card's use enters into the system, the more clearly recognizable as an individual he becomes. The exact tag really doesn't matter in terms of understanding his established identity.

What I like about Deane-Johns' idea -

"the solution has to involve the capability to generate a unique and momentary proof of identity by reference to a broad array of data generated by our own activity, on the fly, which is then useless and can be safely discarded"

is two things. First, it has potential as a way to make impersonation and identity fraud much harder. Second, implicit in it is the possibility of two-way authentication, something we've clearly needed for years. Every large organization still behaves as though its identity is beyond question whereas we - consumers, citizens, employees - need to be thoroughly checked. Any identity infrastructure that is going to be robust in the future must be built on the understanding that with today's technology anyone and anything can be impersonated.
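A minimal sketch of how I read that idea - assuming, and it is a big assumption, that the verifier can independently confirm the same slice of recent activity data; the record strings and challenge mechanism here are invented for illustration:

    # Sketch of a one-time "momentary proof" derived from recent activity (illustrative only).
    import hashlib
    import os

    def momentary_proof(activity_records, challenge):
        """Derive a single-use token from recent behaviour plus a fresh nonce."""
        h = hashlib.sha256(challenge)
        for record in activity_records:
            h.update(record.encode())
        return h.hexdigest()

    challenge = os.urandom(16)   # issued by the verifier, never reused
    records = [
        "2011-09-28 oyster: zone 1 journey",
        "2011-09-29 card: groceries 23.50",
    ]
    proof = momentary_proof(records, challenge)
    assert proof == momentary_proof(records, challenge)  # valid for this challenge only

Once checked, both the proof and the challenge can be discarded; a replayed proof fails against the next challenge.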

As an aside, it was remarkable how many people at this week's meeting were more concerned about having their Gmail accounts hacked than their bank accounts. My reasoning would be that the stakes are higher with the bank: I'd rather lose my email reputation than my house. Their reasoning is that the banking industry is more responsive to customer problems than technology companies. That truly represents a shift from 1997, when technology companies were smaller and more responsive.

More to come on these discussions...


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 23, 2011

Your grandmother's phone

In my early 20s I had a friend who was an expert at driving cars with...let's call them quirks. If he had to turn the steering wheel 15 degrees to the right to keep the car going straight while peering between smears left by the windshield wipers and pressing just the exact right amount on the brake pedal, no problem. This is the beauty of humans: we are adaptable. That characteristic has made us the dominant species on the planet, since we can adapt to changes of habitat, food sources, climate (within reason), and cohorts. We also adapt to our tools, which is why technology designers get away with flaws like the iPhone's "death grip". We don't like it - but we can deal with it.

At least, we can deal with it when we know what's going on. At this week's Senior Market Mobile, the image that stuck in everyone's mind came early in the day, when Cambridge researchers Ian Hosking and Mike Bradley played a video clip of a 78-year-old woman trying to figure out how to get past an iPad's locked screen. Was it her fault that it seemed logical to her to hold it in one hand while jabbing at it in frustration? As Donald Norman wrote 20 years ago, for an interface to be intuitive it has to match the user's mental model of how it works.

That 78-year-old's difficulties, when compared with the glowing story of the 100-year-old who bonded instantly with her iPad, make another point: age is only one aspect of a person's existence - and one whose relevance they may reject. If you're having trouble reading small type, remembering the menu layout, pushing the buttons, or hearing a phone call, what matters isn't that you're old but that you have a vision impairment, cognitive difficulties, less dextrous fingers, or hearing loss. You don't have to be old to have any of those things - and not all old people have them.

For those reasons, the design decisions intended to aid seniors - who, my God, are defined as anyone over 55! - aid many other people too. All of these points were made with clarity by Mark Beasley, whose company specializes in marketing to seniors - you know, people who, unlike predominantly 30-something designers and marketers, don't think they're old and who resent being lumped together with a load of others with very different needs on the basis of age. And who think it's not uncool to be over 50. (How ironic, considering that when the Baby Boomers were 18 they minted the slogan, "Never trust anyone over 30.")

Besides physical attributes and capabilities, the cultural background of a target audience matters more than their age per se. We who learned to type on manual typewriters bash keyboards a lot harder than those who grew up with computers. Those who grew up with the phone grudgingly sited in the hallway, used only for the briefest of conversations, are less likely to be geared toward settling in for a long, loud, intimate conversation on a public street.

Last year at this event, Mobile Industry Review editor Ewan McLeod lambasted the industry because even the iPhone did not effectively serve his parents' greatest need: an easy way to receive and enjoy pictures of their grandkids. This year, Stuart Arnott showed off a partial answer, Mindings, a free app for Android tablets that turns them into smart display frames. You can send them pictures or text messages or, in Arnott's example, a reminder to take medication that, when acknowledged by a touch, goes on to display the picture or message the owner really wants to see.

Another project in progress, Threedom, is an attempt to create an Android design with only three buttons that uses big icons and type to provide all the same functionality, but very simply.

The problem with all of this - which Arnott seems to have grasped with Mindings - is that so many of these discussions focus on the mobile phone as a device in isolation. But that's not really learning the lesson of the iPod/iPhone/iPad, which is that what matters is the ecology surrounding the device. It is true that a proportion of today's elderly do not use computers or understand why they suddenly need a mobile phone. But tomorrow's elderly will be radically different. Depending on class and profession, people who are 60 now are likely to have spent many years of their working lives using computers and mobile phones. When they reach 86, what will dictate their choice of phone will be only partly whatever impairments age may bring. A much bigger issue is going to be the legacy and other systems that the phone has to work with: implantable electronic medical devices, smart electrical meters, ancient software in use because it's familiar (and has too much data locked inside it), maybe even that smart house they keep telling us we're going to have one of these days. Those phones are going to have to do a lot more than just make it easy to call your son.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 22, 2011

Face to face

When, six weeks or so back, Facebook implemented facial recognition without asking anyone much in advance, Tim O'Reilly expressed the opinion that it is impossible to turn back the clock and pretend that facial recognition doesn't exist or can be stopped. We need, he said, to stop trying to control the existence of these technologies and instead concentrate on controlling the uses to which collected data might be put.

Unless we're prepared to ban face recognition technology outright, having it available in consumer-facing services is a good way to get society to face up to the way we live now. Then the real work begins, to ask what new social norms we need to establish for the world as it is, rather than as it used to be.

This reminds me of the argument that we should be teaching creationism in schools in order to teach kids critical thinking: it's not the only, or even best, way to achieve the object. If the goal is public debate about technology and privacy, Facebook isn't a good choice to conduct it.

The problem with facial recognition, unlike a lot of other technologies, is that it's retroactive, like a compromised private cryptography key. Once the key is known you haven't just unlocked the few messages you're interested in but everything ever encrypted with that key. Suddenly deployed accurate facial recognition means the passers-by in holiday photographs, CCTV images, and old TV footage of demonstrations are all much more easily matched to today's tagged, identified social media sources. It's a step change, and it's happening very quickly after a long period of doesn't-work-as-hyped. So what was a low-to-moderate privacy risk five years ago is suddenly much higher risk - and one that can't be withdrawn with any confidence by deleting your account.

There's a second analogy here between what's happening with personal data and what's happening to small businesses with respect to hacking and financial crime. "That's where the money is," the bank robber Willie Sutton explained when asked why he robbed banks. But banks are well defended by large security departments. Much simpler to target weaker links, the small businesses whose money is actually being stolen. These folks do not have security departments and have not yet assimilated Benjamin Woolley's 1990s observation that cyberspace is where your money is. The democratization of financial crime has a more direct personal impact because the targets are closer to home: municipalities, local shops, churches, all more geared to protecting cash registers and collection plates than to securing computers, routers, and point-of-sale systems.

The analogy to personal data is that until relatively recently most discussions of privacy invasion similarly focused on celebrities. Today, most people can be studied as easily as famous, well-documented people if something happens to make them interesting: the democratization of celebrity. And there are real consequences. Canada, for example, is doing much more digging at the border, banning entry based on long-ago misdemeanors. We can warn today's teens that raiding a nearby school may someday limit their freedom to travel; but today's 40-somethings can't make an informed choice retroactively.

Changing this would require the US to decide at a national level to delete such data; we would have to trust them to do it; and other nations would have to agree to do the same. But the motivation is not there. Judith Rauhofer, at the online behavioral advertising workshop she organised a couple of weeks ago, addressed exactly this point when she noted that increasingly the mantra of governments bent on surveillance is, "This data exists. It would be silly not to use it."

The corollary, and the reason O'Reilly is not entirely wrong, is that governments will also say, "This *technology* exists. It would be silly not to use it." We can ban social networks from deploying new technologies, but we will still be stuck with them when it comes to governments and law enforcement. In this, government and business interests align perfectly.

So what, then? Do we stop posting anything online on the basis of the old spy motto "Never volunteer information", thereby ending our social participation? Do we ban the technology (which does nothing to stop the collection of the data)? Do we ban collecting the data (which does nothing to stop the technology)? Do we ban both and hope that all the actors are honest brokers rather than shifty folks trading our data behind our backs? What happens if thieves figure out how to use online photographs to break into systems protected by facial recognition?

One common suggestion is that social norms should change in the direction of greater tolerance. That may happen in some aspects, although Anders Sandberg has an interesting argument that transparency may in fact make people more judgmental. But if the problem of making people perfect were so easily solved we wouldn't have spent thousands of years on it with very little progress.

I don't like the answer "It's here, deal with it." I'm sure we can do better than that. But these are genuinely tough questions. The start, I think, has to be building as much user control into technology design (and its defaults) as we can. That's going to require a lot of education, especially in Silicon Valley.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 8, 2011

The grey hour

There is a fundamental conundrum that goes like this. Users want free information services on the Web. Advertisers will support those services if users will pay in personal data rather than money. Are privacy advocates spoiling a happy agreement or expressing a widely held concern that just hasn't found expression yet? Is it paternalistic and patronizing to say that the man on the Clapham omnibus doesn't understand the value of what he's giving up? Is it an expression of faith in human nature to say that on the contrary, people on the street are smart, and should be trusted to make informed choices in an area where even the experts aren't sure what the choices mean? Or does allowing advertisers free rein mean the Internet will become a highly distorted, discriminatory, immersive space where the most valuable people get the best offers in everything from health to politics?

None of those questions are straw men. The middle two are the extreme end of the industry point of view as presented at the Online Behavioral Advertising Workshop sponsored by the University of Edinburgh this week. That extreme shouldn't be ignored; Kimon Zorbas from the Internet Advertising Bureau, who voiced those views, also genuinely believes that regulating behavioral advertising is a threat to European industry. Can you prove him wrong? If you're a politician intent on reelection, hear that pitch, and can't document harm, do you dare to risk it?

At the other extreme end are the views of Jeff Chester, from the Center for Digital Democracy, who laid out his view of the future both here and at CFP a few weeks ago. If you read the reports the advertising industry produces for its prospective customers, they're full of neuroscience and eyeball tracking. Eventually, these practices will lead, he argues, to a highly discriminatory society: the most "valuable" people will get the best offers - not just in free tickets to sporting events but the best access to financial and health services. Online advertising contributed to the subprime loan crisis and the obesity crisis, he said. You want harm?

It's hard to assess the reality of Chester's argument. I trust his research into the documents in which advertising companies pitch their prospective customers. What isn't clear is whether the neuroscience these companies claim actually works. Certainly, one participant here says real neuroscientists heap scorn on the whole idea - and I am old enough to remember the mythology surrounding subliminal advertising.

Accordingly, the discussion here seems to me less of a single spectrum and more like a triangle, with the defenders of online behavioural advertising at one point, Chester and his neuroscience at another, and perhaps Judith Rauhofer, the workshop's organizer, at a third, with a lot of messy confusion in the middle. Upcoming laws, such as the revision of the EU ePrivacy Directive and various other regulatory efforts, will have to create some consensual order out of this triangular chaos.

The fourth episode of Joss Whedon's TV series Dollhouse, "The Gray Hour," had that week's characters enclosed inside a vault with an hour to accomplish their mission of theft - the time it takes for the security system to reboot. Is this online behavioral advertising's grey hour? Their opportunity to get ahead before we realize what's going on?

A persistent issue is definitely technology design.

One of Rauhofer's main points is that the latest mantra is, "This data exists, it would be silly not to take advantage of it." This is her answer to one of those middle points, that we should not be regulating the collection of data but simply its use. Regulating collection makes sense to me: no one can abuse data that has not been collected. And what does a privacy policy mean when the company that is actually collecting the data and compiling profiles is completely hidden?

One help would be teaching computer science students ethics and responsible data practices. The science fiction writer Charlie Stross noted the other day that the average age of entrepreneurs in the US is roughly ten years younger than in the EU. The reason: health insurance. Isn't it possible that starting up at a more mature age leads to a different approach to the social impact of what you're selling?

No one approach will solve this problem within the time we have to solve it. On the technology side, defaults matter. Researcher Chris Soghoian's "software choice architect" is rarely the software developer; more usually it's the legal or marketing department. The three biggest browser manufacturers most heavily funded by advertising not-so-mysteriously have the least privacy-friendly default settings. Advertising is becoming an arms race: first cookies, then Flash cookies, now online behavioral advertising, browser fingerprinting, geolocation, comprehensive profiling.

The law also matters. Peter Hustinx, lecturing last night, believes existing principles are right; they just need stronger enforcement and better application.

Consumer education would help - but for that to be effective we need far greater transparency from all these - largely American - companies.

What harm can you show has happened? Zorbas challenged. Rauhofer's reply: you do not have to prove harm when your house is bugged and constantly wiretapped. "That it's happening is the harm."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 17, 2011

If you build it...

Lawrence Lessig once famously wrote that "Code is law". Today, at the last day of this year's Computers, Freedom, and Privacy, Ross Anderson's talk about the risks of centralized databases suggested a corollary: Architecture is policy. (A great line and all mine, so I thought, until reminded that only last year CFP had an EFF-hosted panel called exactly that.)

You may *say* that you value patient (for example) privacy. And you may believe that your role-based access rules will be sufficient to protect a centralized database of personal health information (for example), but do the math. The NHS's central database, Anderson said, includes data on 50 million people that is accessible by 800,000 people - about the same number as had access to the diplomatic cables that wound up being published by Wikileaks. And we all saw how well that worked. (Perhaps the Wikileaks Unit could be pressed into service as a measure of security risk.)

So if you want privacy-protective systems, you want the person vendors build for - "the man with the checkbook" - to be someone who understands what policies will actually be implemented by your architecture and who will be around the table at the top level of government, where policy is being drafted. When the man with the checkbook is a doctor, you get a very different, much more functional, much more privacy protective system. When governments recruit and listen to a CIO you do not get a giant centralized, administratively convenient Wikileaks Unit.

How big is the threat?

Assessing that depends a lot, said Bruce Schneier, on whether you accept the rhetoric of cyberwar (Americans, he noted, are only willing to use the word "war" when there are no actual bodies involved). If we are at war, we are a population to be subdued; if we are in peacetime we are citizens to protect. The more the rhetoric around cyberwar takes over the headlines, the harder it will be to get privacy protection accepted as an important value. So many other debates all unfold differently depending whether we are rhetorically at war or at peace: attribution and anonymity; the Internet kill switch; built-in and pervasive wiretapping. The decisions we make to defend ourselves in wartime are the same ones that make us more vulnerable in peacetime.

"Privacy is a luxury in wartime."

Instead, "This" - Stuxnet, attacks on Sony and Citibank, state-tolerated (if not state-sponsored) hacking - "is what cyberspace looks like in peacetime." He might have, but didn't, say, "This is the new normal." But if on the Internet in 1995 no one knew you were a dog; on the Internet in 2011 no one knows whether your cyberattack was launched by a government-sponsored military operation or a couple of guys in a Senegalese cybercafé.

Why Senegalese? Because earlier, Mouhamadou Lo, a legal advisor from the Computing Agency of Senegal, had explained that cybercrime affects everyone. "Every street has two or three cybercafés," he said. "People stay there morning to evening and send spam around the world." And every day in his own country there are one or two victims. "It shows that cybercrime is worldwide."

And not only crime. The picture of a young Senegalese woman, posted on Facebook, appeared in the press in connection with the Strauss-Kahn affair because it seemed to correspond to a description given of the woman in the case. She did nothing wrong; but there are still consequences back home.

Somehow I doubt the solution to any of this will be found in the trend the ACLU's Jay Stanley and others highlighted towards robot policing. Forget black helicopters and CCTV; what about infrared cameras that capture private moments in the dark and helicopters the size of hummingbirds that "hover and stare"? The mayor of Ogden, Utah, wants blimps over his city, and, as Vernon M. Keenan, director of the Georgia Bureau of Investigation, put it, "Law enforcement does not do a good job of looking at new technologies through the prism of civil liberties."

Imagine, said the ACLU's Jay Stanley: "The chilling prospect of 100 percent enforcement."

Final conference thoughts, in no particular order:

- This is the first year of CFP (and I've been going since 1994) where Europe and the UK are well ahead on considering a number of issues. One was geotracking (Europe has always been ahead in mobile phones); but also electronic health care records and how to manage liability for online content. "Learn from our mistakes!" pleaded one Dutch speaker (re health records).

- #followfriday: @sfmnemonic; @privacywonk; @ehasbrouck; @CenDemTech; @openrightsgroup; @privacyint; @epic; @cfp11.

- The market in secondary use of health care data is now $2 billion (PricewaterhouseCoopers via Latanya Sweeney).

- Index on Censorship has a more thorough write-up of Bruce Schneier's talk.

- Today was IBM's 100th birthday.

- This year's chairs, Lillie Coney (EPIC) and Jules Polonetsky, did an exceptional job of finding a truly diverse range of speakers. A rarity at technology-related conferences.

- Join the weekly Twitter #privchat, Tuesdays at noon Eastern US time, hosted by the Center for Democracy and Technology.

- Have a good year, everybody! See you at CFP 2012 (and here every Friday until then).

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 10, 2011

The creepiness factor

"Facebook is creepy," said the person next to me in the pub on Tuesday night.

The woman across from us nodded in agreement and launched into an account of her latest foray onto the service. She had, she said, uploaded a batch of 15 photographs of herself and a friend. The system immediately tagged all of the photographs of the friend correctly. It then grouped the images of her and demanded to know, "Who is this?"

What was interesting about this particular conversation was that these people were not privacy advocates or techies; they were ordinary people just discovering their discomfort level. The sad thing is that Facebook will likely continue to get away with this sort of thing: it will say it's sorry, modify some privacy settings, and people will gradually get used to the convenience of having the system save them the work of tagging photographs.

In launching its facial recognition system, Facebook has done what many would have thought impossible: it has rolled out technology that just a few weeks ago *Google* thought was too creepy for prime time.

Wired UK has a set of instructions for turning tagging off. But underneath, the system will, I imagine, still recognize you. What records are kept of this underlying data and what mining the company may be able to do on them is, of course, not something we're told about.

Facebook has had to rein in new elements of its service so many times now - the Beacon advertising platform, the many revamps to its privacy settings - that the company's behavior is beginning to seem like a marketing strategy rather than a series of bungling missteps. The company can't be entirely privacy-deaf; it numbers among its staff the open rights advocate and former MP Richard Allan. Is it listening to its own people?

If it's a strategy it's not without antecedents. Google, for example, built its entire business without TV or print ads. Instead, every so often it would launch something so cool everyone wanted to use it that would get it more free coverage than it could ever have afforded to pay for. Is Facebook inverting this strategy by releasing projects it knows will cause widely covered controversy and then reining them back in only as far as the boundary of user complaints? Because these are smart people, and normally smart people learn from their own mistakes. But Zuckerberg, whose comments on online privacy have approached arrogance, is apparently justified, in that no matter what mistakes the company has made, its user base continues to grow. As long as business success is your metric, until masses of people resign in protest, he's golden. Especially when the IPO moment arrives, expected to be before April 2012.

The creepiness factor has so far done nothing to hurt its IPO prospects - which, in the absence of an actual IPO, seem to be rubbing off on the other social media companies going public. Pandora (net loss last quarter: $6.8 million) has even increased the number of shares on offer.

One thing that seems to be getting lost in the rush to buy shares - LinkedIn popped to over $100 on its first day, and has now settled back to $72 and change (for a price/earnings ratio of 1,076) - is that buying first-day shares isn't what it used to be. Even during the millennial technology bubble, buying shares at the launch of an IPO was approximately like joining a queue at midnight to buy the new Apple whizmo on the first day, even though you know you'll be able to get it cheaper and debugged in a couple of months. Anyone could have gotten much better prices on Amazon shares for some months after that first-day bonanza, for example (and either way, in the long term, you'd have profited handsomely).

Since then, however, a new game has arrived in town: private exchanges, where people who meet a few basic criteria for being able to afford to take risks trade pre-IPO shares. The upshot is that even more of the best deals have already gone by the time a company goes public.

In no case is this clearer than the Groupon IPO, about which hardly anyone has anything good to say. Investors buying in would be the greater fools; a co-founder's past raises questions, and its business model is not sustainable.

Years ago, Roger Clarke predicted that the then brand-new concept of social networks would inevitably become data abusers simply because they had no other viable business model. As powerful as the temptation to do this has been while these companies have been growing, it seems clear the temptation can only become greater when they have public markets and shareholders to answer to. New technologies are going to exacerbate this: performing accurate facial recognition on user-uploaded photographs wasn't possible when the first pictures were being uploaded. What capabilities will these networks be able to deploy in the future to mine and match our data? And how much will they need to do it to keep their profits coming?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


April 29, 2011

Searching for reality

They say that every architect has, stuck in his desk drawer, a plan for the world's tallest skyscraper; probably every computer company similarly has a plan for the world's fastest supercomputer. At one time, that particular contest was always won by Seymour Cray. Currently, the world's fastest computer is Tianhe-1A, in China. But one day soon, it's going to be Blue Waters, an IBM-built machine filling 9,000 square feet at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.

It's easy to forget - partly because Champaign-Urbana is not a place you visit by accident - how mainstream-famous NCSA and its host, UIUC, used to be. NCSA is the place from which Mosaic emerged in 1993. UIUC was where Arthur C. Clarke's HAL was turned on, on January 12, 1997. Clarke's choice was not accidental: my host, researcher Robert McGrath tells me that Clarke visited here and saw the seminal work going on in networking and artificial intelligence. And somewhere he saw the first singing computer, an IBM 7094 haltingly rendering "Daisy Bell." (Good news for IBM: at that time they wouldn't have had to pay copyright clearance fees on a song that was, in 1961, 69 years old.)

So much was invented here: Telnet, for example.

"But what have they done for us lately?" a friend in London wondered.

NCSA's involvement with supercomputing began when Larry Smarr, having worked in Europe and admired the access non-military scientists had to high-performance computers, wrote a letter to the National Science Foundation proposing that the NSF should fund a supercomputing center for use by civilian scientists. They agreed, and the first version of NCSA was built in 1986. Typically, a supercomputer is commissioned for five years; after that it's replaced with the fastest next thing. Blue Waters will have more than 300,000 8-core processors and be capable of a sustained rate of 1 petaflop and a peak rate of 10 petaflops. The transformer room underneath can provide 24 megawatts of power - as energy-efficiently as possible. Right now, the space where Blue Waters will go is a large empty white space broken up by black plug towers. It looks like a set from a 1950s science fiction film.

On the consumer end, we're at the point now where a five-year-old computer pretty much answers most normal needs. Unless you're a gamer or a home software developer, the pressure to upgrade is largely off. But this is nowhere near true at the high end of supercomputing.

"People are never satisfied for long," says Tricia Barker, who showed us around the facility. "Scientists and engineers are always thinking of new problems they want to solve, new details they want to see, and new variables they want to include." Planned applications for Blue Waters include studying storms to understand why some produce tornadoes and some don't. In the 1980s, she says, the data points were kilometers apart; Blue Waters will take the mesh down to 10 meters.

"It's why warnings systems are so hit and miss," she explains. Also on the list are more complete simulations to study climate change.

Every generation of supercomputers gets closer to simulating reality and increases the size of the systems we can simulate in a reasonable amount of time. How much further can it go?

They speculate, she said, about how, when, and whether exaflops can be reached: 2018? 2020? At all? Will the power requirements outstrip what can reasonably be supplied? How big would it have to be? And could anyone afford it?

In the end, of course, it's all about the data. The 500 petabytes of storage Blue Waters will have is only a small piece of the gigantic data sets that science is now producing. Across campus, also part of NCSA, senior research scientist Ray Plante is part of the Large Synoptic Survey Telescope project, which, when it gets going, will capture a third of the sky every night on 3-gigapixel cameras with a wide field of view. The project will allow astronomers to see changes over a period of days, allowing them to look more closely at phenomena such as bursters and supernovae, and study dark energy.

Astronomers have led the way in understanding the importance of archiving and sharing data, partly because the telescopes are so expensive that scientists have no choice about sharing them. More than half the Hubble telescope papers, Plante says, are based on archival research, which means research conducted on the data after a short period in which research is restricted to those who proposed (and paid for) the project. In the case of LSST, he says, there will be no proprietary period: the data will be available to the whole community from Day One. There's a lesson here for data hogs if they care to listen.

Listening to Plante - and his nearby colleague Joe Futrelle - talk about the issues involved in storing, studying, and archiving these giant masses of data shows some of the issues that lie ahead for all of us. Many of today's astronomical studies rely on statistics, which in turn requires matching data sets that have been built into catalogues without necessarily considering who might in future need to use them: opening the data is only the first step.

So in answer to my friend: lots. I saw only about 0.1 percent of it.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 15, 2011

The open zone

This week my four-year-old computer had a hissy fit and demanded, more or less simultaneously, a new graphics card, a new motherboard, and a new power supply. It was the power supply that was the culprit: when it blew it damaged the other two pieces. I blame an incident about six months ago when the power went out twice for a few seconds each time, a few minutes apart. The computer's always been a bit fussy since.

I took it to the tech guys around the corner to confirm the diagnosis, and we discussed which replacements to order and where to order them from. I am not a particularly technical person, and yet even I can repair this machine by plugging in replacement parts and updating some software. (It's fine now, thank you.)

Here's the thing: at no time did anyone say, "It's four years old. Just get a new one." Instead, the tech guys said, "It's a good computer with a good processor. Sure, replace those parts." A watershed moment: the first time a four-year-old computer is not dismissed as obsolete.

As if by magic, confirmation turned up yesterday, when the Guardian's Charles Arthur asked whether the PC market has permanently passed its peak. Arthur goes on to quote Jay Chou, a senior research analyst at IDC, suggesting that we are now in the age of "good-enough computing" and that computer manufacturers will now need to find ways to create a "compelling user experience". Apple is the clear leader in that arena, although it's likely that if I'd had a Mac instead of a PC it would have been neither so easy nor so quick and inexpensive to fix my machine and get back to work on it. Macs are wonders of industrial design, but as I noted in 2007 when I built this machine, building PCs is now a color-by-numbers affair: subsystem pieces that plug together in only one way. What it lacks in elegance compared to a Mac is more than made up for by being able to repair it myself.

But Chou is likely right that this is not the way the world is going.

In his 1998 book The Invisible Computer, usability pioneer Donald Norman projected a future of information appliances, arguing that computers would become invisible because they would be everywhere. (He did not, however, predict the ubiquitous 20-second delay that would accompany this development. You know, it used to be you could turn something on and it would work right away because it didn't have to load software into its memory?) For his model, Norman took electric motors: in the early days you bought one electric motor and used it to power all sorts of variegated attachments; later (now) you found yourself owning dozens of electric motors, all hidden inside appliances.

The trade-off is pretty much the same: the single electric motor with attachments was much more repairable by a knowledgeable end user than today's sealed black-box appliances are. Similarly, I can rebuild my PC, but I can only really replace the hard drive on my laptop, and the battery on my smart phone. iPhone users can't even do that. Norman, whose interest is usability, doesn't - or didn't, since he's written other books since - see this as necessarily a bad deal for consumers, who just want their technology to work intuitively so they can use it to get stuff done.

Jonathan Zittrain, though, has generally taken the opposite view, arguing in his book The Future of the Internet - and How to Stop It and in talks such as the one he gave at last year's Web science meeting that the general-purpose computer, which he dates to 1977, is dying. With it, to some extent, is going the open Internet; it was at that point that, to illustrate what he meant by curated content, he did a nice little morph from the ultra-controlled main menu of CompuServe circa 1992 to today's iPhone home screen.

"How curated do we want things to be?" he asked.

It's the key question. Zittrain's view, backed up by Tim Wu in The Master Switch, is that security and copyright may be the levers used to close down general-purpose computers and the Internet, leaving us with a corporately-owned Internet that runs on black boxes to which individual consumers have little or no access. This is, ultimately, what the "Open" in Open Rights Group seems to me to be about: ensuring that the most democratic medium ever invented remains a democratic medium.

Clearly, there are limits. The earliest computer kits were open - but only to the relatively small group of people with - or willing to acquire - considerable technical skill. My computer would not be more open to me if I had to get out a soldering iron to fix my old motherboard and code my own operating system. Similarly, the skill required to deal with security threats like spam and malware attacks raises the technical bar of dealing with computers to the point where they might as well be the black boxes Zittrain fears. But somewhere between the soldering iron and the point-and-click of a TV remote control there has to be a sweet spot where the digital world is open to the most people. That's what I hope we can find.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 8, 2011

Brought to book

JK Rowling is seriously considering releasing the Harry Potter novels as ebooks, while Amanda Hocking, who's sold a million or so ebooks, has signed a $2 million contract with St. Martin's Press. In the same week. It's hard not to conclude that ebooks are finally coming of age.

And in many ways this is a good thing. The economy surrounding the Kindle, Barnes and Noble's Nook, and other such devices is allowing more than one writer to find an audience for works that mainstream publishers might have ignored. I do think hard work and talent will usually out, and it's hard to believe that Hocking would not have found herself a good career as a writer via the usual routine of looking for agents and publishers. She would very likely have many fewer books published at this point, and probably wouldn't be in possession of the $2 million it's estimated she's made from ebook sales.

On the other hand, assuming she had made at least a couple of book sales by now, she might be much more famous: her blog posting explaining her decision notes that a key factor is that she gets a steady stream of complaints from would-be readers that they can't buy her books in stores. She expects to lose money on the St. Martin's deal compared to what she'd make from self-publishing the same titles. To fans of disintermediation, of doing away with gatekeepers and middle men and allowing artists to control their own fates and interact directly with their audiences, Hocking is a self-made hero.

And yet...the future of ebooks may not be so simply rosy.

This might be the moment to stop and suggest reading a little background on book publishing from the smartest author I know on the topic, science fiction writer Charlie Stross. In a series of blog postings he's covered common misconceptions about publishing, why the Kindle's 2009 UK launch was bad news for writers, and misconceptions about ebooks. One of Stross's central points: epublishing platforms are not owned by publishers but by consumer electronics companies - Apple, Sony, Amazon.

If there's one thing we know about the Net and electronic media generally it's that when the audience for any particular new medium - Usenet, email, blogs, social networks - gets to be a certain size it attracts abuse. It's for this reason that every so often I argue that the Internet does not scale well.

In a fascinating posting on Patrick and Teresa Nielsen Hayden's blog Making Light, Jim Macdonald notes the case of Canadian author S K S Perry, who has been blogging on LiveJournal about his travails with a thief. Perry, having had no luck finding a publisher for his novel Darkside, had posted it for free on his Web site, where a thief copied it and issued a Kindle edition. Macdonald links this sorry tale (which seems now to have reached a happy-enough ending) with postings from Laura Hazard Owen and Mike Essex that predict a near future in which we are awash in recycled ebook...spam. As all three of these writers point out, there is no system in place to do the kind of copyright/plagiarism checking that many schools have implemented. The costs are low; the potential for recycling content vast; and the ease of gaming the ratings system extraordinary. And either way, the ebook retailer makes money.
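For what it's worth, even a crude version of that kind of check is not hard to sketch - this is my illustration, not anything any ebook retailer is known to run - by scoring the overlap of word "shingles" between a new submission and existing titles:

    # Illustrative near-duplicate check; the threshold and workflow are invented here.
    def shingles(text, k=5):
        """Overlapping k-word windows; shared windows suggest copied passages."""
        words = text.lower().split()
        return {tuple(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

    def overlap(a, b, k=5):
        """Jaccard similarity of the two shingle sets: 0.0 = unrelated, 1.0 = identical."""
        sa, sb = shingles(a, k), shingles(b, k)
        return len(sa & sb) / len(sa | sb) if (sa or sb) else 0.0

    # A submission scoring close to 1.0 against an existing title deserves a human look
    # before it goes on sale.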

Macdonald's posting primarily considers this future with respect to the challenge for authors to be successful*: how will good books find audiences if they're tiny islands adrift in a sea of similar-sounding knock-offs and crap? A situation like that could send us all scurrying back into the arms of people who publish on paper. That wouldn't bother Amazon-the-bookseller; Apple and others without a stake in paper publishing are likely to care more (and promising authors and readers due care and diligence might help them build a better, differentiated ebook business).

There is a mythology that those who - like the Electronic Frontier Foundation or the Open Rights Group - oppose the extension and tightening of copyright are against copyright. This is not the case: very few people want to do away with copyright altogether. What most campaigners in this area want is a fairer deal for all concerned.

This week the issue of term extension for sound recordings in the EU was revived when Denmark changed tack and announced it would support the proposals. It's long been my contention that musicians would be better served by changes in the law that would eliminate some of the less fair terms of typical contracts, that would provide for the reversion of rights to musicians when their music goes out of commercial availability, and that would alter the balance of power, even if only slightly, in favor of the musicians.

This dystopian projected future for ebooks is a similar case. It is possible to be for paying artists and even publishers and still be against the imposition of DRM and the demonization of new technologies. This moment, where ebooks are starting to kick into high gear, is the time to find better ways to help authors.

*Successful: an author who makes enough money from writing books to continue writing books.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

February 18, 2011

What is hyperbole?

This seems to have been a week for over-excitement. IBM gets an onslaught of wonderful publicity because it built a very large computer that won at the archetypal American TV game, Jeopardy. And Eben Moglen proposes the Freedom box, a more-or-less pocket ("wall wart") computer you can plug in and that will come up, configure itself, and be your Web server/blog host/social network/whatever and will put you and your data beyond the reach of, well, everyone. "You get no spying for free!" he said in his talk outlining the idea for the New York Internet Society.

Now I don't mean to suggest that these are not both exciting ideas and that making them work is/would be an impressive and fine achievement. But seriously? Is "Jeopardy champion" what you thought artificial intelligence would look like? Is a small "wall wart" box what you thought freedom would look like?

To begin with Watson and its artificial buzzer thumb. The reactions display everything that makes us human. The New York Times seems to think AI is solved, although its editors focus on our ability to anthropomorphize an electronic screen with a smooth, synthesized voice and a swirling logo. (Like HAL, R2D2, and Eliza Doolittle, its status is defined by the reactions of the surrounding humans.)

The Atlantic and Forbes come across as defensive. The LA Times asks: how scared should we be? The San Francisco Chronicle congratulates IBM for suddenly becoming a cool place for the kids to work.

If, that is, they're not busy hacking up Freedom boxes. You could, if you wanted, see the past twenty years of net.wars as a recurring struggle between centralization and distribution. The Long Tail finds value in selling obscure products to meet the eccentric needs of previously ignored niche markets; eBay's value is in aggregating all those buyers and sellers so they can find each other. The Web's usefulness depends on the diversity of its sources and content; search engines aggregate it and us so we can be matched to the stuff we actually want. Web boards distributed us according to niche topics; social networks aggregated us. And so on. As Moglen correctly says, we pay for those aggregators - and for the convenience of closed, mobile gadgets - by allowing them to spy on us.

An early, largely forgotten net.skirmish came around 1991 over the asymmetric broadband design that today is everywhere: a paved highway going to people's homes and a dirt track coming back out. The objection that this design assumed that consumers would not also be creators and producers was largely overcome by the advent of Web hosting farms. But imagine instead that symmetric connections were the norm and everyone hosted their sites and email on their own machines with complete control over who saw what.

This is Moglen's proposal: to recreate the Internet as a decentralized peer-to-peer system. And I thought immediately how much it sounded like...Usenet.

For those who missed the 1990s: invented and implemented in 1979 by three students, Tom Truscott, Jim Ellis, and Steve Bellovin, the whole point of Usenet was that it was a low-cost, decentralized way of distributing news. Once the Internet was established, it became the medium of transmission, but in the beginning computers phoned each other and transferred news files. In the early 1990s, it was the biggest game in town: it was where Linus Torvalds and Tim Berners-Lee announced their inventions of Linux and the World Wide Web.

It always seemed to me that if "they" - whoever they were going to be - seized control of the Internet we could always start over by rebuilding Usenet as a town square. And this is to some extent what Moglen is proposing: to rebuild the Net as a decentralized network of equal peers. Not really Usenet; instead a decentralized Web like the one we gave up when we all (or almost all) put our Web sites on hosting farms whose owners could be DMCA'd into taking our sites down or subpoena'd into turning over their logs. Freedom boxes are Moglen's response to "free spying with everything".

I don't think there's much doubt that the box he has in mind can be built. The Pogoplug, which offers a personal cloud and a sort of hardware social network, is most of the way there already. And Moglen's argument has merit: that if you control your Web server and the nexus of your social network law enforcement can't just make a secret phone call, they'll need a search warrant to search your home if they want to inspect your data. (On the other hand, seizing your data is as simple as impounding or smashing your wall wart.)

I can see Freedom boxes being a good solution for some situations, but like many things before it they won't scale well to the mass market because they will (like Usenet) attract abuse. In cleaning out old papers this week, I found a 1994 copy of Esther Dyson's Release 1.0 in which she demands a return to the "paradise" of the "accountable Net"; 'twill be ever thus. The problem Watson is up against is similar: it will function well, even engagingly, within the domain it was designed for. Getting it to scale will be a whole 'nother, much more complex problem.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


January 14, 2011

Face time

The history of the Net has featured many absurd moments, but this week was some sort of peak of the art. In the same week I read that a) based on a $450 million round of investment from Goldman Sachs, Facebook is now valued at $50 billion, higher than Boeing's market capitalization; and b) Facebook's founder, Mark Zuckerberg, is so tired of the stress of running the service that he plans to shut it down on March 15. As I seem to recall a CS Lewis character remarking irritably, "Why don't they teach logic in these schools?" If you have a company worth $50 billion and you don't much like running it any more, you sell the damn thing and retire. It's not like Zuckerberg even needs to wait to be Time's Man of the Year.

While it's safe to say that Facebook isn't going anywhere soon, it's less clear what its long-term future might be, and the users who panicked at the thought of the service's disappearance would do well to plan ahead. Because: if there's one thing we know about the history of the Net's social media it's that the party keeps moving. Facebook's half-a-billion-strong user base is, to be sure, bigger than anything else assembled in the history of the Net. But I think the future as seen by Douglas Rushkoff, writing for CNN last week, is more likely: Facebook, he argued based on its arguably inflated valuation, is at the beginning of its end, as MySpace was when Rupert Murdoch bought it in 2005 for $580 million. (Though this says as much about Murdoch's Net track record as it does about MySpace: Murdoch bought the text-based Delphi at its peak moment, in late 1993.)

Back in 1999, at the height of the dot-com boom, the New Yorker published an article (abstract; full text requires subscription) comparing the then-spiking stock price of AOL with that of the Radio Corporation of America back in the 1920s, when radio was the hot, new democratic medium. RCA was selling radios that gave people unprecedented access to news and entertainment (including stock quotes); AOL was selling online accounts that gave people unprecedented access to news, entertainment, and their friends. The comparison, as the article noted, wasn't perfect, but the comparison chart the article was written around was, as the author put it, "jolly". It still looks jolly now, recreated some months later for this analysis of the comparison.

There is more to every company than just its stock price, and there is more to AOL than its subscriber numbers. But the interesting chart to study - if I had the ability to create such a chart - would be the successive waves of rising, peaking, and falling numbers of subscribers of the various forms of social media. In more or less chronological order: bulletin boards, Usenet, Prodigy, Genie, Delphi, CompuServe, AOL...and now MySpace, which this week announced extensive job cuts.

At its peak, AOL had 30 million of those; at the end of September 2010 it had 4.1 million in the US. As subscriber revenues continue to shrink, the company is changing its emphasis to producing content that will draw in readers from all over the Web - that is, it's increasingly dependent on advertising, like many companies. But the broader point is that at its peak a lot of people couldn't conceive that it would shrink to this extent, because of the basic principle of human congregation: people go where their friends are. When the friends gradually start to migrate to better interfaces, more convenient services, or simply sites their more annoying acquaintances haven't discovered yet, others follow. That doesn't necessarily mean death for the service they're leaving: AOL, like CIX, The WELL, and LiveJournal before it, may well find a stable size at which it remains sufficiently profitable to stay alive, perhaps even comfortably so. But it does mean it stops being the growth story of the day.

As several financial commentators have pointed out, the Goldman investment is good for Goldman no matter what happens to Facebook, and may not be ring-fenced enough to keep Facebook private. My guess is that even if Facebook has reached its peak it will be a long, slow ride down the mountain and between then and now at least the early investors will make a lot of money.

But long-term? Facebook is barely five years old. According to figures leaked by one of the private investors, its price-earnings ratio is 141. The good news is that if you're rich enough to buy shares in it you can probably afford to lose the money.

As far as I'm aware, little research has been done studying the Net's migration patterns. From my own experience, I can say that my friends lists on today's social media include many people I've known on other services (and not necessarily in real life) as the old groups reform in a new setting. Facebook may believe that because the profiles on its service are so complex, including everything from status updates and comments to photographs and games, users will stay locked in. Maybe. But my guess is that the next online party location will look very different. If email is for old people, it won't be long before Facebook is, too.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 19, 2010

Power to the people

We talk often about the fact that ten years of effort - lawsuits, legislation, technology - on the part of the copyright industries has made barely a dent in the amount of material available online as unauthorized copies. We talk less about the similar situation that applies to privacy despite years of best efforts by Privacy International, Electronic Privacy Information Center, Center for Democracy and Technology, Electronic Frontier Foundation, Open Rights Group, No2ID, and newcomer Big Brother Watch. The last ten years have built Google, and Facebook, and every organization now craves large data stores of personal information that can be mined. Meanwhile, governments are complaisant, possibly because they have subpoena power. It's been a long decade.

"Information is the oil of the 1980s," wrote Thomas McPhail and Brenda McPhail in 1987 in an article discussing the politics of the International Telecommunications Union, and everyone seems to take this encomium seriously.

William Heath spent his early career founding and running Kable, a consultancy specializing in government IT, and the question he kept returning to was how to create the ideal government for the digital era. He has been saying for many months now that there's a gathering wave of change. His idea is that the *new* new thing is technologies to give us back control and up-end the current situation in which everyone behaves as if they own all the information we give them. But it's their data only in exactly the same way that taxpayers' money belongs to the government. They call it customer relationship management; Heath calls the data we give them volunteered personal information and proposes instead vendor relationship management.

Always one to put his effort where his mouth is (he helped found the Open Rights Group, the Foundation for Information Policy Research, and the Dextrous Web as well as Kable), Heath has set up not one, but two companies. The first, Ctrl-Shift, is a research and advisory business to help organizations adjust and adapt to the power shift. The second, Mydex, is a platform now being prototyped in partnership with the Department of Work and Pensions and several UK councils (PDF). Set up as a community interest company, Mydex is asset-locked, to ensure that the company can't suddenly reverse course and betray its customers and their data.

The key element of Mydex is the personal data store, which is kept under each individual's own control. When you want to do something - renew a parking permit, change your address with a government agency, rent a car - you interact with the remote council, agency, or company via your PDS. Independent third parties verify the data you present. To rent a car, for example, you might present a token from the vehicle licensing bureau that authenticates your age and right to drive and another from your bank or credit card company verifying that you can pay for the rental. The rental company only sees the data you choose to give it.
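To make that flow concrete, here is a minimal sketch in Python of the pattern Heath describes: narrow claims verified by independent third parties, held in a personal data store under the individual's control, and released selectively to a relying party such as a car rental firm. All the names (PersonalDataStore, VerifiedToken, issue_token and so on) are invented for illustration, and the shared-key HMAC signing is a toy stand-in for real cryptographic verification - this is not Mydex's actual API or data model.

from dataclasses import dataclass
import hashlib, hmac, json

@dataclass
class VerifiedToken:
    issuer: str        # e.g. "licensing-bureau"
    claim: dict        # a narrow claim, e.g. {"licensed_driver": True}
    signature: str     # issuer's MAC over the claim

def issue_token(issuer: str, issuer_key: bytes, claim: dict) -> VerifiedToken:
    # A verifying third party signs a narrow claim about the individual.
    payload = json.dumps({"issuer": issuer, "claim": claim}, sort_keys=True)
    sig = hmac.new(issuer_key, payload.encode(), hashlib.sha256).hexdigest()
    return VerifiedToken(issuer, claim, sig)

class PersonalDataStore:
    # Held under the individual's own control; releases only chosen tokens.
    def __init__(self):
        self.tokens = {}
    def add(self, name: str, token: VerifiedToken):
        self.tokens[name] = token
    def present(self, names):
        # The relying party sees only the claims the individual selects.
        return [self.tokens[n] for n in names]

def relying_party_accepts(tokens, trusted_keys) -> bool:
    # The rental firm checks signatures; it never sees the full profile.
    for t in tokens:
        payload = json.dumps({"issuer": t.issuer, "claim": t.claim}, sort_keys=True)
        expected = hmac.new(trusted_keys[t.issuer], payload.encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, t.signature):
            return False
    return True

# Example: rent a car by presenting two narrow, verified claims.
dvla_key, bank_key = b"dvla-secret", b"bank-secret"
pds = PersonalDataStore()
pds.add("driving", issue_token("licensing-bureau", dvla_key, {"licensed_driver": True}))
pds.add("payment", issue_token("bank", bank_key, {"can_pay_rental": True}))
ok = relying_party_accepts(pds.present(["driving", "payment"]),
                           {"licensing-bureau": dvla_key, "bank": bank_key})
print("rental approved:", ok)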

It's Heath's argument that such a setup would preserve individual privacy and increase transparency while simultaneously saving companies and governments enormous sums of money.

"At the moment there is a huge cost of trying to clean up personal data," he says. "There are 60 to 200 organisations all trying to keep a file on you and spending money on getting it right. If you chose, you could help them." The biggest cost, however, he says, is the lack of trust on both sides. People vanish off the electoral rolls or refuse to fill out the census forms rather than hand over information to government; governments treat us all as if we were suspected criminals when all we're trying to do is claim benefits we're entitled to.

You can certainly see the potential. Ten years ago, when they were talking about "joined-up government", MPs dealing with constituent complaints favored the notion of making it possible to change your address (for example) once and have the new information propagate automatically throughout the relevant agencies. Their idea, however, was a huge, central data store; the problem for individuals (and privacy advocates) was that centralized data stores tend to be difficult to keep accurate.

"There is an oft-repeated fallacy that existing large organizations meant to serve some different purpose would also be the ideal guardians of people's personal data," Heath says. "I think a purpose-created vehicle is a better way." Give everyone a PDS, and they can have the dream of changing their address only once - but maintain control over where it propagates.

There are, as always, key questions that can't be answered at the prototype stage. First and foremost is the question of whether and how the system can be subverted. Heath's intention is that we should be able to set our own terms and conditions for their use of our data - up-ending the present situation again. We can hope - but it's not clear that companies will see it as good business to differentiate themselves on the basis of how much data they demand from us when they don't now. At the same time, governments who feel deprived of "their" data can simply pass a law and require us to submit it.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 1, 2010

Duty of care

"Anyone who realizes how important the Web is," Tim Berners-Lee said on Tuesday, "has a duty of care." He was wrapping up a two-day discussion meeting at the Royal Society. The subject: Web science.

What is Web science? Even after two days, it's difficult to grasp, in part because defining it is a work in progress. Here are some of the disciplines that contributed: mathematics, philosophy, sociology, network science, and law, plus a bunch of much more directly Webby things that don't fit easily into categories. Which of course is the point: Web science has to cover much more than just the physical underpinnings of computers and network wires. Computer science or network science can use the principles of mathematics and physics to develop better and faster machines and study architectures and connections. But the Web doesn't exist without the people putting content and applications on it, and so Web science must be as much about human behaviour as about physics.

"If we are to anticipate how the Web will develop, we will require insight into our own nature," Nigel Shadbolt, one of the event's convenors, said on Monday. Co-convenor Wendy Hall has said, similarly, "What creates the Web is us who put things on it, and that's not natural or engineered.". Neither natural (biological systems) or engineered (planned build-out like the telecommunications networks), but something new. If we can understand it better, we can not only protect it better, but guide it better toward the most productive outcomes, just as farmers don't haphazardly interbreed species of corn but use their understanding to select for desirable traits.

The simplest contributions to understand, therefore, came (ironically) from the mathematicians. Particularly intriguing was the former chief scientist Robert May, whose analysis of how many nodes you must remove from a network to render it non-functional applied equally to the Web, epidemiology, and banking risk.

This is all happening despite the recent Wired cover claiming the "Web is dead". Dead? Facebook is a Web site; Skype, the app store, IM clients, Twitter, and the New York Times all reach users first via the Web even if they use their iPhones for subsequent visits (and how exactly did they buy those iPhones, hey?) Saying it's dead is almost exactly the old joke about how no one goes to a particular restaurant any more because it's too crowded.

People who think the Web is dead have stopped seeing it. But the point of Web science is that for 20 years we've been turning what started as an academic playground into a critical infrastructure, and for government, finance, education, and social interaction to all depend on the Web it must have solid underpinnings. And it has to keep scaling - in a presentation on the state of deployment of IPv6 in China, Jianping Wu noted that Internet penetration in China is expected to jump from 30 percent to 70 percent in the next ten to 20 years. That means adding 400-900 million users. The Chinese will have to design, manage, and operate the largest infrastructure in the world - and finance it.

But that's the straightforward kind of scaling. IBMer Philip Tetlow, author of The Web's Awake (a kind of Web version of the Gaia hypothesis), pointed out that all the links in the world are a finite set; all the eyeballs in the world looking at them are a finite set...but all the contexts surrounding them...well, it's probably finite but it's not calculable (despite Pierre Levy's rather fanciful construct that seemed to suggest it might be possible to assign a URI to every human thought). At that level, Tetlow believes some of the neat mathematical tools, like Jennifer Chayes' graph theory, will break down.

"We're the equivalent of precision engineers," he said, when what's needed are the equivalent of town planners and urban developers. "And we can't build these things out of watches."

We may not be able to build them at all, at least not immediately. Helen Margetts outlined the constraints on the development of e-government in times of austerity. "Web science needs to map, understand, and develop government just as for other social phenomena, and export back to mainstream," she said.

Other speakers highlighted gaps between popular mythology and reality. MIT's David Carter noted that, "The Web is often associated with the national and international but not the local - but the Web is really good at fostering local initiatives - that's something for Web science to ponder." Noshir Contractor, similarly, called out The Economist over the "death of distance": "More and more research shows we use the Web to have connections with proximate people."

Other topics will be far more familiar to net.wars readers: Jonathan Zittrain explored the ways the Web can be broken by copyright law, increasing corporate control (there was a lovely moment when he morphed the iPhone's screen into the old CompuServe main menu), the loss of uniformity so that the content a URL points to changes by geographic location. These and others are emerging points of failure.

We'll leave it to an unidentified audience question to sum up the state of Web science: "Nobody knows what it is. But we are doing it."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

August 20, 2010

Naming conventions

Eric Schmidt, the CEO of Google, is not a stupid person, although sometimes he plays one for media consumption. At least, that's how it seemed this week, when the Wall Street Journal reported that he had predicted, apparently in all seriousness, that the accumulation of data online may result in the general right for young people to change their names on reaching adulthood in order to escape the embarrassments of their earlier lives.

As Danah Boyd commented in response, it is to laugh.

For one thing, every trend in national and international law is going toward greater, permanent trackability. I know the UK is dumping the ID card and many US states are stalling on Real ID, but try opening a new bank account in the US or Europe, especially if you're a newly arrived foreigner. It's true that it's not so long ago - 20 years, perhaps - that people, especially in California, did change their names at the drop of an acid tablet. I'm fairly sure, for example, that the woman I once knew as Dancingtree Moonwater was not named that by her parents. But those days are gone with the anti-money laundering regulations, the anti-terrorist laws, and airport security.

For another thing, when is he imagining the adulthood moment to take place? When they're 17 and applying to college and need to cite their past records of good works, community involvement, and academic excellence? When they're 21 and graduating from college and applying for jobs and need to cite their past records of academic excellence, good works, and community involvement? I don't know about you, but I suspect that an admissions officer/prospective employer would be deeply suspicious of a kid coming of age today who had, apparently, no online history at all. Even if that child is a Mormon.

Besides, changing your name doesn't change your identity (even if the change is because you got married). Investigators who track down people who've dropped out of their lives and fled to distant parts to start new ones often do so by, among other things, following their hobbies. You can leave your spouse, abandon your children, change jobs, and move to a distant location - but it isn't so easy to shake a passion for fly-fishing or 1957 Chevys. The right to reinvent yourself, as Action on Rights for Children's Terri Dowty pointed out during the campaign against the child-tracking database ContactPoint, is an important one. But that means letting minor infractions and youthful indiscretions fade into the mists of time, not recording them in a database that thinks it "knows" you, ready to be pulled out and laughed at, say, 30 years hence.

I think Schmidt knows all this perfectly well. And I think if such an infrastructure - turn 16, create a new identity - were ever to be implemented the first and most significant beneficiary would be...Google. I would expect most people's search engine use to provide as individual a fingerprint as, well, fingerprints. (This is probably less true for journalists, who research something different every week and therefore display the database equivalent of multiple personality disorder.)

Clearly, if the solution to young people posting silly stuff online where posterity can bite them on the ass is a change of name, the only way to do it is to assign kids online-only personas at birth that can be retired when they reach an age of reason. But in such a scenario, some kids would wind up wanting to adopt their online personas as their real ones because their online reputation has become too important in their lives. In the knowledge economy, as plenty of others have pointed out, reputation is everything.

This is, of course, not a new problem. As usual. When, in 1995, DejaNews (bought by Google some years back to form the basis of the Google Groups archive) was created, it turned what had been ephemeral Usenet postings into a permanent archive. If you think people post stupid stuff on Facebook now, when they know their friends and families are watching, you should have seen the dumb stuff they posted on Usenet when they thought they were in the online equivalent of Benidorm, where no one knew them and there were no consequences. Many of those Usenet posters were students. But I also recall the newly appointed CEO of a public company who went around the WELL deleting all his old messages. Didn't mean there weren't copies...or memories.

There is a genuine issue here, though, and one that a very smart friend with a 12-year-old daughter worries about regularly: how do you, as a parent, guide your child safely through the complexities of the online world and ensure that your child has the best possible options for her future while still allowing her to function socially with her peers? Keeping her offline is not an answer. Neither are facile statements from self-interested CEOs who, insulated by great wealth and technological leadership, prefer to pretend to themselves that these issues have already been decided in their favor.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 9, 2010

The big button caper

There's a moment early in the second season of the TV series Mad Men when one of the Sterling Cooper advertising executives looks out the window and notices, in a tone of amazement, that young people are everywhere. What he was seeing was, of course, the effect of the baby boom. The world really *was* full of young people.

"I never noticed it," I said to a friend the next day.

"Well, of course not," he said. "You were one of them."

Something like this will happen to today's children - they're going to wake up one day and think the world is awash in old people. This is a fairly obvious consequence of the demographic bulge of the Baby Boomers, which author Ken Dychtwald has compared to "a pig going through a python".

You would think that mobile phone manufacturers and network operators would be all over this: carrying a mobile phone is an obvious safety measure for an older, perhaps infirm or cognitively confused person. But apparently the concept is more difficult to grasp than you'd expect, and so Simon Rockman, the founder and former publisher of What Mobile and now working for the GSM Association, convened a senior mobile market conference on Tuesday.

Rockman's pitch is that the senior market is a business opportunity: unlike other market sectors it's not saturated; older users are less likely to be expensive data users and are more loyal. The margins are better, he argues, even if average revenue per user is low.

The question is, how do you appeal to this market? To a large extent, seniors are pretty much like everyone else: they want gadgets that are attractive, even cool. They don't want the phone equivalent of support stockings. Still, many older people do have difficulties with today's ultra-tiny buttons, icons, and screens, iffy sound quality, and complex menu structures. Don't we all?

It took Ewan MacLeod, the editor of Mobile Industry Review, to point out the obvious. What is the killer app for most seniors in any device? Grandchildren, pictures of. MacLeod has a four-week-old son and a mother whose desire to see pictures apparently could only be fully satisfied by a 24-hour video feed. Industry inadequacy means that MacLeod is finding it necessary to write his own app to make sending and receiving pictures sufficiently simple and intuitive. This market, he pointed out, isn't even price-sensitive. Tell his mother she'll need to spend £60 on a device so she can see daily pictures of her grandkids, and she'll say, "OK." Tell her it will cost £500, and she'll say..."OK."

I bet you're thinking, "But the iPhone!" And to some extent you're right: the iPhone is sleek, sexy, modern, and appealing; it has a zoom function to enlarge its display fonts, and it is relatively easy to use. And so MacLeod got all the grandparents onto iPhones. But he's having to write his own app to easily organize and display the photos the phones receive: the available options are "Rubbish!"

But even the iPhone has problems (even if you're not left-handed). Ian Hosking, a senior research associate at the Cambridge Engineering Design Centre, overlaid his visual impairment simulation software so it was easy to see. Lack of contrast means the iPhone's white on black type disappears unreadably with only a small amount of vision loss. Enlarging the font only changes the text in some fields. And that zoom feature, ah, yes, wonderful - except that enabling it requires you to double-tap and then navigate with three fingers. "So the visual has improved, but the dexterity is terrible."

Oops.

In all this you may have noticed something: that good design is good design, and a phone design that accommodates older people will also most likely be a more usable phone for everyone else. These are principles that have not changed since Donald Norman formulated them in his classic 1988 book The Design of Everyday Things. To be sure, there is some progress. Evelyne Pupeter-Fellner, co-founder of Emporia, for example, pointed out the elements of her company's designs that are quietly targeted at seniors: the emergency call system that automatically dials, in turn, a list of selected family members or friends until one answers; the ringing mechanism that lights up the button to press to answer. The radio you can insert the phone into that will turn itself down and answer the phone when it rings. The design that lets you attach it to a walker - or a bicycle. The single-function buttons. Similarly, the Doro was praised.

And yet it could all be so different - if we would only learn from Japan, where nearly 86 percent of seniors have - and use data on - mobile phones, according to Kei Shimada, founder of Infinita.

But in all the "beyond big buttons" discussion and David Doherty's proposition that health applications will be the second killer app, one omission niggled: the aging population is predominantly female, and the older the cohort the more that is true.

Who are least represented among technology designers and developers?

Older women.

I'd call that a pretty clear mismatch. Somewhere in the gap between those who design and those who consume lies your problem.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 18, 2010

Things I learned at this year's CFP

- There is a bill in front of Congress to outlaw the sale of anonymous prepaid SIMs. The goal seems to be some kind of fraud and crime prevention. But, as Ed Hasbrouck points out, the principal people who are likely to be affected are foreign tourists and the Web sites that sell prepaid SIMs to them.

- Robots are near enough, in researchers' minds, that they are spending significant amounts of time considering the legal and ethical consequences in real life - not in Asimov's fictional world where you could program in three safety laws and your job was done. Ryan Calo points us at the work of Stanford student Victoria Groom on human-robot interaction. Her dissertation research, not yet on the site, found that humans allocate responsibility for success and failure proportionately according to how anthropomorphic the robot is.

- More than 24 percent of tweets - and rising sharply - are sent by automated accounts, according to Miranda Mowbray at HP labs. Her survey found all sorts of strange bots: things that constantly update the time, send stock quotes, tell jokes, the tea bot that retweets every mention of tea...

- Google's Kent Walker, the 1997 CFP chair, believes that censorship is as big a threat to democracy as terrorism, and says that open architectures and free expression are good for democracy - and coincidentally also good for Google's business.

- Microsoft's chief privacy strategist, Peter Cullen, says companies must lead in privacy to lead in cloud computing. Not coincidentally, others at the conference note that US companies are losing business to Europeans in cloud computing because EU law prohibits the export of personal data to the US, where data protection is insufficient.

- It is in fact possible to provide wireless that works at a technical conference. And good food!

- The Facebook Effect is changing the attitude of other companies about user privacy. Lauren Gelman, who helps new companies with privacy issues, noted that because start-ups all see Facebook's success and want to be the next 400 million-user environment, there was a strong temptation to emulate Facebook's behavior. Now, with the angry cries mounting from consumers, she's having to spend less effort convincing them about the level of pushback companies will get if they change their policies and defy users' expectations. Even so, it's important to ensure that start-ups include privacy in their budgets so that it doesn't become an afterthought. In this respect, she makes me realize, privacy in 2010 is at the stage that usability was in the early 1990s.

- All new program launches come through the office of the director of Yahoo!'s business and human rights program, Ebele Okobi-Harris. "It's very easy for the press to focus on China and particular countries - for example, Australia last year, with national filtering," she said, "but for us as a company it's important to have a structure around this because it's not specific to any one region." It is, she added later, a "global problem".

- We should continue to be very worried about the database state because the ID cards repeal act continues the trend toward data sharing among government departments and agencies, according to Christina Zaba from No2ID.

- Information brokers and aggregators, operating behind the scenes, are amassing incredible amounts of detail about Americans, and it can require a great deal of work to remove one's information from these systems. The main customers of these systems are private investigators, debt collectors, media, law firms, and law enforcement. The Privacy Rights Clearinghouse sees many disturbing cases, as Beth Givens outlined, as does Pam Dixon's World Privacy Forum.

- I always knew - or thought I knew - that the word "robot" was not coined by Asimov but by Karel Capek for his play R.U.R. (for "Rossum's Universal Robots"; coincidentally, I also know that playing a robot in it was Michael Caine's first acting job). But Twitterers tell me that this isn't quite right. The word is derived from the Czech word "robota", "compulsory work for a feudal landlord". And it was actually coined by Capek's older brother, Josef.

- There will be new privacy threats emerging from automated vehicles, other robots, and voicemail transcription services, sooner rather than later.

- Studying the inner workings of an organization like the International Civil Aviation Organization is truly difficult because the time scales - ten years to get from technical proposals to mandated standard, which is when the public becomes aware of it - are a profound mismatch for the attention span of media and those who fund NGOs. Anyone who feels like funding an observer to represent civil society at ICAO should get in touch with Edward Hasbrouck.

- A lot of our cybersecurity problems could be solved by better technology.

- Lillie Coney has a great description of deceptive voting practices designed to disenfranchise the opposition: "It's game theory run amok!"

- We should not confuse insecure networks (as in vulnerable computers and flawed software) with unsecured networks (as in open wi-fi).

- Next year's conference chairs are EPIC's Lillie Coney and Jules Polonetsky. It will be in Washington, DC, probably the second or third week in June. Be there!

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 12, 2010

The cost of money

Everyone except James Allan scrabbled in the bag Joe DiVanna brought with him to the Digital Money Forum (my share: a well-rubbed 1908 copper penny). To be fair, Allan had already left by then. But even if he hadn't he'd have disdained the bag. I offered him my pocketful of medium-sized change and he looked as disgusted as if it were a handkerchief full of snot. That's what living without cash for two years will do to you.

Listen, buddy, like the great George Carlin said, your immune system needs practice.

People in developed countries talk a good game about doing away with cash in favor of credit cards, debit cards, and Oyster cards, but the reality, as Michael Salmony pointed out, is that 80 percent of payments in Europe are...cash. Cash seems free to consumers (where cards have clearer charges), but costs European banks €84 billion a year. Less visibly, banks also benefit (when the shadow economy hoards high-value notes it's an interest-free loan), and governments profit from seigniorage (when people buy coins but never spend them).

"Any survey about payment methods," Salmony said Wednesday, "reveals that in all categories cash is the preferred payment method." You can buy a carrot or a car; it costs you nothing directly; it's anonymous, fast, and efficient. "If you talk directly to supermarkets, they all agree that cash is brilliant - they have sorting machines, counting machines...It's optimized so well, much better than cards."

The "unbanked", of course, such as the London migrants Kavita Datta studies, have no other options. Talk about the digital divide, this is the digital money divide: the cashless society excludes people who can't show passports, can't prove their address, or are too poor to have anything to bank with.

"You can get a job without a visa, but not without a bank account," one migrant worker told her. Electronic payments, ain't they grand?

But go to Africa, Asia, or South America, and everything turns upside down. There, too, cash is king - but there, unlike here with banks and ATMs on every corner and a fully functioning system of credit cards and other substitutes, cash is a terrible burden. Of the 2.6 billion people living on less than $2 a day, said Ignacio Mas, fewer than 10 percent have access to formal financial services. Poor people do save, he said, but their lack of good options means they save in bad ways.

They may not have banks, but most do have mobile phones, and therefore digital money means no long multi-bus rides to pay bills. It means being able to send money home at low cost. It means saving money that can't be easily stolen. In Ghana 80 percent of the population have no access to financial services - but 80 percent are covered by MTN, which is partnering with the banks to fill the gap. In Pakistan, Tameer Microfinance Bank partnered with Telenor to launch Easypaisa, which did 150,000 transactions in its first month and expects a million by December. One million people produce milk in Pakistan; Nestle pays them all painfully by check every month. The opportunity in these countries to leapfrog traditional banking and head into digital payments is staggering, and our banks won't even care. The average account balance for Kenya's M-Pesa customers is...$3.

When we're not destroying our financial system, we have more choices. If we're going to replace cash, what do we replace it with and what do we need? Really smart people to figure out how to do it right - like Isaac Newton, said Thomas Levenson. (Really. Who knew Isaac Newton had a whole other life chasing counterfeiters?) Law and partnership protocols and banks to become service providers for peer-to-peer finance, said Chris Cook. "An iTunes moment," said Andrew Curry. The democratization of money, suggested conference organizer David Birch.

"If money is electronic and cashless, what difference does it make what currency we use?" Why not...kilowatt hours? You're always going to need to heat your house. Global warming doesn't mean never having to say you're cold.

Personally, I always thought that if our society completely collapsed, it would be an excellent idea to have a stash of cigarettes, chocolate, booze, and toilet paper. But these guys seemed more interested in the notion of Facebook units. Well, why not? A currency can be anything. Second Life has Linden dollars, and people sell virtual game world gold for real money on eBay.

I'd say for the same reason that most people still walk around with notes in their wallet and coins in their pocket: we need to take our increasing abstraction step by step. Many have failed with digital cash, despite excellent technology, because they asked people to put "real" money into strange units with no social meaning and no stored trust. Birch is right: storing value in an Oyster card is no different than storing value in Beenz. But if you say that money is now so abstract that it's a collective hallucination, then the corroborative details that give artistic verisimilitude to an otherwise bald and unconvincing currency really matter.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series.

September 11, 2009

Public broadcasting

It's not so long ago - 2004, 2005 - that the BBC seemed set to be the shining champion of the Free World of Content, functioning in opposition to *AA (MPAA, RIAA) and general entertainment industry desire for total content lockdown. It proposed the Creative Archive; it set up BBC Backstage; and it released free recordings of the classics for download.

But the Creative Archive released some stuff and then ended the pilot in 2006, apparently because much of the BBC's content doesn't really belong to it. And then came the iPlayer. The embedded DRM, along with its initial Windows-only specification (though the latter has since changed), made the BBC look like less of a Free Culture hero.

Now, via the consultative offices of Ofcom we learn that the BBC wants to pacify third-party content owners by configuring its high-definition digital terrestrial services - known to consumers as Freeview HD - to implement copy protection. This request is, of course, part of the digital switchover taking place across the country over the next four years.

The thing is, the conditions under which the BBC was granted the relevant broadcasting licenses require that content be broadcast free-to-air. That is, unencrypted, which of course means no copy protection. So the BBC's request is to be allowed instead to make the stream unusable to outsiders by compressing the service information data using in-house-developed lookup tables. Under the proposal, the BBC will make those tables available free of charge to manufacturers who agree to its terms. Or, pretty clearly, the third party rights holders' terms.
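For readers wondering what "compressing the service information data using lookup tables" amounts to, here is a toy sketch in Python: the stream stays technically unencrypted, but the programme-listing strings are replaced by short codes from a private table, so only receivers whose manufacturers have licensed the table can turn the codes back into listings. The table and encoding below are invented for illustration; the BBC's real scheme used its own compression tables for the broadcast service information, not anything this crude.

# Invented lookup table; in the real scheme, this is what manufacturers license.
LOOKUP = {"BBC One": 0, "BBC Two": 1, "News at Ten": 2, "HD": 3}
REVERSE = {code: text for text, code in LOOKUP.items()}

def compress_service_info(entries):
    # Broadcaster side: replace each known string with a short code.
    return [LOOKUP[e] for e in entries]

def decompress_service_info(codes, table):
    # Receiver side: useless without a copy of the licensed table.
    return [table[c] for c in codes]

stream = compress_service_info(["BBC One", "HD", "News at Ten"])
print(stream)                                    # [0, 3, 2] - meaningless on its own
print(decompress_service_info(stream, REVERSE))  # readable only with the table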

This is the kind of hair-splitting the American humorist Jean Kerr used to write about when she detailed conversations with her children. She didn't think, for example, to include in the long list of things they weren't supposed to do when they got up first on a Sunday morning, the instruction not to make flour paste and glue together all the pages of the Sunday New York Times. "Now, of course, I tell them."

When the BBC does it, it's not so funny. Nor is it encouraging in the light of the broader trend toward claiming intellectual property protection in metadata when the data itself is difficult to restrict. Take, for example, the MTA's Metro-North Railroad, which runs commuter trains (on which Meryl Streep and Robert de Niro so often met in the 1984 movie Falling in Love) from New York City up both sides of the Hudson River to Connecticut. MTA has been issuing cease-and-desist orders to the owner of StationStops, a Web site and iPhone schedule app dedicated to the Metro-North trains, claiming that it owns the intellectual property rights in its scheduling data. If it were in the UK, the Guardian's Free Our Data campaign would be all over it.

In both cases - and many others - it's hard to understand the originating organisation's complaint. Metro-North is in the business of selling train tickets; the BBC is supposed to measure its success in 1) the number of people who consume its output; 2) the educational value of its output to the license fee-paying public. Promulgating schedule data can only help Metro-North, which is not a commercial company but a public benefit corporation owned by the State of New York. It's not going to make much from selling data licenses.

The BBC's stated intention is to prevent perfect, high-definition copies of broadcast material from escaping into the hands of (evil) file-sharers. The alternative, it says, would be to amend its multiplex license to allow it to encrypt the data streams. Which, they hasten to add, would require manufacturers to amend their equipment, which they certainly would not be able to do in time for the World Cup next June. Oh, the horror!

Fair enough, the consumer revolt if people couldn't watch the World Cup in HD because their equipment didn't support the new encryption standard would indeed be quite frightening to behold. But the BBC has a third alternative: tell rights holders that the BBC is a public service broadcaster, not a policeman for hire.

Manufacturers will still have to modify equipment under the more "modest" system information compression scheme: they will have to have a license. And it seems remarkably unlikely that licenses would be granted to the developers of open source drivers or home-brew devices such as Myth TV, and of course it couldn't be implemented retroactively in equipment that's already on the market. How many televisions and other devices will it break in your home?

Up until now, in contrast to the US situation, the UK's digital switchover has been pretty gentle and painless for a lot of people. If you get cable or satellite, at some point you got a new set-top box (mine keep self-destructing anyway); if you receive all your TV and radio over the air you attached a Freeview box. But this is the broadcast flag and the content management agenda all over again.

We know why rights holders want this. But why should the BBC adopt their agenda? The BBC is the best-placed broadcasting and content provider organisation in the world to create a parallel, alternative universe to the strictly controlled one the commercial entertainment industry wants. It is the broadcaster that commissioned a computer to educate the British public. It is the broadcaster that belongs to the people. Reclaim your heritage, guys.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

September 4, 2009

Nothing ventured, nothing lost

What does a venture capitalist do in a recession?

"Panic." Hermann Hauser says, then laughs. It is, in fact, hard to imagine him panicking if you've heard the stories he tells about his days as co-founder of Acorn Computers. He's quickly on to his real, more measured, view.

"It's just the bottom of the cycle, and people my age have been through this a number of times before. Though many people are panicking, I know that normally we come out the other end. If you just look at the deals I'm seeing at the moment, they're better than any deals I've seen in my entire life." The really positive thing, he says, is that, "The speed and quality of innovation are speeding up and not slowing down. If you believe that quality of innovation is the key to a successful business, as I do, then this is a good era. We have got to go after the high end of innovation - advanced manufacturing and the knowledge-based economy. I think we are quite well placed to do that." Fortunately, Amadeus had just raised a fund when the recession began, so it still has money to invest; life is, he admits, less fun for "the poor buggers who have to raise funds."

Among the companies he is excited about is Plastic Logic, which is due to release its first product next year, a competitor to the Kindle that will have a much larger screen, be much lighter, and will also be a computing platform with 3G, Bluetooth, and Wi-Fi all built in, all built on plastic transistors that will be green to produce, more responsive than silicon - and sealed against being dropped in the bath water. "We have the world beat," he says. "It's just the most fantastic thing."

Probably if you ask any British geek above the age of 39, an Acorn BBC Micro figured prominently in their earliest experiences with computing. Hauser was and is not primarily a technical guy - although his idea of exhilarating vacation reading is Thermal Physics, by Charles Kittel and Herbert Kroemer - but picking the right guys to keep supplied with tea and financing is a rare skill, too.

"As I go around the country, people still congratulate me on the BBC Micro and tell me how wonderful it was. Some are now professors in computer science and what they complain about is that as people switched over to PCs - on the BBC Micro everybody knew how to program. The main interface was a programming interface, and it was so easy to program in BASIC everybody did it. Kids have no clue what programming is about - they just surf the Net. Nobody really understands any more what a computer does from the transistor up. It's a dying breed of people who actually know that all this is built on CMOS gates and can build it up from there."

Hauser went on to found an early effort in pen computing - "the technology wasn't good enough" and "the basic premise that I believed in, that pen computing would be important because everybody knew how to wield a pen just wasn't true" - and then the venture capital fund Amadeus, through which he helped fund, among others, leading Bluetooth chip supplier CSR. Britain, he says, is a much more hospitable environment now than it was when he was trying to make his Cambridge bank manager understand Acorn's need for a £1 million overdraft. Although, he admits now, "I certainly wouldn't have invested in myself." And would have missed Acorn's success.

"I think I'm the only European who's done four billion-dollar companies," he says. "Of course I've failed a lot. I assume that more of my initiatives that I've founded finally failed than finally succeeded."

But times have changed since consultants studied Acorn's books and told them to stop trading immediately because they didn't understand how technology companies worked. "All the building blocks you need to have to have a successful technology cluster are now finally in place," he says. "We always had the technology, but we always lacked management, and we've grown our own entrepreneurs now in Britain." He calls Stan Boland, CEO of 3G USB stick manufacturer Icera and Acorn's last managing director, a "rock star" and "one of the best CEOs I have come across in Europe or the US." In addition, he says, "There is also a chance of attracting the top US talent, for the first time." However, "The only thing I fear and that we have to be careful about is that the relative decline doesn't turn into an absolute decline."

One element of Britain's changing climate with respect to technology investment that Hauser is particularly proud of is helping create tax credits and taper relief for capital gains through his work on Peter Mandelson's advisory panel on new industry and new jobs. "The reason I have done it is that I don't believe in the post-industrial society. We have to have all parts of industry in our country."

Hauser's latest excitement is stem cells; he's become the fourth person in the world to have his entire genome mapped. "It's the beginning of personal medicine."

The one thing that really bemuses him is being given lifetime achievement awards. "I have lived in the future all my life, and I still do. It's difficult to accept that I've already created a past. I haven't done yet the things I want to do!"


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

June 13, 2009

Futures

"What is the future of computers, freedom, and privacy?" a friend asked over lunch, apparently really wanting to know. This was ten days ago, and I hesitated before finding an out.

"I don't know," I said. "I haven't been to the conference yet.

Now I have been to the conference, at least this year's instance of it, and I still don't really know how to answer this question. As always, I've come away with some ideas to follow up, but mostly the sense of a work in progress. How do some people manage to be such confident futurologists?

I don't mean science fiction writers: while they're often confused with futurologists - Arthur C. Clarke's track record in predicting communications satellites notwithstanding - they're not, really. They're storytellers who take our world, change a few variables, and speculate. I also don't mean trend-spotters, who see a few instances of something and generalize from there, or pundits, who are just very, very good at quotables.

Futurologists are good at the backgrounds science fiction writers use - but not good at coming up with stories. They're not, as I had it explained to me once, researchers, because they dream rather than build things. The smart ones have figured out that dramatic predictions get more headlines - and funding - than mundane ones and they have a huge advantage over urban planners and actuaries: they don't have to be right, just interesting. (Whereas, a "psychic seer" like Nostradamus doesn't even have to be interesting as long as his ramblings are vague enough to be reinterpretable every time some new major event comes along.)

It's perennially intriguing how much of the past images of the future throw away: changing fashions in clothing, furniture, and lifestyles leave no trace. Take, for example, Popular Mechanics' 1950 predictions for 2000. Some of that article is prescient: converging televisions and telephones, for example. Some extrapolates from then new technologies such as X-rays, plastics, and frozen foods. But far more of it is a reminder of how much better the future was in the past: family helicopters, solar power in real, widespread use, cheap housing. And yet even more of it reflects the constrained social roles of the 1950s: the assumption that all those synthetic plastic fabrics, furniture, and finishings would be hosed down by...the woman of the house.

I'll bet the guy who wrote that had a wife who was always complaining about having to do all the housework. And didn't keep his books at home. Or family heirlooms, personal memorabilia, or silly gewgaws picked up on that trip to Pittsburgh. I'm not entirely clear why anyone would find frozen milk and candy made from sawdust appealing, though I suppose home cooking is indeed going out of style.

But my friend's question was serious: I can't answer it by throwing extravagantly wild imaginings at it for their entertainment value. Plus, he's probably most interested in his lifetime and that of his children, and it's a simple equation that the farther out the future you're predicting the less plausible you have to be.

It's not hard to guess that computing power will continue to grow, even if it doesn't continue to keep pace with Moore's Law and is counterbalanced by the weight of Page's Law. What *is* hard to guess is how people will want to use it. To most of the generation writing the future in the 1950s, when World War II and the threat of Nazism was fresh, it was probably inconceivable that the citizens of democratic countries would be so willing to allow so many governments to track them in detail. As inconceivable, I suppose, as that the pill would come along a few years later and wipe away the social order they believed was nature's way. Orwell, of course, foresaw the possibilities of a surveillance society, but he imagined the central control of a giant government, not a society where governments rely on commercial companies to fill out their dossiers on citizens.

I find it hard to imagine dramatic futures in part because I do believe most people want to hold onto at least parts of their past, and therefore that any future we construct will be more like Terry Gilliam's movies than anything else, festooned with bizarre duct work and populated by junk that's either come back into fashion or that we simply forgot to throw away. And there are plenty of others around to predict the apocalypse (we run out of energy, three-quarters of the world's population dies, economic and environmental collapse, will you burn that computer or sit on it?) or its opposite (we find the Singularity, solve our energy problems, colonize space, and fix biology so we live forever). Neither seems to me the most likely.

I doubt my friend would have been satisfied with the answer: "More of the same, only different." But my guess is that the battle to preserve privacy will continue for a long time. Every increase in computing power makes greater surveillance possible, and 9/11 provided the seeming justification that overrode the fading memory of what was at stake in World War II. It won't be until an event with that kind of impact reminds people of the risk you take when you allow "If you have nothing to hide, you have nothing to fear" to become society's mantra that the mainstream will fight to take back their privacy.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk (but please turn off HTML).

May 29, 2009

Three blind governments

I spent my formative adult years as a musician. And even so, if I were forced to sacrifice one of my senses, as a practical matter I would choose to keep sight over hearing: as awful and isolating as it would be to be deaf, it would be far, far worse to be blind.

Lack of access to information, and therefore to both employment and entertainment, is the key reason. How can anyone participate in the "knowledge economy" without being able to read?

Years ago, when I was writing a piece about disabled access to the Net, the Royal National Institute for the Blind put me in touch with Peter Brasher, a consultant who was particularly articulate on the subject of disabled access to computing.

People tend to make the assumption - as I did - that the existence of Braille editions and talking books meant that blind and partially sighted people were catered for reasonably well. In fact, he said, only 8 percent of the blind population can read Braille; its use is generally confined to those who are blind from childhood (although see here for a counterexample). But far and away the majority of vision loss comes later in life. It's entirely possible that the percentage of Braille readers is now considerably less; today's kids are more likely to be taught to rely on technology - text-to-speech readers, audio books, and so on. From 50 percent in the 1950s, the percentage of blind American children learning Braille has dropped to 10 percent.

There's a lot of concern about this which can be summed up by this question: if text-to-speech technology and audio books are so great, why aren't sighted kids told to use them instead of bothering to learn to read?

But the bigger issue Brasher raised was one of independence. Typically, he said, the availability of books in Braille depends on someone with an agenda, often a church. The result for an inquisitive reader is a constant sense of limits. Then computers arrived, and it became possible to read anything you wanted of your own choice. And then graphical interfaces arrived and threatened to take it all away again; I wrote here about what it's like to surf the Web using the leading text-to-speech reader, JAWS. It's deeply unpleasant, difficult, tiring, and time-consuming.

When we talk about people with limited ability to access books - blind, partially sighted; in other cases fully sighted but physically disabled - we are talking about an already deeply marginalized and underserved population. Some of the links above cite studies that show that unemployment among the Braille-reading blind population is 44 percent - and 77 percent among blind non-Braille readers. Others make the point that inability to access printed information interferes with every aspect of education and employment.

And this is the group that this week's meeting of the Standing Committee on Copyright and Related Rights at the World Intellectual Property Organization has convened to consider. Should there be a blanket exception to allow the production of alternative formats of books for the visually impaired and disabled?

The proposal, introduced by Brazil, Paraguay, and Ecuador, seems simple enough, and the cause unarguable. The World Blind Union estimates that 95 percent of books never become available in alternative formats, and when they do it's after some delay. As Brasher said nearly 15 years ago, such arrangements depend on the agendas of charitable organizations.

The culprit, as in so many net.wars, is copyright law. The WBU published arguments for copyright reform (DOC) in 2004. Amazon's Kindle is a perfect example of the problem: bowing to the demands of publishers, text-to-speech can be - and is being - turned off in the Kindle. The Kindle - any ebook reader with speech capabilities - ought to have been a huge step forward for disabled access to books.

And now, according to Twitterers present at WIPO, the US, Canada, and the EU are arguing against the idea of this exemption. (They're not the only ones; elsewhere, the Authors Guild has argued that exemptions should be granted by special license and registration, something I'd certainly be unhappy about if I were blind.)

Governments, particularly democratic ones, are supposed to be about ensuring equal opportunities for all. They are supposed to be about ensuring fair play. What about the Americans with Disabilities Act, the EU's charter of fundamental human rights, and Canada's human rights act? Can any of these countries seriously argue that the rights of publishers and copyright holders trump the needs of a seriously disadvantaged group of people that every single one of us is at risk of joining?

While it's clear that text-to-speech and audio books don't solve every problem, and while the US is correct to argue that copyright is only one of a number of problems confronting the blind, when the WBU argues that copyright poses a significant barrier to access shouldn't everyone listen? Or are publishers confused by the stereotypical image of the pirate with the patch over one eye?

If governments and rightsholders want us to listen to them about other aspects of copyright law, they need to be on the right side of this issue. Maybe they should listen to their own marketing departments about the way it looks when rich folks kick people who are already disadvantaged - and then charge for the privilege.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or email netwars@skeptic.demon.co.uk (but please turn off HTML).

April 11, 2009

Statebook of the art

The bad thing about the Open Rights Group's new site, Statebook, is that it looks so perfectly simple to use that the government may decide it's actually a good idea to implement something very like it. And, unfortunately, that same simplicity may also create the illusion in the minds of the untutored who still populate the ranks of civil servants and politicians that the technology works and is perfectly accurate.

For those who shun social networks and all who sail in her: Statebook's interface is an almost identical copy of that of Facebook. True, on Facebook the applications you click on to add are much more clearly pointless wastes of time, like making lists of movies you've liked to share with your friends or playing Lexulous (the reinvention of the game formerly known as Scrabulous until Hasbro got all huffy and had it shut down).

Politicians need to resist the temptation to believe it's as easy as it looks. The interfaces of both the fictional Statebook and the real Facebook look deceptively simple. In fact, although friends tell me how much they like the convenience of being able to share photos with their friends in a convenient single location, and others tell me how much they prefer Facebook's private messaging to email, Facebook is unwieldy and clunky to use, requiring a lot of wait time for pages to load even over a fast broadband connection. Even if it weren't, though, one of the difficulties with systems attempting to put EZ-2-ewes front ends on large and complicated databases is that they deceive users into thinking the underlying tasks are also simple.

A good example would be airline reservations systems. The fact is that underneath the simple searching offered by Expedia or Travelocity lies some extremely complex software; it prices every itinerary rather precisely depending on a host of variables. These include not just the obvious things like the class of cabin, but the time of day, the day of the week, the time of year, the category of flyer, the routing, how far in advance the ticket is being purchased, and the number of available seats left. Only some of this is made explicit; frequent flyers trying to maximize their miles per dollar despair while trying to dig out arcane details like the class of fare.

In his 1988 book The Design of Everyday Things, Donald Norman wrote about the need to avoid confusing the simplicity or complexity of an interface with the characteristics of the underlying tasks. He also writes about the mental models people create as they attempt to understand the controls that operate a given device. His example is a refrigerator with two compartments and two thermostatic controls. An uninformed user naturally assumes each thermostat controls one compartment, but in his example, one control sets the thermostat and the other directs the proportion of cold air that's sent to each compartment. The user's mental model is wrong and, as a consequence, attempts that user makes to set the temperature will also, most likely, be wrong.
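A toy model makes the mismatch concrete. In the sketch below, actual_device implements Norman's refrigerator (one thermostat plus an airflow damper), while users_mental_model implements what the uninformed user assumes (an independent thermostat per compartment). The numbers are invented; the only point is that in the real device the two knobs interact, so the user's adjustments misfire.

def actual_device(thermostat_setting, airflow_split):
    # One thermostat, one damper. Returns (fridge_temp, freezer_temp).
    total_cooling = 30 - 2 * thermostat_setting          # toy physics
    freezer_cooling = total_cooling * airflow_split
    fridge_cooling = total_cooling * (1 - airflow_split)
    return 20 - fridge_cooling, 20 - freezer_cooling

def users_mental_model(fridge_knob, freezer_knob):
    # What the uninformed user assumes: two independent controls.
    return 20 - 2 * fridge_knob, 20 - 2 * freezer_knob

# The user turns up only "the freezer control" (really the airflow damper)...
print(users_mental_model(5, 5), "->", users_mental_model(5, 8))  # expects only the freezer to change
print(actual_device(5, 0.5), "->", actual_device(5, 0.8))        # ...but the fridge compartment warms up too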

In focusing on the increasing quantity and breadth of data the government is collecting on all of us, we've neglected to think about how this data will be presented to its eventual users. We have warned about the errors that build up in very large databases that are compiled from multiple sources. We have expressed concern about surveillance and about its chilling impact on spontaneous behaviour. And we have pointed out that data is not knowledge; it is very easy to take even accurate data and build a completely false picture of a person's life. Perhaps instead we should be focusing on ensuring that the software used to query these giant databases-in-progress teaches users not to expect too much.

As an everyday example of what I mean, take Hawk-Eye, the automatic line-calling system used in tennis since 2005. Hawk-Eye is not perfectly accurate. Its judgements are based on reconstructions that combine the video images and timing data from four or more high-speed video cameras. The system uses the data to calculate the three-dimensional flight of the ball; it incorporates its knowledge of the laws of physics, its model of the tennis court, and its database of the rules of the game in order to judge whether the ball is in or out. Its official margin for error is 3.6mm.

A study by two researchers at Cardiff University disputed that number. But more relevant here, they pointed out that the animated graphics used to show the reconstructed flight of the ball and the circle indicating where it landed on the court surface are misleading because they look to viewers as though they are authoritative. The two researchers, Harry Collins and Robert Evans, proposed that in the interests of public education the graphic should be redesigned to display the margin for error and the level of confidence.

This would be a good approach for database matches, too, especially since the number of false matches and errors will grow with the size of the databases. A real-life Statebook that doesn't reflect the uncertainty factor of each search, each match, and each interpretation next to every hit would indeed be truly dangerous.
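
As a sketch of what that might look like in practice - purely hypothetical, with invented numbers and field names - a query interface could attach an estimated false-match rate and a derived confidence to every hit rather than presenting matches as flat facts:

    # Hypothetical sketch: every database "hit" carries an explicit
    # confidence figure instead of being presented as a flat fact.
    from dataclasses import dataclass

    @dataclass
    class Match:
        record_id: str
        score: float       # raw similarity score from the matcher, 0..1
        error_rate: float  # estimated false-match rate for this search type

        def display(self) -> str:
            confidence = self.score * (1 - self.error_rate)
            return (f"{self.record_id}: score {self.score:.2f}, "
                    f"false-match rate {self.error_rate:.1%}, "
                    f"confidence {confidence:.1%}")

    for m in (Match("subject-0042", 0.91, 0.05), Match("subject-1179", 0.62, 0.20)):
        print(m.display())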

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

April 3, 2009

Copyright encounters of the third dimension

Somewhere around 2002, it occurred to me that the copyright wars we're seeing over digitised intellectual property - music, movies, books, photographs - might, in the not-unimaginable future, be repeated with physical goods. Even if you don't believe that molecular manufacturing will ever happen, 3D printing and rapid prototyping machines offer the possibility of making large numbers of identical copies of physical goods that until now were difficult to replicate without investing in and opening a large manufacturing facility.

Lots of people see this as a good thing. Although: Chris Phoenix, co-founder of the Center for Responsible Nanotechnology, likes to ask, "Will we be retired or unemployed?"

In any case, I spent some years writing a book proposal that never went anywhere, and then let the idea hang around uselessly, like a human in a world where robots have all the jobs.

Last week, at the University of Edinburgh's conference on governance of new technologies (which I am very unhappy to have missed), RAF engineer turned law student Simon Bradshaw presented a paper on the intellectual property consequences of "low-cost rapid prototyping". If only I'd been a legal scholar...

It turns out that as a legal question rapid prototyping has barely been examined. Bradshaw found nary a reference in a literature search. Probably most lawyers think this stuff is all still just science fiction. But make some modest assumptions, as Bradshaw does, and you find that perhaps three to five years from now we could well be having discussions about whether Obama stayed within intellectual property law in giving the Queen a printed-out, personalized iPod case designed to look like Elvis, whose likeness and name are trademarked in the US. Today's copyright wars are going to seem so *simple*.

Bradshaw makes some fairly reasonable assumptions about this timeframe. Until recently, you could pay anywhere from $20,000 to $1.5 million for a fabricator/3D printer/rapid prototyping machine. But prices and sizes are dropping and functionality is going up. Bradshaw puts today's situation on a par with the state of personal computers in the late 1970s, the days of the Commodore PET, the Apple II, and home kits like the Sinclair MK14. Let's imagine, he says, the world of the second-generation fabricator: the size of a color laser printer, costing $1,000 or less, fed with readily available plastic, better than 0.1mm resolution (and in color), a 20cm-cube maximum size, and programmable by enthusiasts.

As the UK Intellectual Property Office will gladly tell you, there are four kinds of IP law: copyright, patent, trademark, and design. Of these, design is by far the least known; it's used to protect what the US likes to call "trade dress", that is, the physical look and feel of a particular item. Apple, for example, which rarely misses a trick when it comes to design, applied for a trademark on the iPhone's design in the US, and most likely registered it under the UK's design right as well. Why not? Registration is cheap (around £200), and the iPhone design was genuinely innovative.

As Bradshaw analyzes it, all four of these types of IP law could apply to objects created using 3D printing, rapid prototyping, fabricating...whatever you want to call it. And those types of law will interact in bizarre and unexpected ways - and, of course, differently in different countries.

For example: in the UK, a registered design can be copied if it's done privately and for non-commercial use. So you could, in the privacy of your home, print out copies of a test-tube stand (in Bradshaw's example) whose design is registered. You could not do it in a school to avoid purchasing them.

Parts of the design right are drafted so as to prevent manufacturers from using the right to block third-parties from making spare parts. So using your RepRap to make a case for your iPod is legal as long as you don't copy any copyrighted material that might be floating around on the surface of the original. Make the case without Elvis.

But when is an object just an object and when is it a "work of artistic merit"? Because if what you just copied is a sculpture, you're in violation of copyright law. And here, Bradshaw says, copyright law is unhelpfully unclear. Some help has come from the recent ruling in Lucasfilm v Ainsworth, the case about the stormtrooper helmets copied from the first Star Wars movie. Is a 3D replica of a 2D image a derivative work?

Unsurprisingly, it looks like US law is less forgiving. In the helmet case, US courts ruled in favor of Lucasfilm; UK courts drew a distinction between objects that had been created for artistic purposes in their own right and those that hadn't.

And that's all without even getting into the fact that if everyone has a fabricator there are whole classes of items that might no longer be worth selling. In that world, what's going to be worth paying for is the designs that drive the fabricators. Think knitted Dr Who puppets, only in 3D.

It's all going to be so much fun, dontcha think?

Update (1/26/2012): Simon Bradshaw's paper is now published here.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

December 5, 2008

Saving seeds

The 17 judges of the European Court of Human Rights ruled unanimously yesterday that the UK's DNA database, which contains more than 3 million DNA samples, violates Article 8 of the European Convention on Human Rights. The key factor: retaining, indefinitely, the DNA samples of people who have committed no crime.

It's not a complete win for objectors to the database, since the ruling doesn't say the database shouldn't exist, merely that DNA samples should be removed once their owners have been acquitted in court or the charges have been dropped. England, the court said, should copy Scotland, which operates such a policy.

The UK comes in for particular censure, in the form of the note that "any State claiming a pioneer role in the development of new technologies bears special responsibility for striking the right balance..." In other words, before you decide to be the first on your block to use a new technology and show the rest of the world how it's done, you should think about the consequences.

Because it's true: this is the kind of technology that makes surveillance and control-happy governments the envy of other governments. For example: lacking clues to lead them to a serial killer, the Los Angeles Police Department wants to copy Britain and use California's DNA database to search for genetic profiles similar enough to belong to a close relative. The French DNA database, FNAEG, was proposed in 1996, created in 1998 for sex offenders, implemented in 2001, and broadened to other criminal offenses after 9/11 and again in 2003: a perfect example of function creep. But the French DNA database is a fiftieth the size of the UK's, and Austria's, the next on the list, is even smaller.

There are some wonderful statistics about the UK database. DNA samples from more than 4 million people are included on it. Probably 850,000 of them are innocent of any crime. Some 40,000 are children between the ages of 10 and 17. The government (according to the Telegraph) has spent £182 million on it between April 1995 and March 2004. And there have been suggestions that it's too small. When privacy and human rights campaigners pointed out that people of color are disproportionately represented in the database, one of England's most experienced appeals court judges, Lord Justice Sedley, argued that every UK resident and visitor should be included on it. Yes, that's definitely the way to bring the tourists in: demand a DNA sample. Just look how they're flocking to the US to give fingerprints, and how many more flooded in when they upped the number to ten earlier this year. (And how little we're getting for it: in the first two years of the program, fingerprinting 44 million visitors netted 1,000 people with criminal or immigration violations.)

At last week's A Fine Balance conference on privacy-enhancing technologies, there was a lot of discussion of the key technique of data minimization. That is the principle that you should not collect or share more data than is actually needed to do the job. Someone checking whether you have the right to drive, for example, doesn't need to know who you are or where you live; someone checking you have the right to borrow books from the local library needs to know where you live and who you are but not your age or your health records; someone checking you're the right age to enter a bar doesn't need to care if your driver's license has expired.
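
A minimal sketch of the principle, assuming a credential holder that answers yes/no questions rather than handing over data. Real privacy-enhancing technologies such as anonymous-credential schemes do this cryptographically; this toy only shows the information flow, and all the names in it are invented.

    # Toy illustration of data minimisation: the verifier learns only the
    # answer to "over 18?", not the birth date, name, or address behind it.
    from datetime import date

    class Credential:
        def __init__(self, name, birth_date, address):
            self._attributes = {"name": name, "birth_date": birth_date, "address": address}

        def prove(self, predicate):
            """Answer one question without releasing the underlying data."""
            return bool(predicate(self._attributes))

    def over_18(attrs):
        today = date.today()
        bd = attrs["birth_date"]
        age = today.year - bd.year - ((today.month, today.day) < (bd.month, bd.day))
        return age >= 18

    holder = Credential("Alice Example", date(1985, 3, 14), "1 Example Street")
    print(holder.prove(over_18))   # the bar's scanner sees only: True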

This is an idea that's been around a long time - I think I heard my first presentation on it in about 1994 - but whose progress towards a usable product has been agonizingly slow. IBM's PRIME project, which Jan Camenisch presented, and Microsoft's purchase of Credentica (which wasn't shown at the conference) suggest that the mainstream technology products may finally be getting there. If only we can convince politicians that these principles are a necessary adjunct to storing all the data they're collecting.

What makes the DNA database more than just a high-tech fingerprint database is that over time the DNA stored in it will become increasingly revealing of intimate secrets. As Ray Kurzweil kept saying at the Singularity Summit, Moore's Law is hitting DNA sequencing right now; the cost is accordingly plummeting by factors of ten. When the database was set up, it was fair to characterize DNA as a high-tech version of fingerprints or iris scans. Five - or 15, or 25, we can't be sure - years from now, we will have learned far more about interpreting genetic sequences. The coded, unreadable messages we're storing now will be cleartext one day, and anyone allowed to consult the database will be privy to far more intimate information about our bodies, ourselves than we think we're giving them now.

Unfortunately, the people in charge of these things typically think it's not going to affect them. If the "little people" have no privacy, well, so what? It's only when the powers they've granted are turned on them that they begin to get it. If a conservative is a liberal who's been mugged, and a liberal is a conservative whose daughter has needed an abortion, and a civil liberties advocate is a politician who's been arrested...maybe we need to arrest more of them.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 21, 2008

The art of the impossible

So the question of last weekend very quickly became: how do you tell plausible fantasy from wild possibility? It's a good conversation starter.

One friend had a simple assessment: "They are all nuts," he said, after glancing over the weekend's program. The problem is that 150 years ago anyone predicting today's airline economy class would also have sounded nuts.

Last weekend's (un)conference was called Convergence, but the description tried to convey the sense of danger of crossing the streams. The four elements that were supposed to converge: computing, biotech, cognitive technology, and nanotechnology. Or, as the four-colored conference buttons and T-shirts had it, biotech, infotech, cognotech, and nanotech.

Unconferences seem to be the current trend. I'm guessing, based on very little knowledge, that it was started by Tim O'Reilly's FOO camps or possibly the long-running invitation-only Hackers conference. The basic principle is: collect a bunch of smart, interesting, knowledgeable people and they'll construct their own program. After all, isn't the best part of all conferences the hallway chats and networking, rather than the talks? Having been to one now (yes, a very small sample), I think in most cases I'm going to prefer the organized variety: there's a lot to be said for a program committee that reviews the proposals.

The day before, the Center for Responsible Nanotechnology ran a much smaller seminar on Global Catastrophic Risks. It made a nice counterweight: the weekend was all about wild visions of the future; the seminar was all about the likelihood of our being wiped out by biological agents, astronomical catastrophe, or, most likely, our own stupidity. Favorite quote of the day, from Anders Sandberg: "Very smart people make very stupid mistakes, and they do it with surprising regularity." Sandberg learned this, he said, at Oxford, where he is a philosopher at the Future of Humanity Institute.

Ralph Merkle, co-inventor of public key cryptography, now working on diamond mechanosynthesis, said to start with physics textbooks, most notably the evergreen classic by Halliday and Resnick. You can see his point: if whatever-it-is violates the laws of physics it's not going to happen. That at least separates the kinds of ideas flying around at Convergence and the Singularity Summit from most paranormal claims: people promoting dowsing, astrology, ghosts, or ESP seem to be about as interested in the laws of physics as creationists are in the fossil record.

A sidelight: after years of The Skeptic, I'm tempted to dismiss as fantasy anything where the proponents tell you that it's just your fear that's preventing you from believing their claims. I've had this a lot - ghosts, alien spacecraft, alien abductions, apparently these things are happening all over the place and I'm just too phobic to admit it. Unfortunately, the behavior of adherents to a belief just isn't evidence that it's wrong.

Similarly, an idea isn't wrong just because its requirements are annoying. Do I want to believe that my continued good health depends on emulating Ray Kurzweil and taking 250 pills a day and a load of injections weekly? Certainly not. But I can't prove it's not helping him. I can, however, joke that it's like those caloric restriction diets - doing it makes your life *seem* longer.

Merkle's other criterion: "Is it internally consistent?" This one's harder to assess, particularly if you aren't a scientific expert yourself.

But there is the technique of playing the man instead of the ball. Merkle, for example, is a cryonicist and is currently working on diamond mechanosynthesis. Put more simply, he's busy designing the tools that will be needed to build things atom by atom when - if - molecular manufacturing becomes a reality. If that sounds nutty, well, Merkle has earned the right to steam ahead unworried because his ideas about cryptography, which have become part of the technology we use every day to protect ecommerce transactions, were widely dismissed at first.

Analyzing language is also open to the scientifically less well-educated: do the proponents of the theory use a lot of non-standard terms that sound impressive but on inspection don't seem to mean anything? It helps if they can spell, but that's not a reliable indicator - snake oil salesmen can be very professional, and some well-educated excellent scientists can't spell worth a damn.

The Risks seminar threw out a useful criterion for assessing scenarios: would it make a good movie? If your threat to civilization can be easily imagined as a line delivered by Bruce Willis, it's probably unlikely. It's not a scientifically defensible principle, of course, but it has a lot to recommend it. In human history, what's killed the most people while we're worrying about dramatic events like climate change and colliding asteroids? Wars and pandemics.

So, where does that leave us? Waiting for deliverables, of course. Even if a goal sounds ludicrous, working towards it may still produce useful results. Aubrey de Grey's SENS (Strategies for Engineered Negligible Senescence), which aims to "cure aging" by developing techniques for directly repairing damage, seems a case in point. And life extension is the best hope for all of these crazy ideas. Because, let's face it: if it doesn't happen in our lifetime, it was impossible.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 7, 2008

Reality TV

The Xerox machine in the second season of Mad Men has its own Twitter account, as do many of the show's human characters. Other TV characters have MySpace pages and Facebook groups, and of course they're all, legally or illegally, on YouTube.

Here at the American Film Institute's Digifest in Hollywood - really Hollywood, with the stars on the sidewalks and movie theatres everywhere - the talk is all of "cross-platform". This event allows the AFI's Digital Content Lab to show off some of the projects it's fostered over the last year, and the audience is full of filmmakers, writers, executives, and owners of technology companies, all trying to figure out digital television.

One of the more timely projects is a remix of the venerable PBS Newshour with Jim Lehrer. A sort of combination of Snopes, Wikipedia, and any number of online comment sites, The Fact Project aims to enable collaboration between the show's journalists and the public. Anyone can post a claim or a bit of rhetoric and bring in supporting or refuting evidence; the show's journalistic staff weigh in at the end with a Truthometer rating and the discussion is closed. Part of the point, said the project's head, Lee Banville, is to expose to the public the many small but nasty claims that are made in obscure but strategic places - flyers left on cars in supermarket parking lots, or radio spots that air maybe twice on a tiny local station.

The DCL's counterpart in Australia showed off some other examples. Areo, for example, takes TV sets and footage and turns them into game settings. More interesting is the First Australians project, which in the six-year process of filming a TV documentary series created more than 200 edited mini-documentaries telling each interviewee's story. Or the TV movie Scorched, which even before release created a prequel and sequel by giving a fictional character her own Web site and YouTube channel. The premise of the film itself was simple but arresting. It was based on one fact, that at one point Sydney had no more than 50 weeks of water left, and one what-if - what if there were bush fires? The project eventually included a number of other sites, including a fake government department.

"We go to islands that are already populated," said the director, "and pull them into our world."

HBO's Digital Lab group, on the other hand, has a simpler goal: to find an audience in the digital world it can experiment on. Last month, it launched a Web-only series called Hooking Up. Made for almost no money (and it looks it), the show is a comedy series about the relationship attempts of college kids. To help draw larger audiences, the show cast existing Web and YouTube celebrities such as LonelyGirl15, KevJumba, and sxePhil. The show has pulled in 46,000 subscribers on YouTube.

Finally, a group from ABC is experimenting with ways to draw people to the network's site via what it calls "viewing parties" so people can chat with each other while watching, "live" (so to speak), hit shows like Grey's Anatomy. The interface the ABC party group showed off was interesting. They wanted, they said, to come up with something "as slick as the iPhone and as easy to use as AIM". They eventually came up with a three-dimensional spatial concept in which messages appear in bubbles that age by shrinking in size. Net old-timers might ask churlishly what's so inadequate about the interface of IRC or other types of chat rooms where messages appear as scrolling text, but from ABC's point of view the show is the centrepiece.

At least it will give people watching shows online something to do during the ads. If you're coming from a US connection, the ABC site lets you watch full episodes of many current shows; the site incorporates limited advertising. Perhaps in recognition that people will simply vanish into another browser window, the ads end with a button to click to continue watching the show and the video remains on pause until you click it.

The point of all these initiatives is simple and the same: to return TV to something people must watch in real-time as it's broadcast. Or, if you like, to figure out how to lure today's 20- and 30-somethings into watching television; Newshour's TV audience is predominantly 50- and 60-somethings.

ABC's viewing party idea is an attempt - as the team openly said - to recreate what the network calls "appointment TV". I've argued here before that as people have more and more choices about when and where to watch their favourite scripted show, sports and breaking news will increasingly rule television because they are the only two things that people overwhelmingly want to see in real time. If you're supported by advertising, that matters, but success will depend on people's willingness to stick with their efforts once the novelty is gone. The question to answer isn't so much whether you can compete with free (cue picture of a bottle of water) but whether you can compete with freedom (cue picture of evil file-sharer watching with his friends whenever he wants).


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 31, 2008

Machine dreams

Just how smart are humans anyway? Last week's Singularity Summit spent a lot of time talking about the exact point at which computer processing power would match that of the human brain, but that's only the first step. There's the software to make the hardware do stuff, and then there's the whole question of consciousness. At that point, you've strayed from computer science into philosophy and you might as well be arguing about angels on the heads of pins. Of course everyone hopes they'll be alive to see these questions settled, but in the meantime all we have is speculation and the snide observation that it's typical that a roomful of smart people would think that all problems can be solved by more intelligence.

So I've been trying to come up with benchmarks for what constitutes artificial intelligence, and the first thing I think is that the Turing test is probably too limited. In it, a judge has to determine which of two typing correspondents is the machine and which the human. That's fine as far as it goes, but one of the consistent threads that run through all this is a noticeable disdain for human bodies.

While our brain power is largely centralized, it still seems to me likely that both its grey matter and the rest of our bodies are an important part of the substrate. How we move through space, how our bodies react and feed our brains is part and parcel of how our minds work, however much we may wish to transcend biology. The fact that we can watch films of bonobos and chimpanzees and recognise our own behaviour in their interactions should show us that we're a lot closer to most animal species than we think - and a lot further from most machines.

For that sort of reason, the Turing test seems limited. A computer passes that test if, when paired against a human, the judge can't tell which is which. At the moment, it seems clear the winner is going to be spambots - some spam messages are already devised cleverly enough to fool even Net-savvy individuals into opening them sometimes. But they're hardly smart - they're just programmed that way. And a lot depends on the capability of the judge - some people even find Eliza convincing, though it's incredibly easy to send it off-course into responses that are clearly those of a machine. Find a judge who wants to believe and you're into the sort of game that self-styled psychics like to play.

Nor can we judge a superhuman intelligence by the intractable problems it solves. One of the more evangelistic speakers last weekend talked about being able to instantly create tall buildings via nanotechnology. (I was, I'm afraid, irresistibly reminded of that Bugs Bunny cartoon where Marvin pours water on beans to produce instant Martians to get rid of Bugs.) This is clearly just silly: you're talking about building a gigantic building out of molecules. I don't care how many billions of nanobots you have, the sheer scale means it's going to take time. And, as Kevin Kelly has written, no matter how smart a machine is, figuring out how to cure cancer or roll back aging won't be immediate either, because you can't really speed up the necessary experiments. Biology takes time.

Instead, one indicator might be variability of response; that is, that feeding several machines the same input - or giving the same machine the same input at different times - produces different, equally valid interpretations. If, for example, you give a 10th grade class Jane Austen's Pride and Prejudice to read and report on, different students might with equal legitimacy describe it as a historical account of the economic forces affecting 18th century women, a love story, the template for romantic comedy, or even the story of the plain sister in a large family whose talents were consistently overlooked until her sisters got married.

In The Singularity Is Near, Ray Kurzweil laments that each human must read a text separately and that knowledge can't be quickly transferred from one to another the way a speech recognition program can be loaded into a new machine in seconds - but that's the point. Our strength is that our intelligences are all different, and we aren't empty vessels into which information is poured but stews in which new information causes varying chemical reactions.

You might argue that search engines can already do this, in that you don't get the same list of hits if you type the same keywords into Google versus Yahoo! versus Ask.com, and if you come back tomorrow you may get a different response from any one of them. That's true. It isn't the kind of input I had in mind, but fair enough.

The other benchmark that's occurred to me so far is that machines will be getting really smart when they get bored.

ZDNet UK editor Rupert Goodwins has a variant on this from when he worked at Sinclair Research. "If it went out one evening, drank too much, said the next morning, 'never again' and repeated the exercise immediately. Truly human." But see? There again: a definition of human intelligence that requires a body.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 24, 2008

Living by numbers

"I call it tracking," said a young woman. She had healthy classic-length hair, a startling sheaf of varyingly painful medical problems, and an eager, frequent smile. She spends some minutes every day noting down as many as 40 different bits of information about herself: temperature, hormone levels, moods, the state of the various medical problems, the foods she eats, the amount and quality of sleep she gets. Every so often, she studies the data looking for unsuspected patterns that might help her defeat a problem. By this means, she says she's greatly reduced the frequency of two of them and was working on a third. Her doctors aren't terribly interested, but the data helps her decide which of their recommendations are worth following.

And she runs little experiments on herself. Change a bunch of variables, track for a month, review the results. If something's changed, go back and look at each variable individually to find the one that's making the difference. And so on.

Of course, everyone with the kind of medical problem - diabetes, infertility, allergies, cramps, migraines, fatigue - that medicine can't really solve has done something like this for generations. Diabetics in particular have long had to track and control their blood sugar levels. What's different is the intensity - and the computers. She currently tracks everything in an Excel spreadsheet, but what she's longing for is good tools to help her with data analysis.
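
For what it's worth, even a few lines of Python get you past the spreadsheet stage. The file name and column names below are invented for illustration, and a real analysis would want far more care about confounders than a single correlation:

    # Sketch: load a daily tracking log exported as CSV and check whether
    # two of the tracked variables move together. File and column names
    # are invented for illustration.
    import csv
    from statistics import correlation  # Python 3.10+

    with open("daily_log.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    pairs = [(float(r["hours_slept"]), float(r["migraine_severity"]))
             for r in rows
             if r["hours_slept"] and r["migraine_severity"]]
    sleep, severity = zip(*pairs)

    print(f"sleep vs. migraine severity: r = {correlation(sleep, severity):.2f}")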

From what Gary Wolf, the organizer of this group, Quantified Self, says - about 30 people are here for its second meeting, after hours at Palo Alto's Institute for the Future to swap notes and techniques on personal tracking - getting out of the Excel spreadsheet is a key stage in every tracker's life. Each stage of improvement thereafter gets much harder.

Is this a trend? Co-founder Kevin Kelly thinks so, and so does the Washington Post, which covered this group's first meeting. You may not think you will ever reach the stage of obsession that would lead you to go to a meeting about it, but if the interviews I did with new-style health companies over the past year are any guide, we're going to be seeing a lot of this on the health side of things. Home blood pressure monitors, glucose tests, cholesterol tests, hormone tests - these days you can buy these things in Wal-Mart.

The key question is clearly going to be: who owns your health data? Most of the medical devices in development assume that your doctor or medical supplier will be the one doing the monitoring; the dozens of Web sites highlighted in that Washington Post article hope there's a business in helping people self-track everything from menstrual cycles to time management. But the group in Palo Alto are more interested in self-help: in finding and creating tools everyone can use, and in interoperability. One meeting member shows off a set of consumer-oriented prototypes - bathroom scale, pedometer, blood pressure monitor - that send their data to software on your computer to display and, prospectively, to a subscription Web site. But if you're going to look at those things together - charting the impact of how much you walk on your weight and blood pressure - wouldn't you also want to be able to put in the foods you eat? There could hardly be an area where open data formats will be more important.
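
As a sketch of what an open, device-neutral format might look like - the field names here are hypothetical, not any published standard - every device could emit simple timestamped records that any analysis tool can read:

    # Hypothetical device-neutral record format: scale, pedometer, and
    # blood-pressure cuff all emit the same simple JSON structure.
    import json
    from datetime import datetime, timezone

    def reading(device, kind, value, unit):
        return {
            "device": device,
            "kind": kind,
            "value": value,
            "unit": unit,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }

    log = [
        reading("bathroom-scale-01", "weight", 72.4, "kg"),
        reading("pedometer-01", "steps", 9421, "count"),
        reading("bp-cuff-01", "systolic_pressure", 124, "mmHg"),
    ]
    print(json.dumps(log, indent=2))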

All of that makes sense. I was less clear on the usefulness of an idea another meeting member has - he's doing a start-up to create it - a tiny, lightweight recording camera that can clip to the outside of a pocket. Of course, this kind of thing already has a grand old man in the form of Steve Mann, who has been recording his life with an increasingly small sheaf of devices for a couple of decades now. He was tired, this guy said, of cameras that are too difficult to use and too big and heavy; they get left at home and rarely used. The camera they're working on will have a wide-angle lens ("I don't know why no one's done this") and take two to five pictures a second. "That would be so great," breathes the guy sitting next to me.

Instantly, I flash on the memory of Steve Mann dogging me with flash photography at Computers, Freedom, and Privacy 2005. What happens when the police subpoena your camera? How long before insurance companies and marketing companies offer discounts as inducements to wear cameras and send them the footage unedited so they can study behavior they currently can't reach?

And then he said, "The 10,000 greatest minutes of your life that your grandchildren have to see," and all you can think is, those poor kids.

There is a certain inevitable logic to all this. If retailers, manufacturers, marketers, governments, and security services are all convinced they can learn from data-mining us, why shouldn't we be able to gain insights by doing it ourselves?

At the moment, this all seems to be for personal use. But consider the benefits of merging it with Web 2.0 and social networks. At last you'll be able to answer the age-old question: why do we have sex less often than the Joneses?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

September 26, 2008

Wimsey's whimsy

One of the things about living in a foreign country is this: every so often the actual England I live in collides unexpectedly with the fictional England I grew up with. Fictional England had small, friendly villages with murders in them. It had lowering, thick fogs and grim, fantastical crimes solvable by observation and thought. It had mathematical puzzles before breakfast in a chess game. The England I live in has Sir Arthur Conan Doyle's vehement support for spiritualism, traffic jams, overcrowding, and four million people who read The Sun.

This week, at the GikIII Workshop, in a break between Internet futures, I wandered out onto a quadrangle of grass so brilliantly and perfectly green that it could have been an animated background in a virtual world. Overlooking it were beautiful, stolid, very old buildings. It had a sign: Balliol College. I was standing on the quad where, "One never failed to find Wimsey of Balliol planted in the center of the quad and laying down the law with exquisite insolence to somebody." I know now that many real people came out of Balliol (three kings, three British prime ministers, Aldous Huxley, Robertson Davies, Richard Dawkins, and Graham Greene) and that those old buildings date to 1263. Impressive. But much more startling to be standing in a place I first read about at 12 in a Dorothy Sayers novel. It's as if I spent my teenaged years fighting alongside Angel avatars and then met David Boreanaz.

Organised jointly by Ian Brown at the Oxford Internet Institute and the University of Edinburgh's Script-ed folks, GikIII (pronounced "geeky") is a small, quirky gathering that studies serious issues by approaching them with a screw loose. For example: could we control intelligent agents with the legal structure the Ancient Romans used for slaves (Andrew Katz)? How sentient is a robot sex toy? Should it be legal to marry one? And if my sexbot rapes someone, are we talking lawsuit, deactivation, or prison sentence (Fernando Barrio)? Are RoadRunner cartoons all patent applications for devices thought up by Wile E. Coyote (Caroline Wilson)? Why is The Hound of the Baskervilles a metaphor for cloud computing (Miranda Mowbray)?

It's one of the characteristics of modern life that although questions like these sound as practically irrelevant as "how many angels, infinitely large, can fit on the head of a pin, infinitely small?", which may (or may not) have been debated here seven and a half centuries ago, they matter. Understanding the issues they raise matters in trying to prepare for the net.wars of the future.

In fact, Sherlock Holmes's pursuit of the beast is metaphorical; Mowbray was pointing out the miasma of legal issues for cloud computing. So far, two very different legal directions seem likely as models: the increasingly restrictive EULAs common to the software industry, and the service-level agreements common to network outsourcing. What happens if the cloud computing company you buy from doesn't pay its subcontractors and your data gets locked up in a legal battle between them? The terms and conditions in effect for Salesforce.com warn that the service has 30 days to hand back your data if you terminate, a long time in business. Mowbray suggests that the most likely outcome is EULAs for the masses and SLAs at greater expense for those willing to pay for them.

On social networks, of course, there are only EULAs, and the question is whether interoperability is a good thing or not. If the data people put on social networks ("shouldn't there be a separate disability category for stupid people?" someone asked) can be easily transferred from service to service, won't that make malicious gossip even more global and permanent? A lot of the issues Judith Rauhofer raised in discussing the impact of global gossip are not new to Facebook: we have a generation of 35-year-olds coping with the globally searchable history of their youthful indiscretions on Usenet. (And WELL users saw the newly appointed CEO of a large tech company delete every posting he made in his younger, more drug-addled 1980s.) The most likely solution to that particular problem is time. People arrested as protesters and marijuana smokers in the 1960s can be bank presidents now; in a few years the work force will be full of people with Facebook/MySpace/Bebo misdeeds, and no one will care except as something to laugh at drunkenly late out at the pub.

But what Lilian Edwards wants to know is this: if we have or can gradually create the technology to make "every ad a wanted ad" - well, why not? Should we stop it? Online marketing is at £2.5 billion a year according to Ofcom, and a quarter of the UK's children spend 22 hours a week playing computer games, where there is no regulation of industry ads and where Web 2.0 is funded entirely by advertising. When TV and the Internet roll together, when in-game is in-TV and your social network merges with megamedia, and MTV is fully immersive, every detail can be personalized product placement. If I grew up five years from now, my fictional Balliol might feature Angel driving across the quad in a Nissan Prairie past a billboard advertising airline tickets.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

September 5, 2008

Return of the browser wars

It was quiet, too quiet. For so long it's just been Firefox/Mozilla/Netscape, Internet Explorer, and sometimes Opera that it seemed like that was how it was always going to be. In fact, things were so quiet that it seemed vaguely surprising that Firefox had released a major update and even long-stagnant Internet Explorer has version 8 out in beta. So along comes Chrome to shake things up.

The last time there were as many as four browsers to choose among, road-testing a Web browser didn't require much technical knowledge. You loaded the thing up, pointed it at some pages, and if you liked the interface and nothing seemed hideously broken, that was it.

This time round, things are rather different. To really review Chrome you need to know your AJAX from your JavaScript. You need to be able to test for security holes, and then discover more security vulnerabilities. And the consequences when these things are wrong are so much greater now.

For various reasons, Chrome probably isn't for me, quite aside from its copy-and-paste EULA oops. Yes, it's blazingly fast and I appreciate that because it separates each tab or window into its own process it crashes more gracefully than its competitors. But the switching cost lies less in those characteristics than in the amount of mental retraining it takes to adapt your way of working to new quirks. And, admittedly based on very short acquaintance, Chrome isn't worth it now that I've reformatted Firefox 3's address bar into a semblance of the one in Firefox 2. Perhaps when Chrome is a little older and has replaced a few more of Firefox's most useful add-ons (or when I eventually discover that Chrome's design means it doesn't need them).

Chrome does not do for browsers what Google did for search engines. In 1998, Google's ultra-clean, quick-loading front page and search results quickly saw off competing, ultra-cluttered, wait-for-it portals like Altavista because it was such a vast improvement. (Ironically, Google now has all those features and more, but it's smart enough to keep them off the front page.)

Chrome does some cool things, of course, as anything coming out of Google always has. But its biggest innovation seems to be more completely merging local and global search, a direction in which Firefox 3 is also moving, although with fewer unfortunate consequences. And, as against that, despite the "incognito" mode (similar to IE8) there is the issue of what data goes back to Google for its coffers.

It would be nice to think that Chrome might herald a new round of browser innovation and that we might start seeing browsers that answer different needs than are currently catered for. For example: as a researcher I'd like a browser to pay better attention to archiving issues: a button to push to store pages with meaningful metadata as well as date and time, the URL the material was retrieved from, whether it's been updated since and if so how, and so on. There are a few offline browsers that sort of do this kind of thing, but patchily.
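
In the spirit of that wish list, here is a rough sketch of the "archive this page" button as a dozen lines of Python - standard library only, with error handling, robots.txt courtesy, and comparison against earlier copies all left out:

    # Rough sketch: fetch a page and store it alongside a metadata record
    # (URL, retrieval time, content hash) for later change detection.
    import hashlib
    import json
    import urllib.request
    from datetime import datetime, timezone
    from pathlib import Path

    def archive(url, out_dir="archive"):
        with urllib.request.urlopen(url) as resp:
            body = resp.read()
        digest = hashlib.sha256(body).hexdigest()
        out = Path(out_dir)
        out.mkdir(exist_ok=True)
        (out / f"{digest[:16]}.html").write_bytes(body)
        meta = {
            "url": url,
            "retrieved": datetime.now(timezone.utc).isoformat(),
            "sha256": digest,
            "length": len(body),
        }
        (out / f"{digest[:16]}.json").write_text(json.dumps(meta, indent=2))
        return meta

    print(archive("https://example.com/"))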

The other big question hovering over Chrome is standards: Chrome is possible because the World Wide Web Consortium has done its work well. Standards and the existence of several competing browsers with significant market share have prevented any one company from seizing control and turning the Web into the kind of proprietary system Tim Berners-Lee resisted from the beginning. Chrome will be judged on how well it renders third-party Web pages, but Google can certainly tailor its many free services to work best with Chrome - not so different a proposition from the way Microsoft has controlled the desktop.

Because: the big thing Chrome does is bring Google out of the shadows as a competitor to Microsoft. In 1995, Business Week ran a cover story predicting that Java (write once, run on anything) and the Web (a unified interface) could "rewrite the rules of the software industry". Most of the predictions in that article have not really come true - yet - in the 13 years since it was published; or if they have it's only in modest ways. Windows is still the dominant operating system, and Larry Ellison's thin clients never made a dent in the market. The other big half of the challenge to Microsoft, GNU/Linux and the open-source movement, was still too small and unfinished.

Google is now in a position to deliver on those ideas. Not only are the enabling technologies in place but it's now a big enough company with reliable enough servers to make software as a Net service dependable. You can collaboratively process your words using Google Docs, coordinate your schedules with Google Calendar, and phone across the Net with Google Talk. I don't for one minute think this is the death of Microsoft or that desktop computing is going to vanish from the Earth. For one thing, despite the best-laid cables and best-deployed radios of telcos and men, we are still a long way off of continuous online connectivity. But the battle between the two different paradigms of computing - desktop and cloud - is now very clearly ready for prime time.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

June 6, 2008

The Digital Revolution turns 15

"CIX will change your life," someone said to me in 1991 when I got a commission to review a bunch of online systems and got my first modem. At the time, I was spending most or all of every day sitting alone in my house putting words in a row for money.

The Net, Louis Rossetto predicted in 1993, when he founded Wired, would change everybody's lives. He compared it to a Bengali typhoon. And that was modest compared to others of the day, who compared it favorably to the discovery of fire.

Today, I spend most or all of every day sitting alone in my house putting words in a row for money.

But yes: my profession is under threat, on the one hand from shrinkage of the revenues necessary to support newspapers and magazines - which is indeed partly fuelled by competition from the Internet - and on the other from megacorporate publishers who routinely demand ownership of the copyrights freelances used to resell for additional income - a practice the Internet was likely to kill off anyway. Few have ever gotten rich from journalism, but freelance rates haven't budged in years; staff journalists get very modest raises, and in return they are required to work more hours a week and produce more words.

That embarrassingly solipsistic view aside, more broadly, we're seeing the Internet begin to reshape the entertainment, telecommunications, retail, and software industries. We're seeing it provide new ways for people to organize politically and challenge the control of information. And we're seeing it and natural laziness kill off our history: writers and students alike rely on online resources at the expense of offline archives.

Wired was, of course, founded to chronicle the grandly capitalized Digital Revolution, and this month, 15 years on, Rossetto looked back to assess the magazine's successes and failures.

Rossetto listed three failures and three successes. The three failures: history has not ended; Old Media are not dead (yet); and governments and politics still thrive. The three successful predictions: the long boom; the One Machine, a man/machine planetary consciousness; that technology would change the way we relate to each other and cause us to reinvent social institutions.

I had expected to see the long boom in the list of failures, and not just because it was so widely laughed at when it was published. Rossetto is fair to say that the original 1997 feature was not invalidated by the 2000 stock market bust. It wasn't about that (although one couldn't resist snickering about it as the NASDAQ tanked). Instead, what the piece predicted was a global economic boom covering the period 1980 to 2020.

Wrote Peter Schwartz and Peter Leyden, "We are riding the early waves of a 25-year run of a greatly expanding economy that will do much to solve seemingly intractable problems like poverty and to ease tensions throughout the world. And we'll do it without blowing the lid off the environment."

Rossetto, assessing it now, says, "There's a lot of noise in the media about how the world is going to hell. Remember, the truth is out there, and it's not necessarily what the politicians, priests, or pundits are telling you."

I think: 1) the time to assess the accuracy of an article outlining the future to 2020 is probably around 2050; 2) the writers themselves called it a scenario that might guide people through traumatic upheavals to a genuinely better world rather than a prediction; 3) that nonetheless, it's clear that the US economy, which they saw as leading the way, has suffered badly in the 2000s with the spiralling deficit and rising consumer debt; 4) that media alarm about the environment, consumer debt, government deficits, and poverty is hardly a conspiracy to tell us lies; and 5) that they signally underestimated the extent to which existing institutions would adapt to cyberspace (the underlying flaw in Rossetto's assumption that governments would be disbanding by now).

For example, while timing technologies is about as futile as timing the stock market, it's worth noting that they expected electronic cash to gain acceptance in 1998 and to be the key technology enabling electronic commerce, which they guessed would hit $10 billion by 2000. Last year it was close to $200 billion. Writing around the same time, I predicted (here) that ecommerce would plateau at about 10 percent of retail; I assumed that had turned out to be wrong, but it seems ecommerce hasn't even reached 4 percent yet, though it's obvious that, particularly in the copyright industries, the influence of online commerce is punching well above its statistical weight.

No one ever writes modestly about the future. What sells - and gets people talking - are extravagant predictions, whether optimistic or pessimistic. Fifteen years is a tiny portion even of human history, itself a blip on the planet. Tom Standage, writing in his 1998 book The Victorian Internet, noted that the telegraph was a far more radically profound change for the society of its day than the Internet is for ours. A century from now, the Internet may be just as obsolete. Rossetto, like the rest of us, will have to wait until he's dead to find out if his ideas have lasting value.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

January 11, 2008

Beyond biology

"Will we have enough food?"

Last Saturday (for an article in progress for the Guardian), I attended the monthly board meeting at Alcor, probably the largest of the several cryonics organizations. Cryonics: preserving a newly deceased person's body in the hope that medical technology will improve to the point where that person can be warmed up, revived, and cured.

I was the last to arrive at what I understand was an unusually crowded meeting: fifteen, including board members, staffers, and visitors. Hence the chair's anxious question.

The conference room has a window at one end that looks into a mostly empty concrete space at a line of giant cylinders, some gleaming steel, some dull aluminum. These "dewars" are essentially giant Thermos bottles, and they are the vessels in which cryopreserved patients are held. Each dewar can hold up to nine patients – four whole bodies, head down, and five neuro patients in a column down the middle.

There is a good reason to call these cryopreserved Alcor members "patients". If the cryonics dream ever comes to fruition, they will turn out never to have been dead at all. And in any case, calling them patients has the same function as naming your sourdough starter: it reminds you that here is something that cannot survive without your responsible care.

To Alcor's board and staff, these are often personal friends. A number have their framed pictures on the board room wall, with the dates of their birth and cryopreservation. It was therefore a little eerie to realize that those visible dewars were, mostly, occupied.

I think the first time I ever heard of anything like cryonics was Woody Allen's movie Sleeper. Reading about it as a serious proposition came nearly 20 years later, in Ed Regis's 1992 book Great Mambo Chicken and the Transhuman Condition. Regis's book, which I reviewed for New Scientist, was a vivid ramble through the outer fringes of science, which he dubbed "fin-de-siècle hubris".

My view hasn't changed: since cremation and burial both carry a chance of revival of zero, cryonics has to do hardly anything to offer better odds, no matter how slight. But it remains a contentious idea. Isaac Asimov, for example, was against it, at least for himself. The science fiction I read as a teenager was filled with overpopulated earths covered in giant blocks of one-room apartments and people who lived on synthetic food because there was no longer the space or ability to grow enough of the real stuff. And we're going to add long-dead people as well?

That kind of issue comes up when you mention cryonics. Isn't it selfish? Or expensive? Or an imposition on future generations? What would the revived person live on, given their outdated skills? Supposing you wake up a slave?

Many of these issues have been considered, if not by cryonicists themselves for purely practical reasons then by sf writers. Robert A. Heinlein's 1957 book The Door Into Summer had its protagonist involuntarily frozen and deposited into the future with no assets and no employment prospects, given that his engineering background was 30 years out of date. Larry Niven's 1971 short story "Rammer" had its hero revived into the blanked body of a criminal and sent out as a spaceship pilot by a society that would have calmly vaped his personality and replaced it with the next one if he were found unsuitable. (Niven was also, by the way, the writer who coined the descriptor "corpsicle" for the cryopreserved.) Even Woody Allen's Miles Monroe woke up in danger.

The thing is, those aren't reasons for cryonicists not to try to make their dream a reality. They are arguments for careful thought on the part of the cryonics organizations who are offering cryopreservation and possible revival as services. And they do think about it, in part because the people running those organizations expect to be cryopreserved themselves. The scientist and Alcor board member Ralph Merkle, in an interview last year, pointed out that the current board chooses its successors with great care, "Because our lives will depend on selecting a good group to continue the core values."

Many of them are also bad arguments. Most people, given their health, want their lives to continue; if they didn't, we'd be awash in suicides. If overpopulation is the problem, having children is just as selfish a way of securing immortality as wanting longer life for oneself. If burdening future generations is the problem, doing so by being there is hardly worse than using up all the planet's resources in our lifetime, leaving our descendants to suffer the consequences unaided. Nor is being uncertain of the consequences a reason: human history is filled with technologies we've developed on the basis that we'd deal with the consequences as they arose. Some consequences were good, some bad; most technologies have a mix of the two.

After the board meeting ended, several of those present and I went on talking about just these issues over lunch.

"We won't be harder to deal with than a baby," one of them said. True, but there is a much bigger biological urge to reproduce than there is to revive someone who was pronounced dead a century or two ago.

"We are kind of going around biology," he admitted.

Only up to a point: there was enough food.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).