
December 16, 2022

A garden of snakes

It's hard to properly enjoy I-told-you-so schadenfreude when you know, from Juan Vargas (D-CA)'s comments this week, that the people most affected by the latest cryptocurrency collapse are disproportionately those who can least afford it. What began as a cultish libertarian desire to bypass the global financial system became a vector for wild speculation, and is now the heart of a series of collapsing frauds.

From the beginning, I've called bitcoin and its sequels "the currency equivalent of being famous for being famous". Crypto(currency) fans like to claim that the world's fiat currencies don't have any underlying value either, but those are backed by the full faith and credit of governments and economies. Logically, crypto appeals most to those with the least reason to trust their governments: the very rich who resent paying taxes and those who think they have nothing to lose.

This week the US House and Senate both held hearings on the collapse of cryptocurrency exchange and hedge fund FTX and its deposed, arrested, and charged CEO Sam Bankman-Fried. The key lesson: we can understand the main issues surrounding FTX and its fellow cryptocurrency exchanges without understanding either the technical or financial intricacies.

A key question is whether the problem is FTX or the entire industry. Answers largely split along partisan lines. Republican members chose FTX, and tended to blame Securities and Exchange Commission chair Gary Gensler. Democrats were more likely to condemn the entire industry.

As Jesús G. "Chuy" García (D-IL) put it, "FTX is not an anomaly. It's not just one corrupt guy stealing money, it's an entire industry that refuses to comply with existing regulation that thinks it's above the law." Or, per Brad Sherman (D-CA), "My fear is that we'll view Sam Bankman-Fried as just one big snake in a crypto garden of Eden. The fact is, crypto is a garden of snakes."

When Sherrod Brown (D-OH) asked whether FTX-style fraud existed at other crypto firms, all four expert speakers said yes.

Related is the question of whether and how to regulate crypto, which begins with the problem of deciding whether crypto assets are securities under the decades-old Howey test. In its ongoing suit against Ripple, Gensler's SEC argues for regulation as securities. Lack of regulation has enabled crypto "innovation" - and let it recreate practices long banned in traditional financial markets. For an example, see Ben McKenzie and Jacob Silverman's analysis of leading crypto exchange Binance's endemic conflicts of interest and the extreme risks it allows customers to take that are barred under securities regulations.

Regulation could correct some of this. McKenzie gave the Senate committee numbers: fraudulent financier Bernie Madoff had 37,000 clients; FTX had 32 times that in the US alone. The collective lost funds of the hundreds of millions of victims worldwide could be ten times bigger than Madoff's.

But: would regulating crypto clean up the industry or lend it legitimacy it does not deserve? Skeptics ask the same question about regulating alt-med practitioners.

Some background. As software engineer Stephen Diehl explains in his new book, Popping the Crypto Bubble, securities are roughly the opposite of money. What you want from money is stability; sudden changes in value spark cost-of-living crises and economic collapse. For investors, stability is the enemy: they want investments' value to go up. The countervailing risk is why the SEC requires companies offering securities to publish sufficient truthful information to enable investors to make a reasonable assessment.

In his book, Diehl compares crypto to previous bubbles: the Internet, tulips, the railways, the South Sea. Some, such as the Internet and the railways, cost early investors fortunes but leave behind valuable new infrastructure and technologies on which vast new industries are built. Others, like tulips, leave nothing of new value. Diehl, like other skeptics, believes cryptocurrencies are like tulips.

The idea of digital cash was certainly not new in 2008, when "Satoshi" published their seminal paper on bitcoin; the earliest work is usually attributed to David Chaum, whose 1982 dissertation contained the first known proposal for a blockchain protocol. Chaum proposed digital cash in a 1983 paper and set up a company to commercialize it in 1990 - way too early. Crypto's ethos came from the cypherpunks mailing list, which was founded in 1992 and explored the idea of using cryptography to build a new global financial system.

Diehl connects the reception of Satoshi's paper to its timing, just after the 2007-2008 financial crisis. There's some logic there: many people have never recovered from it.

For a few years in the mid-2010s, a common claim was that cryptocurrencies were bubbles but the blockchain would provide enduring value. Notably disagreeing was Michael Salmony, who startled the 2016 Tomorrow's Transactions Forum by saying the blockchain was a technology in search of a problem. Last week, IBM and Maersk announced they are shutting down their enterprise blockchain because, Dan Robinson writes at The Register, despite the apparently ideal use case, they couldn't attract industry collaboration.

More recently we've seen the speculative bubble around NFTs, but otherwise we've heard mostly about cryptocurrencies' wildly careening prices in US dollars and the amount of energy mining them consumes. Until this year, that is, when escalating crashes and frauds have taken over. Distrust does not build value.


Illustrations: The Warner Brothers coyote, realizing he's standing on thin air.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 25, 2022

Assume a spherical cow

The early months of 2020 were a time of radical uncertainty - that is, decisions had to be made that affected the lives of whole populations while little guidance was available. As Leonard Smith and David Tuckett explained at their 2018 conference on the subject (and a recent Royal Society scientific meeting), decisions under radical uncertainty are often one-offs whose lessons can't inform the future. Tuckett's and Smith's goal was to understand the decision-making process itself in the hope that this part of the equation at least could be reused and improved.

Inevitably, the discussion landed on mathematical models, which attempt to provide tools to answer the question, "What if?" This question is the bedrock of science fiction, but science fiction writers' helpfulness has limits: they don't have to face bereaved people if they get it wrong; they can change reality to serve their sense of fictional truth; and they optimize for the best stories, rather than the best outcomes. Beware.

In the case of covid, humanity had experience in combating pandemics, but not covid, which turned out to be unlike the first known virus family people grabbed for: flu. Imperial College epidemiologist Neil Ferguson became a national figure when it became known that his 2006 influenza model, suggesting that inaction could lead to 500,000 deaths, had influenced the UK government's delayed decision to impose a national lockdown. Ferguson remains controversial; Scotland's The Ferret offers a fact check that suggests that many critics failed to understand the difference between projection and prediction and the importance of the caveat "if nothing is done". Models offer possible futures, but not immutable ones.

As Erica Thompson writes in her new book, Escape From Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It, models also have limits that we ignore at our peril. Chief among them is the fact that the model is always an abstracted version of reality. If it weren't, our computers couldn't calculate them any more than they can calculate all the real world's variables. Thompson therefore asks: how can we use models effectively in decision making without becoming trapped inside the models' internal worlds, where their simplified assumptions are always true? More important, how can we use models to improve our decision making with respect to the many problems we face that are filled with uncertainties?

The science of covid - or of climate change - is only a small part of the factors a government must weigh in deciding how to respond; what science tells us must be balanced against the economic and social impacts of different approaches. In June 2020, Ferguson estimated that locking down a week earlier would have saved 20,000 lives. At the time, many people had already begun withdrawing from public life. And yet one reason the government delayed was the belief that the population would quickly give in to lockdown fatigue and resist restrictions, rendering an important tool unusable later, when it might be needed even more. This assumption turned out to be largely wrong, as was the assumption in Ferguson's 2006 model that 50% of the population would refuse to comply with voluntary quarantine. Thompson calls this misunderstanding of public reaction a "gigantic failure of the model".

"What else is missing?" she asks. Ferguson had to resign when he himself was caught breaking the lockdown rules. Would his misplaced belief that the population wouldn't comply have been corrected by a more diverse team?

Thompson began her career with a PhD in physics that led her to examine many models of North Atlantic storms. The work taught her more about the inferences we make from models than about storms, and it opened for her the question of how to use the information models provide without falling into the trap of failing to recognize the difference between the real world and Model Land - that is, the assumption-enclosed internal world of the models.

From that beginning, Thompson works through different aspects of how models work and where their flaws can be found. Like Cathy O'Neil's Weapons of Math Destruction, which illuminated the abuse of automated scoring systems, this is a clearly-written and well thought-out book that makes a complex mathematical subject accessible to a general audience. Thompson's final chapter, which offers approaches to evaluating models and lists of questions to ask modelers, should be read by everyone in government.

Thompson's focus on the dangers of failing to appreciate the important factors models omit leads her to skepticism about today's "AI", which of course is trained on such models: "It seems to me that rather than AI developing towards the level of human intelligence, we are instead in danger of human intelligence descending to the level of AI by concreting inflexible decision criteria into institutional structures, leaving no room for the human strengths of empathy, compassion, a sense of fairness and so on." Later, she adds, "AI is fragile: it can work wonderfully in Model Land but, by definition, it does not have a relationship with the real world other than one mediated by the models that we endow it with."

In other words, AI works great if you can assume a spherical cow.


Illustrations: The spherical cow that mocks unrealistic scientific models drawn jumping over the moon by Ingrid Kallick for the 1996 meeting of the American Astronomical Association (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 4, 2022

Meaningful access

"We talk as if being online is a choice," Sonia Livingstone commented, "but we live in a context and all the decisions around us matter."

As we've observed before, it's only for the most privileged that *not* being online or *not* carrying a smartphone comes without cost.

Livingstone was speaking on a panel on digital inequalities at this week's UK IGF, an annual forum that mulls UK concerns over Internet governance in order to feed them into the global Internet Governance Forum (IGF). The panel highlighted two groups most vulnerable to digital exclusion: older people and children.

According to Ofcom's 2022 Online Nation report, in 2021 6% of British over-18s did not have Internet access at home. That average is, however, heavily skewed by over-65s, 20% of whom don't have Internet access at home and another 7% of whom have Internet access at home but don't use it. In the other age groups, the percentage without home access starts at 1% for 18-24 and rises to 3% for 45-54. The gap across ages is startlingly larger than the gap across economic groups, although obviously there's overlap: Age UK estimated in 2021 that 2 million pensioners were living in poverty.

I know one of the people in that 20%. She is adamant that there is nothing the Internet has to offer that she could possibly want. (I feel this way about cryptocurrencies.) Because, fortunately, the social groups she's involved in are kind, tolerant, and small, the impact of this refusal probably falls more on them than on her: they have to make the phone calls and send the printed-out newsletters to ensure she's kept in the loop. And they do.

Another friend, whose acquaintance with the workings of his computer is so nodding that he gets his son round to delete some files when his hard drive fills up, would happily do without it - except that his failing mobility means that he finds entertainment by playing online poker. To him, the computer is a necessary, but despised, evil. In Ofcom's figures, he'd look all right - Internet access at home, uses it near-daily. But the reality is that despite his undeniable intelligence he's barely capable of doing much beyond reading his email and loading the poker site. Worse, he has no interest in learning anything more; he just hates all of it. Is that what we mean by "Internet access"?

These two are what people generally think of when they talk about the "digital divide".

As Sally West, policy manager for Age UK, noted, if you're not online it's becoming increasingly difficult to do mundane things like book a GP appointment or do any kind of banking. Worse, isolation during the pandemic led some to stop using the Internet because they didn't have their customary family support. In its report on older people and the Internet, Age UK found that about half a million over-65s have stopped using the Internet. And, West said, unlike riding a bike, Internet skills don't necessarily stay with you when you stop using them. Even if they do, they lose relevance as the technology changes.

For children, lack of access translates into educational disadvantage and severely constricted life opportunities. Despite the government's distribution of laptops, Nominet's Digital Youth Index finds that a quarter of young people lack access to one, and 16% rely primarily on mobile data. And, said Jess Barrett, children lack understanding of privacy and security yet are often expected to be their family's digital expert.

More significantly, the Ofcom report finds that 20% of people - and a *third* of people aged 25-34 - used only a smartphone to go online in 2021. That's *double* the number in 2020. Ofcom suggests that staying home for much of 2020 and newer smartphones' larger screens may be relevant factors. I'd guess that economic uncertainty played an important role and that 2022's cost-of-living crisis will cause these numbers to rise again. There's also a generational aspect; today's 30-year-olds got their teenaged independence via smartphones.

To Old Net Curmudgeons, phone-only access isn't really *Internet* access; it's walled-garden apps. Where the open Internet promised that all of us could build and distribute things, apps limit us to consuming what the apps' developers allow. This is not petty snobbery; creating the next generation of technology pioneers requires learning as active users rather than passive consumers.

This disenfranchisement led Lizzie Coles-Kemp to an approach that's rarely discussed: "We need to think how to design services for limited access, and we need to think what access means. It's not binary." This approach is essential as the mobile phone world's values risk overwhelming those of the open Internet.

In response, Livingstone mooted the idea of "meaningful access": the right device for the context and sufficient skills and knowledge that you can do what you need to.

The growing cost-of-living crisis, exacerbated this week by an interest rate rise, makes it easy to predict a marked further rise in the number of households that jettison fixed-line broadband. This year may be the first since the Internet began in which online access in the UK shrinks.

"We are just highlighting two groups," Livingstone concluded. "But the big problem is poverty and exclusion. Solve those, and it fixes it."

Illustrations: UK IGF's panel on digital inequalities: Cliff Manning, Sally West, Sonia Livingstone, Lizzie Coles-Kemp, Jess Barrett.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week on Twitter or @wendyg@mastodon.xyz.

September 23, 2022

Insert a human

Robots have stopped being robots. This is a good thing.

This is my biggest impression of this year's We Robot conference: we have moved from the yay! robots! of the first year, 2012, through the depressed doldrums of "AI" systems that make the already-vulnerable more vulnerable circa 2018 to this year, when the phrase that kept twanging was "sociotechnical systems". For someone with my dilettantish conference-hopping habit, this seems like the necessary culmination of a long-running trend away from robots as autonomous mobile machines to robots/AI as human-machine partnerships. We Robot has never talked much about robot rights, instead focusing on considering the policy challenges that arise as robots and AI become embedded in our lives. This is realism; as We Robot co-founder Michael Froomkin writes, we're a long, long way from a self-aware and sentient machine.

The framing of sociotechnical systems is a good thing in part because so much of what passes for modern "artificial intelligence" is humans all the way down, as Mary L. Gray and Siddharth Suri documented in their book, Ghost Work. Even the companies that make self-driving cars, which a few years ago were supposed to be filling the streets by now, are admitting that full automation is a long way off. "Admitting" as in consolidating or being investigated for reckless hyping.

If this was the emerging theme, it started with the first discussion, of a paper on humans in the loop, by Margot Kaminski, Nicholson Price, and Rebecca Crootof. Too often, the policy proposal for handling problems with decision-making systems is to insert a human, a "solution" they called the "MABA-MABA trap", for "Machines Are Better At / Men Are Better At". While obviously humans and machines have differing capabilities - people are creative and flexible, machines don't get bored - just dropping in a human without considering what role that human is going to fill doesn't necessarily take advantage of the best capabilities of either. Hybrid systems are of necessity more complex - this is why cybersecurity keeps getting harder - but policy makers may not take this into account or think clearly about what the human's purpose is going to be.

At this conference in 2016, Madeleine Claire Elish foresaw that the human would become a moral crumple zone or liability sponge, absorbing blame without necessarily being at fault. No one will admit that this is the human's real role - but it seems an apt description of the "safety driver" watching the road, trying to stay alert in case the software driving the car needs backup, or of the poorly-paid human given a scoring system and tasked with awarding welfare benefits. What matters, as Andrew Selbst said in discussing this paper, is the *loop*, not the human - and that may include humans with invisible control, such as someone who can massage the data they enter into a benefits system in order to help a particularly vulnerable child, or humans with wide discretion, such as a judge who is ultimately responsible for parole decisions no matter what the risk assessment system says.

This is not the moment to ask what constitutes a human.

It might be, however, the moment to note the commentator who said that a lot of the problems people are suggesting robots/AI can solve have other, less technological solutions. As they said, if you are putting a pipeline through a community without its consent, is the solution to deploy police drones to protect the pipeline and the people working on it - or is it to put the pipeline somewhere else (or to move to renewables and not have a pipeline at all)? Change the relationship with the community and maybe you can partly disarm the police.

One unwelcome forthcoming issue, discussed in a paper by Kate Darling and Daniella DiPaola, is the threat merging automation and social marketing poses to consumer protection. A truly disturbing note came from DiPaola, who investigated manipulation and deception with personal robots and 75 children. The children had three options: no ads, ads allowed only if they are explicitly disclosed to be ads, or advertising through casual conversation. The kids chose casual conversation because they felt it showed the robot *knew* them. They chose this even though they knew the robot was intentionally designed to be a "friend". Oy. In a world where this attitude spreads widely and persists into adulthood, no amount of "media literacy" or learning to identify deception will save us; these programmed emotional relationships will overwhelm all that. As DiPaola said, "The whole premise of robots is building a social relationship. We see over and over again that it works better if it is more deceptive."

There was much more fun to be had - steamboat regulation as a source of lessons for regulating AI (Bhargavi Ganesh and Shannon Vallor), police use of canid robots (Carolin Kemper and Michael Kolain), and - a new topic - planning for the end of life of algorithmic and robot systems (Elin Björling and Laurel Riek). The robots won't care, but the humans will be devastated.

Illustrations: Hanging out at We Robot with Boston Dynamics' "Spot".

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 15, 2022

Online harms

An unexpected bonus of the gradual-then-sudden disappearance of Boris Johnson's government, followed by his own resignation, is that the Online Safety bill is being delayed until after Parliament's September return with a new prime minister and, presumably, cabinet.

This is a bill almost no one likes: child safety campaigners think it doesn't go far enough; digital and human rights campaigners - Big Brother Watch, Article 19, the Electronic Frontier Foundation, Open Rights Group, Liberty, a coalition of 16 organizations (PDF) - object that it threatens freedom of expression and privacy while failing to tackle genuine harms such as the platforms' business model; and technical and legal folks find it largely unworkable.

The DCMS Parliamentary committee sees it as wrongly conceived. The UK Independent Reviewer of Terrorism Legislation, Jonathan Hall QC, says it's muddled and confused. Index on Censorship calls it fundamentally broken, and The Economist says it should be scrapped. The minister whose job it has been to defend it, Nadine Dorries (C-Mid Bedfordshire), remains in place at the Department for Culture, Media, and Sport, but her insistence that resigning-in-disgrace Johnson was brought down by a coup probably won't do her any favors in the incoming everything-that-goes-wrong-was-Johnson's-fault era.

In Wednesday's Parliamentary debate on the bill, the most interesting speaker was Kirsty Blackman (SNP-Aberdeen North), whose Internet usage began 30 years ago, when she was younger than her children are now. Among her passionate pleas that her children should be protected from some of the high-risk encounters she experienced was this: "Every person, nearly, that I have encountered talking about this bill who's had any say over it, who continues to have any say, doesn't understand how children actually use the Internet." She called this the bill's biggest failing. "They don't understand the massive benefits of the Internet to children."

This point has long been stressed by academic researchers Sonia Livingstone and Andy Phippen, both of whom actually do talk to children. "If the only horse in town is the Online Safety bill, nothing's going to change," Phippen said at last week's Gikii, noting that Dorries' recent cringeworthy TikTok "rap" promoting the bill focused on platform liability. "The liability can't be only on one stakeholder." His suggestion: a multi-pronged harm reduction approach to online safety.

UK politicians going all the way back to Tony Blair's 1997-2007 government have publicly wished to make "Britain the safest place in the world to be online". It's a meaningless phrase. Online safety - however you define "safety" - is like public health; you need it everywhere to have it anywhere.

Along those lines, "Where were the regulators?" Paul Krugman asked in the New York Times this week, as the cryptocurrency crash continues to unfold. The cryptocurrency market, which is now down to $1 trillion from its peak of $3 trillion, is recapitulating all the reasons why we regulate the financial sector. Given the ongoing collapses, it may yet fully vaporize. Krugman's take: "It evolved into a sort of postmodern pyramid scheme". The crash, he suggests, may provide the last, best opportunity to regulate it.

The wild rise of "crypto" - and the now-defunct Theranos - was partly fueled by high-trust individuals who boosted the apparent trustworthiness of dubious claims. The same, we learned this week, was true of Uber from 2014 to 2017. Based on the Uber files, 124,000 documents provided by whistleblower Mark MacGann, a lobbyist for Uber from 2014 to 2016, the Guardian exposes the falsity of Uber's claims that its gig economy jobs were good for drivers.

The most startling story - which transport industry expert Hubert Horan had already published in 2019 - is the news that the company paid academic economists six-figure sums to produce reports it could use to lobby governments to change the laws it disliked. Other things we knew about - for example, Greyball, the company's technology for denying rides to regulators and police so they couldn't document Uber's regulatory violations, and Uber staff's abuse of customer data - are now shown to have been more widely used than we knew. Further appalling behavior, such as that of former CEO Travis Kalanick, who was ousted in 2017, has been thoroughly documented in the 2019 book, Super Pumped, by Mike Isaac, and the 2022 TV series based on it.

But those scandals - and Thursday's revelation that 559 passengers are suing the company for failing to protect them from rape and assault by drivers - aren't why Horan described Uber as a regulatory failure in 2019. For years, he has been indefatigably charting Uber's eternal unprofitability. In his latest, he notes that Uber has lost over $20 billion since 2015 while cutting driver compensation by 40%. The company's share price today is less than half its 2019 IPO price of $45 - and a third of its 2021 peak of $60. The "misleading investors" kind of regulatory failure.

So, returning to the Online Safety bill, if you undermine existing rights and increase the large platforms' power by devising requirements that small sites can't meet *and* do nothing to rein in the platforms' underlying business model...the regulatory failure is built in. This pause is a chance to rethink.

Illustrations: Boris Johnson on his bike (European Cyclists Federation via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 17, 2022

Level two

This week provided two examples of the dangers of believing too much hype about modern-day automated systems and therefore overestimating what they can do.

The first is relatively minor: Google employee Blake Lemoine published his chats with a bot called LaMDA and concluded it was sentient "based on my religious beliefs". Google put Lemoine on leave and the press ran numerous (many of them silly) stories. Veterans shrugged and muttered, "ELIZA, 1966".

The second, however...

On Wednesday, the US National Highway Traffic Safety Administration released a report (PDF) studying crashes involving cars under the control of "driver-assist" technologies. Out of 367 such crashes in the nine months after NHTSA began collecting data in July 2021, 273 involved Teslas being piloted by either "full self-driving software" or its precursor, "Tesla Autopilot".

There are important caveats, which NHTSA clearly states. Many contextual details are missing, such as how many of each manufacturer's cars are on the road and the number of miles they've traveled. Some reports may be duplicates; others may be incomplete (private vehicle owners may not file a report) or unverified. Circumstances such as surface and weather conditions, or whether passengers were wearing seat belts, are missing. Manufacturers differ in the type and quantity of crash data they collect. Reports may be unclear about whether the car was equipped with SAE Level 2 Advanced Driver Assistance Systems (ADAS) or SAE Levels 3-5 Automated Driving Systems (ADS). Therefore, NHTSA says, "The Summary Incident Report Data should not be assumed to be statistically representative of all crashes." Still, the Tesla number stands out, far ahead of Honda's 90, which itself is far ahead of the other manufacturers listed.

SAE, ADAS, and ADS refer to the system of levels devised by the Society of Automotive Engineers (now SAE International) in 2016. Level 0 is no automation at all; Level 1 is today's modest semi-automated assistance such as cruise control, lane-keeping, and automatic emergency braking. Level 2, "partial automation", is where we are now: semi-automated steering and speed systems, road edge detection, and emergency braking.

Tesla's Autopilot is SAE Level 2. Level 3 - which may someday include Tesla's Full Self Drive Capability - is where drivers may legitimately begin to focus on things other than the road. In Level 4, most primary driving functions will be automated, and the driver will be off-duty most of the time. Level 5 will be full automation, and the car will likely not even have human-manipulable controls.

Right now, in 2022, we don't even have Level 3, though Tesla CEO Elon Musk keeps promising we're on the verge of it with his company's Full Self-Drive Capability; its arrival always seems to be one to two years away. As long ago as 2015, Musk was promising Teslas would be able to drive themselves while you slept "within three years"; in 2020 he estimated "next year" - and he said it again a month ago. In reality, it's long been clear that cars autonomous enough for humans to check out while on the road are further away than they seemed five years ago, as British transport commentator Christian Wolmar accurately predicted in 2018.

Many warned that Levels 2 and 3 would be dangerous. The main issue, pointed out by psychologists and behavioral scientists, is that humans get bored watching a computer do stuff. In an emergency, where the car needs the human to take over quickly, said human, whose attention has been elsewhere, will not be ready. In this context it's hard to know how to interpret the weird detail in the NHTSA report that in 16 cases Autopilot disengaged less than a second before the crash.

The NHTSA news comes just a few weeks after a New York Times TV documentary investigation examining a series of Tesla crashes. Some crashes it links to the difficulty of designing software that can distinguish objects across the road - that is, the difference between a truck crossing the road and a bridge. In others, such as the 2018 crash in Mountain View, California, the NTSB found a number of contributing factors, including driver distraction and overconfidence in the technology - "automation complacency", as Robert L. Sumwalt calls it politely.

This should be no surprise. In his 2019 book, Ludicrous, auto industry analyst Edward Niedermeyer mercilessly lays out the gap between the rigorous discipline embraced by the motor industry so it can turn out millions of cars at relatively low margins with very few defects and the manufacturing conditions Niedermeyer observes at Tesla. The high-end, high-performance niche sports cars Tesla began with were, in Niedermeyer's view, perfectly suited to the company's disdain for established industry practice - but not to meeting the demands of a mass market, where affordability and reliability are crucial. In line with Niedermeyer's observations, Bloomberg Intelligence predicts that Volkswagen will take over the lead in electric vehicles by 2024. Niedermeyer argues that because it's not suited to the discipline required to serve the mass market, Tesla's survival as a company depends on these repeated promises of full autonomy. Musk himself even said recently that the company is "worth basically zero" if it can't solve self-driving.

So: financial self-interest meets the danger zone of Level 2 technology combined with perceptions of Level 4. I can't imagine anything more dangerous.

Illustrations: One of the Tesla crashes investigated in New York Times Presents.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 8, 2022

The price of "free"

"This isn't over," we predicted in April 2021 when Amazon warehouse workers in Bessemer, Alabama voted against unionizing. And so it has proved: on April 1 workers at its Staten Island warehouse voted to join the Amazon Labor Union.

There will be more of this, and there needs to be. As much as people complain - often justifiably - about unions, no one individual can defend themselves and their rights in the face of the power of a giant company. Worse, as the largest companies continue to get bigger and the number of available employers shrinks, that power imbalance is still growing. Antitrust law can only help reopen the market to competition with smaller and newer businesses; organized labor and labor law are required to ensure fair treatment for workers (see also Amazon's warehouse injury rate, which is about double the industry average). Even the top class of Silicon Valley engineers have lost out; in 2015 Apple, Google, Adobe, and Intel paid $415 million to settle claims that they had operated a "no-poaching" cartel; Lucasfilm, Pixar, and Intuit settled earlier for a joint $20 million.

One lesson to take from this is that instead of treating multi-billionaires as symbols of success we should take the emergence of that level of wealth disparity as a bad sign.

In 1914, Henry Ford famously doubled wages for the factory workers building his cars. At Michigan Radio, Sarah Cwiek explains that it was a gamble intended to produce a better, more stable workforce. Cwiek cites University of California-Berkeley labor economist Harley Shaiken to knock on the head the notion that it was solely in order to expand the range of people who could afford to buy the cars - but that also was one of the benefits to his business.

The purveyors of "pay-with-data-and-watching-ads" services can't look forward to that sort of benefit. For one thing, as multi-sided markets their primary customers aren't us but advertisers; they don't sell directly to the masses. For another, a company like Google or Facebook doesn't benefit directly from the increasing wealth of its users; it can collect their data either way. Even companies like Amazon and Uber, which actually sell people things or services, see faster returns from squeezing both their customers and their third-party suppliers - which they can do because of their dominant positions.

On Twitter, Cory Doctorow has a long thread arguing that antitrust law also has a role to play in securing workers' rights against the hundreds of millions of dollars companies like Uber and DoorDash are pouring into lobbying for legislation that keeps their gig workers classed as "independent contractors" instead of employees with rights such as paid sick leave, health insurance, and workmen's compensation.

Doctorow's thread is based on analyzing two articles: a legal analysis by Marshall Steinbaum laying out the antitrust case against the gig economy platforms, which fail to deliver their promises of independence and control to workers. Steinbaum highlights the value of antitrust law to the self-employed, who rely on being able to work for many outlets. In what the law calls "vertical restraint", the platforms dictate prices to customers and require exclusivity - both the opposite of the benefits self-employment is supposed to deliver. Any freelance in any business knows that too-great dependence on one or two employers is dangerous; a single shift in personnel or company policy can threaten your ability to make rent. It is the joint operation of antitrust law and labor regulation that is necessary, Steinbaum writes: "...taking away their ability to exercise control in the absence of an employment relationship is a necessary condition for the success of any effort to curtail the gig economy and the threat it poses to worker power and to workers' welfare."

Doctorow goes on to add that using antitrust law in this way would open the way to requiring interoperability among platform apps, so that a driver could assess which platform would pay them the best and direct customers to that one. It's an idea with potential - but unfortunately it reminds me of Mark Huntley-James' story "Togetherness", which formed part of Tales of the Cybersalon - A New High Street. In it, a hapless customer trying to get a parcel delivery is shunted from app to app as the pickup shop keeps shifting to get a better deal. (The story, along with the rest of the Tales of the Cybersalon, will be published later this year.) I'm not sure that the urgent-lift-seeking customer experience will be enhanced by, "Sorry, luv, I can't take you unless you sign up for NewApp." However, Doctorow's main point stands.

All of this is yet another way that the big technology companies benefit from negative externalities - that is, the costs they impose on society at large. The content moderators who work for Facebook, Uber's and Lyft's drivers, the behind-the-scenes ghost-worker intermediaries that pass for "AI", Amazon's time-crunched warehouse workers...together add up to a large economy of underpaid, stressed workers deliberately kept outside of standard employment contracts and workers' rights. Such a situation cannot be sustainable for a society.


Illustrations: Amazon warehouse workers protesting in Minnesota in 2018 (by Czar at Wikimedia, cc-by-2.0.)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 18, 2022

There may be trouble ahead...

One of the first things the magician and paranormal investigator James Randi taught all of us in the skeptical movement was the importance of consulting the right kind of expert.

Randi made this point with respect to tests of paranormal phenomena such as telekinesis and ESP. At the time - the 1970s and 1980s - there was a vogue for sending psychic claimants to physicists for testing. A fair amount of embarrassment ensued. As Randi liked to say, physicists, like many other scientists, are not experienced in the art of deception. Instead, they are trained to assume that things in their lab do not lie to them.

Not a safe assumption when they're trying to figure out how a former magician has moved an empty plastic film can a few millimeters, apparently with just the power of their mind. Put in a magician who knows how to set up the experiment so the claimant can't cheat, and *then* if the effect still occurs you know something genuinely weird is going on.

I was reminded of this when reading this quote from Fabio Urbina, Filippa Lentzos, Cédric Invernizzi, and Sean Ekins, writing in Nature: "When we think of drug discovery, we normally do not consider technology misuse potential. We are not trained to consider it, and it is not even required for machine learning research."

The article itself is scary enough for one friend to react to it with, "This is the apocalypse". The researchers undertook a "thought experiment" after the Swiss Federal Institute for NBC Protection (Spiez Laboratory) asked their company, Collaborations Pharmaceuticals Inc, to give a presentation at its biennial conference on new technologies and their implications for the Chemical and Biological Weapons conventions, on how their AI technology could be misused in drug discovery. They work, they write, in an entirely virtual world; their molecules exist only in their computer. It had never previously occurred to them to wonder if the machine learning models they were building to help design new molecules that could be developed into new, life-saving drugs could be turned to generating toxins instead. Asked to consider it, they quickly discovered that it was disturbingly easy to generate prospective lethal neurotoxins. Because: generating potentially helpful molecules required creating models to *avoid* toxicity - which meant being able to predict its appearance.

As they go on to say, our general discussions of the potential harms AI can enable are really very limited. The biggest headlines go to putting people out of work; the rest is privacy, discrimination, fairness, and so on. Partly, that's because those are the ways AI has generally been most visible: automation that deskills or displaces humans, or algorithms that make decisions about government benefits, employment, education, content recommendations, or criminal justice outcomes. But also it's because the researchers working on this technology blinker their imagination to how they want their new idea to work.

The demands of marketing don't help. Anyone pursuing any form of research, whether funded by industry or government grant, has to make the case for why they should be given the money. So of course in describing their work they focus on the benefits. Those working on self-driving cars are all about how they'll be safer than human drivers, not scary possibilities like widespread hundred-car pileups if hackers were to find a way to exploit unexpected software bugs to make them all go haywire at the same time.

Sadly, many technology journalists pick up only the happy side. On Wednesday, as one tiny example, the Washington Post published a cheery article about ElliQ, an Alexa-like AI device "designed for empathy" meant to keep lonely older people company. The commenters saw more of the dark side than the writer did: the ongoing $30 subscription, data collection and potential privacy invasion, and, especially, the potential for emotional manipulation as the robot tells its renter what it (not she, as per writer Steven Zeitchik) calculates they want to hear.

It's not like this is the first such discovery. Malicious Generative Adversarial Networks (GANs) are the basis of DeepFakes. If you can use some new technology for good, why *wouldn't* you be able to use it for evil? Cars drive sick kids to hospitals and help thieves escape. Computer programmers write word processors and viruses, the Internet connects us directly to medical experts and sends us misinformation, cryptography protects both good and bad secrets, robots help us and collect our data. Why should AI be different?

I'd like to think that this paper will succeed where decades of prior experience have failed, and make future researchers think more imaginatively about how their work can be abused. Sadly, it seems a forlorn hope.

In Gemma Milne's 2020 book examining how hype interferes with our ability to make good decisions about new technology, Smoke and Mirrors, she warns that hype keeps us from asking the crucial question: Is this new technology worth its cost? Potential abuse is part of that cost-benefit assessment. We need researchers to think about what can go wrong a lot earlier in the development cycle - and we need them to add experts in the art of forecasting trouble (science fiction writers, perhaps?) to their teams. Even technology that looks like magic...isn't.

Illustrations: ElliQ (company PR photo).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 18, 2022

The search for intelligent life

The mythology goes like this. In the beginning, the Internet was decentralized. Then came money and Web 2.0, and they warped the web's best dreams into corporate giants. Now, web3 is going to restore the status quo ante?

Initial reaction: why will it be different this time?

Maybe it won't. Does that mean people shouldn't try? Ah. No. No, it does not.

One reason it's so difficult to write about web3 is that under scrutiny it dissolves into a jumble of decentralized web, cryptocurrencies, blockchain, and NFTs, though the Economist has an excellent explanatory podcast. Decentralizing the web I get: ever since Edward Snowden decentralization has been seen as a way to raise the costs of passive surveillance. The question has been: how? Blockchain and bitcoin sound nothing like the web - or a useful answer.

But even if you drop all the crypto stuff and just say "decentralized web to counter surveillance and censorship", it conveys little to the man on the Clapham omnibus. Try to explain, and you rapidly end up in a soup of acronyms that are meaningful only to technologists. In November, on first encountering web3, I suggested there are five hard problems. The first of those, ease of use, is crucial. Most people will always flock to whatever requires least effort; the kind of people who want to build a decentralized Internet are emphatically unusual. The biggest missed financial opportunity of my lifetime will likely have been ignoring the advice to buy some bitcoin in 2009 because it was just too much trouble. Most of today's big Internet companies got that way because whatever they were offering was better - more convenient, saved time, provided better results.

This week, David Rosenthal, developer of core Nvidia technologies, published a widely-discussed dissection of cryptocurrencies and blockchain, which Cory Doctorow followed quickly with a recap/critique. Tl;dr: web3 is already centralized, and blockchain and cryptocurrencies only pay off if their owners can ignore the external costs they impose on the rest of the world. Rosenthal argues that ignoring externalities is inherent in the Silicon Valley-type libertarianism from which they sprang.

Rosenthal also makes an appearance in the Economist podcast to explain that if you ask most people what the problems are with the current state of the Web, they don't talk centralization. They talk about overwhelming amounts of advertising, harassment, scams, ransomware, and expensive bandwidth. In his view, changing the technical infrastructure won't change the underlying economics - scale and network effects - that drive centralization, which, as all of these commentators note, has been the eventual result of every Internet phase since the beginning.

It's especially easy to be suspicious about this because of the venture capital money flooding in seeking returns.

"Get ready for the crash," Tim O'Reilly told CBS News. In a blog posting last December, he suggestshow to find the good stuff in web3: look for the parts that aren't about cashing out and getting rich fast but *are* about solving hard problems that matter in the real world.

This is all helpful in understanding the broader picture, but doesn't answer the question of whether there's presently meat inside web3. Once bitten, twice shy, three times don't be ridiculous.

What gave me pause was discovering that Danny O'Brien has gone to work for the Filecoin Foundation and the Filecoin Foundation for the Distributed Web - aka, "doing something in web3". O'Brien has a 30-year history of finding the interesting places to be. In the UK, he was one-half of the 1990s must-read newsletter NTK, whose slogan was "They stole our revolution. Now we're stealing it back." Filecoin - a project to develop blockchain-based distributed storage, which he describes as "the next generation of something like Bittorrent" - appears to be the next stage of that project. The mention of Bittorrent reminded me how technologically dull the last few years have been.

O'Brien's explanation of Filecoin and distributed storage repeatedly evoked prior underused art that only old-timers remember. For example, in 1997 Cambridge security engineer Ross Anderson proposed the Eternity Service, an idea for distributing copies of data around the world so its removal from the Internet would be extremely difficult. There was Ian Clarke's 1999 effort to build such a thing, Freenet, a peer-to-peer platform for distributing data that briefly caused a major moral panic in the UK. Freenet failed to gain much adoption - although it's still alive today - because no one wanted to risk hosting unknown caches of data. Filecoin intends to add financial incentives: think a distributed cloud service.

O'Brien's mention of the need to ensure that content remains addressable evokes Ted Nelson's Project Xanadu, a pre-web set of ideas about sharing information. Finally, zero-knowledge proofs make it possible to show a proof that you have run a particular program and gotten back a specific result without revealing the input. The mathematics involved is arcane, but the consequence is far-reaching: you can prove results *and* protect privacy.

If this marriage of old and new research is "web3", suddenly it sounds much more like something that matters. And it's being built, at least partly, by people who remember the lessons of the past well enough not to repeat them. So: cautious signs that some part of "web3" will do something.


Illustrations: Diagram of centralized vs decentralized (IPFS) systems (from zK Capital at Medium).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 12, 2021

Third wave

It seems like only yesterday that we were hearing that Web 2.0 was the new operating system of the Internet. Pause to look up. It was 2008, in the short window between the founding of today's social media giants (2004-2006) and their smartphone-accelerated explosion (2010).

This week a random tweet led me to discover Web3. As Aaron Mak explains at Slate, "Web3" is an idea for running a next-generation Internet on public blockchains in the interests of decentralization (which net.wars has long advocated). To date, the aspect getting the most attention is decentralized finance (DeFi, or, per Mary Branscombe, deforestation finance), a plan for bypassing banks and governments by conducting financial transactions on the blockchain.

At freeCodeCamp, Nader Dabit goes into more of the technical underpinnings. At Fabric Ventures (Medium), Max Mersch and Richard Muirhead explain its importance. Web3 will bring a "borderless and frictionless" native payment layer (upending mediator businesses like Paypal and Square), bring the "token economy" to support new businesses (upending venture capitalists), and tie individual identity to wallets (bypassing authentication services like OAuth, email plus password, and technology giant logins), thereby enabling multiple identities, among other things. Also interesting is the Cloudflare blog, where Thibault Meunier states that as a peer-to-peer system Web3 will use cryptographic identifiers and allow users to selectively share their personal data at their discretion. Some of this - chiefly the robustness of avoiding central points of failure - is a return to the Internet's original design goals.

Standards-setter W3C is working on at least one aspect - cryptographically verifiable Decentralized Identifiers - and it's running into opposition from Google, Apple, and Mozilla, whose browsers control 87% of the market.

Let's review a little history.

The 20th century Internet was sorta, kinda decentralized, but not as much as people like to think. The technical and practical difficulties of running your own server at home fueled the growth of portals and web farms to do the heavy lifting. Web design went from hand-rolled plain text to hosted platforms - see, for example, LiveJournal and Blogspot (now owned by Google). You can argue about how exactly it was that a lot of blogs died off circa 2010, but I'd blame Twitter: writers found it easier to craft a sentence or two and skip writing the hundreds of words that make up a blog post. Tim O'Reilly and Clay Shirky described the new era as interactive, and moving control "up the stack" from web browsers and servers to the services they enabled. Data, O'Reilly predicted, was the key enabler, and the "long tail" of niche sites and markets would be the winner. He was right about data, and largely wrong about the long tail. He was also right about this: "Network effects from user contributions are the key to market dominance in the Web 2.0 era." Nearly 15 years later, today's web feels like a landscape of walled cities encroaching on all the public pathways leading between them.

Point Network (Medium) has a slightly different version of this history; they call Web 1.0 the "read-only web", Web 2.0 the "server/cloud-based social Web", and Web3 the "decentralized web".

The pattern here is that every phase began with a "Cambrian" explosion of small sites and businesses and ended with a consolidated and centralized ecosystem of large businesses that have eaten or killed everyone else. The largest may now be so big that they can overwhelm further development to ensure their future dominance; at least, that's one way of looking at Mark Zuckerberg's metaverse plan.

So the most logical outcome from Web3 is not the pendulum swing back to decentralization that we may hope, but a new iteration of the existing pattern, which is at least partly the result of network effects. The developing plans will have lots of enemies, not least governments, who are alert to anything that enables mass tax evasion. But the bigger issue is the difficulty of becoming a creator. TikTok is kicking ass, according to Chris Stokel-Walker, because it makes it extremely easy for users to edit and enhance their videos.

I spy five hard problems. One: simplicity and ease of use. If it's too hard, inconvenient, or expensive for people to participate as equals, they will turn to centralized mediators. Two: interoperability and interconnection. Right now, anyone wishing to escape the centralization of social media can set up a Discord or Mastodon server, yet these remain decidedly minority pastimes because you can't message from them to your friends on services like Facebook, WhatsApp, Snapchat, or TikTok. A decentralized web in which it's hard to reach your friends is dead on arrival. Three: financial incentives. It doesn't matter if it's venture capitalists or hundreds of thousands of investors each putting up $10, they want returns. As a rule of thumb, decentralized ecosystems benefit all of society; centralized ones benefit oligarchs - so investment flows to centralized systems. Four: sustainability. Five: how do we escape the power law of network effects?

Gloomy prognostications aside, I hope Web3 changes everything, because in terms of its design goals, Web 2.0 has been a bust.


Illustrations: Tag cloud from 2007 of Web 2.0 themes (Markus Angermeier and Luca Cremonini, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 29, 2021

Majority report

How do democracy and algorithmic governance live together? This was the central question of a workshop this week on computational governance. This is only partly about the Internet; many new tools for governance are appearing all the time: smart contracts, for example, and AI-powered predictive systems. Many of these are being built with little idea of how they can go wrong.

The workshop asked three questions:

- What can technologists learn from other systems of governance?
- What advances in computer science would be required for computational systems to be useful in important affairs like human governance?
- Conversely, are there technologies that policy makers can use to improve existing systems?

Implied is this: who gets to decide? On the early Internet, for example, decisions were reached by consensus among engineers who all knew each other, funded by hopeful governments. Mass adoption, not legal mandate, helped the Internet's TCP/IP protocols dominate over many other 1990s networking systems: it was free, it worked well enough, and it was *there*. The same factors applied to other familiar protocols and applications: the web, email, communications between routers and other pieces of infrastructure. Proposals circulated as Requests for Comments, and those that found the greatest acceptance were adopted. In those early days, as I was told in a nostalgic moment at a conference in 1998, anyone pushing a proposal because it was good for their company would have been booed off the stage. It couldn't last; incoming new stakeholders demanded a voice.

If you're designing an automated governance system, the fundamental question is this: how do you deal with dissenting minorities? In some contexts - most obviously the US Supreme Court - dissenting views stay on the record alongside the majority opinion. In the long run of legal reasoning, it's important to know how judgments were reached and what issues were considered. You must show your work. In other contexts where only the consensus is recorded, minority dissent is disappeared - AI systems, for example, where the labelling that's adopted is the result of human votes we never see.

In one intriguing example, a panel of judges may rule a defendant is guilty or not guilty depending on whether you add up votes by premise - the defendant must have both committed the crime and possessed criminal intent - or by conclusion, in which each judge casts a final vote and only these are counted. In a small-scale human system the discrepancy is obvious. In a large-scale automated system, which type of aggregation do you choose, and what are the consequences, and for whom?
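
To see how the two aggregation rules can disagree, here is a minimal sketch; the three judges and their votes are invented for illustration. A verdict of guilty requires both premises, and the two counting methods return opposite results:

    # Minimal sketch of premise-based vs. conclusion-based aggregation.
    # The three judges and their votes are invented for illustration.
    judges = {
        "A": {"act": True,  "intent": True},
        "B": {"act": True,  "intent": False},
        "C": {"act": False, "intent": True},
    }

    def majority(votes):
        # True if more than half the votes are True
        return sum(votes) > len(votes) / 2

    # By premise: take the majority on each premise, then combine.
    guilty_by_premise = (majority([j["act"] for j in judges.values()])
                         and majority([j["intent"] for j in judges.values()]))

    # By conclusion: each judge reaches a verdict first, then take the majority.
    guilty_by_conclusion = majority([j["act"] and j["intent"] for j in judges.values()])

    print("By premise:   ", "guilty" if guilty_by_premise else "not guilty")     # guilty
    print("By conclusion:", "guilty" if guilty_by_conclusion else "not guilty")  # not guilty

Both rules are defensible on their own terms; the point is that the choice of aggregation rule - invisible to anyone outside the system - decides the outcome.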

Decentralization poses a similarly knotty conundrum. We talk about the Internet's decentralized origins, but its design fundamentally does not prevent consolidation. Centralized layers such as the domain name system and anti-spam blocking lists are single points of control and potential failure. If decentralization is your goal, the Internet's design has proven to be fundamentally flawed. Lots of us have argued that we should redecentralize the Internet, but if you adopt a truly decentralized system, where do you seek redress? In a financial system running on blockchains and smart contracts, this is a crucial point.

Yet this fundamental flaw in the Internet's design means that over time we have increasingly become second-class citizens on the Internet, all without ever agreeing to any of it. Some US newspapers are still, three and a half years on, ghosting Europeans for fear of GDPR; videos posted to web forums may be geoblocked from playing in other regions. Deeper down the stack, design decisions have enabled surveillance and control by exposing routing metadata - who connects to whom. Efforts to superimpose security have led to a dysfunctional system of digital certificates that average users either don't know is there or don't know how to use to protect themselves. Efforts to cut down on attacks and network abuse have spawned a handful of gatekeepers like Google, Akamai, Cloudflare, and SORBS that get to decide what traffic gets to go where. Few realize how much Internet citizenship we've lost over the last 25 years; in many of our heads, the old cooperative Internet is just a few steps back. As if.

As Jon Crowcroft and I concluded in our paper on leaky networks for this year's Gikii, "leaky" designs can be useful to speed development early on even though they pose problems later, when issues like security become important. The Internet was built by people who trusted each other and did not sufficiently imagine it being used by people who didn't, shouldn't, and couldn't. You could say it this way: in the technology world, everything starts as an experiment, and by the time there are problems it's lawless.

So this is the main point of the workshop: how do you structure automated governance to protect the rights of minorities? Opting to slow decision making to consider the minority report impedes action in emergencies. If you limit Internet metadata exposure, security people lose some ability to debug problems and trace attacks.

We considered possible role models: British corporate governance; smart contracts; and, presented by Miranda Mowbray, the wacky system by which Venice elected a new Doge. It could not work today: it's crazily complex, and impossible to scale. But you could certainly code it.


Illustrations: Monument to the Doge Giovanni Pesaro (via Didier Descouens at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 1, 2021

Plausible diversions

If you want to shape a technology, the time to start is before it becomes fixed in the mindset of "'twas ever thus". This was the idea behind the creation of We Robot. At this year's event (see below for links to previous years), one clear example of this principle came from Thomas Krendl Gilbert and Roel I. J. Dobbe, whose study of autonomous vehicles pointed out the way we've privileged cars by coining "jaywalkification". On the blank page in the lawbook, we chose to make it illegal for pedestrians to get in cars' way.

We Robot's ten years began with enthusiasm, segued through several depressed years of machine learning and AI, and this year has seemingly arrived at a twist on Arthur C. Clarke's famous dictum. To wit: maybe any technology sufficiently advanced to seem like magic can be well enough understood that we can assign responsibility and liability. You could say it's been ten years of progressively removing robots' glamor.

Something like this was at the heart of the paper by Andrew Selbst, Suresh Venkatasubramanian, and I. Elizabeth Kumar, which uses the computer science staple of abstraction as a model for assigning responsibility for the behavior of complex systems. Weed out debates over the innards - is the system's algorithm unfair, or was the training data biased? - and aim at the main point: this employer chose this system that produced these results. No one needs to be inside its "black box" if you can understand its boundaries. In one analogy, it's not the manufacturer's fault if a coffee maker fails to produce drinkable coffee from poisoned water and ground acorns; it *is* their fault if the machine turns potable water and ground coffee into toxic sludge. Find the decision points, and ask: how were those decisions made?

Gilbert and Dobbe used two other novel coinages: "moral crumple zoning" (from Madeleine Claire Elish's paper at We Robot 2016) and "rubblization", for altering the world to assist machines. Exhibit A, which exemplifies all three, is the 2018 incident in which an Uber car on autopilot killed a pedestrian in Tempe, Arizona. She was jaywalking; she and the inattentive safety driver were moral crumple zoned; and the rubblized environment prioritized cars.

Part of Gilbert's and Dobbe's complaint was that much discussion of autonomous vehicles focused on the trolley problem, which has little relevance to how either humans or AIs drive cars. It's more useful instead to focus on how autonomous vehicles reshape public space as they begin to proliferate.

This reshaping issue also arose in two other papers, one on smart farming in East Africa by Laura Foster, Katie Szilagyi, Angeline Wairegi, Chidi Oguamanam, and Jeremy de Beer, and one by Annie Brett on the rapid, yet largely overlooked expansion of autonomous vehicles in ocean shipping, exploration, and data collection. In the first case, part of the concern is the extension of colonization by framing precision agriculture and smart farming as more valuable than the local knowledge held by small farmers, the majority of whom are black women, and viewing that knowledge as freely available for appropriation. As in the Western world, where manufacturers like John Deere and Monsanto claim intellectual property rights in seeds and knowledge that formerly belonged to farmers, the arrival of AI alienates local knowledge by stowing it in algorithms, software, sensors, and equipment and makes the plants on which our continued survival depends into inert raw material. Brett, in her paper, highlights the growing gaps in international regulation as the Internet of Things goes maritime and changes what's possible.

A slightly different conflict - between privacy and the need to not be "mis-seen" - lies at the heart of Alice Xiang's discussion of computer vision. Elsewhere, Agathe Balayn and Seda Gürses make a related point in a new EDRi report that warns against relying on technical debiasing tweaks to datasets and algorithms at the expense of seeing the larger social and economic costs of these systems.

In a final example, Marc Canellas studied whole cybernetic systems and found they create gaps where it's impossible for any plaintiff to prove liability, in part because of the complexity and interdependence inherent in these systems. Canellas proposes that the way forward is to redefine intentional discrimination and apply strict liability. You do not, Cynthia Khoo observed in discussing the paper, have to understand the inner workings of complex technology in order to understand that the system is reproducing the same problems and the same long history if you focus on the outcomes, and not the process - especially if you know the process is rigged to begin with. The wide spread of "move fast and break things", Canellas noted, mostly encumbers people who are already vulnerable.

I like this overall approach of stripping away the shiny distraction of new technology and focusing on its results. If, as a friend says, Facebook accurately described setting up an account as "adding a line to our database" instead of "connecting with your friends", who would sign up? Similarly, don't let Amazon get cute about its new "Astro" comprehensive in-home data collector.

Many look at Astro and see instead the science fiction robot butler of decades hence. As Frank Pasquale noted, we tend to overemphasize the far future at the expense of today's decisions. In the same vein, Deborah Raji called robot rights a way of absolving people of their responsibility. Today's greater threat is that gig employers are undermining workers' rights, not whether robots will become sentient overlords. Today's problem is not that one day autonomous vehicles may be everywhere, but that the infrastructure needed to make partly-autonomous vehicles safe will roll over us. Or, as Gilbert put it: don't ask how you want cars to drive; ask how you want cities to work.


Previous years: 2013; 2015; 2016 workshop; 2017; 2018 workshop and conference; 2019 workshop and conference; 2020.

Illustrations: Amazon photo of Astro.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 16, 2021

When software eats the world

One part of our brains knows that software can be fragile. Another part of our brains, when faced with the choice of trusting the human or trusting the machine...trusts the machine. It may have been easier to pry trust away from the machine twenty years ago, when systems crashed more often, sometimes ruining months of work, and the mantra, "Have you tried turning it off and back on again?" didn't yet work as a reliable way of restoring function. Perhaps more important, we didn't *have* to trust software because we had canonical hard copies. Then, as predicted, the copies became "backups". Now, often, they don't exist at all, with the result that much of what we think we know is becoming less well-attested. How many of us even print out our bank statements any more? Three recent stories highlight this.

First is the biggest UK computer-related scandal for many years, the outrageous Post Office prosecution of hundreds of subpostmasters for theft and accounting fraud, all while insisting that their protests of innocence must be lies because its software, sourced from Fujitsu, could not possibly be wrong. Eventually, the Court of Appeal quashed 39 convictions and excoriated both the Post Office and Fujitsu for denying the existence of two known bugs that led to accounting discrepancies. They should never have been able to get away with their claim of infallibility - first, because generations of software engineers could have told the court that all software has bugs, and second, because Ross Anderson's work had already proved that software vulnerabilities were the cause of phantom ATM withdrawals, despite the UK banking industry's insistence that its software, too, was infallible.

At Lawfare, Susan Landau, discussing work she did in collaboration with Steve Bellovin, Matt Blaze, and Brian Owsley, uses the Post Office fiasco as a jumping-off point to discuss the increasing problem of bugs in software used to produce evidence presented in court. Much of what we think of as "truth" - Breathalyzer readings, forensic tools, Hawkeye line calls in tennis matches - is not direct measurement but software-derived interpretation of measurements. Hawkeye at least publishes its margin for error even though tennis has decided to pretend it doesn't exist. Manufacturers of evidence-producing software, however, claim commercial protection, leaving defendants unable to challenge the claims being made about them. Landau and her co-authors conclude that courts must recognize that they can't assume the reliability of evidence produced by software and that defendants must be able to conduct "adversarial audits".

Second story. At The Atlantic, Jonathan Zittrain complains that the Internet is "rotting". Link rot - broken links when pages get deleted or reorganized - and content drift, which sees the contents of a linked page change over time, are familiar problems for anyone who posts anything online. Gabriel Weinberg, the founder of search engine DuckDuckGo, has talked about API rot, which breaks dependent functionality. Zittrain's particular concern is legal judgments, which increasingly may incorporate disappeared or changed online references like TikTok videos and ebooks. Ebooks in particular can be altered on the fly, leaving no trace of that thing you distinctly remember seeing.

Zittrain's response has been to help create sites to track these alterations and provide permanent links. It probably doesn't matter much that the net.wars archive has (probably) thousands of broken links. As long as the Internet Archive's Wayback Machine continues to exist as a source for vaped web pages, most of the ends of those links can be recovered. The Archive is inevitably incomplete, and only covers the open web. But it *does* matter if the basis for a nation's legal reasoning and precedents - what Zittrain calls "long-term writing" - can't be established with any certainty. Hence the enormous effort put in by the UK's National Archives to convert millions of pages of EU legislation so all could understand the legitimacy of post-Brexit UK law.
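
That recovery route can even be automated: the Internet Archive offers a public "availability" endpoint that returns the closest archived snapshot for a given URL. A minimal sketch, assuming that endpoint and the third-party requests library; the URL queried is just a placeholder:

    # Minimal sketch: ask the Wayback Machine for the closest snapshot of a dead link.
    # Assumes the Internet Archive's public availability endpoint and the third-party
    # "requests" library; error handling is deliberately thin.
    import requests

    def closest_snapshot(url, timestamp="2010"):
        resp = requests.get(
            "https://archive.org/wayback/available",
            params={"url": url, "timestamp": timestamp},
            timeout=10,
        )
        resp.raise_for_status()
        snap = resp.json().get("archived_snapshots", {}).get("closest")
        # Return the archived URL if a usable snapshot exists, otherwise None.
        return snap["url"] if snap and snap.get("available") else None

    print(closest_snapshot("http://example.com/some-long-gone-page"))

It only works, of course, for pages the Archive's crawlers happened to capture - the incompleteness problem noted above.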

Third story. It turns out the same is true for the brick-by-brick enterprise we call science. In the 2020 study Open is not forever, authors Mikael Laakso, Lisa Matthias, and Najko Jahn find journal rot. Print publications are carefully curated and preserved by librarians and archivists, as well as the (admittedly well-funded) companies that publish them. Open access journals, however, have had a patchy record of success, and the study finds that between 2000 and 2019, 174 open access journals from all major research disciplines and all geographical regions vanished from the web. In science, as in law, it's not enough to retain the end result; you must be able to show your work and replicate your reasoning.

It's more than 20 years since I heard experts begin to fret about the uncertain durability of digital media; the Foundation for Information Policy Research included the need for reliable archives in its 1998 founding statement. The authors of the journal study note that the journals themselves are responsible for maintaining their archives and preserving their portion of the scholarly record; they conclude that solving this problem will require the participation of the entire scholarly community.

What isn't clear, at least to me, is how we assure the durability of the solutions. It seemed a lot easier when it was all on paper in a reassuringly solid building.

Illustrations: The UK National Archives, in Kew (photo by Erian Evans via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 25, 2021

Them

"It." "It." "It."

In the first two minutes of a recent episode of the BBC program Panorama, "Are You Scared Yet, Human?", the word that kept popping out was "it". The program is largely about the AI race between the US and China, an obviously important topic - see Amy Webb's recent book, The Big Nine. But what I wanted to scream at the show's producers was: "AI is not *it*. AI is *they*." The program itself proved this point by seguing from commercial products to public surveillance systems to military dreams of accurate targeting and ensuring an edge over the other country.

The original rantish complaint I thought I was going to write was about gendering AI-powered voice assistants and, especially, robots. Even though Siri has a female voice it's not a "she". Even if Alexa has a male voice it's not a "he". Yes, there's a long tradition of dubbing ships, countries, and even fiddles "she", but that bothers me less than applying the term to a compliant machine. Yolande Strengers and Jenny Kennedy made this point quite well in their book The Smart Wife, in which they trace much of today's thinking about domestic robots to the role model of Rosie, in the 1960s outer space animated TV sitcom The Jetsons. Strengers and Kennedy want to "queer" domestic robots so they no longer perpetuate heteronormative gender stereotypes.

The it-it-it of Panorama raised a new annoyance. Calling AI "it" - especially when the speaker is, as here, Jeff Bezos or Elon Musk - makes it sound like a monolithic force of technology that can't be stopped or altered, rather than what it is: an umbrella term for a bunch of technologies, many of them experimental and unfinished, and all of which are being developed and/or exploited by large companies and military agencies for their own purposes, not ours. "It" hides the unrepresentative workforce defining AI's present manifestation, machine learning. *This* AI is "systems", not a *thing*, and their impact varies depending on the application.

Last week, Pew Research released the results of a survey it conducted in 2020, in which two-thirds of the experts it consulted predicted that ethics would not be embedded in AI by 2030. Many pointed out that societies and contexts differ; that who gets to define "ethics" is crucial; and that there will always be bad actors who ignore whatever values the rest of us agree on. The report quotes me saying it's not AI that needs ethics, it's the *owners*.

I made a stab at trying to categorize the AI systems we encounter every day. The first that spring to mind are scoring applications whose impact on most people's lives appears to be in refusing access to things we need - asylum, probation in the criminal justice system, welfare in the benefits system, credit in the financial system - and assistance systems that answer questions and offer help, such as recommendation algorithms, search engines, voice assistants, and so on. I forgot about systems playing games, and since then a fourth type has accelerated into public use, in the form of identification systems, almost all of them deeply flawed but being deployed anyway: automated facial recognition, emotion recognition, smile detection, and fancy lie detectors.

I also forgot about medical applications, but despite many genuine breakthroughs - such as today's story that machine learning has helped develop a blood test to detect 50 types of early-stage cancer - many highly touted efforts have been failures.

"It"ifying AI makes many machine learning systems sound more successful than they are. Today's facial recognition is biased and inaccurate . Even in the pandemic, Benedict Dellot told a recent Westminster Health Forum seminar on AI in health care, the big wins in the pandemic have come from conventional data analysis underpinned by new data sharing arrangements. As examples, he cited sharing lists of shielding patients with local authorities to ensure they got the support they needed, linking databases to help local authorities identify vulnerable people, and repurposing existing technologies. But shove "AI" in the name and it sounds more exciting; see also "nano" before this and "e-" before that.

Maybe - *maybe* - one day we will say "AI" and mean a conscious, superhuman brain as originally imagined by science fiction writers and Alan Turing. Machine learning is certainly not that, as Kate Crawford writes in her recent Atlas of AI. Instead, we're talking about a bunch of computers calculating statistics from historical data, forever facing backward. And, as authors such as Sarah T. Roberts and Mary L. Gray and Siddharth Suri have documented, very often today's AI is humans all the way down. Direct your attention to the poorly-paid worker behind the curtain.

Crawford's book reminded me of Arthur C. Clarke's famous line, "Any sufficiently advanced technology is indistinguishable from magic." After reading her structural analysis of machine-learning-AI, it morphed into: "Any technology that looks like magic is hiding something." For Crawford, what AI is hiding is its essential nature as an extractive industry. Let's not grant these systems any more power than we have to. Breaking "it" apart into "them" allows us to pick and choose the applications we want.

Illustrations: IBM's Watson winning at Jeopardy; its later adventures in health care were less successful.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 20, 2021

Ontology recapitulates phylogeny

I may be reaching the "get off my lawn!" stage of life, except the things I'm yelling at are not harmless children but new technologies, many of which, as Charlie Stross writes, leak human stupidity into our environment.

Case in point: a conference this week chose for its platform an extraordinarily frustrating graphic "virtual congress center" that was barely more functional than Second Life (b. 2003). The big board displaying the agenda was not interactive; road signs and menu items pointed to venues by name, but didn't show what was going on in them. Yes, there was a reception desk staffed with helpful avatars. I do not want to ask for help, I want simplicity. The conference website advised: "This platform requires the installation of a dedicated software in your computer and a basic training." Training? To watch people speak on my computer screen? Why can't I just "click here to attend this session" and see the real, engaged faces of speakers, instead of motionless cartoon avatars?

This is not a new-technology issue but a usability issue that hasn't changed since Donald Norman's 1988 The Design of Everyday Things sought to do away with user manuals.

I tell myself that this isn't just another clash between generational habits.

Even so, if current technology trends continue I will be increasingly left behind, not just because I don't *want* to join in but because, through incalculable privilege, much of the time I don't *need* to. My house has no smart speakers, I see no reason to turn on open banking, and much of the time I can leave my mobile phone in a coat pocket, ignored.

But Out There in the rest of the world, where I have less choice, I read that Amazon is turning on Sidewalk, a proprietary mesh network that uses Bluetooth and 900MHz radio connections to join together Echo speakers, Ring cameras, and any other compatible device the company decides to produce. The company is turning this thing on by default (free software update!), though if you're lucky enough to read the right press articles you can turn it off. When individuals roam the streets piggybacking on open wifi connections, they're dubbed "hackers". But a company - just ask forgiveness, not permission, yes?

The idea appears to be that the mesh network will improve the overall reliability of each device when its wifi connection is iffy. How it changes the range and detail of the data each device collects is unclear. Connecting these devices into a network is a step change in physical tracking; CNet suggests that a Tile tag attached to a dog, while offering the benefit of an alert if the dog gets loose, could also provide Amazon with detailed tracking of all your dog walks. Amazon says the data is protected with three layers of encryption, but protection from outsiders is not the same as protection from Amazon itself. Even the minimal data Amazon says in its white paper (PDF) it receives - the device serial number and application server ID - reveal the type of device and its location.

We have always talked about smart cities as if they were centrally planned, intended to offer greater efficiency, smoother daily life, and a better environment, and built with some degree of citizen acceptance. But the patient public deliberation that image requires does not fit the "move fast and break things" ethos that continues to poison organizational attitudes. Google failed to gain acceptance for its Toronto plan; Amazon is just doing it. In London in 2019, neither private operators nor police bothered to inform or consult anyone when they decided to trial automated facial recognition.

In the white paper, Amazon suggests benefits such as finding lost pets, diagnostics for power tools, and supporting lighting where wifi is weak. Nice use cases, but note that the benefits accrue to the devices' owner while the costs belong to neighbors who may not have actively consented, but simply not known they had to change the default settings in order to opt out. By design, neither device owners nor server owners can see what they're connected to. I await the news of the first researcher to successfully connect an unauthorized device.

Those external costs are minimal now, but what happens when Amazon is inevitably joined by dozens more similar networks, like the collisions that famously plague the more than 50 companies that dig up London streets? It's disturbingly possible to look ahead and see our public spaces overridden by competing organizations operating primarily in their own interests. In my mind, Amazon's move opens up the image of private companies and government agencies all actively tracking us through the physical world the way they do on the web and fighting over the resulting "insights". Physical tracking is a sizable gap in GDPR.

Again, these are not new-technology issues, but age-old ones of democracy, personal autonomy, and the control of public and private spaces. As Nicholas Couldry and Ulises A. Mejias wrote in their 2020 book The Costs of Connection, this is colonialism in operation. "What if new ways of appropriating human life, and the freedoms on which it depends, are emerging?" they asked. Even if Amazon's design is perfect, Sidewalk is not a comforting sign.


Illustrations: A mock-up from Google's Sidewalk Labs plan for Toronto.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 23, 2021

Fast, free, and frictionless

"I want solutions," Sinan Aral challenged at yesterday's Social Media Summit, "not a restatement of the problems". Don't we all? How many person-millennia have we spent laying out the issues of misinformation, disinformation, harassment, polarization, platform power, monopoly, algorithms, accountability, and transparency? Most of these have been debated for decades. The big additions of the last decade are the privatization of public speech via monopolistic social media platforms, the vastly increased scale, and the transmigration from purely virtual into physical-world crises like the January 6 Capitol Hill invasion and people refusing vaccinations in the middle of a pandemic.

Aral, who leads the MIT Initiative on the Digital Economy and is author of the new book The Hype Machine, chose his panelists well enough that some actually did offer some actionable ideas.

The issues, as Aral said, are all interlinked (see also 20 years of net.wars). Maria Ressa connected the spread of misinformation to system design that enables distribution and amplification at scale. These systems are entirely opaque to us even while we are open books to them, as Guardian journalist Carole Cadwalladr noted, adding that while US press outrage is the only pressure that moves Facebook to respond, it no longer even acknowledges questions from anyone at her newspaper. Cadwalladr also highlighted the Securities and Exchange Commission's complaint that says clearly: Facebook misled journalists and investors. This dismissive attitude also shows in the leaked email in which Facebook plans to "normalize" the leak of 533 million users' data.

This level of arrogance is the result of concentrated power, and countering it will require antitrust action. That in turn leads back to questions of design and free speech: what can we constrain while respecting the First Amendment? Where is the demarcation line between free speech and speech that, like crying "Fire!" in a crowded theater, can reasonably be regulated? "In technology, design precedes everything," Roger McNamee said; real change for platforms at global or national scale means putting policy first. His Exhibit A of the level of cultural change that's needed was February's fad, Clubhouse: "It's a brand-new product that replicates the worst of everything."

In his book, Aral opposes breaking up social media companies as was done in cases such as Standard Oil and AT&T. Zephyr Teachout agreed in seeing breakup, whether horizontal (Facebook divests WhatsApp and Instagram, for example) or vertical (Google forced to sell Maps), as just one tool.

The question, as Joshua Gans said, is, what is the desired outcome? As Federal Trade Commission nominee Lina Khan wrote in 2017, assessing competition by the effect on consumer pricing is not applicable to today's "pay-with-data-but-not-cash" services. Gans favors interoperability, saying it's crucial to restoring consumers' lost choice. Lock-in is your inability to get others to follow when you want to leave a service, a problem interoperability solves. Yes, platforms say interoperability is too difficult and expensive - but so did the railways and telephone companies, once. Break-ups were a better option, Albert Wenger added, when infrastructures varied; today's universal computers and data mean copying is always an option.

Unwinding Facebook's acquisition of WhatsApp and Instagram sounds simple, but do we want three data hogs instead of one, like cutting off one of the Lernaean Hydra's heads? One idea that emerged repeatedly is slowing "fast, free, and frictionless"; Yael Eisenstat wondered why we allow experimental technology at global scale but accept policy only after it has been painfully perfected.

MEP Marietje Schaake (Democrats 66-NL) explained the EU's proposed Digital Markets Act, which aims to improve fairness by setting rules and responsibilities upfront rather than relying on the too-long process of punishing bad behavior after the fact. Current proposals would bar platforms from combining user data from multiple sources without permission; self-preferencing; and spying (say, Amazon exploiting marketplace sellers' data); and would require data portability and interoperability for ancillary services such as third-party payments.

The difficulty with data portability, as Ian Brown said recently, is that even services that let you download your data offer no way to upload and use data from elsewhere. I can't add the downloaded data from my current electric utility account to the one I switch to, or send my Twitter feed to my Facebook account. Teachout finds that interoperability isn't enough because "You still have acquire, copy, kill" and lock-in via existing contracts. Wenger argued that the real goal is not interoperability but programmability, citing open banking as a working example. That is also the open web, where a third party can write an ad blocker for my browser, but Facebook, Google, and Apple built walled gardens. As Jared Sine told this week's antitrust hearing, "They have taken the Internet and moved it into the app stores."

Real change will require pulling together all four of the levers Aral discusses in his book: money, code, norms, and laws - which Lawrence Lessig's 1999 book, Code and Other Laws of Cyberspace, called market, software architecture, norms, and laws. The national commission on democracy and technology Aral is calling for will have to be very broadly constituted in terms of disciplines and national representation. As Safiya Noble said, diversifying the engineers in development teams is important, but not enough: we need "people who know society and the implications of technologies" at the design stage.


Illustrations: Sinan Aral, hosting the summit.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 2, 2021

Medical apartheid

Ever since 1952, when Clarence Willcock took the British government to court to force the end of wartime identity cards, UK governments have repeatedly tried to bring them back, always claiming they would solve the most recent public crisis. The last effort ended in 2010 after a five-year battle. This backdrop is a key factor in the distrust that's greeting government proposals for "vaccination passports" (previously immunity passports). Yesterday, the Guardian reported that British prime minister Boris Johnson backs certificates that show whether you've been vaccinated, have had covid and recovered, or had a test. An interim report will be published on Monday; trials later this month will see attendees at football matches required to produce proof of negative lateral flow tests 24 hours before the game and on entry.

Simultaneously, England chief medical officer Chris Whitty told the Royal Society of Medicine that most experts think covid will become like the flu, a seasonal disease that must be perennially managed.

Whitty's statement is crucial because it means we cannot assume that the forthcoming proposal will be temporary. A deeply flawed measure in a crisis is dangerous; one that persists indefinitely is even more so. Particularly when, as this morning, culture secretary Oliver Dowden tries to apply spin: "This is not about a vaccine passport, this is about looking at ways of proving that you are covid secure." Rebranding as "covid certificates" changes nothing.

Privacy advocates and human rights NGOs saw this coming. In December, Privacy International warned that a data grab in the guise of immunity passports will undermine trust and confidence while they're most needed. "Until everyone has access to an effective vaccine, any system requiring a passport for entry or service will be unfair." We are a long, long way from that universal access and likely to remain so; today's vaccines will have to be updated, perhaps as soon as September. There is substantial, but not enough, parliamentary opposition.

A grassroots Labour discussion Wednesday night showed this will become yet another highly polarized debate. Opponents and proponents combine issues of freedom, safety, medical efficacy, and public health in unpredictable ways. Many wanted safety - "You have no civil liberties if you are dead," one person said; others foresaw segregation, discrimination, and exclusion; still others cited British norms in opposing making compulsory either vaccinations or carrying any sort of "papers" (including phone apps).

Aside from some specific use cases - international travel, a narrow range of jobs - vaccination passports in daily life are a bad idea medically, logistically, economically, ethically, and functionally. Proponents' concerns can be met in better - and fairer - ways.

The Independent SAGE advisory group, especially Susan Michie, has warned repeatedly that vaccination passports are not a good solution for daily life. The added pressure to accept vaccination will increase distrust, she says, particularly among victims of structural racism.

Instead of trying to identify which people are safe, she argues that the government should be guiding employers, businesses, schools, shops, and entertainment venues to make their premises safer - see for example the CDC's advice on ventilation and list of tools. Doing so would not only help prevent the spread of covid and keep *everyone* safe but also help prevent the spread of flu and other pathogens. Vaccination passports won't do any of that. "It again puts the burden on individuals instead of spaces," she said last night in the Labour discussion. More important, high-risk individuals and those who can't be vaccinated will be better protected by safer spaces than by documentation.

In the same discussion, Big Brother Watch's Silkie Carlo predicted that it won't make sense to have vaccination passports and then use them in only a few places. "It will be a huge infrastructure with checkpoints everywhere," she predicted, calling it "one of the civil liberties threats of all time" and "medical apartheid" and imagining two segregated lines of entry to every venue. While her vision is dramatic, parts of it don't go far enough: imagine when this all merges with systems already in place to bar access to "bad people". Carlo may sound unduly paranoid, but it's also true that for decades successive British governments at every decision point have chosen the surveillance path.

We have good reason to be suspicious of this government's motives. Throughout the last year, Johnson has been looking for a magic bullet that will fix everything. First it was contact tracing apps (failed through irrelevance), then test and trace (failing in the absence of "and isolate and support"), now vaccinations. Other than vaccinations, which have gone well because the rollout was given to the NHS, these failed high-tech approaches have handed vast sums of public money to private contractors. If by "vaccination certificates" the government means the cards the NHS gives fully-vaccinated individuals listing the shots they've had, the dates, and the manufacturer and lot number, well, fine. Those are useful for the rare situations where proof is really needed and for our own information in case of future issues; they're simple and not particularly expensive. If the government means a biometric database system that, as Michie says, individualizes the risk while relieving venues of responsibility, just no.

Illustrations: The Swiss Cheese Respiratory Virus Defence, created by virologist Ian McKay.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 26, 2021

Curating the curators

One of the longest-running conflicts on the Internet surrounds whether and what restrictions should be applied to the content people post. These days, those rules are known as "platform governance", and this week saw the first conference by that name. In the background, three of the big four CEOs returned to Congress for more questioning; the EU is planning the Digital Services Act; the US looks serious about antitrust action; debate about revising Section 230 of the Communications Decency Act continues even though few understand what it does; and the UK continues to push "online harms".

The most interesting thing about the Platform Governance conference is how narrow it makes those debates look. The second-most interesting thing: it was not a law conference!

For one thing, which platforms? Twitter may be the most-studied, partly because journalists and academics use it themselves and data is more available; YouTube, Facebook, and subsidiaries WhatsApp and Instagram are the most complained-about. The discussion here included not only those three but less "platformy" things like Reddit, Tumblr, Amazon's livestreaming subsidiary Twitch, games, Roblox, India's ShareChat, labor platforms UpWork and Fiverr, edX, and even VPN apps. It's unlikely that the problems of Facebook, YouTube, and Twitter that governments obsess over are limited to them; they're just the most visible and, especially, the most *here*. Granting differences in local culture, business model, purpose, and platform design, human behavior doesn't vary that much.

For example, Jenny Domino reminded us - again - that the behaviors now sparking debates in the West are not new or unique to this part of the world. What most agree *almost* happened in the US on January 6 *actually* happened in Myanmar with far less scrutiny, despite a 2018 UN fact-finding mission that highlighted Facebook's role in spreading hate. We've heard this sort of story before, regarding Cambridge Analytica. In Myanmar and, as Sandeep Mertia said, India, the Internet of the 1990s never existed. Facebook is the only "Internet". Mertia's "next billion users" won't use email or the web; they'll go straight to WhatsApp or a local or newer equivalent, and stay there.

Mehitabel Glenhaber, whose focus was Twitch, used it to illustrate another way our usual discussions are too limited: "Moderation can escape all up and down the stack," she said. Near the bottom of the "stack" of layers of service, after the January 6 Capitol invasion Amazon denied hosting services to the right-wing chat app Parler; higher up the stack, Apple and Google removed Parler's app from their app stores. On Twitch, Glenhaber found a conflict between the site's moderatorial decision and the handling of that decision by two browser extensions that replace text with graphics, one of which honored the site's ruling and one of which overturned it. I had never thought of ad blockers as content moderators before, but of course they are, and few of us examine them in detail.

Separately, in a recent lecture on the impact of low-cost technical infrastructure, Cambridge security engineer Ross Anderson also brought up the importance of the power to exclude. Most often, he said, social exclusion matters more than technical; taking out a scammer's email address and disrupting all their social network is more effective than taking down their more easily-replaced website. If we look at misinformation as a form of cybersecurity challenge - as we should - that's an important principle.

One recurring frustration is our general lack of access to the insider view of what's actually happening. Alice Marwick is finding from interviews that members of Trust and Safety teams at various companies have a better and broader view of online abuse than even those who experience it. Their data suggests that rather than being gender-specific, harassment affects all groups of people; in niche groups the forms disagreements take can be obscure to outsiders. Most important, each platform's affordances are different; you cannot generalize from a peer-to-peer site like Facebook or Twitter to Twitch or YouTube, where the site's relationships are less equal and more creator-fan.

A final limitation in how we think about platforms and abuse is that the options are so limited: a user is banned or not, content stays up or is taken down. We never think, Sarita Schoenebeck said, about other mechanisms or alternatives to criminal justice such as reparative or restorative justice. "Who has been harmed?" she asked. "What do they need? Whose obligation is it to meet that need?" And, she added later, who is in power in platform governance, and what harms have they overlooked and how?

In considering that sort of issue, Bharath Ganesh found three separate logics in his tour through platform racism and the governance of extremism: platform, social media, and free speech. Mark Zuckerberg offers a prime example of the latter, the Silicon Valley libertarian insistence that the marketplace of ideas will solve any problems and that sees the First Amendment freedom of expression as an absolute right, not one that must be balanced against others - such as "freedom from fear". Watching the end of yesterday's Congressional hearings after the conference closed, you couldn't help thinking about that as Zuckerberg delivered yet another pile of self-serving "Congressman..." answers rather than the simple "yes or no" he was asked for.


Illustrations: Mark Zuckerberg, testifying in Congress on March 25, 2021.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 19, 2021

Dystopian non-fiction

How dumb do you have to be to spend decades watching movies and reading books about science fiction dystopias with perfect surveillance and then go on and build one anyway?

*This* dumb, apparently, because that's what Shalini Kantayya discovers in her documentary Coded Bias, which premiered at the 2020 Sundance Film Festival. I had missed it until European Digital Rights (EDRi) arranged a streaming this week.

The movie deserves the attention paid to The Social Dilemma. Consider the cast Kantayya has assembled: "math babe" Cathy O'Neil, data journalism professor Meredith Broussard, sociologist Zeynep Tufekci, Big Brother Watch executive director Silkie Carlo, human rights lawyer Ravi Naik, Virginia Eubanks, futurist Amy Webb, and "code poet" Joy Buolamwini, who is the film's main protagonist and provides its storyline, such as it is. This film wastes no time on technology industry mea non-culpas, opting instead to hear from people who together have written a year's worth of reading on how modern AI disassembles people into piles of data.

The movie is framed by Buolamwini's journey, which begins in her office at MIT. At nine, she saw a presentation on TV from MIT's Media Lab, and, entranced by Cynthia Breazeal's Kismet robot, she instantly decided: she was going to be a robotics engineer and she was going to MIT.

When she eventually arrived, she says, she imagined that coding was detached from the world - until she started building the Aspire Mirror and had to get a facial detection system working. At that point, she discovered that none of the computer vision tracking worked very well...until she put on a white mask. She started examining the datasets used to train the facial algorithms and found that every system she tried showed the same results: top marks for light-skinned men, inferior results for everyone else, especially the "highly melanated".
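
The core of that method is disaggregated evaluation: score the same system separately for each demographic group instead of quoting a single overall accuracy figure. A minimal sketch of the idea - the records below are invented placeholders, not data or code from Buolamwini's studies:

    # Minimal sketch of disaggregated evaluation: per-group accuracy, not one overall score.
    # The records are invented for illustration only.
    from collections import defaultdict

    results = [
        # (group, predicted_label, true_label)
        ("lighter-skinned man",  "male",   "male"),
        ("lighter-skinned man",  "male",   "male"),
        ("darker-skinned woman", "male",   "female"),
        ("darker-skinned woman", "female", "female"),
    ]

    totals, correct = defaultdict(int), defaultdict(int)
    for group, predicted, actual in results:
        totals[group] += 1
        correct[group] += (predicted == actual)

    for group, n in totals.items():
        print(f"{group}: {correct[group] / n:.0%} accuracy")

A single aggregate figure (here, 75%) would hide exactly the disparity the studies set out to expose.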

Teaming up with Deborah Raji, in 2018 Buolamwini published a study (PDF) of racial and gender bias in Amazon's Rekognition system, then being trialed with law enforcement. The company's response leads to a cameo, in which Buolamwini chats with Timnit Gebru about the methods technology companies use to discredit critics. Poignantly, today's viewers know that Gebru, then still at Google, was only months away from becoming the target of exactly that behavior, fired over her own critical research on the state of AI.

Buolamwini's work leads Kantayya into an exploration of both algorithmic bias generally, and the uncontrolled spread of facial recognition in particular. For the first, Kantayya surveys scoring in recruitment, mortgage lending, and health care, and visits the history of discrimination in South Africa. Useful background is provided by O'Neil, whose Weapons of Math Destruction is a must-read on opaque scoring, and Broussard, whose Artificial Unintelligence deplores the math-based narrow conception of "intelligence" that began at Dartmouth in 1956, an arrogance she discusses with Kantayya on YouTube.

For the second, a US unit visits Brooklyn's Atlantic Plaza Towers complex, where the facial recognition access control system issues warnings for tiny infractions. A London unit films the Oxford Circus pilot of live facial recognition that led Carlo, with Naik's assistance, to issue a legal challenge in 2018. Here again the known future intervenes: after the pandemic stopped such deployments, BBW ended the challenge and shifted to campaigning for a legislative ban.

Inevitably, HAL appears to remind us of what evil computers look like, along with a red "I'm an algorithm" blob with a British female voice that tries to sound chilling.

But HAL's goals were straightforward: it wanted its humans dead. The motives behind today's algorithms are opaque. Amy Webb, whose book The Big Nine profiles the nine companies - six American, three Chinese - who are driving today's AI, highlights the comparison with China, where the government transparently tells citizens that social credit is always watching and bad behavior will attract penalties for your friends and family as well as for you personally. In the US, by contrast, everyone is being scored all the time by both government and corporations, but no one is remotely transparent about it.

For Buolamwini, the movie ends in triumph. She founds the Algorithmic Justice League and testifies in Congress, where she is quizzed by Alexandria Ocasio-Cortez (D-NY) and Jamie Raskin (D-MD), who looks shocked to learn that Facebook has patented a system for recognizing and scoring individuals in retail stores. Then she watches as facial recognition is banned in San Francisco, Somerville, Massachusetts, and Oakland, and the electronic system is removed from the Brooklyn apartment block - for now.

Earlier, however, Eubanks, author of Automating Inequality, issued a warning that seems prescient now, when the coronavirus has exposed all our inequities and social fractures. When people cite William Gibson's "The future is already here - it's just not evenly distributed", she says, they typically mean that new tools spread from rich to poor. "But what I've found is the absolute reverse, which is that the most punitive, most invasive, most surveillance-focused tools that we have, they go into poor and working communities first." Then they get ported out, if they work, to those of us with higher expectations that we have rights. By then, it may be too late to fight back.

See this movie!


Illustrations: Joy Buolamwini, in Coded Bias.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 26, 2021

The convenience

A couple of days ago, MSNBC broadcast a segment featuring a mobile vaccination effort in which a truck, equipped with a couple of medical personnel and a suitably stored supply of vaccines and other medical equipment, was shown driving around to various neighborhoods and parking in front of people's homes, where the personnel would knock on doors. There was a very brief clip of a woman identified as reluctant. "What made you decide to take the vaccine after all?" the interviewer asked (more or less). "The convenience," she said, from behind her mask.

Wow.

It's always been - or should have been - obvious that all vaccine hesitancy is not equal. Some people are just going to be born rebels, refusing to do *anything* an authority tells them to do, no matter how well-attested the instruction is or how much risk accompanies ignoring it. Some have adopted resistance as a performative or tribal identity. Some may be deeply committed through serious, if flawed, assessment of the vaccine itself. Some have serious historical and cultural reasons to be distrustful. Others have medical contraindications. Some may actually even be suicidal. But some - and they may even be the majority - could go either way, depending on circumstances. As a friend commented after I told them the story, imagine a single mother with three kids, one or more jobs, and a long daily to-do list. Vaccination may be far, far down the list in terms of urgency.

Even knowing all this, seeing the woman state it so baldly was breathtaking because we've gotten used to assuming that anyone opposing vaccination does so out of deeply-held and angry commitment. The nudge people would probably be less surprised. For those of us who spend time promoting skepticism, the incident was also a good reminder of the value of engaging with people's real concerns.

It's also a reminder that when people's decisions seem inexplicable, "the convenience" is often an important part of their reasoning. It's certainly part of why a lot of security breaches happen. Most people's job is not in security but in payroll or design or manufacturing, and their need to get their actual jobs done takes precedence. Faced with a dilemma, they will do the quickest and easiest thing, and those who design attacks know and exploit this very human tendency. The smart security person will, as Angela Sasse has been saying for 20 years, design security policies so they're the easiest path to follow.

The friction privacy tools add has been a significant reason they have so often failed to command any real market share: they require exceptional effort, first because of the necessity of locating, installing, and learning to use them, and second because so often they bring with them the price of non-conformance. Ever try getting your friends to shift from WhatsApp to Signal? Until the recent WhatsApp panic, it was impossible because of the difficulty they could foresee of getting all their other contacts - the school and church groups, the tennis club, the neighbors - to move as well. No one wants to have to remember which service to use for each contact.

One or another version of this problem has hindered the adoption of privacy tools for nearly 30 years, beginning in 1991, when Phil Zimmermann invented PGP in an effort to give PC users access to strong encryption. For most people, PGP was - and, sadly, still is - too difficult to install and too much of a nuisance to use. The result was that hardly anyone used encrypted communications until it became invisibly built into messaging services like WhatsApp and Signal.

The move away from universally interoperable email risks becoming a real problem in splintering communications, if my personal experience is any guide. A friend recently demanded to know why I didn't have an iPhone; she was annoyed that she couldn't send me messages on her preferred app. "Because I have an Android," I said. "What's that?" she asked. For her, Android users are incomprehensibly antisocial (and for new-hot-kid-in-town Clubhouse we are not worthy.)

On a wider canvas, that issue of convenience is most of the answer to how we began with a cooperative decentralized Internet and are now contending with an Internet dominated, for most people, by centralized walled gardens. At every stage, from the first web sites, when someone wanting to host a website had to do everything themselves, to today's social media, new companies succeeded by solving the frustrations of the previous generation. People want to chat with their friends, see photos, listen to music, and build businesses; any technical barrier that makes those things harder is an opportunity for someone to insert themselves as an intermediary or, as TikTok is doing now, to innovate. The same network effects that helped Facebook, Apple, and Google grow to their present size make it difficult to counter their dominance by seeding alternatives.

It did not have to come out this way; ISPs (and, later, others) could have chosen to provide tools and services to make it easy for us to own our own communities. For anyone trying to do that now it's a hard, hard sell. Those of us who want to see the Internet redecentralize will have to create the equivalent of a mobile vaccination van.


Illustrations: Houston Vaccines' mobile unit.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 27, 2020

Data protection in review

"A tax on small businesses," a disgusted techie called data protection, circa 1993. The Data Protection Directive became EU law in 1995, and came into force in the UK in 1998.

The narrow data protection story of the last 25 years, like that of copyright, falls into three parts: legislation, government bypasses to facilitate trade, and enforcement. The broader story, however, includes a power struggle between citizens and both public and private sector organizations; a brewing trade war; and the difficulty of balancing conflicting human rights.

Like free software licenses, data protection laws seed themselves across the world by requiring onward compliance: data may only flow to places that protect it to the same standard. This approach therefore set the EU on a collision course with the US, where the data-driven economy was already taking shape.

Ironically, privacy law began in the US, with the Fair Credit Reporting Act (1970), which gives Americans the right to view and correct the credit files that determine their life prospects. It was joined by the Privacy Act (1974), which covers personally identifiable information held by federal agencies, and the Electronic Communications Privacy Act (1986), which restricts government wiretaps on transmitted and stored electronic data. Finally, the 1996 Health Insurance Portability and Accountability Act protects health data (with now-exploding exceptions). In other words, the US's consumer protection-based approach leaves huge unregulated swathes of the economy. The EU's approach, by contrast, grew out of the clear historical harms of the Nazis' use of IBM's tabulating machines and the Stasi's endemic spying on the population, and regulates data use regardless of sector or actor, minus a few exceptions for member state national security and airline passenger data. Little surprise that the results are not compatible.

In 1999, quoted in Scientific American (TXT), Simon Davies saw this as impossible to solve: "They still think that because they're American they can cut a deal, even though they've been told by every privacy commissioner in Europe that Safe Harbor is inadequate...They fail to understand that what has happened in Europe is a legal, constitutional thing, and they can no more cut a deal with the Europeans than the Europeans can cut a deal with your First Amendment." In 2000, he looked wrong: the compromise Safe Harbor agreement enabled EU-US data flows.

In 2008, the EU began discussing an update to encompass the vastly changed data ecosystem brought by Facebook, YouTube, and Twitter, the smartphone explosion, new types of personally identifiable information, and the rise and fall of what Andres Guadamuz last year called "peak cyber-utopianism". By early 2013, it appeared that reforms might weaken the law, not strengthen it. Then came Snowden, whose revelations reanimated privacy protection. In 2016, the upgraded General Data Protection Regulation was passed despite a massive opposing lobbying operation. It came into force in 2018, but even now many US sites still block European visitors rather than adapt, because "you are very important to us".

Everyone might have been able to go on pretending the fundamental incompatibility didn't exist but for two things. The first is the 2014 European Court of Justice decision requiring Google to honor "right to be forgotten" requests (aka Costeja). Americans still see Costeja as a terrible abrogation of free speech; Europeans more often see it as a balance between conflicting rights and a curb on the power of large multinational companies to determine your life.

The second is Austrian lawyer Max Schrems. While still a student, Schrems saw that Snowden's revelations utterly up-ended the Safe Harbor agreement. He filed a legal case - and won it, in 2015, just as GDPR was being finalized. The EU and US promptly negotiated a replacement, Privacy Shield. Schrems challenged again. And won again, this year. "There must be no Schrems III!", EU politicians said in September. In other words: some framework must be found to facilitate transfers that passes muster within the law. The US's approach appears to be trying to get data protection and localization laws barred via trade agreements despite domestic opposition. One of the Trump administration's first acts was to require federal agencies to exempt foreigners from Privacy Act protections.

No country is more affected by this than the UK, which as a new non-member can't trade without an adequacy decision and no longer gets the member-state exception for its surveillance regime. This dangerous high-wire moment for the UK traps it in that EU-US gap.

Last year, I started hearing complaints that "GDPR has failed". The problem, in fact, is enforcement. Schrems took action because the Irish Data Protection Regulator, in pole position because companies like Facebook have sited their European headquarters there, was failing to act. The UK's Information Commissioner's Office was under-resourced from the beginning. This month, the Open Rights Group sued the ICO to force it to act on the systemic breaches of the GDPR it acknowledged in a June 2019 report (PDF) on adtech.

Equally a problem are the emerging limitations of GDPR and consent, which are entirely unsuited for protecting privacy in the onrushing "smart" world in which you are at the mercy of others' Internet of Things. The new masses of data that our cities and infrastructure will generate will need a new approach.


Illustrations: Max Schrems in 2015.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 18, 2020

Systems thinking

There's a TV ad currently running on MSNBC that touts the services of a company that makes custom T-shirts to help campaigns raise funds for causes such as climate change.

Pause. It takes 2,700 liters of water to make a cotton T-shirt - water that, the Virtual Water project would argue, is virtually exported from cotton-growing nations to those earnest climate change activists. Plus other environmental damage relating to cotton; see also the recent paper tracking the pollution impact of denim microfibers. So the person buying the T-shirt may be doing a good thing on the local level by supporting climate change activism while simultaneously exacerbating the climate change they're trying to oppose.

The same sort of issue arose this week at the UK Internet Governance Forum with respect to what the MP and engineer Chi Onwurah (Labour-Newcastle upon Tyne Central) elegantly called "data chaos" - that is, the confusing array of choices and manipulations we're living in. Modern technology design has done a very good job of isolating each of us into a tiny silo, in which we attempt to make the best decisions for ourselves and our data without any real understanding of their wider impact on society.

UCL researcher Michael Veale expanded on this idea: "We have amazing privacy technologies, but what we want to control is the use of technologies to program and change entire populations." Veale was participating in a panel on building a "digital identity layer" - that is, a digital identity infrastructure to enable securely authenticated interactions on the Internet. So if we focus on confidentiality we miss the danger we're creating in allowing an entire country to rely on intermediaries whose interests are not ours but whose actions could - for example - cause huge populations to self-isolate during a pandemic. It is incredibly hard just to get a half-dozen club tennis players to move from WhatsApp to something independent of Facebook. At the population level, lock-in is far worse.

Third and most telling example. Last weekend, at the 52nd annual conference of the Cybernetics Society, Kate Cooper, from the Birmingham Food Council, made a similar point when, after her really quite scary talk, she was asked whether we could help improve food security if those of us who have space started growing vegetables in our gardens. The short answer: no. "It's subsistence farming," she said, going on to add that although growing your own food helps you understand your own relationship with food and where it comes from and can be very satisfying to do, it does nothing at all to help you gain a greater understanding of the food system and the challenges of keeping it secure. This is - or could be - another of Yes, Minister's irregular verbs: I choose not to eat potato chips; you very occasionally eat responsibly-sourced organic potato chips; potato chips account for 6% of Britain's annual crop of potatoes. This was Cooper's question: is that a good use of the land, water, and other resources? Growing potatoes in your front garden will not lead you to this question.

Cybernetics was new to me two years ago, when I was invited to speak at the 50th anniversary conference. I had a vague idea it had something to do with Isaac Asimov's robots. In its definition, Wikipedia cites the MIT scientist Norbert Wiener in 1948: "the scientific study of control and communication in the animal and the machine". So it *could* be a robot. Trust Asimov.

Attending the 2018 event, followed by this year's, which was shared with the American Society for Cybernetics, showed cybernetics up as a slippery transdiscipline. The joint 2020 event veered from a case study of IBM to choreography, taking in subjects like the NHS Digital Academy, design, family therapy, social change, and the climate emergency along the way. Cooper, who seemed as uncertain as I was two years ago whether her work really had anything to do with cybernetics, fit right in.

The experience has led me to think of cybernetics as a little like Bayes' Theorem as portrayed in Sharon Bertsch McGrayne's book The Theory That Would Not Die. As she tells the story, for two and a half centuries after its invention, select mathematicians kept the idea alive but rarely dared to endorse it publicly - and today it's everywhere. The cybernetics community feels like this, too: a group who are nurturing an overlooked, poorly understood-by-the-wider-world, but essential field waiting for the rest of us to understand its power.

For a newcomer, getting oriented is hard; some of the discussion seems abstract enough to belong in a philosophy department. Other aspects - such as Ray Ison's description of his new book, The Hidden Power of Systems Thinking - smack of self-help, especially when he describes it: "The contention of the book is that systems thinking in practice provides the means to understand and fundamentally alter the systems governing our lives."

At this stage, however, with the rolling waves of crises hitting our societies (which Ison helpfully summed up in an apt cartoon), if this is cybernetics, it sounds like exactly what we need. "Why," asked the artist Vanilla Beer, whose father was the cybernetics pioneer Stafford Beer, "is something so useful unused?" Beats me.


Illustrations: Chi Onwurah (official portrait, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 12, 2020

Getting out the vote

"If voting changed anything, they'd abolish it," the maverick British left-wing politician Ken Livingstone wrote in 1987.

In 2020, the strategy appears to be to lecture people about how they should vote if they want to change things, and then make sure they can't. After this week's denial-of-service attack on Georgia voters and widespread documentation of voter suppression tactics, there should be no more arguments about whether voter suppression is a problem.

Until a 2008 Computers, Freedom, and Privacy tutorial on "e-deceptive campaign practices", organized by Lillie Coney, I had no idea how much effort was put into disenfranchising eligible voters. The tutorial focused on the many ways new technology - the pre-social media Internet - was being adapted to do very old work to suppress the votes of those who might have undesired opinions. The images from the 2018 mid-term elections and from this week in Georgia tell their own story.

In a presentation last week, Rebecca Mercuri noted that there are two types of fraud surrounding elections. Voter fraud - efforts by individuals to vote when they are not entitled to do so, and the stuff proponents of voter ID requirements get upset about - is vanishingly rare. Election fraud, where one group or another tries to game the election in their favor, is and has been common throughout history, and there are many techniques. Election fraud is the big thing to keep your eye on - and electronic voting is a perfect vector for it. Paper ballots can be reexamined and recounted, and can't easily be altered without trace. Yes, they can be stolen or spoiled, but it's hard to do at scale because the boxes of ballots are big, heavy, and not easily made to disappear. Scale is, however, what computers were designed for, and just about every computer security expert agrees that computers and general elections do not mix. Even in a small, digitally literate country like Estonia a study found enormous vulnerabilities.

Mercuri, along with longtime security expert Peter Neumann, was offering an update on the technical side of voting. Mercuri is a longstanding expert in this area; in 2000, she defended her PhD thesis, the first serious study of the security problems of electronic voting, 11 days before Bush v. Gore burst into the headlines. TL;DR: electronic voting can't be secured.

In the 20 years since, the vast preponderance of computer security experts have continued to agree with her. Naturally, people keep trying to find wiggle room, as if some new technology will change the math; besides election systems vendors there are well-meaning folks with worthwhile goals, such as improving access for visually impaired people, ensuring access for a widely scattered membership, such as unions, or motivating younger people.

Even apart from voter suppression tactics, US election systems continue to be a fragmented mess. People keep finding new ways to hack into them; in 2017, Bloomberg reported that Russia hacked into voting systems in 39 US states before the US presidential election and targeted election systems in all 50. Defcon has added a voting machine hacking village, where, in 2018, an 11-year-old hacked into a replica of the Florida state voting website in under ten minutes. In 2019, Defcon hackers were able to buy a bunch of voting machines and election systems on eBay - and cracked every single one for the Washington Post. The only sensible response: use paper.

Mercuri has long advocated for voter-verified paper ballots (including absentee and mail-in ballots) as the official votes that can be recounted or audited as needed. The complexity and size of US elections, however, means electronic counting.

In Congressional testimony, Matt Blaze, a professor at Georgetown University, has made three recommendations (PDF): immediately dump all remaining paperless direct recording electronic voting machines; provide resources, infrastructure, and training to local and state election officials to help them defend their systems against attacks; and conduct risk-limiting audits after every election to detect software failures and attacks. RLAs, proposed in a 2012 paper by Mark Lindeman and Philip B. Stark (PDF), involve counting a statistically significant random sample of ballots and checking the results against the machine count. The proposal has a fair amount of support, including from the Electronic Frontier Foundation.
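As a rough illustration of the idea (a toy sketch of my own, not the Lindeman-Stark procedure itself; it assumes a two-candidate race, ignores invalid ballots, and uses a Wald-style sequential test), the Python below keeps sampling ballots at random until the evidence for the reported winner is strong enough to stop, or hands the decision to a full count. The closer the reported margin, the more ballots it has to examine.

    import random

    def toy_ballot_polling_audit(ballots, reported_winner_share, risk_limit=0.05):
        # ballots: list of "W" (for the reported winner) or "L" (for the runner-up).
        # reported_winner_share: the winner's reported share of the two-candidate vote.
        # Hypothetical sketch only: real RLAs handle multiple contests, invalid ballots,
        # and carefully specified sampling designs.
        t = 1.0  # sequential test statistic; crossing 1/risk_limit lets the audit stop
        order = list(range(len(ballots)))
        random.shuffle(order)  # examine ballots in random order
        for n, i in enumerate(order, start=1):
            if ballots[i] == "W":
                t *= reported_winner_share / 0.5
            else:
                t *= (1 - reported_winner_share) / 0.5
            if t >= 1 / risk_limit:
                return f"reported outcome confirmed after examining {n} ballots"
        return "no early stop - the full count just performed decides the result"

Run it on a simulated landslide and it stops after a few dozen ballots; run it on a near-tie and it can churn through many thousands, which is exactly the trade-off at stake in the doubts that follow.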

Mercuri has doubts; she argues that election administrators don't understand the math that determines how many ballots to count in these audits, and thinks the method will fail to catch "dispersed fraud" - that is, a few votes changed across many precincts rather than large clumps of votes changed in a few places. She is undeniably right when she says that RLAs are intended to avoid counting the full set of ballots; proponents see that as a *good* thing - faster, cheaper, and just as reliable. As a result, some states - Michigan, Colorado (PDF) - are beginning to embrace them. My guess is there will be many mistakes in implementation and resulting legal contests until everyone either finds a standard for best practice or decides they're too complicated to make work.

Even more important, however, is whether RLAs can successfully underpin public confidence in election integrity. Without that, we've got nothing.

Illustrations: Hanging chad, during the 2000 Bush versus Gore vote.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 1, 2020

A life in three lockdowns

"For most people it's their first lockdown," my friend Eva said casually a couple of weeks ago. "It's my third."

Third? Third?!

Eva is Eva Pascoe, whose colorful life story so far includes founding London's first cybercafe in 1994, setting up Cybersalon as a promulgator of ideas and provocations, and running a consultancy for retailers. She drops hints of other activities: mining cryptocurrencies in Scandinavia using renewable energy, for example. I'm fairly sure it's all true.

So: three lockdowns.

Eva's first lockdown was in 1981, when the Communist Party in her home country, Poland, decided to preempt Russian intervention against the Solidarity workers' movement and declared martial law. One night the country's single TV channel went blank; the next morning they woke up to sirens and General Wojciech Jaruzelski banning public gatherings and instituting a countrywide curfew under which no one could leave their house after 6pm. Those restrictions still left everyone going to work every day and, as it turned out crucially, kept the churches open for business.

Her second was in 1986, and was unofficial. On April 26, 1986, her nuclear physics student flatmate noticed that the Geiger counter in his lab at Warsaw's Nuclear Institute was showing extreme - and consistent - levels of radiation. The Soviet authorities were saying nothing, and the rest of Poland wouldn't find out until four days later, but Chernobyl had blown up. Physicists knew and spread the news by word of mouth. The drills they'd had in Polish schools told them what to do: shelter indoors, close all windows, admit no fresh air. Harder was getting others to trust their warnings at a time without mobile phones and digital cameras to show the Geiger counter's readings.

Those two lockdowns had similarities. First, they were abrupt, arriving overnight with no time to prepare. That posed a particular difficulty in the second lockdown, when outside food couldn't be trusted because of radioactive fallout, and it wasn't clear whether the water in the taps was safe. "As in COVID-19," she wrote in a rough account I asked her to create, "we had to protect against an invisible enemy with no clear knowledge of the surface risks already in the flat, and no ability to be sure when the danger passes." After 14 days, with no sick pay available, they had to re-emerge and go to work. With the Communist Party still suggesting the radiation was mostly harmless, "In the absence of honest government information, many myths about cures for fallout circulated, some looking more promising than others."

Their biggest asset in both lockdowns was the basement tunnels that connect Warsaw's ten-story blocks of flats, each equipped with six to ten entrances leading to separate staircases. A short run through these corridors enabled inhabitants to connect with the hundreds of other people in the same block when it was too dangerous to go outside. Even under martial law, with deaths and thousands of arrests on the streets, mostly of Solidarity activists, those basement corridors enabled parties featuring home-brewed beer and vodka, pickled cabbage, mushrooms, and herring, and "sausages smuggled in from Grandma's house in the countryside". Most important was the vodka.

The goal of martial law was to stop the spread of ideas, in this case, the Polish freedom movement. The connections made in those basement corridors - and the churches - ensured it failed. After 18 months, the lockdown ended because it was unsustainable. Communist rule ended in 1989, as in many other eastern European countries.

Chernobyl's effects were harder to shake. When the government eventually admitted the explosion had taken place, it downplayed the danger, suggesting that vegetables would be safe to eat if scrubbed with hot water, that the level of radiation was about the same as radon - at the time, thought to be safe - and insisted the population should participate in the May 1 Labor Day marches. Eventually, Polish leaders broke ranks, advised people to stay at home and stop eating food from the affected 40% of Poland, and organized supplies of Lugol for young people to try to mitigate the effects of the radioactive iodine Chernobyl had spread. Eva, a few years too old to qualify, calls her Hashimoto's thyroiditis "a lifelong reminder of why we must not blindly trust government health advice during large-scale medical emergencies".

Eva's lessons: always have a month's supply of food stocks; make friends with virologists, as this will not be our last pandemic; buy a gas mask and make sure everyone knows how to put it on. Most important, buy home-brew equipment. "It not only helps to pass time, but alcohol becomes a currency when the value of money disappears."

This lockdown gave us advance notice; if you were paying attention, you could see it forming on the horizon a month out. Anyone who was stocked for a no-deal Brexit was already prepared. But ironically, the thing that provided safety, society, and survival during Eva's first two lockdowns would be lethal if applied in this one, which finds her in a comfortable London house with a partner and two children. Basement tunnels connecting households would be spreading disease and death, not ideas and safety in which to hatch them. Our tunnels are the Internet and social media; our personal connections are strengthening, even with hugs on pause.


Illustrations: Sign posted on the front door of a local shop that had to close temporarily.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

Appified

Around 2010, when smartphones took off (Apple's iPhone user base grew from 8 million in 2009 to 100 million in early 2011), "There's an app for that" was a joke widely acknowledged as true. Faced with a pandemic, many countries are looking to develop apps that might offer shortcuts to reaching some variant of "old normal". The UK is no exception, and much of this week has been filled with debate about the nascent contact tracing app being developed by the National Health Service's digital arm, NHSx. The logic is simple: since John Snow investigated cholera in 1854, contact tracing has remained slow, labor-intensive, and dependent on infected individuals' ability to remember all their contacts. With a contagious virus that spreads promiscuously to strangers who happen to share your space for a time, individual memory isn't much help. Surely we can do better. We have technology!

In 2011, Jon Crowcroft and Eiko Yoneki had that same thought. Their Fluphone proved the concept, even helping identify asymptomatic superspreaders through the social graph of contacts developing the illness.

In March, China's Alipay Health got our attention. This all-seeing, all-knowing, data-mining, risk-score-outputting app, whose green, yellow, and red QR codes are inspected by police at Chinese metro stations, workplaces, and other public areas, seeks to control the virus's movements by controlling people's access. The widespread Western reaction, to a first approximation: "Ugh!" We are increasingly likely to end up with something similar, but with very different enforcement and a layer of "democratic voluntariness" - *sort* of China, but with plausible deniability.

Or we may not. This is a fluid situation!

Much of the debate has centered on why NHSx is rolling its own app when Google and Apple are collaborating on a native contact-tracing platform. Italy and Spain have decided to use it; Germany, which was planning to build its own app, pivoted abruptly; and Australia and Singapore (whose open source app, TraceTogether, was finding some international adoption) are switching. France balked, calling Apple "uncooperative".

France wants a centralized system, in which matching exposure notifications is performed on a government-owned central server. That means trusting the government to protect it adequately and not start saying, "Oooh, data, we could do stuff with that!" In a decentralized system, the contact matching is performed on the device itself, with the results released to health officials only if the user decides to do so. Apple and Google are refusing to support centralized systems, largely because in many of the countries where iOS and Android phones are sold centralization poses significant dangers for the population. Essentially, the centralized ones ask you for a lot more trust in your government.
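For a sense of how little the decentralized design needs to send anywhere, here is a minimal, hypothetical Python sketch (my own simplification - the token rotation, cryptography, and Bluetooth layers of the real Apple/Google and DP-3T designs are far more involved): each phone keeps the random tokens it has heard, the health authority publishes the tokens of confirmed cases, and the match is computed on the handset, which tells only its owner.

    import secrets

    def new_token():
        # Real protocols derive frequently rotating identifiers from device keys;
        # a random hex string stands in for that machinery here.
        return secrets.token_hex(16)

    def exposure_check(tokens_heard, published_case_tokens):
        # Matching happens on the device; nothing is uploaded unless the user chooses.
        return len(set(tokens_heard) & set(published_case_tokens)) > 0

    # The phone would run exposure_check() against each batch the health
    # authority publishes and, on a match, notify the user - and no one else.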

All this led to Parliament's Human Rights Committee, which spent the week holding hearings on the human rights implications of contact tracing apps. (See Michael Veale's and Orla Lynskey's written evidence and oral testimony.) In its report, the committee concluded that the level of data being collected isn't justifiable without clear efficacy and benefits; that rights-protecting legislation is needed (helpfully, Lilian Edwards has spearheaded an effort to produce model safeguarding legislation); that an independent oversight body is needed, along with a Digital Contact Tracing Human Rights Commissioner; that the app's efficacy and data security and privacy should be reviewed every 21 days; and that the government and health authorities need to embrace transparency. Elsewhere, Marion Oswald writes that trust is essential, and the proposals have yet to earn it.

The specific rights discussion has been accompanied by broader doubts about the extent to which any app can be effective at contact tracing and the other flaws that may arise. As Ross Anderson writes, there remain many questions about practical applications in the real world. In recent blog postings, Crowcroft mulls modern contact tracing apps based on what they learned from Fluphone.

The practical concerns are even greater when you look at Ashkan Soltani's Twitter feed, in which he's turning his honed hacker sensibilities on these apps, making it clear that there are many more ways for these apps to fail than we've yet recognized. The Australian app, for example, may interfere with Bluetooth-connected medical devices such as glucose monitors. Drug interactions matter; if apps are now medical devices, then their interactions must be studied, too. Soltani also raises the possibility of using these apps for voter suppression. The hundreds of millions of downloads necessary to make these apps work mean even small flaws will affect large numbers of people.

All of these are reasons why Apple and Google are going to wind up in charge of the technology. Even the UK is now investigating switching. Fixing one platform is a lot easier than debugging hundreds, for example, and interoperability should aid widespread use, especially when international travel resumes (currently irrelevant, but still on people's minds). In this case, Apple's and Google's technology, like the Internet itself originally, is a vector for spreading the privacy and human rights values embedded in its design, and countries are changing plans to accept it - one more extraordinary moment among so many.

Illustrations: Alipay Health Code in action (press photo).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 24, 2020

Viruswashing

Individual humans surprise you in a crisis; the curmudgeon across the street turns into a tireless volunteer; the sycophantic celebrity abruptly becomes a helpfully trenchant critic of their former-friend politicians. Organizations - whether public, as in governments, or private, as in companies - tend to remain in character, carried on by inertia, and claim their latest actions are to combat the crisis. For climate change - "greenwashing". For this pandemic - "viruswashing", as some of the creepiest companies seek to de-creepify themselves in the name of public health.

In the last month, Privacy International's surveillance legislation tracker has illustrated the usual basic crisis principles. One: people will accept things on a temporary basis that they wouldn't accept if they thought they'd be permanent. Two: double that for scared and desperate people. Three: the surveillance measures countries adopt reflect their own laws and culture. Four: someone always has a wish list of surveillance powers in their bottom drawer, ready to push for in a crisis. Five: the longer the crisis goes on the harder it will be to fully roll things back to their pre-crisis state when we can eventually all agree it's ended.

Some governments are taking advantage. Trump, for example, has chosen this moment to suspend immigration. More broadly, the UN Refugee Agency warns that refugee rights are being lost. Of 167 countries that have closed their borders in full or in part, 57 make no exceptions for asylum-seekers.

But governments everywhere are also being wooed by both domestic and international companies. Palantir, for example, is working with the US Centers for Disease Control and Prevention and its international counterparts to track the virus's spread. In the UK, Palantir and an AI start-up are data-mining NHS databases to build a predictive computer model. Largely unknown biometric start-ups are creating digital passports for NHS workers. Most startling is the news that the even-creepier NSO Group, whose government clients have used its software to turn journalists' and activists' phones into spy devices, is trying to sell Western governments on its (repurposed) tracking software.

On Twitter, Pat Walshe (@privacymatters) highlights the Covid Credentials Initiative, a collaboration among 60 organizations to create verifiable credential solutions - that is, some sort of immunity certificate for individuals. Walshe also notes Jai Vijayan's story about Microsoft's proposals: "Your phone will become your digital passport". Walshe's commenters remind us that in a fair number of countries SIM registration is essential. The upshot sounds similar to China's Alipay Health app, which scores each phone user and outputs a green, yellow, or red health code - which police check at entrances to areas of the city, public transport, and workplaces before allowing entry. Except: in the West we're talking about a system built by private, secretive companies that, as Mike Elgan wrote last year at Fast Company, are building systems in the US that add up functionally to something very like China's much-criticized social credit scheme.

In Britain, where there's talk of "immunity certificates" - deconfinement apps - my model is the history of ID cards, which became mandatory under the National Registration Act (1939) and which no one decommissioned after World War II ended...until 1952, after Harry Willcock, who had refused to show police his ID card on demand, won in court by arguing that the law had lapsed when the emergency ended; the High Court agreed that the ID cards were now being used in unintended ways. Ever since, someone regularly proposes to bring them back. In the early 2000s it was to eliminate benefit fraud; in 2006 it was crime prevention. Now immunity certificates could be a wedge.

Tracking and tracing are age-old epidemiologists' tools; it's natural that people want to automate them, given the speed and scale of this pandemic. It's just the source: the creepiest companies are seizing the opportunity to de-creepify themselves by pivoting to public health. Eventually, Palantir has to do this if it wants to pay its investors the kind of returns they're used to; the law enforcement and security market is just too small. That said, at the Economist Hal Hodson casts nuance on Palantir's deal with the NHS - for now.

Obviously, we need all the help we can get. Nonetheless, these are not companies that are generally on our side. Letting them embed themselves into essential public health infrastructure feels like letting a Mafia family use the proceeds of crime to buy themselves legitimate businesses. Meanwhile, much of the technology is unproven for health purposes and may not be effective, and basing it on apps, as Rachel Coldicutt writes, is a vector for discrimination.

The post-9/11 surveillance build-up should have taught us that human rights must be embedded at the beginning, because neither the "war on terror" nor the "war on drugs" has a formal ending at which powers naturally expire. While this specific pandemic will end, others will come behind it. So: despite the urgency, protecting ourselves against permanent changes is most easily done now, while the systems for tracking and tracing infections and ensuring public safety are being built. A field hospital can be built in ten days and then dismantled as if it never was; public health infrastructure cannot.


Illustrations: The Wicked Witch of the West and her crystal ball, from The Wizard of Oz (1939).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 20, 2020

Obsession

In our universe everything is temporary except the rebellious nature of humans when you tell them something can't be done. For millennia, humans have sought to master the universe by controlling matter, creating synthetic life forms, conquering death, reading the future, and conjuring energy. As science and technology progressed, the methods changed from alchemy to chemistry, various fantastical ideas to bioengineering, astrology to astronomy, and learning to exploit more energy-dense fuels. "There's no such thing as a free lunch" applies to physical motion, perhaps more than to anything else in life.

Last week, a group of scientists, historians, and archivists convened at the Royal Institution, which organized the event jointly with the Leonardo da Vinci Society, to consider seriously the history of perpetual motion beginning with Leonardo da Vinci, as it's the quincentenary of his death. People tend to giggle when you say you're attending this sort of event. But "This is scientific!" protested one of the organizers.

It turns out perpetual motion provides enduring opportunities to drive you mad and injure your scientific respectability. In 1995, the paranormal debunker James Randi said (in An Encyclopedia of Claims, Frauds, and Hoaxes of the Occult and Supernatural) that perpetual motion "has probably cost more time, money, and mental effort for the crackpots than any other pursuit except for the philosopher's stone".

Last week, in Philip Steadman's gallop through historical devices such as thermoscopes and Cornelis Drebbel's variant, it was notable how often the same approaches reappeared. You can try them yourself.

Even building a fake requires meticulous engineering, Michael T. Wright explained. For inspiration and technical foundations, many would-be makers turned to clockworks. "And vice-versa." The enemy is friction: it slows your mechanism, creates the need for new energy inputs, and generally means your motion isn't perpetual. Clockmakers have options - oil, shrinking and polishing moving parts, aligning gears - but at some point, Wright said, "They leave the perpetual motion maker to be crazy on his own."

As engineering developed in the 19th century, Ben Marsden said, scientists like WJM Rankine, William Thomson (aka Lord Kelvin), and Henry Dircks fretted over experimental engines, asking of each new iteration: "Is this perpetual motion?" In 1861, Dircks reviewed many of these efforts in Perpetuum Mobile, commenting, "The history of the search for perpetual motion does not afford a single instance of ascertained success." Its introduction reads as a warning: here lies obsession and madness.

At the Science Museum, Sophie Waring has been investigating that madness by mining the archives of the Board of Longitude, best-known for its competition, launched in 1714, to calculate longitude out at sea. Following John Harrison's successful solution, the Board enlarged its remit. "It led to streams of proposals for perpetual motion" to which the Board was persistently unsympathetic. The archives contain abrupt dismissals, seemingly without an underlying evidence-based principle.

In part, as Rupert Cole suggested, this blanket disapproval reflects a scientific culture that only began loosening up in the 1970s. His worked example was Eric Laithwaite, who in 1974 scandalized our host, the Royal Institution, by agreeing to show his RI lectures on the BBC and suggesting that gyroscopes violated the laws of motion. They don't, but his showmanship inspired a generation of young inventors.

The history of failed ideas shows how hard it is to codify first principles. In Martin Kemp's guided tour through the 1510 Codex Leicester, we watched Leonardo da Vinci try to understand impetus: why does something keep moving after the thing pushing it is disconnected? Kemp characterized da Vinci's 70,000 crabbed, right-to-left words as working through "negative demonstrations". Much of this "heroic enterprise" was spent examining the movement of water. Maybe it's particulate?

We learn about inertia in grade school; it's so easy when you know. The laws of motion observed to that point followed a mathematical pattern proportionately relating force and distance. Throw a ball half as hard, and it travels only half as far. These observations don't help in understanding impetus mechanics. As JV Field (Birkbeck College) explained, it took nearly another two centuries of scientists building on each other's work - Nicolaus Copernicus, Johannes Kepler, Galileo Galilei, René Descartes - before Isaac Newton finally codified the laws of motion.

At that point, both astrology and the idea that a perpetual motion machine was possible really should have died. Unfortunately, humans don't work like that. In 1980, Robert Schadewald recounted a rebirth of interest, and in 1986 The Straight Dope's Cecil Adams roasted a patent application from Joseph Newman.

This sort of thing led New Scientist's "Daedalus", David Jones, to build fake perpetual motion machines. He sold several to museums on the understanding that he would fix them at his own expense if they stopped within five years, and would share the cost if they stopped within ten. He figured 15 years was "perpetual" enough.

"Perpetual" is a matter of perspective. Our lives are too short to perceive the universe slowing down. We can't even directly perceive Jones's admitted fake slowing down, although it is. When Martyn Poliakoff, who was given a coded version of the secret at Jones's death in 2017, agrees that Jones's papers are sealed at the Royal Society for 30 years, I quickly calculate: 2047. Yes, I might be alive to read the explanation. It's certainly worth staying alive for.


Illustrations: Pages from the Codex Leicester (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

2020-02-23: Updated to make clear that the event was organized in collaboration with the Leonardo da Vinci Society.

January 10, 2020

The forever bug

Y2K is back, and this time it's giggling at us.

For the past few years, there's been a growing drumbeat on social media and elsewhere to the effect that Y2K - "the year 2000 bug" - never happened. It was a nothingburger. It was hyped then, and anyone saying now it was a real thing is like, ok boomer.

Be careful what old averted messes you dismiss; they may come back to fuck with you.

Having lived through it, we can tell you the truth: Y2K *was* hyped. It was also a real thing that was wildly underestimated for years before it was taken as seriously as it needed to be. When it finally registered as a genuine and massive problem, millions of person-hours were spent remediating software, replacing or isolating systems that couldn't be fixed, and making contingency and management plans. Lots of things broke, but, because of all that work, nothing significant on a societal scale. Locally, though, anyone using a computer at the time likely has a personal Y2K example. In my own case, an instance of Quicken continued to function but stopped autofilling dates correctly. For years I entered dates manually before finally switching to GnuCash.

The story, parts of which Chris Stokel-Walker recounts at New Scientist, began in 1971, when Bob Bemer published a warning about the "Millennium Bug", having realized years earlier that the common practice of saving memory space by using two digits instead of four to indicate the year was storing up trouble. He was largely ignored, in part, it appeared, because no one really believed the software they were writing would still be in use decades later.

It was the mid-1990s before the industry began to take the problem seriously, and when they did the mainstream coverage broke open. In writing a 1997 Daily Telegraph article, I discovered that mechanical devices had problems, too.

We had both nay-sayers, who called Y2K a boondoggle whose sole purpose was to boost the computer industry's bottom line, and doommongers, who predicted everything from planes falling out of the sky to total societal collapse. As Damian Thompson told me for a 1998 Scientific American piece (paywalled), the Millennium Bug gave apocalyptic types a *mechanism* by which the crash would happen. In the Usenet newsgroup comp.software.year-2000, I found a projected timetable: bank systems would fail early, and by April 1999 the cities would start to burn... When I wrote that society would likely survive because most people wanted it to, some newsgroup members called me irresponsible, and emailed the editor demanding he "fire this dizzy broad". Reconvening ten years later, they apologized.

Also at the extreme end of the panic spectrum was Ed Yardeni, then chief economist at Deutsche Bank, who repeatedly predicted that Y2K would cause a worldwide recession; it took him until 2002 to admit his mistake, crediting the industry's hard work.

It was still a real problem, and with some workarounds and a lot of work most of the effects were contained, if not eliminated. Reporters spent New Year's Eve at empty airports, in case there was a crash. Air travel that night, for sure, *was* a nothingburger. In that limited sense, nothing happened.

Some of those fixes, however, were not so much fixes as workarounds. One of these finessed the rollover problem by creating a "window" and telling systems that two-digit years fell between 1920 and 2020, rather than 1900 and 2000. As the characters on How I Met Your Mother might say: "It's a problem for Future Ted and Future Marshall. Let's let those guys handle it."
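In code, that kind of windowing workaround amounts to something like this hypothetical Python sketch (not any vendor's actual fix; the pivot of 20 mirrors the 1920-2020 window described above): two-digit years at or below the pivot are read as 20xx, the rest as 19xx - which works right up until real dates cross the top of the window.

    def expand_two_digit_year(yy, pivot=20):
        # Windowing: map a two-digit year onto a 100-year range ending at the pivot.
        # 21..99 -> 1921..1999, 00..20 -> 2000..2020. Past 2020, the guess goes wrong.
        return 2000 + yy if yy <= pivot else 1900 + yy

    assert expand_two_digit_year(99) == 1999
    assert expand_two_digit_year(5) == 2005
    assert expand_two_digit_year(20) == 2020  # the window's upper edge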

So, it's 2020, we've hit the upper end of the window, the bug is back, and Future Ted and Future Marshall are complaining about Past Ted and Past Marshall, who should have planned better. But even if they had...the underlying issue is temporary thinking that leads people to still - still, after all these decades - believe that today's software will be long gone 20 years from now and therefore they need only worry about the short term of making it work today.

Instead, the reality is, as we wrote in 2014, that software is forever.

That said, the reality is also that Y2K is forever, because if the software couldn't be rewritten to take a four-digit year field in 1999, it probably can't be today, either. Everyone stresses the need to patch and update software, but a lot - for an increasing value of "a lot" as Internet of Things devices come on the market with no real idea of how long they will be in service - of things can't be updated for one reason or another. Maybe the system can't be allowed to go down; maybe it's a bespoke but crucial system whose maintainers are long gone; maybe the software is just too fragile and poorly documented to change; maybe old versions propagated all over the place and are laboring on in places where they've simply been forgotten. All of that is also a reason why it's not entirely fair for Stokel-Walker to call the old work "a lazy fix". In a fair percentage of cases, creating and moving the window may have been the only option.

But fret ye not. We will get through this. And then we can look forward to 2038, when the clocks run out in Linux. Future Ted and Future Marshall will handle it.
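(The 2038 deadline, for what it's worth, is plain arithmetic: a signed 32-bit count of seconds since January 1, 1970 tops out at 2,147,483,647, as this quick Python check shows.)

    from datetime import datetime, timezone

    last_second = 2**31 - 1  # largest value a signed 32-bit seconds counter can hold
    print(datetime.fromtimestamp(last_second, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00 - one tick later, the counter wraps negative.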


Illustrations: Millennium Bug manifested at a French school (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 20, 2019

Humans in, bugs out

At the Guardian, John Naughton ponders our insistence on holding artificial intelligence and machine learning to a higher standard of accuracy than the default standard - that is, us.

Sure. Humans are fallible, flawed, prejudiced, and inconsistent. We are subject to numerous cognitive biases. We see patterns where none exist. We believe liars we like and distrust truth-tellers for picayune reasons. We dislike people who tell unwelcome truths and like people who spread appealing, though shameless, lies. We self-destruct, and then complain when we suffer the consequences. We evaluate risk poorly, fearing novel and recent threats more than familiar and constant ones. And on and on. In 10,000 years we have utterly failed to debug ourselves.

My inner failed comedian imagines the frustrated AI engineer muttering, "Human drivers kill 40,000 people in the US alone every year, but my autonomous car kills *one* pedestrian *one* time, and everybody gets all 'Oh, it's too dangerous to let these things out on the roads'."

New always scares people. But it seems natural to require new systems to do better than their predecessor; otherwise, why bother?

Part of the problem with Naughton's comparison is that machine learning and AI systems aren't really separate from us; they're humans all the way down. We create the algorithms, code the software, and allow them to mine the history of flawed human decisions, from which they make their new decisions. If humans are the problem with human-made decisions, then we are as much or more the problem with machine-made decisions.

I also think Naughton's frustrated AI researchers have a few details the wrong way round. While it's true that self-driving cars have driven millions of miles with very few deaths and human drivers were responsible for 36,560 deaths in 2018 in the US alone, it's *also* true that it's still rare for self-driving cars to be truly autonomous: Human intervention is still required startlingly often. In addition, humans drive in a far wider variety of conditions and environments than self-driving cars are as yet authorized to do. The idea that autonomous vehicles will be vastly safer than human drivers is definitely an industry PR talking point, but the evidence is not there yet.

We'd also note that a clear trend in AI books this year has been to point out all the places where "automated" systems are really "last-mile humans". In Ghost Work, Mary L. Gray and Siddharth Suri document an astonishing array of apparently entirely computerized systems where remote humans intervene in all sorts of unexpected ways through task-based employment, while in Behind the Screen Sarah T. Roberts studies the specific case of the moderators of online content. These workers are largely invisible (hence "ghost") because the companies who hire them, via subcontractors, think it sounds better to claim their work is really AI.

Throughout "automation's last mile", humans invisibly rate online content, check that the Uber driver picking you up is who they're supposed to be, and complete other tasks to hard for computers. As Janelle Shane writes in You Look Like a Thing and I Love You, the narrower the task you give an AI the smarter it seems. Humans are the opposite: no one thinks we're smart while we're getting bored by small, repetitive tasks; it's the creative struggle of finding solutions to huge, complex problems that signals brilliance. Some of AI's most ardent boosters like to hope that artificial *general* intelligence will be able to outdo us in solving our most intractable problems, but who is going to invent that? Us, if it ever happens (and it's unlikely to be soon).

There is also a problem with scale and replication. While a single human decision may affect billions of people, there is always a next time, when it will be reconsidered and reinterpreted by a different judge who takes into account differences of context and nuance. Humans have flexibility that machines lack, while computer errors can be intractable, especially when bugs are produced by complex interactions. The computer scientist Peter Neumann has been documenting the risks of over-relying on computers for decades.

However, a lot of our need for computers to prove themselves to a superhuman standard is social, cultural, and emotional. AI adds a layer of remoteness and removes some of our sense of agency. With humans, we think we can judge character, talk them into changing their mind, or at least get them to explain the decision. In the just-linked 2017 event, the legal scholar Mireille Hildebrandt differentiated between law - flexible, reinterpretable, modifiable - and administration, which is what you get if a rules-based expert computer system is in charge. "Contestability is the heart of the rule of law," she said.

At the very least, we hope that the human has enough empathy to understand the impact their decision will have on their fellow human, especially in matters of life and death.

We give the last word to Agatha Christie, who decisively backed humans in her 1969 book, Hallowe'en Party, in which alter-ego Ariadne Oliver tells Hercule Poirot, "I know there's a proverb which says, 'To err is human' but a human error is nothing to what a computer can do if it tries."


Illustrations: Artist Dominic Wilcox's concept self-driving car (as seen at the Science Museum, July 2019).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 25, 2019

When we were

"These people changed the world," said Jeff Wilkins, looking out across a Columbus, Ohio ballroom filled with more than 400 people. "And they know it, and are proud of it."

At one time, all this was his.

Wilkins was talking about...CompuServe, which he co-founded in 1969. How does it happen, he asked, that more than 400 people show up to celebrate a company that hasn't really existed for the last 23 years? I can't say, but a group of people happier to see each other (and random outsiders) again would be hard to find. "This is the only reunion I go to," one woman said.

It's easy to forget - or never to have known - CompuServe's former importance. Circa 1993, where business cards and slides now display a Twitter handle, they displayed a numbered CompuServe ID. My inclusion of mine (70007,5537) at the end of a Guardian article led a reader to complain that I should instead promote the small ISPs it would kill when broadband arrived. In 1994, Aerosmith released a single on CompuServe, the first time a major label tried online distribution. It probably took five hours to download.

In Wilkins' story, he was studying electrical engineering at the University of Arizona when his father-in-law asked for help with data processing for his new insurance company. Wilkins and fellow grad students Sandy Trevor, John Goltz, Larry Shelley, and Doug Chinnock soon relocated to Columbus. It was, Wilkins said, Shelley who suggested starting a time-sharing company - "or should I say cloud computing?" Wilkins quipped, to applause and cheers.

Yes, he should. Everything new is old again.

In time-sharing, the fledgling company competed with GE and IBM. The information service started in 1979, as a way to occupy the computers during the empty evenings when the businesses had gone home. For the next 20 years, CompuServers invented everything for themselves: "GO" navigation commands, commercial email (first customer: HJ Heinz), live chat ("CB Simulator"), news wires, online games and virtual worlds (partnering with Fujitsu on a graphical MUD), shopping... The now-ubiquitous GIF was the brainchild of Steve Wilhite (it's pronounced "JIF"). The legend of CompuServe inventions is kept alive by Sandy Trevor and Dave Eastburn, whose Nuvocom "software archeology" business holds archives that have backed expert defense against numerous patent claims on technologies that CompuServe provably pioneered.

A panel reminisced about the CIS shopping mall. "We had an online stockbroker before anyone else thought about it," one said. Another remembered a call asking for a 30-minute meeting from the then-CEO of the nationwide flowers delivery service FTD. "I was too busy." (The CEO was Meg Whitman.) For CompuServe's 25th anniversary, the mall's travel agency collaborated on a three-day cruise with, as invited guests, the film critic Roger Ebert, who disseminated his movie reviews through the service and hosted the "Ask Roger Ebert" section in the Movies Forum, and his wife, Chaz. "That may have been the peak."

Mall stores paid an annual fee; curation ensured there weren't too many of any one category of store. Banners advertising products were such a novelty at the time - and often the liveliest, most visually attractive thing on the page - that as many as 25% of viewers clicked on them. Today, Amazon takes a percentage of transactions instead. "If we could have had a universal shopping cart, like Amazon," lamented one, "what might have been?"

Well, what? Could CompuServe now be under threat of a government-mandated breakup to separate its social media business, search, cloud provider, and shopping? Both CompuServe and AOL, whose quick embrace of graphical interfaces and aggressive marketing led it first to outstrip and then to buy and dismantle CompuServe in the 1990s, would have had to cannibalize their existing businesses. Used to profits from access fees, both resisted the Internet's monthly subscription model.

One veteran openly admitted how profoundly he underestimated the threat of the Internet after surveying the rickety infrastructure designed by/for academics and students. "I didn't think that the Internet could survive in the reality of a business..." Instead, the information services saw their competition as each other. A contemporary view of the challenges is visible in this 1995 interview with Barry Berkov, the vice-president in charge of CIS.

However, CompuServe's closed approach left no opening for individuals' self-expression. The 1990s rising Internet stars, Geocities and MySpace, were all about that, as are today's social media.

So many shifts have changed social media since then: from topic-centered to person-centered forums, from proprietary to open to centralized, from dial-up modems to pervasive connections, the massive ramp-up of scale and, mobile-fueled, speed, along with the reconfiguration of business models and technical infrastructure. Some things have degraded: past postings on Twitter and Facebook are much harder to find, and unwanted noise is everywhere. CompuServe would have had to navigate each of those shifts without error. As we know now, they didn't make it.

And yet, for 20-odd years, a company of early 20-somethings 2,500 miles from Silicon Valley invented a prototype of today's world, at first unaware of the near-simultaneous first ARPAnet connection, the beginnings of the network they couldn't imagine would ever be trustworthy enough for businesses and governments to rely on. They may yet be proven right about that.


Illustrations: Jonathan Zittrain's mockup of the CompuServe welcome screen (left, with thanks) next to today's iPhone showing how little things have changed; the reunion banner.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 4, 2019

Digital London

Anyone studying my travel patterns on the London Underground will encounter a conundrum: what makes a person undertake, once or twice a week, a one-way journey into the center of town? How do they get home?

For most people, it would remain, like Sudoku, a pointless puzzle. For Transport for London, the question is of greater importance: what does it plan for? On Monday, at an event run by the Greater London Authority intelligence unit to showcase its digital tools, a TfL data analyst expressed just this sort of conundrum. I had asked, "What's the hardest problem you're working on?" And he said, "Understanding human behavior." Data shows what happened. It gives no clue as to *why* unless you can map the data to other clue-bearing streams. If you can match the dates, times, and weather reports, the flood onto and into buses, trains, tubes, and taxis may be clearly understood as: it was raining. But beyond that...people are weird.
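As a rough illustration of the kind of mapping the analyst described - joining journey counts to a "clue-bearing" stream such as weather observations - here is a minimal sketch. The file names and column names are hypothetical, invented for the example, and have nothing to do with TfL's actual systems:

```python
# A minimal sketch: join hourly ridership to weather by hour and ask a crude question.
# File and column names are hypothetical, not TfL's.
import pandas as pd

# Hourly entry counts, e.g. columns: timestamp, station, entries
journeys = pd.read_csv("tube_entries.csv", parse_dates=["timestamp"])

# Hourly weather observations, e.g. columns: timestamp, rainfall_mm
weather = pd.read_csv("weather.csv", parse_dates=["timestamp"])

# Align both streams to the hour and join them
journeys["hour"] = journeys["timestamp"].dt.floor("h")
weather["hour"] = weather["timestamp"].dt.floor("h")
merged = journeys.merge(weather[["hour", "rainfall_mm"]], on="hour", how="left")

# A crude first question: do entries rise when it rains?
wet = merged[merged["rainfall_mm"] > 0]["entries"].mean()
dry = merged[merged["rainfall_mm"] == 0]["entries"].mean()
print(f"Mean hourly entries - wet: {wet:.0f}, dry: {dry:.0f}")
```

Even then, the join only tells you *that* ridership moves with the rain, not why any particular rider does what they do - which is the analyst's point.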

And they're numerous. As London's chief digital officer, Theo Blackwell, said, London is now the largest it's ever been, only recently passing the peak it reached in 1939. Seventy-odd years of peace and improving public health have enabled uninterrupted growth to 9 million; five or six years hence it's expected to reach 11 million, a 20+% rise that will challenge the capacity of housing and transport, and exacerbate the impact of climate change. Think water: London is drier than you'd expect.

A fellow attendee summed it up this way: "London has put on an entire Birmingham in size in the last ten years." Two million more is approaching the size of greater Manchester. London, in a term used by Greenwood Strategic Advisors' Craig Stephens, is an "attractor city". People don't need a reason to come here, as they do when moving to smaller places. As a result, tracking and predicting migration is one of the thornier problems.

TfL's planning problems are, therefore, a subset of the greater range of conundrums facing London, some of them fueled by the length of the city's history. David Christie, TfL's demand forecasting and analytics manager, commented, for example, that land use was a challenge because there hasn't been an integrated system to track it. Mike Bracken, one of the founders of the Government Digital Service, reminded the audience that legacy systems and vendor lock-in are keeping the UK lagging well behind countries like Peru and Madagascar, which can deliver new services in 12 weeks. "We need to hurry up," he said, "because our mental model of where we stand in relationship to other nations is not going to stand for much longer." He had a tip for making things work: "Don't talk about blockchain. Just fix your website."

Christie's group does the technical work of modeling for TfL. In the 1970s, he said, his department would prepare an input file and send it off to the Driver and Vehicle Licensing Agency's computer and they'd get back results two months later. He still complains that run times for the department's models are an issue, but the existing model has been the basis for current schemes such as Crossrail 1 and the Northern Line extension. What made this model - the London Simulator - sound particularly interesting was Christie's answer to the question of how they validate the data. "The first requirement of the model is to independently recreate the history." Instead of validating the data, they validate the model by looking to see what it gets wrong about the last 25 years.
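In other words, the validation is a hindcast: run the model over the period you already know and see where it diverges from what actually happened. A minimal sketch of that idea, using invented figures rather than anything from the London Simulator:

```python
# A toy hindcast check, in the spirit of "independently recreate the history":
# compare model output against observed history and flag where it diverges.
# The figures below are invented for illustration.
observed = {2000: 2.8, 2005: 3.0, 2010: 3.2, 2015: 3.5}   # e.g. annual trips, billions
simulated = {2000: 2.8, 2005: 3.1, 2010: 3.2, 2015: 3.3}

for year in sorted(observed):
    error = simulated[year] - observed[year]
    pct = 100 * error / observed[year]
    flag = "  <-- investigate" if abs(pct) > 5 else ""
    print(f"{year}: observed {observed[year]:.1f}, modelled {simulated[year]:.1f}, "
          f"error {pct:+.1f}%{flag}")
```

The real model is vastly more elaborate, but the principle is the same: trust is earned by the history the model can reproduce, not by the data that went into it.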

Major disruptors TfL expects include increasingly flexible working patterns, autonomous vehicles, and more homes. Christie didn't mention it, but I imagine Uber's arrival was an unpredictable external black swan event, abruptly increasing congestion and disrupting modal share. But is it any part of why car journeys per day have dropped 8% since 2000?

Refreshingly, the discussion focused on using technology in effective ways to achieve widely-held public goals, rather than biased black-box algorithms and automated surveillance, or the empty solutionist landscapes Ben Green objects to in The Smart-Enough City. Instead, they were talking about things like utilities sharing information about which roads they need to dig up when, intended to be a win for residents, who welcome less disruption, and for companies, which appreciate saving some of the expense. When, in a final panel, speakers were asked to name significant challenges they'd like to solve, they didn't talk about technology. Instead, Erika Lewis, the deputy director for data policy and strategy at the Department for Culture, Media, and Sport, said she wanted to improve how the local and city governments interface with central government and design services from the ground up around the potential uses for the data. "We missed the boat on smart meters," she said, "but we could do it with self-driving cars."

Similarly, Sarah Mulley, GLA's executive director for communities and intelligence, said engaging with civil society and the informal voluntary sector was a challenge she wanted to solve. "[They have] a lot to say, but there aren't ways to connect into it." Blackwell had the last word. "In certain areas, data has been used in a quite brutal way," he said. "How to gain trust is a difficult leadership challenge for cities."


Illustrations: London in 2017, looking south past London Bridge toward Southwark Cathedral and the Shard from the top of the Walkie-Talkie building.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 30, 2019

The Fregoli delusion

In biology, a monoculture is a bad thing. If there's only one type of banana, a fungus can wipe out the entire species instead of, as now, just the most popular one. If every restaurant depends on Yelp to find its customers, Yelp's decision to replace their phone number with one under its own control is a serious threat. And if, as we wrote here some years ago, everyone buys everything from Amazon, gets all their entertainment from Netflix, and gets all their mapping, email, and web browsing from Google, what difference does it make that you're iconoclastically running Ubuntu underneath?

The same should be true in the culture of software development. It ought to be obvious that a monoculture is as dangerous there as on a farm. Because: new ideas, robustness, and innovation all come from mixing. Plenty of business books even say this. It's why research divisions create public spaces, so people from different disciplines will cross-fertilize. It's why people and large businesses live in cities.

And yet, as the journalist Emily Chang documents in her 2018 book Brotopia: Breaking Up the Boys' Club of Silicon Valley, Silicon Valley technology companies have deliberately spent the last couple of decades progressively narrowing their culture. To a large extent, she blames the spreading influence of the Paypal Mafia. At Paypal's founding, she writes, this group, which includes Palantir founder Peter Thiel, LinkedIn founder Reid Hoffman, and Tesla supremo Elon Musk, adopted the basic principle that to make a startup lean, fast-moving, and efficient you needed a team who thought alike. Paypal's success and the diaspora of its early alumni disseminated a culture in which hiring people like you was a *strategy*. This is what #MeToo and fights for equality are up against.

Businesses are as prone to believing superstitions as any other group of people, and unicorn successes are unpredictable enough to fuel weird beliefs, especially in an already-insular place like Silicon Valley. Yet, Chang finds much earlier roots. In the mid-1960s, System Development Corporation hired psychologists William Cannon and Dallis Perry to create a profile to help it to identify recruits who would enjoy the new profession of computer programming. They interviewed 1,378 mostly male programmers, and found this common factor: "They don't like people." And so the idea that "antisocial" was a qualification was born, spreading outwards through increasingly popular "personality tests" and, because of the cultural differences in the way girls and boys are socialized, gradually and systematically excluding women.

Chang's focus is broad, surveying the landscape of companies and practices. For personal inside experiences, you might try Ellen Pao's Reset: My Fight for Inclusion and Lasting Change, which documents the experiences at Kleiner Perkins, which led her to bring a lawsuit, and at Reddit, where she was pilloried for trying to reduce some of the system's toxicity. Or, for a broader range, try Lean Out, a collection of personal stories edited by Elissa Shevinsky.

Chang finds that even Google, which began with an aggressive policy of hiring female engineers that netted it technology leaders Susan Wojcicki, CEO of YouTube, Marissa Mayer, who went on to try to rescue Yahoo, and Sheryl Sandberg, now COO of Facebook, failed in the long term. Today its male-female ratio is average for Silicon Valley. She cites Slack as a notable exception; founder Stewart Butterfield set out to build a different kind of workplace.

In that sense, Slack may be the opposite of Facebook. In Zucked: Waking Up to the Facebook Catastrophe, Roger McNamee tells the mea culpa story of his early mentorship of Mark Zuckerberg and the company's slow pivot into posing problems he believes are truly dangerous. What's interesting to read in tandem with Chang's book is his story of the way Silicon Valley hiring changed. Until around 2000, hiring rewarded skill and experience; the limitations on memory, storage, and processing power meant companies needed trained and experienced engineers. Facebook, however, came along at the moment when those limitations had vanished and as the dot-com bust finished playing out. Suddenly, products could be built and scaled up much faster; open source libraries and the arrival of cloud suppliers meant they could be developed by less experienced, less skilled, *younger*, much *cheaper* people; and products could be free, paid for by advertising. Couple this with 20 years of Reagan deregulation and the influence, which he also cites, of the Paypal Mafia, and you have the recipe for today's discontents. McNamee writes that he is unsure what the solution is; his best effort at the moment appears to be advising the Center for Humane Technology, led by former Google design ethicist Tristan Harris.

These books go a long way toward explaining the world Caroline Criado-Perez describes in 2018's Invisible Women: Data Bias in a World Designed for Men. Her discussion is not limited to Silicon Valley - crash test dummies, medical drugs and practices, and workplace design all appear - but her main point applies. If you think of one type of human as "default normal", you wind up with a world that's dangerous for everyone else.

You end up, as she doesn't say, with a monoculture as destructive to the world of ideas as those fungi are to Cavendish bananas. What Zucked and Brotopia explain is how we got there.


Illustrations: Still from Anomalisa (2015).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 2, 2019

Unfortunately recurring phenomena

It's summer, and the current comprehensively bad news is all stuff we can do nothing about. So we're sweating the smaller stuff.

It's hard to know how seriously to take it, but US Senator Josh Hawley (R-MO) has introduced the Social Media Addiction Reduction Technology (SMART) Act, intended as a disruptor to the addictive aspects of social media design. *Deceptive* design - which figured in last week's widely criticized $5 billion FTC settlement with Facebook - is definitely wrong, and the dark patterns site has long provided a helpful guide to those practices. But the bill is too feature-specific (ban infinite scroll and autoplay) and fails to recognize that one size of addiction disruption cannot possibly fit all. Spending more than 30 minutes at a stretch reading Twitter may be a dangerous pastime for some but a business necessity for journalists, PR people - and Congressional aides.

A better approach might be to require sites to replay the first video someone chooses at regular intervals until they get sick of it and turn off the feed. This is about how I feel about the latest regular reiteration of the demand for back doors in encrypted messaging. The fact that every new home secretary - in this case, Priti Patel - calls for this suggests there's an ancient infestation in their office walls that needs to be found and doused with mathematics. Don't Patel and the rest of the Five Eyes realize the security services already have bulk device hacking?

Ever since Microsoft announced it was acquiring the software repository Github, it should have been obvious the community would soon be forced to change. And here it is: Microsoft is blocking developers in countries subject to US trade sanctions. The formerly seamless site supporting global collaboration and open source software is being fractured at the expense of individual PhD students, open source developers, and others who trusted it, and everyone who relies on the software they produce.

It's probably wrong to solely blame Microsoft; save some for the present US administration. Still, throughout Internet history the communities bought by corporate owners wind up destroyed: CompuServe, Geocities, Television without Pity, and endless others. More recently, Verizon, which bought Yahoo and AOL for its Oath subsidiary (now Verizon Media), de-porned Tumblr. People! Whenever the online community you call home gets sold to a large company it is time *right then* to begin building your own replacement. Large companies do not care about the community you built, and this is never gonna change.

Also never gonna change: software is forever, as I wrote in 2014, when Microsoft turned off life support for Windows XP. The future is living with old software installations that can't, or won't, be replaced. The truth of this resurfaced recently, when a survey by Spiceworks (PDF) found that a third of all businesses' networks include at least one computer running XP and 79% of all businesses are still running Windows 7, which dies in January. In the 1990s the installed base updated regularly because hardware was upgraded so rapidly. Now, a computer's lifespan exceeds the length of a software generation, and the accretion of applications and customization makes updating hazardous. If Microsoft refuses to support its old software, it should at least open it to third parties. Now, *there* would be a law we could use.

The last few years have seen repeated news about the many ways that machine learning and AI discriminate against those with non-white skin, typically because of the biased datasets they rely on. The latest such story is startling: Wearables are less reliable in detecting the heart rate of people with darker skin. This is a "huh?" until you read that the devices use colored light and optical sensors to measure the volume of your blood in the vessels at your wrist. Hospital-grade monitors use infrared. Cheaper devices use green light, which melanin tends to absorb. I know it's not easy for people to keep up with everything, but the research on this dates to 1985. Can we stop doing the default white thing now?

Meanwhile, at the Barbican exhibit AI: More than Human...In a video, a small, medium-brown poodle turns his head toward the camera with a - you should excuse the anthropomorphism - distinct expression of "What the hell is this?" Then he turns back to the immediate provocation and tries again. This time, the Sony Aibo he's trying to interact with wags its tail, and the dog jumps back. The dog clearly knows the Aibo is not a real dog: it has no dog smell, and although it attempts a play bow and moves its head in vaguely canine fashion, it makes no attempt to smell his butt. The researcher begins gently stroking the Aibo's back. The dog jumps in the way. Even without a thought bubble you can see the injustice forming, "Hey! Real dog here! Pet *me*!"

In these two short minutes the dog perfectly models the human reaction to AI development: 1) what is that?; 2) will it play with me?; 3) this thing doesn't behave right; 4) it's taking my job!

Later, I see the Aibo slumped, apparently catatonic. Soon, a staffer strides through the crowd clutching a woke replacement.

If the dog could talk, it would be saying "#Fail".


Illustrations: Sunrise from the 30th floor.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 19, 2019

The Internet that wasn't

This week on Twitter, writer and Georgia Tech professor Ian Bogost asked this: "There's a belief that the internet was once great but then we ruined it, but I'm struggling to remember the era of incontrovertible greatness. Lots of arguing from the start. Software piracy. Barnfuls of pornography. Why is the fall from grace story so persistent and credible?"

My reply: "Mostly because most of the people who are all nostalgic either weren't there, have bad memories, or were comfortable with it. Flaming has existed in every online medium that's ever been invented. The big difference: GAFA weren't profiting from it."

Let's expand on that here. Not only was there never a period of peace and tranquility on the Internet, there was never a period of peace and tranquility on the older, smaller, more contained systems that proliferated in the period when you had to dial up and wait through the modems' mating calls. I only got online in 1991, but those 1980s systems - primarily CIX (still going), the WELL (still going), and CompuServe (bought by AOL) - hosted myriad "flame wars". The small CompuServe UK journalism forum I co-managed had to repeatedly eject a highly abusive real-life Fleet Street photographer who obsessively returned with new name, same behavior. CompuServe finally blocked his credit card, an option unavailable to pay-with-data TWIFYS (Twitter-WhatsApp-Instagram-Facebook-YouTube-Snapchat). The only real answer to containing abuse and abusers was and is human moderators.

The quick-trigger abuse endemic on Twitter has persisted since the beginning, as Sara Kiesler and Lee Sproull documented in their 1992 book, Connections, based on years of studies of mailing lists within large organizations. Even people using their real names and job descriptions within a professional context displayed online behavior they would never display offline. The distancing effect appears inherent to the medium and the privacy in which we experience it. Meanwhile, urgency of response rises with each generation. The etiquette books of my childhood recommended rereading angry letters after a day or two before sending; who has the attention span for that now?

Three documented examples of early cyberbullying provide perspective. In Josh Quittner's 1994 Wired story about Usenet, the rec.pets.cats newsgroup successfully repelled invaders from alt.tasteless when a long-time poster and software engineer taught the others her tools; when she began getting death threats, a phone call to the leader's ISP made him back down for fear of losing his Internet access. In Julian Dibbell's A Rape in Cyberspace, "Mr Bungle" took over another user's avatar in the virtual game space Lambda MOO and forced it into virtual sex. After inconclusive community consideration, a single administrator quietly expelled Bungle. Finally, in my own piece about Scientology's early approach to the Internet, disputes over disclosing secret scriptures in the newsgroup alt.religion.scientology led to police raids, court cases, and attempts to smother the newsgroup with floods of pro-Scientology postings, also countered by a mix of community practices and purpose-built tools. Nonetheless, even in 1997 people complained that tolerating abuse shouldn't be the price of participation.

Software "piracy" was born right alongside the commercial software business. In 1976, a year after Bill Gates and Paul Allen launched Microsoft's first product, a BASIC language interpreter for the early Altair computer, Gates published an open letter to hobbyists begging them to make the new industry viable by buying the software rather than circulate copies. The tug of war over copyrighted material, unauthorized copies, and business models has continued ever since in a straight line from Gates's open letter through Napster to today's battles over the right to repair. The shift moving modifiable software into copyright control was the spark that got Richard Stallman building GNU, the bulk of "Linux".

"Barnfuls of pornography" is slightly exaggerated, especially before search engines simplified finding it. Still, pornography producers are adept at colonizing new technology, from cave paintings to videocassettes, and the Internet was no exception. It was certainly popular: the University of Delft took down its pornography archive because the traffic swamped its bandwidth. In 1994, students protested when Carnegie-Mellon removed sexually explicit newsgroups, and conflicting US states' standards landed Robert and Carleen Thomas in jail.

Some of the Internet's steamy reputation was undeserved. Time magazine's shock-horror 1995 Cyberporn cover story was based on a fraudulent study. That sloppy reporting's fallout included the 1996 passage of the Communications Decency Act, antecedent of today's online harms and age verification.

So why does the myth persist? First, anyone under 35 probably wasn't there. Second, the early Internet was more homogeneous and more open, and you lost less by abandoning a community to create a new one when you mostly interacted with strangers. As previously noted, 1980s online forums did not profit from abuse; today, ramping up "engagement" to fuel ad-bearing traffic is TWIFYS' business model. More important, these scaled-up, closed systems do not offer us the ability to create and deploy tools or enforce our own fine-grained rules.

Crucially, the early Internet seemed *ours* - no expanding privacy policies or data collection. The first spammers, hackers, and virus writers were *amateurs*. Today, as Craig Silverman pointed out on Twitter, "There are tens of thousands of people whose entire job it is to push out spam on Facebook." We were free to imagine this new technology would bring a better world, however dumb that seemed even at the time. The Internet was *magic*.

Tl;dr: human behavior hasn't changed. The Internet hasn't changed. It's just not magic any more.

Illustrations: Bambi, before Man enters the forest.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 28, 2019

Failure to cooperate

In her 2015 Pulitzer Prize-winning play, Sweat, on display nightly in London's West End until mid-July, Lynn Nottage explores class and racial tensions in the impoverished, post-industrial town of Reading, PA. In scenes alternating between 2000 and 2008, she explores the personal-level effects of twin economic crashes, corporate outsourcing decisions, and tribalism: friends become opposing disputants; small disagreements become violent; and the prize for "winning" shrinks to scraps. Them who has, gets; and from them who have little, it is taken.

Throughout, you wish the characters would recognize their real enemies: the company whose steel tubing factory has employed them for decades, their short-sighted union, and a system that structurally short-changes them. The pain of the workers when they are locked out is that of an unwilling divorce, abruptly imposed.

The play's older characters, who would be in their mid-60s today, are of the age to have been taught that jobs were for life. They were promised pensions and could look forward to wage increases at a steady and predictable pace. None are wealthy, but in 2000 they are financially stable enough to plan vacations, and their children see summer jobs as a viable means of paying for college and climbing into a better future. The future, however, lies in the Spanish-language leaflets the company is distributing to frustrated immigrants the union has refused to admit and who will work for a quarter the price. Come 2008, the local bar is run by one of those immigrants, who of necessity caters to incoming hipsters. Next time you read an angry piece attacking Baby Boomers for wrecking the world, remember that it's a big demographic and only some were the destructors. *Some* Baby Boomers were born wreckage, some achieved it, and some had it thrust upon them.

We leave the characters there in 2008: hopeless, angry, and alienated. Nottage, who has a history of researching working class lives and the loss of heavy industry, does not go on to explore the inner workings of the "digital poorhouse" they're moving into. The phrase comes from Virginia Eubanks' 2018 book, Automating Inequality, which we unfortunately missed reviewing before now. If Nottage had pursued that line, she might have found what Eubanks finds: a punitive, intrusive, judgmental, and hostile benefits system. Those devastated factory workers must surely have done something wrong to deserve their plight.

Eubanks presents three case studies. In the first, struggling Indiana families navigate the state's new automated welfare system, a $1.3 billion, ten-year privatization effort led by IBM. Soon after its 2006 launch, it began sending tens of thousands of families notices of refusal on this Kafkaesque basis: "Failure to cooperate". Indiana eventually canceled IBM's contract, and the two have been suing each other ever since. Not represented in court is, as Eubanks says, the incalculable price paid in the lives of the humans the system spat out.

In the second, "coordinated entry" matches homeless Los Angelenos to available resources in order of vulnerability. The idea was that standardizing the intake process across all possible entryways would help the city reduce waste and become more efficient while reducing the numbers on Skid Row. The result, Eubanks finds, is an unpredictable system that mysteriously helps some and not others, and that ultimately fails to solve the underlying structural problem: there isn't enough affordable housing.

In the third, a Pennsylvania predictive system is intended to identify children at risk of abuse. Such systems are proliferating widely and controversially for varying purposes, and all raise concerns about fairness and transparency: custody decisions (Durham, England), gang membership and gun crime (Chicago and London), and identifying children who might be at risk (British local councils). All these systems gather and retain, perhaps permanently, huge amounts of highly intimate data about each family. The result in Pennsylvania was to deter families from asking for the help they're actually entitled to, lest they become targets to be watched. Some future day, those same records may pop when a hostile neighbor files a minor complaint, or haunt their now-grown children when raising their own children.

All these systems, Eubanks writes, could be designed to optimize access to benefits instead of optimizing for efficiency or detecting fraud. I'm less sanguine. In prior art, Danielle Citron has written about the difficulties of translating human law accurately into programming code, and the essayist Ellen Ullman warned in 1996 that even those with the best intentions eventually surrender to computer system imperatives of improving data quality, linking databases, and cross-checking, the bedrock of surveillance.

Eubanks repeatedly writes that middle class people would never put up with this level of intrusion. They may have no choice. As Sweat highlights, many people's options are shrinking. Refusal is only possible for those who can afford to buy their help, an option increasingly reserved for a privileged few. Poor people, Eubanks is frequently told, are the experimental models for surveillance that will eventually be applied to all of us.

In 2017, Cathy O'Neil argued in Weapons of Math Destruction that algorithmic systems can be designed for fairness. Eubanks' analysis suggests that view is overly optimistic: the underlying morality dates back centuries. Digitization has, however, exacerbated its effects, as Eubanks concludes. County poorhouse inmates at least had the community of shared experience. Its digital successor squashes and separates, leaving each individual to drink alone in that Reading bar.


Illustrations: Sweat's London production poster.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 31, 2019

Moral machines

What are AI ethics boards for?

I've been wondering about this for some months now, particularly in April, when Google announced the composition of its new Advanced Technology External Advisory Council (ATEAC) - and a week later announced its dissolution. The council was dropped after a media storm that began with a letter from 50 of Google's own employees objecting to the inclusion of Kay Coles James, president of the Heritage Foundation.

At The Verge, James Vincent suggests the boards are for "ethics washing" rather than instituting change. The aborted Google board, for example, was intended, as member Joanna Bryson writes, to "stress test" policies Google had already formulated.

However, corporations are not the only active players. The new Ada Lovelace Institute's research program is intended to shape public policy in this area. The AI Now Institute is studying social implications. Data & Society is studying AI use and governance. Altogether, Brent Mittelstadt counts 63 public-private initiatives, and says the principles they're releasing "closely resemble the four classic principles of medical ethics" - an analogy he finds uncertain.

Last year, when Steven Croft, the Bishop of Oxford, proposed ten commandments for artificial intelligence, I also tended to be dismissive: who's going to listen? What company is going to choose a path against its own financial interests? A machine learning expert friend has a different complaint: corporations are not the problem, it's governments. No matter what companies decide, governments always demand carve-outs for intelligence and security services, and once they have it, game over.

I did appreciate Croft's contention that all commandments are aspirational. An agreed set of principles would at least provide a standard against which to measure technology and decisions. Principles might be particularly valuable for guiding academic researchers, some of whom currently regard social media as a convenient public laboratory.

Still, human rights law already supplies that sort of template. What can ethics boards do that the law doesn't already? If discrimination is already wrong, why do we need an ethics board to add that it's wrong when an algorithm does it?

At a panel kicking off this year's Privacy Law Scholars, Ryan Calo suggested an answer: "We need better moral imagination." In his view, a lot of the discussion of AI ethics centers on form rather than content: how should it be applied? Should there be a certification regime? Or perhaps compliance requirements? Instead, he proposed that we should be looking at how AI changes the affordances available to us. His analogy: retrieving the sailors left behind in the water after you destroyed their ship was an ethical obligation until the arrival of new technology - submarines - made it infeasible.

For Calo, too many conversations about AI avoid considering the content. As a frustrating example: "The primary problem around the ethics of driverless cars is not how they will reshape cities or affect people with disabilities and ownership structures, but whether they should run over the nuns or the schoolchildren."

As anyone who's ever designed a survey knows, defining the questions is crucial. In her posting, Bryson expresses regret that the intended board will not now be called into action to consider and perhaps influence Google's policy. But the fact that Google, not the board, was to devise policies and set the questions about them makes me wonder how effective it could have been. So much depends on who imagines the prospective future.

The current Kubrick exhibition at London's Design Museum pays considerable homage to Kubrick's vision and imagination in creating the mysterious and wonderful universe of 2001: A Space Odyssey. Both the technology and the furniture still look "futuristic" despite having been designed more than 50 years ago. What *has* dated is the women: they are still wearing 1960s stewardess uniforms and hats, and the one woman with more than a few lines spends them discussing her husband and his whereabouts; questions about the secrecy surrounding the appearance of a monolith in a crater on the moon are for the men to raise. Calo was finding the same thing in rereading Isaac Asimov's Foundation trilogy: "Not one woman leader for four books," he said. "And people still smoke!" Yet they are surrounded by interstellar travel and mind-reading devices.

So while what these boards are doing now is not inspiring - as Helen Nissenbaum said in the same panel, "There are so many institutes announcing principles as if that's the end of the story" - maybe what they *could* do might be. What if, as Calo suggested, there are human and civil rights commitments AI allows us to make that were impossible before?

"We should be imagining how we can not just preserve extant ethical values but generate new ones based on affordances that we now have available to us," he said, suggesting as one example "mobility as a right". I'm not really convinced that our streets are going to be awash in autonomous vehicles any time soon, but you can see his point. If we have the technology to give independent mobility to people who are unable to drive themselves...well, shouldn't we? You may disagree on that specific idea, but you have to admit: it's a much better class of conversation.tw


Illustrations: Space Station receptionist from 2001: A Space Odyssey.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 18, 2019

Math, monsters, and metaphors

"My iPhone won't stab me in my bed," Bill Smart said at the first We Robot, attempting to explain what was different about robots - but eight years on, We Robot seems less worried about that than about the brains of the operation. That is, AI, which conference participant Aaron Mannes described as, "A pile of math that can do some stuff".

But the math needs data to work on, and so a lot of the discussion goes toward possible consequences: delivery drones displaying personalized ads (Ryan Calo and Stephanie Ballard); the wrongness of researchers who defend their habit of scraping publicly posted data by saying it's "the norm" when their unwitting experimental subjects have never given permission; the unexpected consequences of creating new data sources in farming (Solon Barocas, Karen Levy, and Alexandra Mateescu); and how to incorporate public values (Alicia Solow-Neiderman) into the control of...well, AI, but what is AI without data? It's that pile of math. "It's just software," Bill Smart (again) said last week. Should we be scared?

The answer seems to be "sometimes". Two types of robots were cited for "robotic space colonialism" (Kristen Thomasen), because they are here enough and now enough for legal cases to be emerging. These are 1) drones, and 2) delivery robots. Mostly. Mason Marks pointed out Amazon's amazing Kiva robots, but they're working in warehouses where their impact is more a result of the workings of capitalism than that of AI. They don't scare people in their homes at night or appropriate sidewalk space like delivery robots, which Paul Colhoun described as "unattended property in motion carrying another person's property". Which sounds like they might be sort of cute and vulnerable, until he continues: "What actions may they take to defend themselves?" Is this a new meaning for move fast and break things?

Colhoun's comment came during a discussion of using various forecasting methods - futures planning, design fiction, the futures wheel (which someone suggested might provide a usefully visual alternative to privacy policies) - that led Cindy Grimm to pinpoint the problem of when you regulate. Too soon, and you risk constraining valuable technology. Too late, and you're constantly scrambling to revise your laws while being mocked by technical experts calling you an idiot (see 25 years of Internet regulation). Still, I'd be happy to pass a law right now barring drones from advertising and data collection and damn the consequences. And then be embarrassed; as Levy pointed out, other populations have a lot more to fear from drones than being bothered by some ads...

The question remains: what, exactly, do you regulate? The Algorithmic Accountability Act recently proposed by Senators Cory Booker (D-NJ) and Ron Wyden (D-OR) would require large companies to audit machine learning systems to eliminate bias. Discrimination is much bigger than AI, said conference co-founder Michael Froomkin in discussing Alicia Solow-Neiderman's paper on regulating AI, but special to AI is unequal access to data.

Grimm also pointed out that there are three different aspects: writing code (referring back to Petros Terzis's paper proposing to apply the regime of negligence laws to coders); collecting data; and using data. While this is true, it doesn't really capture the experience Abby Jacques suggested could be a logical consequence of following the results collected by MIT's Moral Machine: save the young, fit, and wealthy, but splat the old, poor, and infirm. If, she argued, you followed the mandate of the popular vote, old people would be scrambling to save themselves in parking lots while kids ran wild knowing the cars would never hit them. An entertaining fantasy spectacle, to be sure, but not quite how most of us want to live. As Jacques tells it, the trolley problem the Moral Machine represents is basically a metaphor that has eaten its young. Get rid of it! This was a rare moment of near-universal agreement. "I've been longing for the trolley problem to die," robotics pioneer Robin Murphy said. Jacques herself was more measured: "Philosophers need to take responsibility for what happens when we leave our tools lying around."

The biggest thing I've learned in all the law conferences I go to is that law proceeds by analogy and metaphor. You see this everywhere: Kate Darling is trying to understand how we might integrate robots into our lives by studying the history of domesticating animals; Ian Kerr and Carys Craig are trying to deromanticize "the author" in discussions of AI and copyright law; the "property" in "intellectual property" draws an uncomfortable analogy to physical objects; and Hideyuki Matsumi is trying to think through robot registration by analogy to Japan's Koseki family registration law.

Getting the metaphors right is therefore crucial, which explains, in turn, why it's important to spend so much effort understanding what the technology can really do and what it can't. You have to stop buying the images of driverless cars to produce something like the "handoff model" proposed by Jake Goldenfein, Deirdre Mulligan, and Helen Nissenbaum to explore the permeable boundaries between humans and the autonomous or connected systems driving their cars. Similarly, it's easy to forget, as Mulligan said in introducing her paper with Daniel N. Kluttz, that in "machine learning" algorithms learn only from the judgments at the end; they never see the intermediary reasoning stages.

So metaphor matters. At this point I had a blinding flash of realization. This is why no one can agree about Brexit. *Brexit* is a trolley problem. Small wonder Jacques called the Moral Machine a "monster".

Previous We Robot events as seen by net.wars: 2018 workshop and conference; 2017; 2016 workshop and conference, 2015; 2013, and 2012. We missed 2014.

Illustrations: The Moral Labyrinth art installation, by Sarah Newman and Jessica Fjeld, at We Robot 2019; Google driverless car.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 22, 2019

Layer nine

Is it possible to regulate the Internet without killing it?

Before you can answer that you have to answer this: what constitutes killing the Internet? The Internet Society has a sort of answer, which is a list of what it calls Internet invariants, a useful phrase that is less attackable as "solutionism" by Evgeny Morozov than alternatives that portray the Internet as if it were a force of nature instead of human-designed and human-made.

Few people watching video on their phones on the Underground care about this, but networking specialists view the Internet as a set of layers. I don't know the whole story, but in the 1980s researchers, particularly in Europe, put a lot of work into conceptualizing a seven-layer networking model, Open Systems Interconnection. By 1991, however, a company CEO told me, "I don't know why we need it. TCP/IP is here now. Why can't we just use that?" TCP/IP are the Internet protocols, so that conversation showed the future. However, people still use the concepts OSI built. The bottom, physical layers, are the province of ISPs and telcos. The ones the Internet Society is concerned about are the ones concerning infrastructure and protocols - the middle layers. Layer 7, "Application", is all the things users see - and politicians fight over.
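For readers who want the scaffolding spelled out, here is one way to sketch the stack: the standard OSI seven plus the informal human layers the next paragraph introduces. This is a minimal sketch; the layer names are the standard ones, but the parenthetical examples and the wording of the informal layers are mine, not anything from the OSI documents:

```python
# The OSI layers plus the informal human extensions discussed below.
# Layer names 1-7 are standard; the examples and the labels for 8-9 are illustrative.
layers = {
    1: ("Physical",     "cables and radio - the province of telcos and ISPs"),
    2: ("Data link",    "Ethernet and Wi-Fi framing"),
    3: ("Network",      "IP - part of the 'middle' the Internet Society wants protected"),
    4: ("Transport",    "TCP, UDP"),
    5: ("Session",      "managing connections"),
    6: ("Presentation", "encoding, encryption"),
    7: ("Application",  "the web, email - what users see and politicians fight over"),
    8: ("(informal)",   "people and money - beyond the standard model"),
    9: ("(informal)",   "politics and policy - where the next paragraph places us"),
}

for number in sorted(layers, reverse=True):
    name, example = layers[number]
    print(f"Layer {number}: {name:<13} {example}")
```

Nothing below layer 7 ever appears on a phone screen, which is roughly why the fights happen at the top.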

We are at a layer the OSI model failed to recognize, identified by the engineer Evi Nemeth. We - digital and human rights activists, regulators, policy makers, social scientists, net.wars readers - are at layer 9.

So the question we started with might also be phrased, "Is it possible to regulate the application layer while leaving the underlying infrastructure undamaged?" Put like that, it feels like it ought to be. Yet aspects of Internet regulation definitely entangle downwards. Most are surveillance-related, such as the US requirement that ISPs enable interception and data retention. Emerging demands for localized data storage and the General Data Protection Regulation also may penetrate more deeply while raising issues of extraterritorial jurisdiction. GDPR seeds itself into other countries like the stowaway recursive clause of the GNU General Public License for software: both require their application to onward derivatives. Localized data storage demands blocks and firewalls instead of openness.

Twenty years ago, you could make this pitch to policy makers: if you break the openness of the Internet by requiring a license to start an online business, or implementing a firewall, or limiting what people can say and do, you will be excluded from the Internet's economic and social benefits. Since then, China has proved that a national intranet can still fuel big businesses. Meanwhile, as the retail sector craters and a new Facebook malfeasance surfaces near-daily, the policy maker might respond that the FAANG/Fab Five pay far less in tax than the companies they've put out of business, employment precarity is increasing, and the FAANGs wield disproportionate power while enabling abusive behavior and the spread of extremism and violence. We had open innovation and this is what it brought us.

To old-timers this is all kinds of confusion. As I said recently on Twitter, it's subsets all the way down: Facebook is a site on the web, and the web is an application that runs on the Internet. They are not equivalents - at least, not here. In countries where Facebook's Free Basics is zero-rated, the two are functionally equivalent.

Somewhere in the midst of a discussion yesterday about all this, airline safety came up as an instructive comparison. That industry understood very early that safety was crucial to its success. Within 20 years of the Wright Brothers' first flight in 1903, the nascent industry was lobbying the US Congress for regulation; the first airline safety bill passed in 1926. If the airline industry had instead been founded by the sort of libertarians who have dominated large parts of Internet development...well, the old joke about the exchange between General Motors and Bill Gates applies. The computer industry has gotten away with refusing responsibility for 40 years because it does not believe we'll ever stop buying its products, and we let it.

There's a lot to say about the threat of regulatory capture even in two highly regulated industries, medicine and air travel, and maybe we'll say it here one week soon, but the overall point is that outside of the open source community, most stakeholders in today's Internet lack the kind of overarching common goal that continues to lead airlines and airplane manufacturers to collaborate on safety despite also being fierce competitors. The computer industry, by contrast, has spent the last 50 years mocking government for being too slow to keep up with technological change while actively refusing to accept any product liability for software.

In our present context, the "Internet invariants" seem almost quaint. Yet I hope the Internet Society succeeds in protecting the Internet's openness because I don't believe our present situation means that the open Internet has failed. Instead, the toxic combination of neoliberalism, techno-arrogance, and the refusal of responsibility (by many industries - just today, see pharma and oil) has undermined the social compact the open Internet reflected. Regulation is not the enemy. *Badly-conceived* regulation is. So the question of what good regulation looks like is crucial.


Illustrations: Evi Nemeth's adapted OSI model, seen here on a T-shirt historically sold by the Internet Systems Consortium.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 28, 2019

Systemic infection

"Can you keep a record of every key someone enters?"

This question brought author and essayist Ellen Ullman up short when she was still working as a software engineer and it was posed to her circa 1996. "Yes, there are ways to do that," she replied after a stunned pause.

In her 1997 book Close to the Machine, Ullman describes the incident as "the first time I saw a system infect its owner". After a little gentle probing, her questioner, the owner of a small insurance agency, explained that now that he had installed a new computer system he could find out what his assistant, who had worked for him for 26 years and had picked up his children from school when they were small, did all day. "The way I look at it," he explained, "I've just spent all this money on a system, and now I get to use it the way I'd like to."

Ullman appeared to have dissuaded this particular business owner on this particular occasion, but she went on to observe that over the years she saw the same pattern repeated many times. Sooner or later, someone always realizes that the systems they have commissioned for benign purposes can be turned to making checks and finding out things they couldn't know before. "There is something...in the formal logic of programs and data, that recreates the world in its own image," she concludes.

I was reminded of this recently when I saw a report at The Register that the US state of New Jersey, along with two dozen others, may soon require any contractor working on a contract worth more than $100,000 to install keylogging software to ensure that they're actually working all the hours - one imagines that eventually, it will be minutes - they bill for. Veteran reporter Thomas Claburn goes on to note that the text of the bill was provided by TransparentBusiness, a maker of remote work management software, itself a trend.

Speaking as a taxpayer, I can see the point of ensuring that governments are getting full value for our money. But speaking as a freelance writer who occasionally has had to work on projects where I'm paid by the hour or day (a situation I've always tried to avoid by agreeing a rate for the whole job), the distrust inherent in such a system seems poisonous. Why are we hiring people we can't trust? Most of us who have taken on the risks of self-employment do so because one of the benefits is autonomy and a certain freedom from bosses. And now we're talking about the kind of intensive monitoring that in the past has been reserved for full-time employees - and that none of them have liked much either.

One of the first sectors that is already fighting its way through this kind of transition is trucking. In 2014, Cornell sociologist Karen Levy published the results of three years of research into the arrival of electronic monitoring into truckers' cabs as a response to safety concerns. For truckers, whose cabs are literally their part-time homes, electronic monitoring is highly intrusive; effectively, the trucking company is installing a camera and other sensors not just in their office but also in their living room and bedroom. Instead of using electronics to try to change unsafe practices, she argues, alter the economic incentives. In particular, she finds that the necessity of making a living at low per-mile rates pushes truckers to squeeze the unavoidable hours of unpaid work - waiting for loading and unloading, for example - into their statutory hours of "rest".

The result sounds like it would be familiar to Uber drivers or modern warehouse workers, even if Amazon never deploys the wristbands it patented in 2016. In an interview published this week, Data & Society Institute researcher Alex Rosenblat outlines the results of a four-year study of ride-hail drivers across the US and Canada. Forget the rhetoric that these drivers are entrepreneurs, she writes; they have a boss, and it's the company's algorithm, which dictates their on-the-job behavior and withholds the data they need to make informed decisions.

If we do nothing, this may be the future of all work. In a discussion last week, University of Leicester associate professor Phoebe Moore located "quantified work" at the intersection of two trends: first, the health-oriented quantified-self movement, and second, the succeeding waves of workplace management from industrialization through time and motion study, scientific management, and today's organizational culture, where, as Moore put it, we're supposed to "love our jobs and identify with our employer". The first of these has led to "wellness" programs that, particularly in the US, helped grant employers access to vastly more detailed personal data about their employees than has ever been available to them before.

Quantification, the combination of the two trends, Moore warns at Medium, will alter the workplace's social values by tending to pit workers against each other, racetrack-style. Vendors now claim predictive power for AI: which prospective employees fit which jobs, or when staff may be about to quit or take sick leave. One can, as Moore does, easily imagine that, despite the improvements AI can bring, the AI-quantified workplace will be intensely worker-hostile. The infection continues to spread.


Illustrations: HAL, from 2001: A Space Odyssey (1968).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 22, 2019

Metropolis

"As a citizen, how will I know I live in a smarter city, and how will life be different?" This was probably the smartest question asked at yesterday's Westminster Forum seminar on smart cities (PDF); it came from Tony Sceales, acting as moderator.

"If I feel safe and there's less disruption," said Peter van Manen. "You won't necessarily know. Thins will happen as they should. You won't wake up and say, 'I'm in the city of the future'," said Sam Ibbott. "Services become more personalized but less visible," said Theo Blackwell the Chief Digital Office for London.

"Frictionless" said Jacqui Taylor, offering it as the one common factor she sees in the wildly different smart city projects she has encountered. I am dubious that this can ever be achieved: one person's frictionless is another's desperate frustration: streets cannot be frictionless for *both* cars and cyclists, just as a city that is predicted to add 2 million people over the next ten years can't simultaneously eliminate congestion. "Working as intended" was also heard. Isn't that what we all wish computers would do?

Blackwell had earlier mentioned the "legacy" of contactless payments for public transport. To Londoners smushed into stuffed Victoria Line carriages in rush hour, the city seems no smarter than it ever was. No amount of technological intelligence can change the fact that millions of people all want to go home at the same time or the housing prices that force them to travel away from the center to do so. We do get through the ticket barriers faster.

"It's just another set of tools," said Jennifer Schooling. "It should feel no different."

The notion that you won't know as the city you live in smartens up should sound alarm bells. The fair reason for that hiddenness is that, as Sara Degli Esposti pointed out at this year's Computers, Privacy, and Data Protection, this whole area is a business-to-business market. "People forget that, especially at the European level. Users are not part of the picture, and that's why we don't see citizens engaged in smart city projects. Citizens are not the market. This isn't social media."

She was speaking at CPDP's panel on smart cities and governance, convened by the University of Stirling's William Webster, who has been leading a research project, CRISP, to study these technologies. CRISP asked a helpfully different question: how can we use smart city technologies to foster citizen engagement, coproduction of services, development of urban infrastructure, and governance structures?

The interesting connection is this: it's no surprise when CPDP's activists, regulators, and academics talk about citizen engagement and participation, or deplore a model in which smart cities are a business-led excuse for corporate and government surveillance. The surprise comes when, two weeks later, the same themes arise among Westminster Forum's more private and public sector speakers and audience. These are the people who are going to build these new programs and services, and they, too, are saying they're less interested in technology and more interested in solving the problems that keep citizens awake at night: health, especially.

There appears to be a paradigm shift beginning to happen as municipalities begin to seriously consider where and on what to spend their funds.

However, the shift may be solely European. At CPDP, Canadian surveillance studies researcher David Murakami Wood told the story of Toronto, where (Google owner) Alphabet subsidiary Sidewalk Labs swooped in circa 2017 with proposals to redevelop the Quayside area in partnership with Waterfront Toronto. The project has been hugely controversial - there were hearings this week in Ottawa.

As Murakami Wood tells it, for Sidewalk Labs the area is a real-world experiment using real people's lives as input to create products the company can later sell elsewhere. The company has made clear it intends to keep all the data the infrastructure generates on its servers in the US, as well as all the intellectual property rights. This, Murakami Wood argued, is the real cost of the "free" infrastructure. It is also, as we're beginning to see elsewhere, the extension of online tracking - or, as Murakami Wood put it, surveillance capitalism - into the physical world: cultural appropriation at municipal scale by a company that has no track record of constructing buildings, or even of publishing detailed development plans. Small wonder that Murakami Wood laughed when he heard Sidewalk Labs CEO Dan Doctoroff impress a group of enthusiastic young Canadian bankers with the news that the company had been studying cities for *two years*.

Putting these things together, we have, as Andrew Adams suggested, three paradigms, which we might call US corporate, Chinese authoritarian, and, emerging, European participatory and cooperative. Is this the choice?

Yes and no. Companies obviously want to develop systems once, sell them everywhere. Yet the biggest markets are one-off outliers. "Croydon," said Blackwell, "is the size of New Orleans." In addition, approaches vary widely. Some places - Webster mentioned Glasgow - are centralized command and control; others - Brazil - are more bottom-up. Rick Robinson finds that these do not meet in the middle.

The clear takeaway overall is that local context is crucial in shaping smart city projects and, despite some common factors, each one is different. We should build on that.


Illustrations: Fritz Lang's Metropolis (1927).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 8, 2019

Doing without

Over at Gizmodo, Kashmir Hill has conducted a fascinating experiment: cutting, in turn, Amazon, Facebook, Google, Microsoft, and Apple, culminating with a week without all of them. Unlike the many fatuous articles in which privileged folks boast about disconnecting, Hill is investigating a serious question: how deeply have these companies penetrated into our lives? As we'll see, this question encompasses the entire modern world.

For that reason, it's important. Besides, as Hill writes, it's wrong to answer objections to GAFAM's business practices - or their privacy policies - with, "Well, don't use them, then." It may be possible to buy from smaller sites and local suppliers, delete Facebook, run Linux, switch to AskJeeves and OpenStreetMap, and dump the iPhone, but doing so requires a substantial rethink of many tasks. As regulators consider curbing GAFAM's power, Hill's experiment shows where to direct our attention.

Online, Amazon is the hardest to avoid. As Lina M. Khan documented last year, Amazon underpins an ever-increasing amount of Internet infrastructure. Netflix, Signal, the WELL, and Gizmodo itself all run on top of Amazon's cloud services, AWS. To ensure she blocked all of them, Hill got a technical expert to set up a VPN that blocked all IP addresses owned by each company and monitored attempted connections. Even that, however, was complicated by the use of content delivery networks, which mask the origin of network traffic.
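To make the mechanics concrete, here is a minimal Python sketch of the kind of check such a blocker performs - my illustration, not the tool Hill's expert actually built. AWS does publish its address ranges at https://ip-ranges.amazonaws.com/ip-ranges.json, but the two ranges below are samples for demonstration only.

```python
# Minimal sketch: test outgoing destinations against a provider's CIDR ranges.
# The ranges here are illustrative samples, not a complete or current list.
import ipaddress

BLOCKED_RANGES = [
    ipaddress.ip_network("52.94.0.0/22"),   # sample AWS-style range
    ipaddress.ip_network("54.230.0.0/16"),  # sample CDN-style range
]

def is_blocked(ip: str) -> bool:
    """Return True if the address falls inside any blocked range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED_RANGES)

for destination in ["52.94.1.10", "8.8.8.8"]:
    print(destination, "blocked" if is_blocked(destination) else "allowed")
```

The CDN problem Hill ran into follows directly: when a site you actually want is served from one of those same ranges, a check like this cannot tell it apart from the company you are trying to avoid.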

Barring Facebook also means dumping Instagram and WhatsApp, and, as Hill notes, changing the sign-in procedure for any website where you've used your Facebook ID. Even if you are a privacy-conscious net.wars reader who would never grant Facebook that pole position, the social media buttons and ubiquitous trackers on most websites also have to go.

For Hill, blocking Apple - which seems easy to us non-Apple users - was "devastating". But this is largely a matter of habit, and habits can be re-educated. The killer was the apps: because iMessage reroutes texts to its own system, some of Hill's correspondents' replies never arrive, and she can't FaceTime her friends. Her conclusion: "It's harder to get out of Apple's ecosystem than Google's." However, once out she found it easy to stay that way - as long as she could resist her friends pulling her back in.

Google proved easier than expected despite her dependence on its services - Maps, calendar, browser. Here the big problem was email. The amount of stored information made it impossible to simply move and delete the account; now we know why Google provides so much "free" storage space. As with Amazon, the bigger issue was all the services Google underpins - trackers, analytics, and, especially, Maps, on which Uber, Lyft, and Yelp depend. Hill should be grateful she didn't have a Nest thermostat and doesn't live in Minnesota. The most surprising bit is that so many sites load Google *fonts*. Also, like Facebook, Google has spread logins across the web, and Hill had to find an alternative to Dropbox, which uses Google to verify users.

In our minds, Microsoft is like Apple. Don't like Windows? Get a Mac or use Linux. Ah, but: I have seen the Windows Blue Screen of Death on scheduling systems on both the London Underground and Philadelphia's SEPTA. How many businesses that I interact with depend on Microsoft products? PCs, Office, and Windows servers and point-of-sale systems are everywhere. A VPN can block LinkedIn, Skype, and (sadly) Github - but it can't block those embedded systems, or the back-office systems at your bank. You can sell your Xbox, but even the local film society shows movies using VLC on Windows.

Hill's final episode, in which she eliminates all five simultaneously, posted just last night. As expected, she struggles to find alternative ways to accomplish many tasks she hasn't had to think about before. Ironically, this is easier if you're an Old Net Curmudgeon: as soon as she says large file, can't email, I go, "FTP!" while various web services all turn out to be hosted on AWS, and she eventually lands on "command line". It's a definite advantage if you remember how you did stuff *before* the Internet - cash can pay the babysitter (or write a check!), and old laptops can be repurposed to run Linux. Even so, complete avoidance is really only realistic for a US Congressman. The hardest for me personally would be giving up my constant companion, DuckDuckGo, which is hosted on...AWS.

Several things need to happen to change this - and we *should* change it because otherwise we're letting them pwn us, as in Dave Eggers' The Circle. The first is making the tradeoffs visible, so that we understand who we're really benefiting and harming with our clicks. The second is regulatory: Lina Khan described in 2017 how to rethink antitrust law to curb Amazon. Facebook, as Marc Rotenberg told CNBC last week, should be required to divest Instagram and WhatsApp. Both Facebook and Google should spin off their identity verification and web-wide login systems into separate companies, or discontinue them. Third, we should encourage alternatives by using them.

But the last thing is the hardest: we must convince all our friends that it's worth putting up with some inconvenience. As a lifelong non-drinker living in pub-culture Britain, I can only say: good luck with that.


Illustrations: Kashmir Hill and her new technology.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 3, 2019

Prognostalgia

"What seems to you like the big technology story of 2018?" I asked a friend. "The lack of excitement," she replied.

New stuff - the future - used to be a lot more fun, a phenomenon that New York Times writer Eric Schulmuller has dubbed prognostalgia. While Isaac Asimov, in predicting the world of 2019 in 1983 or of 2014 in 1964, correctly but depressingly foresaw that computers might exacerbate social and economic divisions, he also imagined that this year we'd be building bases on other planets. These days, we don't even explore the unfamiliar corners of the Internet.

So my friend is right. The wow! new hardware of 2018 was a leather laptop. We don't hear so much about grand visions like organizing the world's information or connecting the world. Instead, the most noteworthy app of 2018 may have been Natural Cycles - particularly for its failures.

Smartphones have become commodities, even in Japan. In 2004, visiting Tokyo seemed like time-traveling to the future. People loved their phones so much they adorned them with stuffed animals and tassels. In 2018, people stare at them just as much, but the color is gone. If Tokyo still offers a predictive glimpse, the future looks like meh.

In technopolitics, 2018 seems to have been the most relentlessly negative since 1998, when the first Internet backlash was paralleling the dot-com boom. Then, the hot, new kid on the block was Google, which as yet was - literally - a blank page: logo, search box, no business model. Nothing to fear. On the other hand...the stock market was wildly volatile, especially among Internet stocks, which mostly rose 1929-style at every glance (Amazon, despite being unprofitable, rose 1,300%). People were fighting governments over encryption, especially to block key escrow. There was panic about online porn. A new data protection law was abroad in the land. A US president was under investigation. Yes, I am cherry-picking.

Over the course of 2018 net.wars has covered the modern versions of most of these. Australia is requiring technology companies to make cleartext available when presented with a warrant. The rest of the Five Eyes apparently intend to follow suit. Data breaches keep getting bigger, and although security issues keep getting more sophisticated and more pervasive, the causes of those breaches are often the same old stupid mistakes that we, the victims, can do nothing about. A big theme throughout the year was the ethics of AI. Finally, there has been little good news for cryptocurrency fanciers, no matter what the currencies' eventual usefulness may be. About bitcoin, at least, our previous skepticism appears justified.

The end of the year did not augur well for what's coming next. We saw relatively low-cost cyber attacks that disrupted daily physical life as opposed to infrastructure targets: maybe-drones shut down Gatwick Airport, and malware disrupted printing and distribution on a platform shared by numerous US newspapers. The drone attack - if that's what it was - is probably the more significant: uncertainty is poisonously disruptive. As software is embedded into everything, increasingly we will be unable to trust the physical world or predict the behavior of nearby objects. There will be much more of this - and a backlash is also beginning to take physical form, as people attack Waymo self-driving cars in Arizona. Jurisdictional disputes - who gets to compel the production of data and in which countries - will continue to run. The US's CLOUD Act, a response to the Microsoft case, requires US companies to turn over data on US citizens when ordered to do so, no matter where the data is stored - the envy of other major governments. These are small examples of the incoming Internet of Other People's Things.

A major trend that net.wars has not covered much is China's inroads into supplying infrastructure to various countries in Africa and elsewhere, such as Venezuela. The infrastructure that is spreading now comes from a very different set of cultural values than the Internet of the 1990s (democratic and idealistic) or the web of the 2000s (commercial and surveillant).

So much of what we inevitably write about is not only negative but repeatedly so, as the same conflicts escalate inescapably year after year, that it seems only right to try to find a few positive things to start 2019.

On Twitter, Lawrence Lessig notes that for the first time in 20 years work is passing into the public domain. Freed for use and reuse are works from Edgar Rice Burroughs, Carl Sandburg, DH Lawrence, Aldous Huxley, and Agatha Christie. Music: "Who's Sorry Now?" and works by Bela Bartok. Film: early Buster Keaton and Charlie Chaplin. Unpause, indeed.

In the US, Democrats are arriving to reconfigure Congress, and while both parties have contributed to increasing surveillance, tightening copyright, and extending the US's territorial reach, the restoration of some balance of powers is promising.

In the UK, the one good thing to be said about the Brexit mess is that the acute phase will soon end. Probably.

So, the future is no fun and the past is gone, and we're left with a messy present that will look so much better 50 years from now. Twas ever thus. Happy new year.


Illustrations: New Year's fireworks in Sweden (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 25, 2018

The Rochdale hypothesis

First, open a shop. Thus the pioneers of Rochdale, Lancashire, began the process of building their town. Faced with the loss of jobs and income brought by the Industrial Revolution, a group of 28 people, about half of them weavers, designed the set of Rochdale principles, and set about finding £1 each to create a cooperative that sold a few basics. Ten years later, Wikipedia tells us, Britain was home to thousands of imitators: cooperatives became a movement.

Could Rochdale form the template for building a public service internet?

This was the endpoint of a day-long discussion held as part of MozFest and led by a rogue band from the BBC. Not bad, considering that it took us half the day to arrive at three key questions: What is public? What is service? What is internet?

Pause.

To some extent, the question's phrasing derives from the BBC's remit as a public service broadcaster. "Public service" is the BBC's actual mandate; broadcasting, the activity it's usually identified with, is only the means by which it fulfills that mission. There might be - are - other choices. To educate, to inform, to entertain: those are its mandate. None of them says radio or TV.

Probably most of the BBC's many global admirers don't realize how broadly the BBC has interpreted that. In the 1980s, it commissioned a computer - the Acorn-built BBC Micro; Acorn went on to spawn ARM, whose chips today power smartphones - and a series of TV programs to teach the nation about computing. In the early 1990s, it created a dial-up Internet Service Provider to help people get online. Some ten or 15 years ago I contributed to an online guide to the web for an audience with little computer literacy. This kind of thing goes way beyond what most people - for example, Americans - mean by "public broadcasting".

But, as Bill Thompson explained in kicking things off, although 98% of the public has some exposure to the BBC every week, the way people watch TV is changing. Two days later, the Guardian reported that the broadcasting regulator, Ofcom, believes the BBC is facing an "existential crisis" because the younger generation watches significantly less television. An eighth of young people "consume no BBC content" in any given week. When everyone can access the best of TV's back catalogue on a growing array of streaming services, and technology giants like Netflix and Amazon are spending billions to achieve worldwide dominance, the BBC must change to find new relevance.

So: the public service Internet might be a solution. Not, as Thompson went on to say, the Internet to make broadcasting better, but the Internet to make *society* better. Few other organizations in the world could adopt such a mission, but it would fit the BBC's particular history.

Few of us are happy with the Internet as it is today. Mozilla's 2018 Internet Health Report catalogues problems: walled gardens, constant surveillance to exploit us by analyzing our data, widespread insecurity, and increasing censorship.

So, again: what does a public service Internet look like? What do people need? How do you avoid the same outcome?

"Code is law," said Thompson, citing Lawrence Lessig's first book. Most people learned from that book that software architecture could determine human behaviour. He took a different lesson: "We built the network, and we can change it. It's just a piece of engineering."

Language, someone said, has its limits when you're moving from rhetoric to tangible service. Canada, they said, declared the Internet a "basic service" - but it changed nothing. "It's still concentrated and expensive."

Also: how far down the stack do we go? Do we rewrite TCP/IP? Throw out the web? Or start from outside and try to blow up capitalism? Who decides?

At this point an important question surfaced: who isn't in the room? (Everyone except the 30-odd people present, but don't get snippy.) Last week, the Guardian reported that the growth of Internet access is slowing - a lot. UN data, to be published next month by the Web Foundation, shows growth dropped from 19% in 2007 to less than 6% in 2017. The report estimates that it will be 2019, two years later than expected, before half the world is online, and large numbers may never get affordable access. Most of the 3.8 billion unconnected are rural poor, largely women, and they are increasingly marginalized.

The Guardian notes that many see no point in access. There's your possible starting point. What would make the Internet valuable to them? What can we help them build that will benefit them and their communities?

Last week, the New York Times suggested that conflicting regulations and norms are dividing the Internet into three: Chinese, European, and American. They're thinking small. Reversing the Internet's increasing concentration and centralization can't be done by blowing up the center, because the center will fight back. But decentralizing by building cooperatively at the edges...that is a perfectly possible future consonant with the Internet's past, even if we can't really force clumps of hipsters to build infrastructure in former industrial towns by luring them there with cheap housing. Cue Thompson again: he thought of this before, and he can prove it: here's his 2000 manifesto on e-mutualism.

Building public networks in the many parts of Britain where access is a struggle...that sounds like a public service remit to me.

Illustrations: The Unity sculpture, commemorating the 150th anniversary of the Rochdale Pioneers (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 18, 2018

Not the new oil

"Does data age like fish or like wine?" the economist Diane Coyle asked last week. It was one of a long list of questions she suggested researchers need to answer, in a presentation at the newly created Ada Lovelace Institute. More important, the meeting generally asked: how can data best be used to serve the common good? The institute is being set up to answer this sort of question.

This is a relatively new way of looking at things that has been building up over the last year or two - active rather than passive, social rather than economic, and requiring a different approach from traditional discussions of individual privacy. That might mean stewardship - management as a public good - rather than governance according to legal or quasi-legal rules; and a new paradigm for privacy, which for the last decades has been cast as an individual right rather than a social compact. As we have argued here before, it is long since time to change that last bit, a point made by Ivana Bartoletti, head of the data privacy and data protection practice for GemServ.

One of the key questions for Coyle, as an economist, is how to value data - hence the question about how it ages. In one effort, she tried to get price and volume statistics from cloud providers, and found no agreement on how they thought about their business or how they made the decision to build a new data center. Bytes are the easiest to measure - but that's not how they do it. Some thought about the number of data records, or computations per second, but these measures are insufficient without knowing the content.

"Forget 'the new oil'," she said; the characteristics are too different. Well, that's good news in a sense; if data is not the new oil then we don't have to be dinosaur bones or plankton. But given how many businesses have spent the last 20 years building their plans on the presumption that data *is* the new oil, getting them to change that view will be an uphill slog. Coyle appears willing to try: data, she said, is a public good, non-rivalrous in use, and, like many digital goods, with high fixed but low marginal costs. She went on to say, however, that personal data is not valuable, citing the small price you get if you divide Facebook's profits across its many users.

This is, of course, not really true, any more than you can decide between wine and fish: data's value depends on the beholder, the beholder's purpose, the context, and a host of other variables. The same piece of data may be valueless at times and highly valuable at others. A photograph of Brett Kavanaugh and Christine Blasey Ford on that bed in 1982, for example, would have been relatively valueless at the time, and yet be worth a fortune now, whether to suppress or to publish. The economic value might increase as long as it was kept secret - but diminish rapidly once it was made public, while the social value is zero while it's secret but huge if made public. As commodities go, data is weird. Coyle invoked Erwin Schrödinger: you don't know what you've got until you look at it. And even then, you have to keep looking as circumstances change.

That was the opening gambit, but a split rapidly surfaced in the panel, which also included Emma Prest, the executive director of DataKind. Prest and Bartoletti raised issues of consent and ethics, and data turned from a public good into a matter of human rights.

If you're a government or a large company focused on economic growth, then viewing data as a social good means wringing as much profit as you can out of it. That to date has been the direction, leading to amassing giant piles of the stuff and enabling both open and secret trades in surveillance and tracking. One often-proposed response is to apply intellectual property rights; the EU tried something like this in 1996 when it passed the Database Directive, generally unloved today, but this gives organizations rights in databases they compile. It doesn't give individuals property rights over "my" data. As tempting as IP rights might be, one problem is that a lot of data is collaboratively created. "My" medical record is a composite of information I have given doctors and their experience and knowledge-based interpretation. Shouldn't they get an ownership share?

Of course someone - probably a security someone - will be along shortly to point out that ethics, rights, and public goods are not things criminals respect. But this isn't about bad guys. Oil or not, data has always also been a source of power. In that sense, it's heartening to see that so many of these conversations - at the nascent Ada Lovelace Institute, at the St Paul's Institute (PDF), at the LSE, and at Data & Society, to name just a few - are taking place. If AI is about data, robotics is at least partly about AI in a mobile substrate. Eventually, these discussions of the shape of the future public sphere will be seen for what they are: debates over the future distribution of power. Don't tell Whitehall.


Illustrations: Ada Lovelace.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 3, 2018

JAQing off

Years ago, when I used to be called in to do various types of discussion TV and radio programs as the token skeptic, a friend said I should turn these invitations down. He had done so himself, on the basis that absent specialists to provide "the other side" of the debate, the programs would drop the item.

I was less sure, because before The Skeptic was founded, those programs did still run - with the opposition provided by a different type of anti-science. Programs featuring mediums and ghost-seers would debate religious representatives who saw these activities and apparitions as evil, but didn't doubt their existence or discuss the psychology of belief. So I thought the more likely outcome was that the programs would run anyway, but be more damaging to the public perception of science.

I did, however, argue (I think in a piece for New Scientist) that matters of fact should not be fodder for "debate" on TV. "Balance" was much-misused even in the early 1990s, and I felt that if you defined it as "presenting two opposing points of view" then every space science story would require a quote from the Flat Earth Society. Fortunately, no one has gone that far in demanding "balance". Yet.

Deborah Lipstadt opened her book Denying the Holocaust with an argument like my friend's. She refuses to grant Holocaust deniers the apparent legitimacy of a platform. This is ultimately an economic question: if the producers want spectacular mysteries, then the skeptic is there partly as a decorative element and partly to absolve the producers of the accusation that they're promoting nonsense. The program runs if you decline. If they want Deborah Lipstadt as their centerpiece, then she is in a position to demand that her fact-based work not be undermined by some jackass presenting "an alternative view".

Maybe that should be JAQass. A couple of weeks ago, Slate ran a piece by AskHistorians subreddit volunteer moderator Johannes Breit. He and his fellow moderators, who sound like they come from the Television without Pity school of moderation, keep AskHistorians high-signal, low-noise by ruthlessly stamping on speculation, abuse, and anything that smacks of denial of established fact. The Holocaust and Nazi Germany are popular topics, so Holocaust denial is a particular target of their efforts.

"Conversation is impossible if one side refuses to acknowledge the basic premise that facts are facts," he writes. And then: "Holocaust denial is a form of political agitation in the service of bigotry, racism, and anti-Semitism." For these reasons, he argues that Mark Zuckerberg's announced plan to control anti-Semitic hate speech on Facebook is to remove only postings that advocate violence will not work: In Breit's view, "Any attempt to make Nazism palatable again is a call for violence." Accordingly, the AskHistorians moderators have a zero-tolerance policy even for "just asking questions" - or JAQing, a term I hadn't noticed before - which in their experience is not innocent questioning at all, but deliberate efforts to sow doubt in the audiences' minds.

"Just asking questions" was apparently also Gwyneth Paltrow's excuse for not wanting to comply with Conde Nast's old-fangled rules about fact checking. It appears in HIV denial (that is, early 1990s Sunday Times-style refusal to accept the scientific consensus about the cause of AIDS).

One reason the AskHistorians moderators are so unforgiving, Breit writes, is that the forum shares a host - Reddit - with myriad other subcommunities that are "notorious for their toxicity". I'd argue this is a feature as well as a bug: AskHistorians' regime would be vastly harder to maintain if there weren't other places where people can blow off steam and vent their favorite anti-matter. As much as I loathe a business that promotes dangerous and unhealthy practices in the name of "wellness", I'm still a free speech advocate - actual free speech, not persecuted-conservative-mythology free speech.

I agree with Breit that Zuckerberg's planned approach for Facebook won't work. But Breit's approach isn't applicable either because of scale: AskHistorians, with a clearly defined mission and real expert commenters, has 37 moderators. I can't begin to guess how many that would translate to for Facebook, where groups are defined but the communities that form around each individual poster are not. That said, if you agree with Breit about the purpose of JAQ, his approach is close to the one I've always favored: distinguishing between content and behavior.

Mostly, we need principles. Without them, we have a patchwork of reactions but no standards to debate. We need not to confuse Google and Facebook with the internet. And we need to think about the readers as well as the posters. Finally, we need to understand the tradeoffs. History teaches us a lot about the price of abrogating free speech. The events of the last two years have taught us that our democracies can be undermined by hostile actors turning social media to their own purposes.

My suspicion is that it's the economic incentives underlying these businesses that have to be realigned, and that the solution to today's problems is less about limiting speech than about changing business models to favor meaningful connection rather than "engagement" (aka outrage). That probably won't be enough by itself, but it's the part of the puzzle that is currently not getting enough attention.


Illustrations: Benjamin Franklin, who said, "Whoever would overthrow the liberty of a nation must begin by subduing the freeness of speech."

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 27, 2018

Think horses, not zebras

These two articles made a good pairing: Oscar Schwartz's critique of AI hype in the Guardian, and Jennings Brown's takedown of IBM's Watson in real-world contexts. Brown's tl;dr: "This product is a piece of shit," a Florida doctor reportedly told IBM in the leaked memos on which Gizmodo's story is based. "We can't use it for most cases."

Watson has had a rough ride lately: in August 2017 Brown catalogued mounting criticisms of the company and its technology; that June, MIT Technology Review did, too. All three agree: IBM's marketing has outstripped Watson's technical capability.

That's what Schwartz is complaining about: even when scientists make modest claims, media and marketing hype them to the hilt. As a result, instead of focusing on design and control issues such as how to encode social fairness into algorithms, we're reading Nick Bostrom's suggestion that an uncontrolled superintelligent AI would kill humanity in the interests of making paper clips, or the EU's deliberation about whether robots should have rights. These are not urgent issues, and focusing on them benefits only vendors who hope we don't look too closely at what they're actually doing.

Schwartz's own first example is the Facebook chat bots that were intended to simulate negotiation-like conversations. Just a couple of days ago someone referred to this as bots making up their own language and cited it as an example of how close AI is to the Singularity. In fact, because they lacked the right constraints, they just made strange sentences out of normal English words. The same pattern is visible with respect to self-driving cars.

You can see why: wild speculation drives clicks - excuse me, monetized eyeballs - but understanding what's wrong with how most of us think about accuracy in machine learning is *mathy*. Yet understanding the technology's very real limits is crucial to making good decisions about it.
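As a hypothetical illustration of that mathiness - my numbers, not Schwartz's - consider how a headline accuracy figure behaves when the thing being detected is rare:

```python
# Toy example: a diagnostic model scored on a condition only 1% of patients have.
# All figures are assumed for illustration.
prevalence = 0.01       # 1 in 100 patients actually has the condition
sensitivity = 0.90      # fraction of sick patients the model flags
specificity = 0.95      # fraction of healthy patients it correctly clears

population = 100_000
sick = population * prevalence
healthy = population - sick

true_positives = sick * sensitivity
false_positives = healthy * (1 - specificity)
true_negatives = healthy * specificity

accuracy = (true_positives + true_negatives) / population
precision = true_positives / (true_positives + false_positives)

print(f"Headline accuracy: {accuracy:.0%}")                           # roughly 95%
print(f"Chance a flagged patient is actually sick: {precision:.0%}")  # roughly 15%
```

A model can look impressively accurate overall while most of its positive calls are wrong - exactly the kind of limit that tends to get lost between the lab and the press release.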

With medicine, we're all particularly vulnerable to wishful thinking, since sooner or later we all rely on it for our own survival (something machines will never understand). The UK in particular is hoping AI will supply significant improvements because of the vast amount of patient - that is, training - data the NHS has to throw at these systems. To date, however, medicine has struggled to use information technology effectively.

Attendees at We Robot have often discussed what happens when the accuracy of AI diagnostics outstrips that of human doctors. At what point does defying the AI's decision become malpractice? At this year's conference, Michael Froomkin presented a paper studying the unwanted safety consequences of this approach (PDF).

The presumption is that the AI system's ability to call on the world's medical literature on top of generations of patient data will make it more accurate. But there's an underlying problem that's rarely mentioned: the reliability of the medical literature these systems are built on. The true extent of this issue began to emerge in 2005, when John Ioannidis published a series of papers estimating that 90% of medical research is flawed. In 2016, Ioannidis told Retraction Watch that systematic reviews and meta-analyses are also being gamed because of the rewards and incentives involved.

The upshot is that it's more likely to be unclear, when doctors and AI disagree, where to point the skepticism. Is the AI genuinely seeing patterns and spotting things the doctor can't? (In some cases, such as radiology, apparently yes. But clinical trials and peer review are needed.) Does common humanity mean the doctor finds clues in the patient's behavior and presentation that an AI can't? (Almost certainly.) Is the AI neutral in ways that biased doctors may not be? Stories of doctors not listening to patients, particularly women, are legion. Yet the most likely scenario is that the doctor will be the person entering data - which means the machine will rely on the doctor's interpretation of what the patient says. In all these conflicts, what balance do we tell the AI to set?

Long before Watson cures cancer, we will have to grapple with which AIs have access to which research. In 2015, the team responsible for drafting Liberia's ebola recovery plan in 2014 wrote a justifiably angry op-ed in the New York Times. They had discovered that thousands of Liberians could have been spared ebola had a 1982 paper for Annals of Virology been affordable for them to read; it warned that Liberia needed to be included in the ebola virus endemic zone. Discussions of medical AI to date appear to handwave this sort of issue, yet cost structures, business models, and use of medical research are crucial. Is the future open access, licensing and royalties, all-you-can-eat subscriptions?

The best selling point for AI is that its internal corpus of medical research can be updated a lot faster than doctors' brains can be. As David Epstein wrote at ProPublica in 2017, many procedures and practices become entrenched, and doctors are difficult to dissuade from prescribing them even when they've been found useless. In the US, he added, the 21st Century Cures Act, passed in December 2016, threatens to make all this worse by lowering standards of evidence.

All of these are pressing problems no medical AI can solve. The problem, as usual, is us.

Illustrations: Watson wins at Jeopardy (via Wikimedia)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 1, 2018

The three IPs

Against last Friday's date, history will record two major European events. The first, as previously noted, is the arrival into force of the General Data Protection Regulation, which is currently inspiring a number of US news sites to block Europeans. The second is the amazing Irish landslide vote to repeal the 8th amendment to the country's constitution, which barred legislators from legalizing abortion. The vote led the MEP Luke Ming Flanagan to comment, "I always knew voters were not conservative - they're just a bit complicated."

"A bit complicated" sums up nicely most people's views on privacy; it captures perfectly the cognitive dissonance of someone posting on Facebook that they're worried about their privacy. As Merlin Erroll commented, terrorist incidents help governments claim that giving them enough information will protect you. Countries whose short-term memories include human rights abuses set their balance point differently.

The occasion for these reflections was the 20th birthday of the Foundation for Information Policy Research. FIPR head Ross Anderson noted on Tuesday that FIPR isn't a campaigning organization, "But we provide the ammunition for those who are."

Led by the late Caspar Bowden, FIPR was most visibly activist in the late 1990s lead-up to the passage of the now-replaced Regulation of Investigatory Powers Act (2000). FIPR in general and Bowden in particular were instrumental in making the final legislation less dangerous than it could have been. Since then, FIPR helped spawn the 15-year-old European Digital Rights and UK health data privacy advocate medConfidential.

Many speakers noted how little the debates have changed, particularly regarding encryption and surveillance. In the case of encryption, this is partly because mathematical proofs are eternal, and partly because, as Yes, Minister co-writer Antony Jay said in 2015, large organizations such as governments always seek to impose control. "They don't see it as anything other than good government, but actually it's control government, which is what they want." The only change, as Anderson pointed out, is that because today's end-to-end connections are encrypted, the push for access has moved to people's phones.

Other perennials include secondary uses of medical data, which Anderson debated in 1996 with the British Medical Association. Among significant new challenges, Anderson, like many others, noted the problems of safety and sustainability. The need to patch devices that can kill you changes our ideas about the consequences of hacking. How do you patch a car over 20 years? he asked. One might add: how do you stop a botnet of pancreatic implants without killing the patients?

We've noted here before that built infrastructure tends to attract more of the same. Today, said Duncan Campbell, 25% of global internet traffic transits the UK; Bude, Cornwall remains the critical node for US-EU data links, as in the days of the telegraph. As Campbell said, the UK's traditional position makes it perfectly placed to conduct global surveillance.

One of the most notable changes in 20 years: there were no fewer than two speakers whose open presence would have been unthinkable: Ian Levy, the technical director of the National Cyber Security Centre, the defensive arm of GCHQ, and Anthony Finkelstein, the government's chief scientific advisor for national security. You wouldn't have seen them even ten years ago, when GCHQ was deploying its Mastering the Internet plan, known to us courtesy of Edward Snowden. Levy made a plea to get away from the angels-versus-demons school of debate.

"The three horsemen, all with the initials 'IP' - intellectual property, Internet Protocol, and investigatory powers - bind us in a crystal lattice," said Bill Thompson. The essential difficulty he was getting at is that it's not that organizations like Google DeepMind and others have done bad things, but that we can't be sure they haven't. Being trustworthy, said medConfidential's Sam Smith, doesn't mean you never have to check the infrastructure but that people *can* check it if they want to.

What happens next is the hard question. Onora O'Neill suggested that our shiny, new GDPR won't work, because it's premised on the no-longer-valid idea that personal and non-personal data are distinguishable. Within a decade, she said, new approaches will be needed. Today, consent is already largely a façade; true consent requires understanding and agreement.

She is absolutely right. Even today's "smart" speakers pose a challenge: where should my Alexa-enabled host post the privacy policy? Is crossing their threshold consent? What does consent even mean in a world where sensors are everywhere and it may be murky how the data will be used and by whom? Many of the laws built up over the last 20 years will have to be rethought, particularly as connected medical devices pose new challenges.

One of the other significant changes will be the influx of new and numerous stakeholders whose ideas about what the internet is are very different from those of the parties who have shaped it to date. The mobile world, for example, vastly outnumbers us; the Internet of Things is being developed by Asian manufacturers from a very different culture.

It will get much harder from here, I concluded. In response, O'Neill was not content. It's not enough, she said, to point out problems. We must propose at least the bare bones of solutions.


Illustrations: 1891 map of telegraph lines (via Wikimedia)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


May 25, 2018

Who gets the kidney?

At first glance, Who should get the kidney? seemed more reasonable and realistic than MIT's Moral Machine.

To recap: about a year ago, MIT ran an experiment, a variation of the old trolley problem, in which it asked visitors in charge of a vehicle about to crash to decide which nearby beings (adults, children, pets) to sacrifice and which to save. Crash!

As we said at the time, people don't think like that. In charge of a car, you react instinctively to save yourself, whoever's in the car with you, and then try to cause the least damage to everything else. Plus, much of the information the Moral Machine imagined - this stick figure is a Nobel prize-winning physicist; this one is a sex offender - just is not available to a car driver in a few seconds and even if it were, it's cognitive overload.

So, the kidney: at this year's We Robot, researchers offered us a series of 20 pairs of kidney recipients and a small selection of factors to consider: age, medical condition, number of dependents, criminal convictions, drinking habits. And you pick. Who gets the kidney?

Part of the idea as presented is that these people have a kidney available to them but it's not a medical match, and therefore some swapping needs to happen to optimize the distribution of kidneys. This part, which made the exercise sound like a problem AI could actually solve, is not really incorporated into the tradeoffs you're asked to make. Shorn of this ornamentation, Who Gets the Kidney? is a simple and straightforward question of whom to save. Or, more precisely, who in future will prove to have deserved to have been given this second chance at life? You are both weighing the value of a human being as expressed through a modest set of known characteristics and trying to predict the future. In this, it is no different from some real-world systems, such as the benefits and criminal justice systems Virginia Eubanks studies in her recent book, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor.
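For what the "problem AI could actually solve" part looks like, here is a toy sketch - mine, not the researchers' - of the swap idea: treat each incompatible donor-recipient pair as a node and look for pairs that can exchange donors. Real kidney-exchange programs solve a much harder version, with longer cycles and chains, typically via integer programming.

```python
# Toy kidney-exchange matcher: find two-way swaps between incompatible pairs.
# The compatibility data below is hypothetical.
compatible_with = {
    "pair_A": {"pair_B"},            # pair A's donor suits pair B's recipient
    "pair_B": {"pair_A", "pair_C"},
    "pair_C": {"pair_B"},
}

def find_two_way_swaps(graph):
    """Return pairings where each pair's donor can give to the other's recipient."""
    swaps, used = [], set()
    for p, targets in graph.items():
        for q in sorted(targets):
            if p < q and p not in used and q not in used and p in graph.get(q, set()):
                swaps.append((p, q))
                used.update({p, q})
    return swaps

print(find_two_way_swaps(compatible_with))  # [('pair_A', 'pair_B')]
```

The point of the exercise at We Robot, though, was the part no solver touches: deciding whose life weighs more.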

I found, as did the others in our group, that decision fatigue sets in very quickly. In this case, the goal - to use the choices to form like-minded discussion clusters of We Robot attendees - was not life-changing, and many of us took the third option, flipping a coin.

At my table, one woman felt strongly that the whole exercise was wrong; she embraced the principle that all lives are of equal value. Our society often does not treat them that way, and one reason is obvious: most people, put in charge of a kidney allocation system, want things arranged so that if they themselves need a kidney, they will get one.

Instinct isn't always a good guide, either. Many people, used to thinking in terms of protecting children and old people as "they've had their chance at life", automatically opt to give the kidney to the younger person. Granted, I'm 64, and see above paragraph, but even so: as distressing as it is to the parents, a baby can be replaced very quickly with modest effort. It is *very* expensive and time-consuming to replace an 85-year-old. It may even be existentially dangerous, if that 85-year-old is the one holding your society's institutional memory. A friend advises that this is a known principle in population biology.

The more interesting point, to me, was discovering that this exercise really wasn't any more lifelike than the Moral Machine. It seemed more reasonable because, unlike the driver in the crashing car, kidney patients have years of documentation of their illness and there is time for them, their families, and their friends to fill in further background. The people deciding the kidney's destination are much better informed, and are operating in the all-too-familiar scenario of allocating scarce resources. And yet: it's the same conundrum, and in the end how many of us want the machine, rather than a human, to decide whether we live or die?

Someone eventually asked: what if we become able to make an oversupply of kidneys? This only solves the top layer of the problem. Each operation has costs in surgeons' time, medical equipment, nursing care, and hospital infrastructure. Absent a disruptive change in medical technology, it's hard to imagine it will ever be easy to give everyone a kidney who needs one. Say it in food: we actually do grow enough food to supply everyone, but it's not evenly distributed, so in some areas we have massive waste and in others horrible famine (and in some places, both).

Moving to current practice, in a Guardian article Eubanks documents the similar conundrums confronting those struggling to allocate low-income housing, welfare, and other basic needs to poor people in the US in a time of government "austerity". The social workers, policy makers, and data scientists on these jobs have to make decisions that, like the kidney and driving examples, have life-or-death consequences. In this case, as Eubanks puts it, they decide who gets helped among "the most exploited and marginalized people in the United States". The automated systems Eubanks encounters do not lower barriers to programs as promised and, she writes, obscure the political choices that created these social problems in the first place. Automating the response doesn't change those.


Illustrations: Project screenshot.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 26, 2018

Game of thrones

"If a sport could have been invented with the possibility of corruption in mind, that sport would be tennis," wrote Richard Ings, the former Administrator of Rules for the Association of Tennis Professionals, in 2005 in a report not published until now.

This is a tennis story - but it's also a story about what happens when new technology meets a porous, populous, easily socially engineered, global system with great economic inequality that can be hacked to produce large sums of money. In other words, it's arguably a prototype for any number of cybersecurity stories.

A newly published independent panel report (PDF) finds a "tsunami" of corruption at the lower levels of tennis in the form of match-fixing and gambling, exactly as Ings predicted. This should surprise no one who's been paying attention. The extreme disparity between the money at the highly visible upper levels and the desperate scratching for the equivalent of worms for everyone else clearly sets up motives. Less clear until this publication were the incentives contributed by the game's structure and the tours' decision to expand live scoring to hundreds of tiny events.

Here's how tennis really works. The players - even those who never pass the first round - at the US Open or Wimbledon are the cream of the cream of the cream, generally ranked in the top 150 or so of the millions globally who play the game. Four times a year at the majors - the Australian Open, Roland Garros, the US Open, and Wimbledon - these pros have pretty good paydays. The rest of the year, they rack up frequent flyer miles, hotel bills, coaches' and trainers' salaries, and the many other expenses that go into maintaining an itinerant business style.

As Michael Mewshaw reported as long ago as 1983 in his book Short Circuit, "tanking" - deliberately losing - is a tour staple despite rules requiring "best efforts". People tank for many reasons: bad mood, fatigue, frustration, weather, niggling injuries, better money on offer elsewhere. But also, as Ings wrote: some matches have no significance, in part because, as Daily Tennis editor Robert Waltz has often pointed out, the ranking system does not penalize losses and pushes players to overplay, likely contributing to the escalating injury rate.

Between the Association of Tennis Professionals (the men's tour) and the Women's Tennis Association, there are more than 3,000 players with at least one ranking point. The report counted 336 men and 253 women who actually break even. Besides them, in 2013 the International Tennis Federation counted 8,874 male and 4,862 female professional tennis players, of whom 3,896 men and 2,212 women earned no prize money.

So, do the math: you're ranked in the low 800s, your shoulder hurts, your year-to-date prize money is $555, and you're playing three rounds of qualifying to earn entry to a 32-player main draw event whose total prize money is $25,000. Tournaments at that level are not required to provide housing or hospitality (food) and you're charged a $40 entry fee. You're a young player gaining match practice and points hoping to move up with all possible speed, or a player rebuilding your ranking after injury, or an aging player on the verge of having to find other employment. And someone who has previously befriended you as a fan offers you $50,000 to throw the match. Not greed, the report writes: desperation.
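To put rough numbers on that desperation - the $40 entry fee, the $25,000 purse, and the $50,000 offer come from the scenario above; the travel costs and the early loser's share of the purse are my assumptions - the week's arithmetic looks something like this:

```python
# Back-of-the-envelope sketch of the incentive problem. Figures marked
# "assumed" are hypothetical; the rest come from the scenario above.
weekly_costs = {
    "flights": 600,          # assumed
    "hotel_and_food": 700,   # assumed
    "coach_share": 400,      # assumed
    "entry_fee": 40,         # from the scenario
}
first_round_prize = 25_000 * 0.01   # assumed ~1% share of a $25,000 purse

honest_week = first_round_prize - sum(weekly_costs.values())
print(f"Honest week with an early loss: {honest_week:+,.0f} dollars")  # about -1,490
print(f"One thrown match: {50_000:+,} dollars")
```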

In some cases, losing in the final round of qualifying doesn't matter because there's already an open slot for a lucky loser. No one but you has to know. No one really *can* know if the crucial shot you missed was deliberate or not, and you're in the main draw anyway and will get that prize money and ranking points. Accommodating the request may not hurt you, but, the report argues, each of those individual decisions is a cancer eating away at the integrity of the game.

Ings foresaw all this in 2005, when he wrote his report (PDF), which appears as Appendix 11 of the new offering. On Twitter, Ings called it his proudest achievement in his time in sports.

Obviously, smartphones and the internet - especially mobile banking - play a crucial role. Between 2001 and 2014 Australian sports betting quintupled. Live scores and online betting companies facilitate remote in-play betting on matches most people don't know exist. And, the report finds, the restrictions on gambling in many countries have not helped; when people can't gamble legally they do so illegally, facilitated by online options. Provided with these technologies, corrupt bettors can bet on any sub-unit of a match: a point, a game, a set. A player who won't throw a match might still throw a point or a set. Bettors can also cheat by leveraging the delay between the second they see a point end on court and the time the umpire pushes the button to send the score to live services across the world. In some cases, the Guardian reported in 2016, corrupt umpires help out by extending that delay.

The fixes needed for all this are like the ones suggested for any other cybersecurity problem. Disrupt the enabling technology; educate the users; and root out the bad guys. The harder - but more necessary - things are fixing the incentives, because doing so requires today's winners to restructure a game that's currently highly profitable for them. Thirteen years on, will they do it?


Illustrations: Tennis ball (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 15, 2017

Bitcoin for dummies

The writers of the sitcom The Big Bang Theory probably thought they were on safe ground in early November when (at a guess) they pegged the price of bitcoin at $5,000 for the episode that had its first airing in the US on November 30 (Season 11, episode 9, "The Bitcoin Entanglement"). By then, it had doubled. This week, it neared $17,500, according to Coindesk. In between, it's dropped as much as 25% in a single day.

All of which explains why I've had numerous conversations this week in which I tried to talk people out of feeling bad that they didn't buy bitcoin back when it was cheap. Mortgaging your house or opening up credit card debt in order to buy bitcoin, as CNBC reports some people are doing, is a disastrously bad idea.

Bitcoin is at the stage where a sense of proportion is in short supply. You've got Deutsche Bank claiming that a bitcoin crash would endanger global markets, the Bank of England saying it's no threat, and Andrew Weilbacher at btcmanager.com arguing in return that the euro will be far more destructive. The Bank of England likely has it right: bitcoin is too small - at its $17,000 peak the whole market is $300 billion - to cause a global crash, even at current prices and volatility. It can certainly crash personal economies quite effectively, though.

But why stop Weilbacher when he's having fun? "Bitcoin is poised to overtake current technology for the internet and finance, not considering all of the other blockchain protocols. If and when this technology passes more archaic versions, it will begin to take on the total market valuation of the internet - $19 trillion - and the financial industry as a whole," he writes. Stuff like this always makes me think of this quote from Wall Street giant (and Warren Buffett teacher) Benjamin Graham: "Bright young men have been promising to work miracles with other people's money since time immemorial."

The dot-com bust was a great example. And yet, at its height in 2000 when even the most insistent dot-com boosters were admitting it was a bursting bubble, even the most skeptical believed that ten years later the internet would be much bigger. Many of those early internet companies never recovered, of course - but the internet still hasn't stopped growing.

So is bitcoin like an internet company or like the internet?

Bitcoin was conceived as two things: a cryptocurrency and a payment system. At the beginning people who mined or bought it were mostly curious and wanted to experiment. It was technically challenging, but cheap. A couple of years ago, we were hearing a lot about its potential for cutting costs out of financial transactions.

That dream is in trouble: the rapid rise in prices is killing bitcoin as a cost-cutter because as bitcoin's exchange rate goes up, so do its transaction costs. About 100,000 outlets worldwide accept payment in bitcoin, but there are also many private uses, particularly in areas where trust in government and the financial system is collapsing. The reality, though, is that very few people seriously use bitcoin as a currency and some of them are reconsidering. Steam, for example, announced on December 6 that it was ceasing to accept bitcoin payments partly because of pricing volatility but mostly because the fees are nearing $20 per transaction, 100 times what it cost when Steam started accepting it.
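
The arithmetic behind that squeeze is simple: miners' fees are denominated in bitcoin (satoshis per byte of transaction data), so the dollar cost of an identical transaction rises in step with the exchange rate. The Python sketch below is illustrative only - the fee rate and transaction size are assumptions chosen to make the numbers visible, not Steam's actual figures.

# Illustrative only: how a fee quoted in satoshis translates to dollars
# as the exchange rate climbs. The fee rate and transaction size are
# assumptions for the sake of the arithmetic, not Steam's actual numbers.

SATOSHIS_PER_BTC = 100_000_000

def fee_in_usd(fee_sat_per_byte: int, tx_size_bytes: int, btc_price_usd: float) -> float:
    """Convert a miner fee (rate in satoshis/byte times size in bytes) into US dollars."""
    fee_btc = fee_sat_per_byte * tx_size_bytes / SATOSHIS_PER_BTC
    return fee_btc * btc_price_usd

# The same 250-byte transaction at 400 satoshis/byte costs very different
# amounts in dollars depending on the exchange rate.
for price in (1_000, 5_000, 17_500):
    print(f"BTC at ${price:,}: fee is about ${fee_in_usd(400, 250, price):.2f}")

In practice the satoshi fee rate itself also rose as the network got congested, so dollar fees climbed even faster than the price.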

There's another problem, too: recent calculations say that the bitcoin transaction network is hideously energy-intensive, and even if miners derive all their power from renewables, it won't be sustainable if prices continue to rise. Even if it is, Visa is vastly faster and vastly more energy-efficient.

Those involved in fintech have been saying for some time that whatever happens to bitcoin, the blockchain, which records transactions in secure but verifiable blocks, is really significant (although older industry guys call it a "distributed ledger" and wonder why all the fuss over a 30-year-old technology). I see no reason not to believe them. However, you can't invest in the blockchain by buying bitcoin. Instead, the people investing in exploiting this are banks, other financial institutions, and large and small technology companies. That being the case, the idea that the power of the system lies in its decentralized peer-to-peer nature that requires no central authority seems likely to die even faster than the same idea about the internet itself. Get your libertarian rhetoric while you can. And your crypto kittens.

Bitcoin is not scaling. That doesn't mean other cryptocurrencies can't, but it does make Derek Thompson, who, writing for The Atlantic, called bitcoin "a digital baseball card, without the faces or stats", even more likely to be right.

So, at present, most bitcoin owners are speculators hoping to cash out by selling to a greater fool. Over the time of bitcoin's existence, mining has moved from ordinary laptops to GPUs, to purpose-built ASICs. Today, most mining is controlled by a relative handful of players with giant clusters. If you are really insistent upon trying to make some money out of the bitcoin bubble, your best bet is the old picks and shovels approach. Needless to say, others have already thought of this.

Bottom line: you may regret missed opportunities but they don't make you feel nearly as stupid as the ones you took but wish you hadn't.


Illustrations: Bitcoin logo.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 23, 2017

Twister

"We were kids working on the new stuff," said Kevin Werbach. "Now it's 20 years later and it still feels like that."

Werbach was opening last weekend's "radically interdisciplinary" (Geoffrey Garrett) After the Digital Tornado, at which a roomful of internet policy veterans tried to figure out how to fix the internet. As Jaron Lanier showed last week, there's a lot of this where-did-we-all-go-wrong happening.

The Digital Tornado in question was a working paper Werbach wrote in 1997, when he was at the Federal Communications Commission. In it, Werbach sought to pose questions for the future, such as what the role of regulation would be around...well, around now.

Some of the paper is prescient: "The internet is dynamic precisely because it is not dominated by monopolies or governments." Parts are quaint now. Then, the US had 7,000 dial-up ISPs and AOL was the dangerous giant. It seemed reasonable to think that regulation was unnecessary because public internet access had been solved. Now, with minor exceptions, the US's four ISPs have carved up the country among themselves to such an extent that most people have only one ISP to "choose" from.

To that, Gigi Sohn, the co-founder of Public Knowledge, named the early mistake from which she'd learned: "Competition is not a given." Now, 20% of the US population still have no broadband access. Notably, this discussion was taking place days before current FCC chair Ajit Pai announced he would end the network neutrality rules adopted in 2015 under the Obama administration.

Everyone had a pet mistake.

Tim Wu, regarding decisions that made sense for small companies but are damaging now they're huge: "Maybe some of these laws should have sunsetted after ten years."

A computer science professor bemoaned the difficulty of auditing protocols for fairness now that commercial terms and conditions apply.

Another wondered if our mental image of how competition works is wrong. "Why do we think that small companies will take over and stay small?"

Yochai Benkler argued that the old way of reining in market concentration, by watching behavior, no longer works; we understood scale effects but missed network effects.

Right now, market concentration looks like Google-Apple-Microsoft-Amazon-Facebook. Rapid change has meant that the past Big Tech we feared would break the internet has typically been overrun. Yet we can't count on that. In 1997, market concentration meant AOL and, especially, desktop giant Microsoft. Brett Frischmann paused to reminisce that in 1997 AOL's then-CEO Steve Case argued that Americans didn't want broadband. By 2007 the incoming giant was Google. Yet, "Farmville was once an enormous policy concern," Christopher Yoo reminded us; so was Second Life. By 2007, Microsoft looked overrun by Google, Apple, and open source; today it remains the third largest tech company. The garage kids can only shove incumbents aside if the landscape lets them in.

"Be Facebook or be eaten by Facebook", said Julia Powles, reflecting today's venture capital reality.

Wu again: "A lot of mergers have been allowed that shouldn't have been." On his list, rather than AOL and Time-Warner, cause of much 1999 panic, was Facebook and Instagram, which the Office of Fair Trading approved because Facebook didn't have cameras and Instagram didn't have advertising. Unrecognized: they were competitors in what Wu has dubbed the attention economy.

Both Bruce Schneier, who considered a future in which everything is a computer, and Werbach, who found early internet-familiar rhetoric hyping the blockchain, saw more oncoming gloom. Werbach noted two vectors: remediable catastrophic failures, and creeping recentralization. His examples of the DAO hack and the Parity wallet bug led him to suggest the concept of governance by design. "This time," Werbach said, adding his own entry onto the what-went-wrong list, "don't ignore the potential contributions of the state."

Karen Levy's "overlooked threat" of AI and automation is a far more intimate and intrusive version of Shoshana Zuboff's "surveillance capitalism"; it is already changing the nature of work in trucking. This resonated with Helen Nissenbaum's "standing reserves": an ecologist sees a forest; a logging company sees lumber-in-waiting. Zero hours contracts are an obvious human example of this, but look how much time we spend waiting for computers to load so we can do something.

Levy reminded that surveillance has a different meaning for vulnerable groups, linking back to Deirdre Mulligan's comparison of algorithmic decision-making in healthcare and the judiciary. The first is operated cautiously with careful review by trained professionals who have closely studied its limits; the second is off-the-shelf software applied willy-nilly by untrained people who change its use and lack understanding of its design or problems. "We need to figure out how to ensure that these systems are adopted in ways that address the fact that...there are policy choices all the way down," Mulligan said. Levy, later: "One reason we accept algorithms [in the judiciary] is that we're not the ones they're doing it to."

Yet despite all this gloom - cognitive dissonance alert - everyone still believes that the internet has been and will be positively transformative. Julia Powles noted, "The tornado is where we are. The dandelion is what we're fighting for - frail, beautiful...but the deck stacked against it." In closing, Lauren Scholz favored a return to basic ethical principles following a century of "fallen gods" including really big companies, the wisdom of crowds, and visionaries.

Sohn, too, remains optimistic. "I'm still very bullish on the internet," she said. "It enables everything important in our lives. That's why I've been fighting for 30 years to get people access to communications networks."


Illustrations: After the Digital Tornado's closing panel (left to right): Kevin Werbach, Karen Levy, Julia Powles, Lauren Scholz; tornado (Justin1569 at Wikipedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 17, 2017

Counterfactuals

On Tuesday evening, virtual reality pioneer and musician Jaron Lanier, in London to promote his latest book, Dawn of the New Everything, suggested the internet took a wrong turn in the 1990s by rejecting the idea of combating spam by imposing a tiny - "homeopathic" - charge to send email. Think where we'd be now, he said. The mindset of paying for things would have been established early, and instead of today's "behavior modification empires" we'd have a system where people were paid for the content they produce.

Lanier went on to invoke the ghost of Ted Nelson who began his earliest work on Project Xanadu in 1960, before ARPAnet, the internet, and the web. The web fosters copying. Xanadu instead gave every resource a permanent and unique address, and linking instead of copying meant nothing ever lost its context.

The problem, as Nelson's 2011 autobiography Possiplex and a 1995 Wired article made plain, is that trying to get the thing to work was a heartbreaking journey filled with cycles of despair and hope that was increasingly orthogonal to where the rest of the world was going. While efforts continue, it's still difficult to comprehend, no matter how technically visionary and conceptually advanced it was. The web wins on simplicity.

But the web also won because it was free. Tim Berners-Lee is very clear about the importance he attaches to deciding not to patent the web and charge licensing fees. Lanier, whose personal stories about internetworking go back to the 1980s, surely knows this. When the web arrived, it had competition: Gopher, Archie, WAIS. Each had its limitations in terms of user interface and reach. The web won partly because it unified all their functions and was simpler - but also because it was freer than the others.

Suppose those who wanted minuscule payments for email had won? Lanier believes today's landscape would be very different. Most of today's machine learning systems, from IBM Watson's medical diagnostician to the various quick-and-dirty translation services, rely on mining an extensive existing corpus of human-generated material. In Watson's case, it's medical research, case studies, peer review, and editing; in the case of translation services it's billions of side-by-side human-translated pages that are available on the web (though later improvements have taken a new approach). Lanier is right that the AIs built by crunching found data are parasites on generations of human-created and curated knowledge. By his logic, establishing payment early as a fundamental part of the internet would have ensured that the humans who created all that data would be paid for their contributions when machine learning systems mined it. Clarity would result: instead of the "cruel" trope that AIs are rendering humans unnecessary, it would be obvious that AI progress relied on continued human input. For that we could all be paid rather than being made "wards of the state".

Consider a practical application. Microsoft's LinkedIn is in court opposing HiQ, a company that scrapes LinkedIn's data to offer employers services that LinkedIn might like to offer itself. The case, which was decided in HiQ's favor in August but is appeal-bound, pits user privacy (argued by EPIC) against innovation and competition (argued by EFF). Everyone speaks for the 500 million whose work histories are on LinkedIn, but no one speaks for our individual ownership of our own information.

Let's move to Lanier's alternative universe and say the charge had been applied. Spam dropped out of email early on. We developed the habit of paying for information. Publishers and the entertainment industry would have benefited much sooner, and if companies like Facebook and LinkedIn had started, their business models would have been based on payments for posters and charges for readers (he claims to believe that Facebook will change its business model in this direction in the coming years; it might, but if so I bet it keeps the advertising).

In that world, LinkedIn might be our broker or agent negotiating terms with HiQ on our behalf rather than in its own interests. When the web came along, Berners-Lee might have thought pay-to-click logical, and today internet search might involve deciding which paid technology to use. If, that is, people found it economic to put the information up in the first place. The key problem with Lanier's alternative universe: there were no micropayments. A friend suggests that China might be able to run this experiment now: Golden Shield has full control, and everyone uses WeChat and AliPay.

I don't believe technology has a manifest destiny, but I do believe humans love free and convenient, and that overwhelms theory. The globally spreading all-you-can-eat internet rapidly killed the existing paid information services after commercial access was allowed in 1994. I'd guess that the more likely outcome of charging for email would have been the rise of free alternatives to email - instant messaging, for example, which happened in our world to avoid spam. The motivation to merge spam with viruses and crack into people's accounts to send spam would have arisen earlier than it did, so security would have been an earlier disaster. As the fundamental wrong turn, I'd instead pick centralization.

Lanier noted the culminating irony: "The left built this authoritarian network. It needs to be undone."

The internet is still young. It might be possible, if we can agree on a path.


Illustrations: Jaron Lanier in conversation with Luke Robert Mason (Eva Pascoe).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


October 27, 2017

The opposite of privilege

A couple of weeks ago, Cybersalon held an event to discuss modern trends in workplace surveillance. In the middle, I found myself reminding the audience, many of whom were too young to remember, that 20 or so years ago mobile phones were known locally as "poserphones". "Poserphone" because they had been expensive recently enough that they were still associated with rich businessmen who wanted to show off their importance.

The same poseurship today looks like this: "I'm so grand I don't carry a mobile phone." In a sort of rerun of the 1997 anti-internet backlash, which was kicked off by Clifford Stoll's Silicon Snake-Oil, we're now seeing numerous articles and postings about how the techies of Silicon Valley are disconnecting themselves and removing technology from the local classrooms. Granted, this has been building for a while: in 2014 the New York Times reported that Steve Jobs didn't let his children use iPhones or iPads.

It's an extraordinary inversion in a very short time. However, the notable point is that the people profiled in these stories are people with the agency to make this decision and not suffer for it. In April, Congressman Jim Sensenbrenner (R-WI) claimed airily that "Nobody has to use the internet", a statement easily disputed. A similar argument can be made about related technology such as phones and tablets: it's perfectly reasonable to say you need downtime or that you want your kids to have a solid classical education with plenty of practice forming and developing long-form thinking. But the option to opt out depends on a lot of circumstances outside of most people's control. You can't, for example, disconnect your phone if your zero-hours contract specifies you will be dumped if you don't answer when they call, nor if you're in high-urgency occupations like law, medicine, or journalism; nor can you do it if you're the primary carer for anyone else. For a homeless person, their mobile phone may be their only hope of finding a job or a place to live.

Battery concerns being what they are, I've long had the habit of turning off wifi and GPS unless I'm actively using them. As Transport for London increasingly seeks to use passenger data to understand passenger flow through the network and within stations, people who do not carry data-generating devices are arguably anti-social because they are refusing to contribute to improving the quality of the service. This argument has been made in the past with reference to NHS data, suggesting that patients who declined to share their data didn't deserve care.

Today's employers, as Cybersalon highlighted and as speakers have previously pointed out at the annual Health Privacy Summit, may learn an unprecedented amount of intimate information about their employees via efforts like wellness programs and the data those capture from devices like Fitbits and smart watches. At Cornell, Karen Levy has written extensively about the because-safety black box monitoring coming to what historically has been the most independent of occupations, truck driving. At Middlesex, Phoebe Moore is studying the impact of workplace monitoring on white collar workers. How do you opt out of monitoring if doing so means "opting out" of employment?

The latest in facial recognition can identify people in the backgrounds of photos, making it vastly harder to know which of the sidewalk-blockers around you snapping pictures of each other on their phones may capture and upload you as well, complete with time and location. Your voice may be captured by the waiting speech-driven device in your friend's car or home; ever tried asking someone to turn off Alexa-Siri-OKGoogle while you're there?

For these reasons, publicly highlighting your choice to opt out reads as, "Look how privileged I am", or some much more compact and much more offensive term. This will be even more true soon, when opting out will require vastly more effort than it does now and there will be vastly fewer opportunities to do it. Even today, someone walking around London has no choice about how many CCTV cameras capture them in motion. You can ride anonymously on the tube and buses as long as you are careful to buy, and thereafter always top up, your Oyster smart card with cash.

It's clear "normal" people are beginning to know this. This week, in a supermarket well outside of London, I was mocking a friend for paying for some groceries by tapping a credit card. "Cash," I said. "What's wrong with nice, anonymous cash?" "It took 20 seconds!" my friend said. The aging cashier regarded us benignly. "They can still track you by the mobile phones you're carrying," she said helpfully. Touché.

Illustrations: George Orwell's house at 22 Portobello Road; Cybersalon (Phoebe Moore, center).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 29, 2017

Ubersicht

If it keeps growing, every company eventually reaches a moment where this message arrives: it's time to grow up. For Microsoft, IBM, and Intel it was antitrust suits. Google's had the EU's €2.4 billion fine. For Facebook and Twitter, it may be abuse and fake news.

This week, it was Uber's turn, when Transport for London declined to renew Uber's license to operate. Uber's response was to apologize and promise to "do more" while urging customers to sign its change.org petition. At this writing, 824,000 have complied.

I can't see the company as a victim here. The "sharing economy" rhetoric of evil protectionist taxi regulators has taken knocks from the messy reality of the company's behavior and the Grade A jerkishness of its (now former) founding CEO, the controversial Travis Kalanick. The tone-deaf "Rides of Glory" blog post. The safety-related incidents that TfL complains the company failed to report because: PR. Finally, the clashes with myriad city regulators the company would prefer to bypass: currently, it's threatening to pull out of Quebec. Previously, both Uber and Lyft quit Austin, Texas for a year rather than comply with a law requiring driver fingerprinting. In a second London case, Uber is arguing that its drivers are not employees; SumOfUs begs to differ.

People who use Uber love Uber, and many speak highly of drivers they use regularly. In one part of their brains, Uber-loving friends advocate for social justice, privacy, and fair wages and working conditions; in the other, Uber is so cool, cheap, convenient, and clean, and the app tracks the cab in real time...and city transport is old, grubby, and slow. But we're not at the beginning of this internet thing any more, and we know a lot about what happens when a cute, cuddly company people love grows into a winner-takes-all behemoth the size of a nation-state.

A consideration beyond TfL's pay grade is that transport doesn't really scale, as Hubert Horan explains in his detailed analysis of the company's business model. As Horan explains, Uber can't achieve new levels of cost savings and efficiency (as Amazon and eBay did) because neither the fixed costs of providing the service nor network externalities create them. More simply, predatory competition - that is, venture capitalists providing the large sums that allow Uber to undercut and put out of business existing cab firms (and potentially public transport) - is not sustainable until all other options have been killed and Uber can raise its prices.

Earlier this year, at a conference on autonomous vehicles, TfL's representative explained the problems it faces. London will grow from 8.6 million to 10 million people by 2025. On the tube, central zone trains are already running at near the safe frequency limit and space prohibits both wider and longer trains. Congestion will increase: trucks, cars, cabs, buses, bicycles, and pedestrians. All these interests - plus the thousands of necessary staff - need to be balanced, something self-interested companies by definition do not do. In Silicon Valley, where public transport is relatively weak, it may not be clearly understood how deeply a city like London depends on it.

At Wired UK, Matt Burgess says Uber will be back. When Uber and Lyft exited Austin, Texas rather than submit to a new law requiring them to fingerprint drivers, within a year state legislators had intervened. But that was several scandals ago, which is why I think that, this once, SorryWatch has it wrong: Uber's apology may be adequately drafted (as they suggest, minus the first paragraph), but the company's behaviour has been egregious enough to require clear evidence of active change. Uber needs a plan, not a PR campaign - and urging its customers to lobby for it does not suggest it's understood that.

At London Reconnections, John Bull explains the ins and outs of London's taxi regulation in fascinating detail. Bull argues that in TfL Uber has met a tech-savvy and forward-thinking regulator that is its own boss and too big to bully. Given that almost the only cost the company can squeeze is its drivers' compensation, what protections need to be in place? How does increasing hail-by-app taxi use fit into overall traffic congestion?

Uber is one of the very first of the new hybrid breed of cyber-physical companies. Bypassing regulators - asking forgiveness rather than permission - may have flown when the consequences were purely economic, but it can't be tolerated in the new era of convergence, in which the risks are physical. My iPhone can't stab me in my bed, as Bill Smart has memorably observed, but that's not true of these hybrids.

TfL will presumably focus on rectifying the four areas in its announcement. Beyond that, though, I'd like to see Uber pressed for some additional concessions. In particular, I think the company - and others like it - should be required to share their aggregate ride pattern data (not individual user accounts) with TfL to aid the authority to make better decisions for the benefit of all Londoners. As Tom Slee, the author of What's Yours Is Mine: Against the Sharing Economy, has put it, "Uber is not 'the future', it's 'a future'".


Illustrations: London skyline (by Mewiki); London black cab (Jimmy Barrett); Travis Kalanick (Dan Taylor).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 30, 2012

Robot wars

Who'd want to be a robot right now, branded a killer before you've even really been born? This week, Huw Price, a philosophy professor, Martin Rees, an emeritus professor of cosmology and astrophysics, and Jaan Tallinn, co-founder of Skype and a serial speaker at the Singularity Summit, announced the founding of the Cambridge Project for Existential Risk. I'm glad they're thinking about this stuff.

Their intention is to build a Centre for the Study of Existential Risk. There are many threats listed in the short introductory paragraph explaining the project - biotechnology, artificial life, nanotechnology, climate change - but the one everyone seems to be focusing on is: yep, you got it, KILLER ROBOTS - that is, artificial general intelligences so much smarter than we are that they may not only put us out of work but reshape the world for their own purposes, not caring what happens to us. Asimov would weep: his whole purpose in creating his Three Laws of Robotics was to provide a device that would allow him to tell some interesting speculative, what-if stories and get away from the then standard fictional assumption that robots were eeeevil.

The list of advisors to the Cambridge project has some interesting names: Hermann Hauser, now in charge of a venture capital fund, whose long history in the computer industry includes founding Acorn and an attempt to create the first mobile-connected tablet (it was the size of a 1990s phone book, and you had to write each letter in an individual box to get it to recognize handwriting - just way too far ahead of its time); and Nick Bostrom of the Future of Humanity Institute at Oxford. The other names are less familiar to me, but it looks like a really good mix of talents, everything from genetics to the public understanding of risk.

The killer robots thing goes quite a way back. A friend of mine grew up in the time before television when kids would pay a nickel for the Saturday show at a movie theatre, which would, besides the feature, include a cartoon or two and the next chapter of a serial. We indulge his nostalgia by buying him DVDs of old serials such as The Phantom Creeps, which features an eight-foot, menacing robot that scares the heck out of people by doing little more than wave his arms at them.

Actually, the really eeeevil guy in that movie is the mad scientist, Dr Zorka, who not only creates the robot but also a machine that makes him invisible and another that induces mass suspended animation. The robot is really just drawn that way. But, like CSER, what grabs your attention is the robot.

I have a theory about this, developed over the last couple of months while working on a paper on complex systems, automation, and other computing trends: it's all to do with biology. We - and other animals - are pretty fundamentally wired to see anything that moves autonomously as more intelligent than anything that doesn't. In survival terms, that makes sense: the most poisonous plant can't attack you if you're standing out of reach of its branches. Something that can move autonomously can kill you - yet is also more cuddly. Consider the Roomba versus a modern dishwasher. Counterintuitively, the Roomba is not the smarter of the two.

And so it was that on Wednesday, when Voice of Russia assembled a bunch of us for a half-hour radio discussion, the focus was on KILLER ROBOTS, not synthetic biology (which I think is a much more immediately dangerous field) or climate change (in which the scariest new development is the very sober, grown-up, businesslike this-is-getting-expensive report from the insurer Munich Re). The conversation was genuinely interesting, roaming from the mysteries of consciousness to the problems of automated trading and the 2010 flash crash. Pretty much everyone agreed that there really isn't sufficient evidence to predict a date at which machines might be intelligent enough to pose an existential risk to humans. You might be worried about self-driving cars, but they're likely to be safer than drunk humans.

There is a real threat from killer machines; it's just that it's not super-human intelligence or consciousness that's the threat here. Last week, Human Rights Watch and the International Human Rights Clinic published Losing Humanity: the Case Against Killer Robots, arguing that governments should act pre-emptively to ban the development of fully autonomous weapons. There is no way, that paper argues, for autonomous weapons (which the military wants so fewer of *our* guys have to risk getting killed) to distinguish reliably between combatants and civilians.

There were some good papers on this at this year's We Robot conference from Ian Kerr and Kate Szilagyi (PDF) and Markus Wegner.

From various discussions, it's clear that you don't need to wait for *fully* autonomous weapons to reach the danger point. In today's partially automated systems, the operator may be under pressure to make a decision in seconds, and "automation bias" means the human will most likely accept whatever the machine suggests, the military equivalent of clicking OK. The human in the loop isn't as much of a protection as we might hope against the humans designing these things. Dr Zorka, indeed.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series

May 25, 2012

Camera obscura

There was a smoke machine running in the corner when I arrived at today's Digital Shoreditch, an afternoon considering digital identity, part of a much larger, multi-week festival. Briefly, I wondered if the organizers were making a point about privacy. Apparently not; they shut it off when the talks started.

The range of speakers served as a useful reminder that the debates we have in what I think of as the Computers, Freedom, and Privacy sector are rather narrowly framed around what we can practically build into software and services to protect privacy (and why so few people seem to care). We wrangle over what people post on Facebook (and what they shouldn't), or how much Google (or the NHS) knows about us and shares with other organizations.

But we don't get into matters of what kinds of lies we tell to protect our public image. Lindsey Clay, the managing director of Thinkbox, the marketing body for UK commercial TV, who kicked off an array of people talking about brands and marketing (though some of them in good causes), did a good, if unconscious, job of showing what privacy activists are up against: the entire mainstream of business is going the other way.

Sounding like Dr Gregory House, she explained that people lie in focus groups, showing a slide comparing actual TV viewer data from Sky to what those people said about what they watched. They claim to fast-forward; really, they watch ads and think about them. They claim to time-shift almost everything; really, they watch live. They claim to watch very little TV; really, they need to sign up for the SPOGO program Richard Pearey explained a little while later. (A tsk-tsk to Pearey: Tim Berners-Lee is a fine and eminent scientist, but he did not invent the Internet. He invented the *Web*.) For me, Clay is confusing "identity" with "image". My image claims to read widely instead of watching TV shows; my identity buys DVDs from Amazon.

Of course I find Clay's view of the Net dismaying - "TV provides the content for us to broadcast on our public identity channels," she said. This is very much the view of the world the Open Rights Group campaigns to up-end: consumers are creators, too, and surely we (consumers) have a lot more to talk about than just what was on TV last night.

Tony Fish, author of My Digital Footprint, following up shortly afterwards, presented a much more cogent view and some sound practical advice. Instead of trying to unravel the enduring conundrum of trust, identity, and privacy - which he claims dates back to before Aristotle - start by working out your own personal attitude to how you'd like your data treated.

I had a plan to talk about something similar, but Fish summed up the problem of digital identity rather nicely. No one model of privacy fits all people or all cases. The models and expectations we have take various forms - which he displayed as a nice set of Venn diagrams. Underlying that is the real model, in which we have no rights. Today, privacy is a setting and trust is the challenger. The gap between our expectations and reality is the creepiness factor.

Combine that with reading a book of William Gibson's non-fiction, and you get the reflection that the future we're living in is not at all like the one we - for some value of "we" that begins with those guys who did the actual building instead of just writing commentary about it - thought we might be building 20 years ago. At the time, we imagined that the future of digital identity would look something like mathematics, where the widespread use of crypto meant that authentication would proceed by a series of discrete transactions tailored to each role we wanted to play. A library subscriber would disclose different data from a driver stopped by a policeman, who would show a different set to the border guard checking passports. We - or more precisely, Phil Zimmermann and Carl Ellison - imagined a Web of trust, a peer-to-peer world in which we could all authenticate the people we know to each other.
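
For a sense of what that vision meant in practice, here is a minimal sketch - hypothetical attributes, roles, and policy, nobody's actual protocol - of role-scoped disclosure: each verifier sees only the attributes its role requires and nothing else.

# A minimal sketch of role-scoped disclosure. All names, attributes, and
# policies here are hypothetical; real systems would use cryptographic
# credentials rather than a plain dictionary.

FULL_IDENTITY = {
    "name": "Jane Doe",
    "date_of_birth": "1970-01-01",
    "driving_licence": "DOE700101JX9",
    "passport_number": "123456789",
    "library_card": "LB-0042",
}

# Which attributes each verifier role is entitled to see.
DISCLOSURE_POLICY = {
    "library": {"name", "library_card"},
    "traffic_police": {"name", "driving_licence"},
    "border_guard": {"name", "date_of_birth", "passport_number"},
}

def disclose(identity: dict, role: str) -> dict:
    """Return only the attributes the given verifier role may see."""
    allowed = DISCLOSURE_POLICY.get(role, set())
    return {k: v for k, v in identity.items() if k in allowed}

print(disclose(FULL_IDENTITY, "library"))       # name and library card only
print(disclose(FULL_IDENTITY, "border_guard"))  # passport details only

The point of the 1990s vision was that each of these exchanges would be a discrete transaction, not a feed into one persistent profile held by a central gatekeeper.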

Instead, partly because all the privacy stuff is so hard to use, even though it didn't have to be, we have a world where at any one time there are a handful of gatekeepers who are fighting for control of consumers and their computers in whatever the current paradigm is. In 1992, it was the desktop: Microsoft, Lotus, and Borland. In 1997, it was portals: AOL, Yahoo!, and Microsoft. In 2002, it was search: Google, Microsoft, and, well, probably still Yahoo!. Today, it's social media and the cloud: Google, Apple, and Facebook. In 2017, it will be - I don't know, something in the mobile world, presumably.

Around the time I began to sound like an anti-Facebook obsessive, an audience questioner made the smartest comment of the day: "In ten years Facebook may not exist." That's true. But most likely someone will have the data, probably the third-party brokers behind the scenes. In the fantasy future of 1992, we were our own brokers. If William Heath succeeds with personal data stores, maybe we still can be.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


April 24, 2012

A really fancy hammer with a gun

Is a robot more like a hammer, a monkey, or the Harley-Davidson on which he rode into town? Or try this one: what if the police program your really cute, funny robot butler (Tony Danza? Scarlett Johansson?) to ask you a question whose answer will incriminate you (and which it then relays). Is that a violation of the Fourth Amendment (protection against search and seizure) or the Fifth Amendment (you cannot be required to incriminate yourself)? Is it more like flipping a drug dealer or tampering with property? Forget science fiction, philosophy, and your inner biological supremacist; this is the sort of legal question that will be defined in the coming decade.

Making a start on this was the goal of last weekend's We Robot conference at the University of Miami Law School, organized by respected cyberlaw thinker Michael Froomkin. Robots are set to be a transformative technology, he argued to open proceedings, and cyberlaw began too late. Perhaps robotlaw is still a green enough field that we can get it right from the beginning. Engineers! Lawyers! Cross the streams!

What's the difference between a robot and a disembodied artificial intelligence? William Smart (Washington University, St Louis) summed it up nicely: "My iPad can't stab me in my bed." No: and as intimate as you may become with your iPad you're unlikely to feel the same anthropomorphic betrayal you likely would if the knife is being brandished by that robot butler above, which runs your life while behaving impeccably like it's your best friend. Smart sounds unsusceptible. "They're always going to be tools," he said. "Even if they are sophisticated and autonomous, they are always going to be toasters. I'm wary of thinking in any terms other than a really, really fancy hammer."

Traditionally, we think of machines as predictable because they respond the same way to the same input, time after time. But Smart, working with Neil Richards (Washington University, St Louis), points out that sensors are sensitive to distinctions analog humans can't make. A half-degree difference in temperature or a tiny change in lighting is a different condition to a robot. To us, their behaviour will just look capricious, helping to foster that anthropomorphic response, wrongly attributing to them the moral agency necessary for guilt under the law: the "Android Fallacy".
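
A toy illustration of that point, with made-up numbers: the code below is entirely deterministic, yet its behaviour turns on a half-degree distinction no human observer can feel, so to the person in the room it looks capricious.

# Hypothetical controller: deterministic, but branching on a distinction
# (half a degree) finer than human perception.

def choose_action(temperature_c: float) -> str:
    """Pick an action from a temperature reading."""
    return "open_vent" if temperature_c >= 21.5 else "hold_position"

# Two rooms that feel identical to a person produce different behaviour.
print(choose_action(21.4))  # hold_position
print(choose_action(21.6))  # open_vent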

Smart and I may be outliers. The recent Big Bang Theory episode in which the can't-talk-to-women Rajesh, entranced with Siri, dates his iPhone is hilarious because in Raj's confusion we recognize our own ability to have "relationships" with almost anything by projecting human capacities such as cognition, intent, and emotions. You could call it a design flaw (if humans had a designer), and a powerful one: people send real wedding presents to TV characters, name Liquid Robotics' Wave Gliders, and characterize sending a six-legged land mine-defusing robot that's lost a leg or two to continue work as "cruel". (Kate Darling, MIT Media Lab).

What if our rampant affection for these really fancy hammers leads us to want to give them rights? Darling asked. Or, asked Sinziana Gutiu (University of Ottawa), will sex robots like Roxxxy teach us wrong expectations of humans? (When the discussion briefly compared sex robots to pets, a Twitterer quipped, "If robots are pets is sex with them bestiality?")

Few are likely to fall in love with the avatars in the automated immigration kiosks proposed at the University of Arizona (Kristen Thomasen, University of Ottawa) with two screens, one with a robointerrogator and the other flashing images and measuring responses. Automated law enforcement, already with us in nascent form, raises a different set of issues (Lisa Shay). Historically, enforcement has never been perfect; laws only have to be "good enough" to achieve their objective, whether that's slowing traffic or preventing murder. These systems pose the same problem as electronic voting: how do we audit their decisions? In military applications, disclosure may tip off the enemy, as Woodrow Hartzog (Samford University) noted. Yet here - and especially in medicine, where liability will be a huge issue - our traditional legal structures decide whom to punish by retracing the reasoning that led to the eventual decision. But even today's systems are already too complex.

When Hartzog asks if anyone really knows how Google or a smartphone tracks us, it reminds me of a recent conversation with Ross Anderson, the Cambridge University security engineer. In 50 years, he said, we have gone from a world whose machines could all be understood by a bright ten-year-old with access to a good library to a world with far greater access to information but full of machines whose inner workings are beyond a single person's understanding. And so: what does due process look like when only seven people understand algorithms that have consequences for the fates of millions of people? Bad enough to have the equivalent of a portable airport scanner looking for guns in New York City; what about house arrest because your butler caught you admiring Timothy Olyphant's gun on Justified?

"We got privacy wrong the last 15 years." Froomkin exclaimed, putting that together. "Without a strong 'home as a fortress right' we risk a privacy future with an interrogator-avatar-kiosk from hell in every home."

The problem with robots isn't robots. The problem is us. As usual, Pogo had it right.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


April 6, 2012

I spy

"Men seldom make passes | At girls who wear glasses," Dorothy Parker incorrectly observed in 1937. (How would she know? She didn't wear any). You have to wonder what she could have made of Google Goggles which, despite the marketing-friendly alliterative name, are neither a product (yet) nor a new idea.

I first experienced the world according to a heads-up display in 1997 during a three-day conference (TXT) on wearable computing at MIT ($). The eyes-on demonstration was a game of pool with the headset augmenting my visual field with overlays showing cuing angles. (Could be the next level of Olympic testing: checking athletes for contraband contact lenses and earpieces for those in sports where coaching is not allowed.)

At that conference, a lot of ideas were discussed and demonstrated: temperature-controlling T-shirts, garments that could send back details of a fallen soldier's condition, and so on. Much in evidence were folks like Thad Starner, who scanned my business card and handed it back to me and whose friends commented on the way he'd shift his eyes to his email mid-conversation, and Steve Mann, who turned himself into a cyborg experiment as long ago as the 1980s. Checking their respective Web pages, I see that Mann hasn't updated the evolution of wearables graphic since the late 1990s, by which time the headset looked like an ordinary pair of sunglasses; in 2002, when airport security forced him to divest his gear, he had trouble adjusting to life without it. Starner is on leave to work at...Project Glass, the home of Google Goggles.

The problem when a technological dream spans decades is that between conception and prototype things change. In 1997, that conference seemed to think wearable computing - keyboards embroidered in conductive thread, garments made of cloth woven from copper-covered strands, souped-up eyeglasses, communications-enabled watches, and shoes providing power from the energy generated in walking - surely was a decade or less away.

The assumptions were not particularly contentious. People wear wrist watches and jewelry, right? So they'll wear things with the same fashion consciousness, but functional. Like, it measures and displays your heart rhythms (a woman danced wearing a light-flashing pendant that sped up with her heart rate), or your moods (high-tech mood rings), or acts as the controller for your personal area network.

Today, a lot of people don't *wear* wrist watches any more.

For the wearables guys, it's good progress. The functionality that required 12 pounds of machinery draped about your person - I see from my pieces linked above and my contemporaneous notes that the rig I tried felt like wearing a very heavy, inflexible sandwich board - is now an iPhone or Android. Even my old Palm Centro comes close. As Jack Schofield writes in the Guardian, the headset is really all that's left that we don't have. And Google has a lot of competition.

What interests me is let's say these things do take off in a big way. What then? Where will the information come from to display on those headsets? Who will be the gatekeepers? If we - some of us - want to see every building decorated with outsized female nudes, will we have to opt in for porn?

My speculation here is surely not going to be futuristic enough, because like most people I'm locked into current trends. But let's say that glasses bolt onto the mobile/Internet ecologies we have in place. It is easy to imagine that, if augmented reality glasses do take off, they will be an important gateway to the next generation of information services. Because if all the glasses are is a different way of viewing your mobile phone, then they're essentially today's ear pieces - surely not sufficient motivation for people with good vision to wear glasses. So, will Apple glasses require an iTunes account and an iOS device to gain access to a choice of overlays to turn on and off that you receive from the iTunes store in real time? Similarly, Google/Android/Android marketplace. And Microsoft/Windows Mobile/Bing or something. And whoever.

So my questions are things like: will the hardware and software be interoperable? Will the dedicated augmented reality consumer need to have several pairs? Will it be like, "Today I'm going mountain climbing. I've subscribed to the Ordnance Survey premium service and they have their own proprietary glasses, so I'll need those. And then I need the Google set with the GPS enhancement to get me there in the car and find a decent restaurant afterwards." And then your kids are like, "No, the restaurants are crap on Google. Take the Facebook pair, so we can ask our friends." (Well, not Facebook, because the kids will be saying, "Facebook is for *old* people." Some cool, new replacement that adds gaming.)

What's that you say? These things are going to collapse in price so everyone can afford 12 pairs? Not sure. Prescription glasses just go on getting more expensive. I blame the involvement of fashion designers branding frames, but the fact is that people are fussy about what they wear on their faces.

In short, will augmented reality - overlays on the real world - be a new commons or a series of proprietary, necessarily limited, world views?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


November 25, 2011

Paul Revere's printing press

There is nothing more frustrating than watching smart, experienced people reinvent known principles. Yesterday's Westminster Forum on cybersecurity was one such occasion. I don't blame them, or not exactly: it's just maddening that we have made so little progress, while the threats keep escalating. And it is from gatherings like this one that government policy is made.

Rephrasing Bill Clinton's campaign slogan, "It's the people, stupid," said Philip Virgo, chairman of the security panel of the IT Livery Company, to kick off the day, a sentiment echoed repeatedly by nearly every other speaker. Yes, it's the people - who trust when they shouldn't, who attach personal devices to corporate networks, who disclose passwords when they shouldn't, who are targeted by today's Facebook-friending social engineers. So how many people experts were on the program? None. Psychologists? No. Nor any usability experts or people whose jobs revolve around communication, either. (Or women, but I'm prepared to regard that as a separate issue.)

Smart, experienced guys, sure, who did a great job of outlining problems and a few possible solutions. Somewhere toward the end of the proceedings, someone allowed in passing that yes, it's not a good idea to require people to use passwords that are too complex to remember easily. This is the state of their art? It's 12 years since Angela Sasse and Anne Adams covered this territory in Users Are Not the Enemy. Sasse has gone on to help found the field of security economics, which seeks to quantify the cost of poorly designed security - not just in data breaches and DoS attacks but in the lost productivity of frustrated, overburdened users. Sasse argues that the problem isn't so much the people as user-hostile systems and technology.

"As user-friendly as a cornered rat," Virgo says he wrote of security software back in 1983. Anyone who's looked at configuring a firewall lately knows things haven't changed that much. In a world of increasingly mass-market software and devices, security software has remained resolutely elitist: confusing error messages, difficult configuration, obscure technology. How many users know what to do when their browser says a Web site certificate is invalid? Or how to answer anti-virus software that asks whether you want to authorise HIPS/RegMod-007?

"The current approach is not working," said William Beer, director of information security and cybersecurity for PriceWaterhouseCoopers. "There is too much focus on technology, and not enough focus from business and government leaders." How about academics and consumers, too?

There is no doubt, though, that the threats are escalating. Twenty years ago, the biggest worry was that a teenaged kid would write a virus that spread fast and furious in the hope of getting on the evening news. Today, an organized criminal underground uses personal information to target a small group of users inside RSA, leveraging that into a threat to major systems worldwide. (Trend Micro CTO Andy Dancer said the attack began in the real world with a single user befriended at their church. I can't find verification, however.)

The big issue, said Martin Smith, CEO of The Security Company, is that "There's no money in getting the culture right." What's to sell if there's no technical fix? Like when your plane is held to ransom by the pilot, or when all it takes to publish 250,000 US diplomatic cables is one alienated, low-ranked person with a DVD burner and a picture of Lady Gaga? There's a parallel here to pharmaceuticals: one reason we have few weapons to combat rampaging drug resistance is that for decades developing new antibiotics was not seen as a profitable path.

Granted, you don't, as Dancer said afterwards, want to frame security as an issue of "fixing the people" (but we already know better than that). Nor is it fair to ban company employees from social media lest some attacker pick it up and use it to create a false sense of trust. Banning the latest new medium, said former GCHQ head John Bassett, is just the instinctive reaction in a disturbance; in 1775 Boston the "problem" was Paul Revere's printing press stirring up trouble.

Nor do I, personally, want to live in a trust-free world. I'm happy to assume the server next to me is compromised, but "Trust no one" is a lousy way to live.

Since perfect security is not possible, Dancer advised, organizations should plan for the worst. Good advice. When did I first hear it? Twenty years ago and most months since, by Peter Neumann in his RISKS Forum. It is depressing and frustrating that we are still having this conversation as if it were new - and that we will have it all over again over the next decade as smart meters roll out to 26 million British households by 2020, opening up the electrical grid to attacks that are already being predicted and studied.

Neumann - and Dancer - is right. There is no perfect security because it's in no one's interest to create it. Plan for the worst.

As Gene Spafford put it in 1989: "The only truly secure system is one that is powered off, cast in a block of concrete, and sealed in a lead-lined room protected by armed guards - and even then I have my doubts."

For everything else, there's a stolen Mastercard.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 11, 2011

The sentiment of crowds

Context is king.

Say to a human, "I'll meet you at the place near the thing where we went that time," and they'll show up at the right place. That's from the 1987 movie Broadcast News: Aaron (Albert Brooks) says it; cut to Jane (Holly Hunter), awaiting him at a table.

But what if Jane were a computer and what she wanted to know from Aaron's statement was not where to meet but how Aaron felt about it? This is the challenge facing sentiment analysis.

At Wednesday's Sentiment Analysis Symposium, the key question of context came up over and over again as the biggest challenge to the industry of people who claim that they can turn Tweets, blog postings, news stories, and other mass data sources into intelligence.

So context: Jane can parse "the place", "the thing", and "that time" because she has expert knowledge of her past with Aaron. It's an extreme example, but all human writing makes assumptions about the knowledge and understanding of the reader. Humans even use those assumptions to implement privacy in a public setting: Stephen Fry could retweet Aaron's words and still only Jane would find the cafe. If Jane is a large organization seeking to understand what people are saying about it and Aaron is 6 million people posting on Twitter, Jane can use sentiment analyzer tools to give a numerical answer. And numbers always inspire confidence...

My first encounter with sentiment analysis was this summer during Young Rewired State, when a team wanted to create a mood map of the UK comparing geolocated tweets to indices of multiple deprivation. This third annual symposium shows a rapidly engorging industry, part PR, part image consultancy, and part artificial intelligence research project.

I was drawn to it out of curiosity, but also because it all sounds slightly sinister. What do sentiment analyzers understand when I say an airline lounge at Heathrow Terminal 4 "brings out my inner Sheldon"? What is at stake is not precise meaning - humans argue over the exact meaning of even the greatest communicators - but extracting good-enough meaning from high-volume data streams written by millions of not-monkeys.

What could possibly go wrong? This was one of the day's most interesting questions, posed by the consultant Meta Brown to representatives of the Red Cross, the polling organization Harris Interactive, and Paypal. Failure to consider the data sources and the industry you're in, said the Red Cross's Banafsheh Ghassemi. Her example was the period just after Hurricane Irene, when sentiment analysis of social media would have scored the postings as overwhelmingly negative. "It took everyday disaster language as negative," she said. In addition, because the Red Cross's constituency is primarily older, social media are less indicative than emails and call center records. For many organizations, she added, social media tend to skew negative.

Earlier this year, Harris Interactive's Carol Haney, who has had to kill projects when they failed to produce sufficiently accurate results for the client, told a conference, "Sentiment analysis is the snake oil of 2011." Now, she said, "I believe it's still true to some extent. The customer has a commercial need for a dial pointing at a number - but that's not really what's being delivered. Over time you can see trends and significant change in sentiment, and when that happens I feel we're returning value to a customer because it's not something they received before and it's directionally accurate and giving information." But very small changes over short time scales are an unreliable basis for making decisions.

"The difficulty in social media analytics is you need a good idea of the questions you're asking to get good results," says Shlomo Argamon, whose research work seems to raise more questions than answers. Look at companies that claim to measure influence. "What is influence? How do you know you're measuring that or to what it correlates in the real world?" he asks. Even the notion that you can classify texts into positive and negative is a "huge simplifying assumption".

Argamon has been working on technology to discern from written text the gender and age - and perhaps other characteristics - of the author, a joint effort with his former PhD student Ken Bloom. When he says this, I immediately want to test him with obscure texts.

Is this stuff more or less creepy than online behavioral advertising? Han-Sheong Lai explained that Paypal uses sentiment analysis to try to glean the exact level of frustration of the company's biggest clients when they threaten to close their accounts. How serious are they? How much effort should the company put into dissuading them? Meanwhile Verint's job is to analyze those "This call may be recorded" calls. Verint's tools turn speech to text, and create color voiceprint maps showing the emotional high points. Click and hear the anger.

"Technology alone is not the solution," said Philip Resnik, summing up the state of the art. But, "It supports human insight in ways that were not previously possible." His talk made me ask: if humans obfuscate their data - for example, by turning off geolocation - will this industry respond by finding ways to put it all back again so the data will be more useful?

"It will be an arms race," he agrees. "Like spam."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 28, 2011

Crypto: the revenge

I recently had occasion to try out Gnu Privacy Guard, the Free Software Foundation's version of PGP, Phil Zimmermann's legendary Pretty Good Privacy software. It was the first time I'd encrypted an email message since about 1995, and I was both pleasantly surprised and dismayed.

First, the good. Public key cryptography is now implemented exactly the way it should have been all along: once you've installed it and generated a keypair, encrypting a message is ticking a box or picking a menu item inside your email software. Even key management is handled by a comprehensible, well-designed graphical interface. Several generations of hard work have created this and also ensured that the various versions of PGP, OpenPGP, and GPG are interoperable, so you don't have to worry about who's using what. Installation was straightforward and the documentation is good.

Now, the bad. That's where the usability stops. There are so many details you can get wrong - any one of which messes the whole thing up - that if this stuff were a form of contraception, desperate parents would be giving babies away on street corners.

Item: the subject line doesn't get encrypted. There is nothing you can do about this except put a lot of thought into devising a subject line that will compel people to read the message but that simultaneously does not reveal anything of value to anyone monitoring your email. That's a neat trick.

Item: watch out for attachments, which are easily accidentally sent in the clear; you need to encrypt them separately before bundling them into the message.
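
For anyone wondering what encrypting an attachment separately looks like in practice, here is a minimal sketch that shells out to the standard gpg command-line tool; it assumes gpg is installed and the recipient's public key has already been imported, and the file name and address are placeholders:

```python
# Encrypt a file before attaching it, by calling the gpg command-line tool.
# Assumes gpg is on the PATH and the recipient's public key is already in
# the local keyring; the file and address below are placeholders.
import subprocess

def encrypt_attachment(path: str, recipient: str) -> str:
    """Write an ASCII-armored, encrypted copy of `path` and return its name."""
    out = path + ".asc"
    subprocess.run(
        ["gpg", "--batch", "--yes", "--armor",
         "--recipient", recipient, "--output", out, "--encrypt", path],
        check=True,
    )
    return out  # attach this file, not the original

print("Attach:", encrypt_attachment("budget.xls", "fieldstation@example.org"))
```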

Item: while there is a nifty GPG plug-in for Thunderbird - Enigmail - Outlook, being commercial software, is less easily supported. GPG's GpgOL module works only with 2003 (SP2 and above) and 2007, and not on 64-bit Windows. The problem is that it's hard enough to get people to change *one* habit, let alone several.

Item: lacking appropriate browser plug-ins, you also have to tell them to stop using Webmail if the service they're used to won't support IMAP or POP3, because they won't be able to send encrypted mail or read what others send them over the Web.

Let's say you're running a field station in a hostile area. You can likely get users to persevere despite these points by telling them that this is their work system, for use in the field. Most people will put up with some inconvenience if they're being paid to do so and/or it's temporary and/or you scare them sufficiently. But that strategy violates one of the basic principles of crypto-culture, which is that everyone should be encrypting everything so that sensitive traffic doesn't stand out. They are of course completely right, just as they were in 1993, when the big political battles over crypto were being fought.

Item: when you connect to a public keyserver to check or download someone's key, that connection is in the clear, so anyone surveilling you can see who you intend to communicate with.

Item: you're still at risk with regard to traffic data. This is what RIPA and data retention are all about. What's more significant? Being able to read a message that says, "Can you buy milk?" or the information that the sender and receiver of that message correspond 20 times a day? Traffic data reveals the pattern of personal relationships; that's why law enforcement agencies want it. PGP/GPG won't hide that for you; instead, you'll need to set up a proxy or use Tor to mix up your traffic and also protect your Web browsing, instant messaging, and other online activities. As Tor's own people admit, it slows performance, although they're working on it (PDF).

All this says we're still a long way from a system that the mass market will use. And that's a damn shame, because we genuinely need secure communications. Like a lot of people in the mid-1990s, I'd have thought that by now encrypted communications would be the norm. And yet not only is SSL, which protects personal details in transit to ecommerce and financial services sites, the only really mass-market use, but it's in trouble. Partly, this is because of the technical issues raised in the linked article - too many certification authorities, too many points of failure - but it's also partly because hardly anyone understands how to check that a certificate is valid or knows what to do when warnings pop up that it's expired or issued for a different name. The underlying problem is that many of the people who like crypto see it as both a cool technology and a cause. For most of us, it's just more fussy software. The big advance since the mid 1990s is that at least now the *developers* will use it.
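
As a small illustration of what the browser is doing behind those warnings, here is a sketch using Python's standard ssl module to fetch and inspect a server certificate; example.com is just a placeholder host:

```python
# Connect to a site over TLS, let the default context verify the certificate
# chain and hostname, and print the expiry date the browser warnings refer to.
# example.com is a placeholder; requires network access.
import socket, ssl

ctx = ssl.create_default_context()   # verifies chain and hostname by default
with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        cert = tls.getpeercert()
        print("Issued to:", dict(x[0] for x in cert["subject"]))
        print("Expires:", cert["notAfter"])
# A failed check raises ssl.SSLCertVerificationError - the programmatic
# version of the warning most users click past.
```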

Maybe mobile phones will be the thing that makes crypto work the way it should. See, for example, Dave Birch's current thinking on the future of identity. We've been arguing about how to build an identity infrastructure for 20 years now. Crypto is clearly the mechanism. But we still haven't solved the how.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 30, 2011

Trust exercise

When do we need our identity to be authenticated? Who should provide the service? Whom do we trust? And, to make it sustainable, what is the business model?

These questions have been debated ever since the early 1990s, when the Internet and the technology needed to enable the widespread use of strong cryptography arrived more or less simultaneously. Answering them is a genuinely hard problem (or it wouldn't be taking so long).

A key principle that emerged from the crypto-dominated discussions of the mid-1990s is that authentication mechanisms should be role-based and limited by "need to know"; information would be selectively unlocked and in the user's control. The policeman stopping my car at night needs to check my blood alcohol level and the validity of my driver's license, car registration, and insurance - but does not need to know where I live unless I'm in violation of one of those rules. Cryptography, properly deployed, can be used to protect my information, authenticate the policeman, and then authenticate the violation result that unlocks more data.
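
As a toy illustration of that kind of selective unlocking - not a description of any deployed protocol - imagine each field of the licence committed to with a salted hash, so the holder can open only the fields the policeman needs. In a real system an issuer such as the licensing authority would sign the list of commitments, a step omitted here, and all the data below is invented:

```python
# Selective disclosure sketch: commit to every attribute with a salted hash,
# reveal only the attributes (and salts) the verifier needs, and let the
# verifier check them against the commitments. Issuer signatures over the
# commitment list are omitted; all values are invented.
import hashlib, secrets

def commit(attrs: dict) -> tuple[dict, dict]:
    salts = {k: secrets.token_hex(16) for k in attrs}
    commitments = {k: hashlib.sha256((salts[k] + v).encode()).hexdigest()
                   for k, v in attrs.items()}
    return commitments, salts

def verify(commitments: dict, field: str, value: str, salt: str) -> bool:
    return commitments[field] == hashlib.sha256((salt + value).encode()).hexdigest()

licence = {"name": "W. Grossman", "address": "12 Example Road",
           "licence_valid": "yes", "insurance_valid": "yes"}
commitments, salts = commit(licence)

# At the roadside: open only the two facts the officer needs to see.
for field in ("licence_valid", "insurance_valid"):
    print(field, verify(commitments, field, licence[field], salts[field]))
# The address commitment stays closed unless a violation unlocks it.
```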

Today's stored-value cards - London's Oyster travel card, or Starbucks' payment/wifi cards - when used anonymously do capture some of what the crypto folks had in mind. But the crypto folks also imagined that anonymous digital cash or identification systems could be supported by selling standalone products people installed. This turned out to be wholly wrong: many tried, all failed. Which leads to today, where banks, telcos, and technology companies are all trying to figure out who can win the pool by becoming the gatekeeper - our proxy. We want convenience, security, and privacy, probably in that order; they want security and market acceptance, also probably in that order.

The assumption is we'll need that proxy because large institutions - banks, governments, companies - are still hung up on identity. So although the question should be whom do we - consumers and citizens - trust, the question that ultimately matters is whom do *they* trust? We know they don't trust *us*. So will it be mobile phones, those handy devices in everyone's pockets that are online all the time? Banks? Technology companies? Google has launched Google Wallet, and Facebook has grand aspirations for its single sign-on.

This was exactly the question Barclaycard's Tom Gregory asked at this week's Centre for the Study of Financial Innovation round-table discussion (PDF). It was, of course, a trick, but he got the answer he wanted: out of banks, technology companies, and mobile network operators, most people picked banks. Immediate flashback.

The government representatives who attended Privacy International's 1997 Scrambling for Safety meeting assumed that people trusted banks and that therefore they should be the Trusted Third Parties providing key escrow. Brilliant! It was instantly clear that the people who attended those meetings didn't trust their banks as much as all that.

One key issue is that, as Simon Deane-Johns writes in his blog posting about the same event, "identity" is not a single, static thing; it is dynamic and shifts constantly as we add to the collection of behaviors and data representing it.

As long as we equate "identity" with "a person's name" we're in the same kind of trouble the travel security agencies are when they try to predict who will become a terrorist on a particular flight. Like the browser fingerprint, we are more uniquely identifiable by the collection of our behaviors than we are by our names, as detectives who search for missing persons know. The target changes his name, his jobs, his home, and his wife - but if his obsession is chasing after trout he's still got a fishing license. Even if a link between a Starbucks card and its holder's real-world name is never formed, the more data the card's use feeds into the system, the more clearly recognizable as an individual its holder becomes. The exact tag really doesn't matter in terms of understanding his established identity.

What I like about Deane-Johns' idea -

"the solution has to involve the capability to generate a unique and momentary proof of identity by reference to a broad array of data generated by our own activity, on the fly, which is then useless and can be safely discarded"

is two things. First, it has potential as a way to make impersonation and identity fraud much harder. Second is that implicit in it is the possibility of two-way authentication, something we've clearly needed for years. Every large organization still behaves as though its identity is beyond question whereas we - consumers, citizens, employees - need to be thoroughly checked. Any identity infrastructure that is going to be robust in the future must be built on the understanding that with today's technology anyone and anything can be impersonated.
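
One naive way to picture such a momentary proof - purely my own sketch, not Deane-Johns' design - is a keyed hash over a slice of recent activity that expires with a short time window; the activity records, the shared secret, and the five-minute window below are all invented:

```python
# A disposable proof of identity derived from recent activity: both sides
# compute a keyed hash over the same activity slice and time window, compare,
# and then throw the proof away. Everything here is invented for illustration.
import hashlib, hmac, time

SHARED_SECRET = b"provisioned-out-of-band"   # placeholder

def momentary_proof(activity: list[str], window: int = 300) -> str:
    """Proof bound to the given activity records and the current time window."""
    epoch = int(time.time()) // window        # proof expires with the window
    material = "|".join(activity) + "|" + str(epoch)
    return hmac.new(SHARED_SECRET, material.encode(), hashlib.sha256).hexdigest()

recent = ["oyster:tap:0732", "card:groceries:0911", "gym:checkin:1204"]
print(momentary_proof(recent))
# The verifier recomputes the same value from its own view of the activity
# stream; once the window passes, the proof is useless and can be discarded.
```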

As an aside, it was remarkable how many people at this week's meeting were more concerned about having their Gmail accounts hacked than their bank accounts. My reasoning would be the opposite, because the stakes are higher: I'd rather lose my email reputation than my house. Their reasoning is that the banking industry is more responsive to customer problems than technology companies. That truly represents a shift from 1997, when technology companies were smaller and more responsive.

More to come on these discussions...


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

August 19, 2011

Back to school

Is a university education worth paying for? the Guardian asked this week on the day A-level results came out. This question is doing the rounds. The Atlantic figures the next big US economic crash will be created by defaults on student loans. The Chicago Tribune panics about students' living expenses. The New York Times frets that you need a Master's degree to rise above minimum wage in a paper hat and calculates the return on investment of that decision. CNN Money mulls the debt load of business school.

The economic value of a degree is a good question with many variables, and one I was lucky not to have to answer from 1971 to 1975, when my parents paid Cornell $3,000, rising to $5,000, a year in tuition fees, plus living expenses. What's happened since is staggering (and foreseen). In 2011-2012, the equivalent tuition fee is $41,325. Plus living expenses. A four-year degree now costs more than most people pay for a house. A friend sending his kid to Columbia estimates the cost, all-in, for nine months per year at $60,000 (Manhattan is expensive). Times four. Eight, if his other kid chooses a similar school. And in ten years we may think these numbers are laughable, too: university endowments have fallen in value like everyone else's savings; the recession means both government grants and alumni donations are down; and costs are either fixed or continue to rise.

At Oxford, the tuition fees vary according to what you're studying. A degree comparable to mine starts at £3,375 for EU students and tops out at £12,700 for overseas students. Overseas students are also charged a "college fee" of nearly £6,000. Next year, it seems most universities will be charging home students the government-allowed maximum of £9,000. Even though these numbers look cheap to an American, I understand the sticker shock: as recently as 1998 university tuition was free. My best suggestion to English 13-year-olds is to get your parents to move to Scotland as soon as possible.

These costs, coupled with the recession, led Paypal founder Peter Thiel to suggest that the US is in the grip of an about-to-burst education bubble.

Business school was always a numbers proposition: every prospective student has always weighed up the costs of tuition and a two-year absence from their paid jobs against the improved career prospects they hoped to acquire. But those pursuing university degrees were always a more mixed bag, big enough to include those who wanted to put off becoming adults and those who liked learning and wanted to be surrounded by smart people to do it with.

Is the Net the solution, as some suggest? A Russian at a party once explained her country's intellectual achievements to me: anyone, no matter how poor, could take pride in learning and improving their mind. Why couldn't we do the same? Certainly, the Net is a fantastic resource for the pursuit of learning for its own sake, particularly in the sciences. MIT led the way in putting its course materials online, and even without paying journal subscriptions there are full libraries ready for perusal.

It's a lovely thought, but I suspect it works best for those who are surrounded by or at least come from a culture that respects intellectual pursuits and that kind of self-disciplined application. My parents came from immigrant families and fervently believed in education as a way to a better life. Even though they themselves lacked formal education past high school they read a great deal of high-quality material throughout their lives; their house was full of newspapers, books, and magazines on almost every topic. My parents certainly saw a degree as a kind of economic passport, but that clearly wasn't the only reason they valued education. My mother was so ashamed that she hadn't finished high school that she spent her late 60s getting a GED and completing a college degree. At that age, she certainly wasn't doing a degree for its economic benefits.

The Net is a trickier education venue if you really do value learning solely in economic terms and what you need is the credential. If it's to become a substitute for today's university system, a number of things will have to change. Home higher education in at least some fields will need to go through the same process as home schooling has in order to establish itself as a viable alternative. Employers will need to find ways for people to prove their knowledge and ability. Universities will have to open up to the idea of admitting home-study students for a single, final year (distance learning specialists like the Open University ought to have a leg up here). Prestigious institutions will survive; cheap institutions will survive. At the biggest risk are the middle ones with good-but-not-great reputations and high costs.

Popular culture likes to depict top universities as elite clubs filled with arrogant, entitled snobs. The danger is that this will become true. If it does, as long as they continue to fill the ranks of politicians, CEOs, and the rest of the "great and good", that group will become ever more remote from the people they govern and employ. Bad news, all round.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 22, 2011

Face to face

When, six weeks or so back, Facebook implemented facial recognition without asking anyone much in advance, Tim O'Reilly expressed the opinion that it is impossible to turn back the clock and pretend that facial recognition doesn't exist or can be stopped. We need, he said, to stop trying to control the existence of these technologies and instead concentrate on controlling the uses to which collected data might be put.

Unless we're prepared to ban face recognition technology outright, having it available in consumer-facing services is a good way to get society to face up to the way we live now. Then the real work begins, to ask what new social norms we need to establish for the world as it is, rather than as it used to be.

This reminds me of the argument that we should be teaching creationism in schools in order to teach kids critical thinking: it's not the only, or even best, way to achieve the object. If the goal is public debate about technology and privacy, Facebook isn't a good choice to conduct it.

The problem with facial recognition, unlike a lot of other technologies, is that it's retroactive, like a compromised private cryptographic key. Once the key is known you haven't just unlocked the few messages you're interested in but everything ever encrypted with that key. Suddenly deployed accurate facial recognition means the passers-by in holiday photographs, CCTV images, and old TV footage of demonstrations are all much more easily matched to today's tagged, identified social media sources. It's a step change, and it's happening very quickly after a long period of doesn't-work-as-hyped. So what was a low-to-moderate privacy risk five years ago is suddenly much higher risk - and one that can't be withdrawn with any confidence by deleting your account.

There's a second analogy here between what's happening with personal data and what's happening to small businesses with respect to hacking and financial crime. "That's where the money is," the bank robber Willie Sutton explained when asked why he robbed banks. But banks are well defended by large security departments. Much simpler to target weaker links, the small businesses whose money is actually being stolen. These folks do not have security departments and have not yet assimilated Benjamin Woolley's 1990s observation that cyberspace is where your money is. The democratization of financial crime has a more direct personal impact because the targets are closer to home: municipalities, local shops, churches, all more geared to protecting cash registers and collection plates than to securing computers, routers, and point-of-sale systems.

The analogy to personal data is that until relatively recently most discussions of privacy invasion similarly focused on celebrities. Today, most people can be studied as easily as famous, well-documented people if something happens to make them interesting: the democratization of celebrity. And there are real consequences. Canada, for example, is doing much more digging at the border, banning entry based on long-ago misdemeanors. We can warn today's teens that raiding a nearby school may someday limit their freedom to travel; but today's 40-somethings can't make an informed choice retroactively.

Changing this would require the US to decide at a national level to delete such data; we would have to trust them to do it; and other nations would have to agree to do the same. But the motivation is not there. Judith Rauhofer, at the online behavioral advertising workshop she organised a couple of weeks ago, addressed exactly this point when she noted that increasingly the mantra of governments bent on surveillance is, "This data exists. It would be silly not to use it."

The corollary, and the reason O'Reilly is not entirely wrong, is that governments will also say, "This *technology* exists. It would be silly not to use it." We can ban social networks from deploying new technologies, but we will still be stuck with them when it comes to governments and law enforcement. In this, government and business interests align perfectly.

So what, then? Do we stop posting anything online on the basis of the old spy motto "Never volunteer information", thereby ending our social participation? Do we ban the technology (which does nothing to stop the collection of the data)? Do we ban collecting the data (which does nothing to stop the technology)? Do we ban both and hope that all the actors are honest brokers rather than shifty folks trading our data behind our backs? What happens if thieves figure out how to use online photographs to break into systems protected by facial recognition?

One common suggestion is that social norms should change in the direction of greater tolerance. That may happen in some aspects, although Anders Sandberg has an interesting argument that transparency may in fact make people more judgmental. But if the problem of making people perfect were so easily solved we wouldn't have spent thousands of years on it with very little progress.

I don't like the answer "It's here, deal with it." I'm sure we can do better than that. But these are genuinely tough questions. The start, I think, has to be building as much user control into technology design (and its defaults) as we can. That's going to require a lot of education, especially in Silicon Valley.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 8, 2011

The grey hour

There is a fundamental conundrum that goes like this. Users want free information services on the Web. Advertisers will support those services if users will pay in personal data rather than money. Are privacy advocates spoiling a happy agreement or expressing a widely held concern that just hasn't found expression yet? Is it paternalistic and patronizing to say that the man on the Clapham omnibus doesn't understand the value of what he's giving up? Is it an expression of faith in human nature to say that on the contrary, people on the street are smart, and should be trusted to make informed choices in an area where even the experts aren't sure what the choices mean? Or does allowing advertisers free rein mean the Internet will become a highly distorted, discriminatory, immersive space where the most valuable people get the best offers in everything from health to politics?

None of those questions are straw men. The middle two are the extreme end of the industry point of view as presented at the Online Behavioral Advertising Workshop sponsored by the University of Edinburgh this week. That extreme shouldn't be ignored; Kimon Zorbas from the Internet Advertising Bureau, who voiced those views, also genuinely believes that regulating behavioral advertising is a threat to European industry. Can you prove him wrong? If you're a politician intent on reelection, hear that pitch, and can't document harm, do you dare to risk it?

At the other extreme end are the views of Jeff Chester, from the Center for Digital Democracy, who laid out his view of the future both here and at CFP a few weeks ago. If you read the reports the advertising industry produces for its prospective customers, they're full of neuroscience and eyeball tracking. Eventually, these practices will lead, he argues, to a highly discriminatory society: the most "valuable" people will get the best offers - not just in free tickets to sporting events but the best access to financial and health services. Online advertising contributed to the subprime loan crisis and the obesity crisis, he said. You want harm?

It's hard to assess the reality of Chester's argument. I trust his reading of the documents in which advertising companies pitch their prospective customers. What isn't clear is whether the neuroscience these companies claim to use actually works. Certainly, one participant here says real neuroscientists heap scorn on the whole idea - and I am old enough to remember the mythology surrounding subliminal advertising.

Accordingly, the discussion here seems to me less of a single spectrum and more like a triangle, with the defenders of online behavioural advertising at one point, Chester and his neuroscience at another, and perhaps Judith Rauhofer, the workshop's organizer, at a third, with a lot of messy confusion in the middle. Upcoming laws, such as the revision of the EU ePrivacy Directive and various other regulatory efforts, will have to create some consensual order out of this triangular chaos.

The fourth episode of Joss Whedon's TV series Dollhouse, "The Gray Hour", had that week's characters enclosed inside a vault. They have an hour - the time it takes for the security system to reboot - to accomplish their mission of theft. Is this online behavioral advertising's grey hour? Its opportunity to get ahead before we realize what's going on?

A persistent issue is definitely technology design.

One of Rauhofer's main points is that the latest mantra is, "This data exists, it would be silly not to take advantage of it." This is her answer to one of those middle positions, that we should not be regulating collection but only the use of data. Her view makes sense to me: no one can abuse data that has not been collected. What does a privacy policy mean when the company that is actually collecting the data and compiling profiles is completely hidden?

One help would be teaching computer science students ethics and responsible data practices. The science fiction writer Charlie Stross noted the other day that the average age of entrepreneurs in the US is roughly ten years younger than in the EU. The reason: health insurance. Isn't it possible that starting up at a more mature age leads to a different approach to the social impact of what you're selling?

No one approach will solve this problem within the time we have to solve it. On the technology side, defaults matter. The "software choice architect", in researcher Chris Soghoian's phrase, is rarely the software developer; more usually it's the legal or marketing department. The three big browser manufacturers most heavily funded by advertising not-so-mysteriously ship the least privacy-friendly default settings. Advertising is becoming an arms race: first cookies, then Flash cookies, now online behavioral advertising, browser fingerprinting, geolocation, comprehensive profiling.

The law also matters. Peter Hustinx, lecturing last night, believes existing principles are right; they just need stronger enforcement and better application.

Consumer education would help - but for that to be effective we need far greater transparency from all these - largely American - companies.

What harm can you show has happened? Zorbas challenged. Rauhofer's reply: you do not have to prove harm when your house is bugged and constantly wiretapped. "That it's happening is the harm."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 24, 2011

Bits of the realm

Money is a collective hallucination. Or, more correctly, money is an abstraction that allows us to exchange - for example - writing words for food, heat, or a place to live. Money means the owner of the local grocery store doesn't have to decide how many pounds of flour and Serrano ham 1,000 words are worth, and I don't have to argue copyright terms while paying my mortgage.

But, as I was reading lately in The Coming Collapse of the Dollar and How to Profit From It by James Turk, the owner of GoldMoney, that's all today's currencies are: abstractions. Fiat currencies. The real thing disappeared when we left the gold standard in 1971. Accordingly none of the currencies I regularly deal with - pounds, dollars, euros - are backed by anything more than their respective governments' "full faith and credit". Is this like Tinker Bell? If I stop believing will they cease to exist? Certainly some people think so, and that's why, as James Surowiecki wrote in The New Yorker in 2004, some people believe that gold is the One True Currency.

"I've never bought gold," my father said in the late 1970s. "When it's low, it's too expensive. When it's high, I wish I'd bought it when it was low." Gold was then working its way up to its 1980 high of $850 an ounce. Until 2004 it did nothing but decline. Yesterday, it closed at $1518.

That's if you view the world from the vantage point of the dollar. If gold is your sun and other currencies revolve around it like imaginary moths, nothing's happened. An ounce just buys a lot more dollars now than it did and someday will be tradable for wagonloads of massively devalued fiat currencies. You don't buy gold; you convert your worthless promises into real stored value.

Personally, I've never seen the point of gold. It has relatively few real-world uses. You can't eat it, wear it, or burn it for heat and light. But it does have the useful quality of being a real thing, and when you could swap dollars for gold held in the US government's vault, dollars, too, were real things.

The difficulty with Bitcoins is that they have neither physical reality nor a long history (even if that history is one of increasing abstraction). Using them requires people to make the jump from the national currency they know straight into bits of code backed by a bunch of mathematics they don't understand.
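
For the curious, the mathematics is less mystical than it sounds: at its heart, Bitcoin's proof of work is a brute-force search for a nonce that drives a double SHA-256 hash of a block header below a target. Here is a toy version, with the real protocol's binary block format and difficulty arithmetic drastically simplified:

```python
# Toy Bitcoin-style proof of work: keep trying nonces until the double
# SHA-256 hash of "header|nonce" falls below a target. The header string and
# difficulty are simplified stand-ins for the real protocol's binary format.
import hashlib

def mine(header: str, difficulty_bits: int = 18) -> tuple[int, str]:
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(f"{header}|{nonce}".encode()).digest()
        ).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest
        nonce += 1

nonce, digest = mine("prev_hash|merkle_root|timestamp")
print(nonce, digest)
# Anyone can verify the answer with two hashes; finding it took, on average,
# about a quarter of a million tries at this toy difficulty.
```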

Alternative currencies have been growing for some time now - probably the first was Ithaca Hours, which are accepted by many downtown merchants in my old home town of Ithaca, NY. What gives Ithaca Hours their value is that you trade them with people you know and can trust to support the local economy. Bitcoins up-end that: you trade them with strangers who can't find out who you are. The big advantage, as Bitcoin Consultancy co-founder Amir Taaki explains on Slashdot, is that their transaction costs are very, very low.

The idea of cryptographic cash is not new, though the peer-to-peer implementation is. Anonymous digital cash was first mooted by David Chaum in the 1980s; his company, DigiCash, began life in 1990 and by 1993 had launched ecash. At the time, it was widely believed that electronic money was an inevitable development. And so it likely is, especially if you believe e-money specialist Dave Birch, who would like nothing more than to see physical cash die a painful death.

But the successful electronic transaction systems are those that build on existing currencies and structures. Paypal, founded in 1998, achieved its success by enabling online use of existing bank accounts and credit cards. M-Pesa and other world-changing mobile phone schemes are enabling safe and instant transactions in the developing world. Meanwhile, DigiCash went bankrupt in 1999 and every other digital cash attempt of the 1990s also failed.

For comparison, ten-year-old GoldMoney's latest report says it's holding $1.9 billion in precious metals and currencies for its customers - still tiny by global standards. The most interesting thing about GoldMoney, however, is not the gold bug aspect but its reinvention of gold as electronic currency: you can pay other GoldMoney customers in electronic shavings of gold (minimum one-tenth of a gram) at a fraction of international banking costs.

"Humans will trade anything," writes Danny O'Brien in his excellent discussion of Bitcoin. Sure: we trade favors, baseball cards, frequent flyer miles, and information. But Birch is not optimistic about Bitcoin's long-term chances, and neither am I, though for different reasons. I believe that people are very conservative about what they will take in trade for the money they've worked hard to earn. Warren Buffett and his mentor, Benjamin Graham, typically offer this advice about investing: don't buy things you don't understand. By that rule, Bitcoins fail. Geeks are falling on them like any exciting, new start-up, but I'll guess that most people would rather bet on horses than take Bitcoins. There's a limit to how abstract we like our money to be.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

May 27, 2011

Mixed media

In a fight between technology and the law, who wins? This question has been debated since Net immemorial. Techies often seem to be sure that law can't win against practical action. And often this has been true: the release of PGP defeated the International Traffic in Arms Regulations that banned the export of strong cryptography; TOR lets people all over the world bypass local Net censorship rules; and, in the UK, over the last few weeks Twitter has been causing superinjunctions to collapse.

On the other hand, technology by itself is often not enough. The final defeat of the ITAR had at least as much to do with the expansion of ecommerce and the consequent need for secured connections as it did with PGP. TOR is a fine project, but it is not a mainstream technology. And Twitter is a commercial company that can be compelled to disclose what information it has about its users (though granted, this may be minimal) or close down accounts.

Last week, two events took complementary approaches to this question. The first, Big Tent UK, hosted by Google, Privacy International, and Index on Censorship, featured panels and discussions loosely focused on how law can control technology. The second, OpenTech, loosely focused on how technology can change our understanding of the world, if not up-end the law itself. At the latter event, projects like Lisa Evans' effort to understand government spending relied on government-published data, while others, such as OpenStreetMap and OpenCorporates, seek to create open-source alternatives to existing proprietary services.

There's no question that doing things - or, in my case, egging on people who are doing things - is more fun than purely intellectual debate. I particularly liked the open-source hardware projects presented at OpenTech, some of which are, as presenter Paul Downey said, trying to disrupt a closed market. See, for example, River Simple's effort to offer an open-source design for a hydrogen-powered car. Downey whipped through perhaps a dozen projects, all based on the notion that if something can be represented by lines on a PowerPoint slide you can send it to a laser cutter.

But here again I suspect the law will interfere at some point. Not only will open-source cars have to obey safety regulations, but all hardware designs will come up against the same intellectual property issues that have been dogging the Net from all directions. We've noted before Simon Bradshaw's work showing that copyright as applied to three-dimensional objects will be even more of a rat's nest than it has been when applied to "simple" things like books, music, and movies.

At Big Tent UK, copyright was given a rest for once in favor of discussions of privacy, the limits of free speech, and revolution. As is so often the case with this type of discussion, it wasn't long before someone - British TV producer Peter Bazalgette - invoked George Orwell. Bizarrely, he aimed "Orwellian" at Privacy International executive director Simon Davies, who a minute before had proposed that the solution to at least some of the world's ongoing privacy woes would be for regulators internationally to collaborate on doing their jobs. Oddly, in an audience full of leading digital rights activists and entrepreneurs, no one admitted to representing the Information Commissioner's office.

Yet given these policy discussions as his prelude, the MP Jeremy Hunt (Con-South West Surrey), the secretary of state for Culture, Olympics, Media, and Sport, focused instead on technical progress. We need two things for the future, he said: speed and mobility. Here he cited Bazalgette's great-great-grandfather's contribution to building the sewer system as a helpful model for today. Tasked with deciding the size of pipes to specify for London's then-new sewer system, Joseph Bazalgette doubled the size of pipe necessary to serve the area of London with the biggest demand; we still use those same pipes. We should, said Hunt, build bandwidth in the same foresighted way.

The modern-day Bazalgette, instead, wants the right to be forgotten: people, he said, should have the right to delete any information that they voluntarily surrender. Much like Justine Roberts, the founder of Mumsnet, who participated in the free speech panel, he seemed not to understand the consequences of what he was asking for. Roberts complained that the "slightly hysterical response" to any suggestion of moderating free speech in the interests of child safety inhibits real discussion; the right to delete is not easily implemented when people are embedded in a three-dimensional web of information.

The Big Tent panels on revolution and conflict would have fit either event, including Wael Ghonim, who ran a Facebook page that fomented pro-democracy demonstrations in Egypt, and representatives of PAX and Unitar, projects to use the postings of "citizen journalists" and public image streams respectively to provide early warnings of developing conflict.

In the end, we need both technology and law, a viewpoint best encapsulated by Index on Censorship chief executive John Kampfner, who said he was worried by claims that the Internet is a force for good. "The Internet is a medium, a tool," he said. "You can choose to use it for moral good or moral ill."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

January 14, 2011

Face time

The history of the Net has featured many absurd moments, but this week was some sort of peak of the art. In the same week I read that a) based on a $450 million round of investment from Goldman Sachs, Facebook is now valued at $50 billion, higher than Boeing's market capitalization and b) Facebook's founder, Mark Zuckerberg, is so tired of the stress of running the service that he plans to shut it down on March 15. As I seem to recall a CS Lewis character remarking irritably, "Why don't they teach logic in these schools?" If you have a company worth $50 billion and you don't much like running it any more, you sell the damn thing and retire. It's not like Zuckerberg even needs to wait to be Time's Man of the Year.

While it's safe to say that Facebook isn't going anywhere soon, it's less clear what its long-term future might be, and the users who panicked at the thought of the service's disappearance would do well to plan ahead. Because: if there's one thing we know about the history of the Net's social media it's that the party keeps moving. Facebook's half-a-billion-strong user base is, to be sure, bigger than anything else assembled in the history of the Net. But I think the future as seen by Douglas Rushkoff, writing for CNN last week is more likely: Facebook, he argued based on its arguably inflated valuation, is at the beginning of its end, as MySpace was when Rupert Murdoch bought it in 2005 for $580 million. (Though this says as much about Murdoch's Net track record as it does about MySpace: Murdoch bought the text-based Delphi, at its peak moment in late 1993.)

Back in 1999, at the height of the dot-com boom, the New Yorker published an article (abstract; full text requires subscription) comparing the then-spiking stock price of AOL with that of the Radio Corporation of America back in the 1920s, when radio was the hot, new democratic medium. RCA was selling radios that gave people unprecedented access to news and entertainment (including stock quotes); AOL was selling online accounts that gave people unprecedented access to news, entertainment, and their friends. The comparison, as the article noted, wasn't perfect, but the comparison chart the article was written around was, as the author put it, "jolly". It still looks jolly now, recreated some months later for this analysis of the comparison.

There is more to every company than just its stock price, and there is more to AOL than its subscriber numbers. But the interesting chart to study - if I had the ability to create such a chart - would be the successive waves of rising, peaking, and falling numbers of subscribers of the various forms of social media. In more or less chronological order: bulletin boards, Usenet, Prodigy, Genie, Delphi, CompuServe, AOL...and now MySpace, which this week announced extensive job cuts.

At its peak, AOL had 30 million of those; at the end of September 2010 it had 4.1 million in the US. As subscriber revenues continue to shrink, the company is changing its emphasis to producing content that will draw in readers from all over the Web - that is, it's increasingly dependent on advertising, like many companies. But the broader point is that at its peak a lot of people couldn't conceive that it would shrink to this extent, because of the basic principle of human congregation: people go where their friends are. When the friends gradually start to migrate to better interfaces, more convenient services, or simply sites their more annoying acquaintances haven't discovered yet, others follow. That doesn't necessarily mean death for the service they're leaving: AOL, like CIX, The WELL, and LiveJournal before it, may well find a stable size at which it remains sufficiently profitable to stay alive, perhaps even comfortably so. But it does mean it stops being the growth story of the day.

As several financial commentators have pointed out, the Goldman investment is good for Goldman no matter what happens to Facebook, and may not be ring-fenced enough to keep Facebook private. My guess is that even if Facebook has reached its peak it will be a long, slow ride down the mountain and between then and now at least the early investors will make a lot of money.

But long-term? Facebook is barely five years old. According to figures leaked by one of the private investors, its price-earnings ratio is 141. The good news is that if you're rich enough to buy shares in it you can probably afford to lose the money.

As far as I'm aware, little research has been done studying the Net's migration patterns. From my own experience, I can say that my friends lists on today's social media include many people I've known on other services (and not necessarily in real life) as the old groups reform in a new setting. Facebook may believe that because the profiles on its service are so complex, including everything from status updates and comments to photographs and games, users will stay locked in. Maybe. But my guess is that the next online party location will look very different. If email is for old people, it won't be long before Facebook is, too.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

January 7, 2011

Scanning the TSA

There are, Bruce Schneier said yesterday at the Electronic Privacy Information Center mini-conference on the TSA (video should be up soon), four reasons why airport security deserves special attention, even though it directly affects a minority of the population. First: planes are a favorite terrorist target. Second: they have unique failure characteristics - that is, the plane crashes and everybody dies. Third: airlines are national symbols. Fourth: planes fly to countries where terrorists are.

There's a fifth he didn't mention but that Georgetown lawyer Pablo Molina and We Won't Fly founder James Babb did: TSAism is spreading. Random bag searches on the DC Metro and the New York subways. The TSA talking about expanding its reach to shopping malls and hotels. And something I found truly offensive, giant LED signs posted along the Maryland highways announcing that if you see anything suspicious you should call the (toll-free) number below. Do I feel safer now? No, and not just because at least one of the incendiary devices sent to Maryland state offices yesterday apparently contained a note complaining about those very signs.

Without the sign, if you saw someone heaving stones at the cars you'd call the police. With it, you peer nervously at the truck in front of you. Does that driver look trustworthy? This is, Schneier said, counter-productive because what people report under that sort of instruction is "different, not suspicious".

But the bigger flaw is cover-your-ass backward thinking. If someone tries to bomb a plane with explosives in a printer cartridge, missing a later attempt using the exact same method will get you roasted for your stupidity. And so we have a ban on flying with printer cartridges over 500g and, during December, restrictions on postal mail, something probably few people in the US even knew about.

Jim Harper, a policy scholar with the Cato Institute and a member of the Department of Homeland Security's Data Privacy and Integrity Advisory Committee, outlined even more TSA expansion. There are efforts to create mobile lie detectors that measure physiological factors like eye movements and blood pressure.

Technology, Lillie Coney observed, has become "like butter - few things are not improved if you add it."

If you're someone charged with blocking terrorist attacks you can see the appeal: no one wants to be the failure who lets a bomb onto a plane. Far, far better if it's the technology that fails. And so expensive scanners roll through the nation's airports despite the expert assessment - on this occasion, from Schneier and Ed Luttwak, a senior associate with the Center for Strategic and International Studies - that the scanners are ineffective, invasive, and dangerous. As Luttwak said, the machines pull people's attention, eyes, and brains away from the most essential part of security: watching and understanding the passengers' behavior.

"[The machine] occupies center stage, inevitably," he said, "and becomes the focus of an activity - not aviation security, but the operation of a scanner."

Equally offensive in a democracy, many speakers argued, is the TSA's secrecy and lack of accountability. Even Meera Shankar, the Indian ambassador, could not get much of a response to her complaint from the TSA, Luttwak said. "God even answered Job." The agency sent no representative to this meeting, which included Congressmen, security experts, policy scholars, lawyers, and activists.

"It's the violation of the entire basis of human rights," said the Stanford and Oxford lawyer Chip Pitts around the time that the 112th Congress was opening up with a bipartisan reading of the US Constitution. "If you are treated like cattle, you lose the ability to be an autonomous agent."

As Libertarian National Committee executive director Wes Benedict said, "When libertarians and Ralph Nader agree that a program is bad, it's time for our government to listen up."

So then, what are the alternatives to spending - so far, in the history of the Department of Homeland Security, since 2001 - $360 billion, not including the lost productivity and opportunity costs to the US's 100 million flyers?

Well, first of all, stop being weenies. The number of speakers who reminded us that the US was founded by risk-takers was remarkable. More people, Schneier noted, are killed in cars every month than died on 9/11. Nothing, Ralph Nader said, is spent on the 58,000 Americans who die in workplace accidents every year or the many thousands more who are killed by pollution or medical malpractice.

"We need a comprehensive valuation of how to deploy resources in a rational manner that will be effective, minimally invasive, efficient, and obey the Constitution and federal law," Nader said

So: dogs are better at detecting explosives than scanners. Intelligent profiling can whittle down the mass of suspects to a more manageable group than "everyone" in a giant game of airport werewolf. Instead, at the moment we have magical thinking, always protecting ourselves from the last attack.

"We're constantly preparing for the rematch," said Lillie Coney. "There is no rematch, only tomorrow and the next day." She was talking as much about Katrina and New Orleans as 9/11: there will always, she said, be some disaster, and the best help in those situations is going to come from individuals and the people around them. Be prepared: life is risky.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

December 17, 2010

Sharing values

And then they came for Google...

The notion that the copyright industries' war on file-sharing would eventually rise to the Google level of abstraction used to be a sort of joke. It was the kind of thing the owners of torrent search sites (and before them, LimeWire and Gnutella nodes) said as an extreme way of showing how silly the whole idea was that file-sharing could be stamped out by suing people. It was the equivalent in airport terms of saying, "What are they going to do? Have us all fly naked?"

This week, it came true. You can see why: the British Phonographic Industry's annual report cites research it commissioned from Harris Interactive showing that 58 percent of "illegal downloaders" used Google to find free music. (Of course, not all free music consists of unauthorized copies, but we'll get to that in a minute.)

The rise of Google in particular (it has something like 90 percent of the UK market, somewhat less in the US) and search engines in general as the main gateway through which people access the Internet made it, I think, inevitable that at some point the company would become a focus for the music industry. And Google is responding, announcing on December 2 that it would favor authorized content in its search listings and prevent "terms closely related with piracy" from appearing in AutoComplete.

Is this censorship? Perhaps, but I find it hard to get too excited about, partly because Autocomplete is the annoying boor who's always finishing my sentences wrongly, partly because having to type "torrent" doesn't seem like much of a hardship, and partly because I don't believe this action will make much of a difference. Still, as Google's design shifts more toward the mass market, such subtle changes will create ever-larger effects.

I would be profoundly against demonizing file-sharing technology by making it technically impossible to use Google to find torrent/cyberlocker/forum sites - because such sites are used for many other things that have nothing to do with distributing music - but that's not what's being talked about here. It's worth noting, however, that this is (yet another) example of Google's double standards when it comes to copyright. Obliging the music industry's request costs Google very little and also creates the opportunity to nudge its own YouTube a little further up the listings. Compare and contrast that with the company's protracted legal battle over its having digitized and made publicly available millions of books without the consent of the rights holders.

If I were the music industry I think I'd be generally encouraged by the BPI's report. It shows that paid, authorized downloads are really beginning to take off; digital now accounts for nearly 25 percent of UK record industry revenues. Harris Interactive found that approximately 7.7 million people in the UK continue to download music "illegally". Jupiter Research estimated the foregone revenues at £219 million. The BPI's arithmetic estimates that paid, authorized downloads represent about a quarter of all downloads. Seems to me that's all moving in the right direction - without, mind you, assistance from the draconian Digital Economy Act.

The report also notes the rise of unauthorized, low-cost pay sites that siphon traffic away from authorized pay services. These are, in my view, the equivalent of selling counterfeit CDs, and I have no problem with regarding them as legitimately lost sales or seeing them shut down.

Is the BPI's glass half-empty or half-full? I think it's filling up, just like we told them it would. They are progressively competing successfully with free, and they'd be a lot further along that path if they had started sooner.

As a former full-time musician with many friends still in the trade, it's hard to argue that encouraging people towards services that pay the artist at the expense of those that don't is a bad principle. What I really care about is that it should be as easy to find Andy Cohen playing "Oh, Glory" as it is to find Lady Gaga singing anything. And that's an area where the Internet is the best hope for parity we've ever had; as a folksinger friend of mine said a couple of years back, "The music business never did anything for us."

I've been visiting Cohen this week, and he's been explicating the German sociologist Ferdinand Tönnies's distinction between the music business as gesellschaft (society) and folk music as gemeinschaft (community).

"Society has rules, communities have customs," he said last night. "When a dispute over customs has to be adjudicated, that's the border of society." Playing music for money comes under society's rules - that is, copyright. But for Cohen, a professional musician for more than 40 years with multiple CDs, music is community.

We've been driving around Memphis visiting his friends, all of whom play music themselves, some easily, some with difficulty. Music is as much a part of their active lives as breathing. This is a fundamental disconnect from the music industry, which sees us all as consumers and every unpaid experience of music as a lost sale. This is what "sharing music" really means: playing and singing together - wherever.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 24, 2010

Lost in a Haystack

In the late 1990s you could always tell when a newspaper had just gotten online because it would run a story about the Good Times virus.

Pause for historical detail: the Good Times virus (and its many variants) was an email hoax. An email message with the subject heading "Good Times" or, later, "Join the Crew", or "Penpal Greetings", warned recipients that opening email messages with that header would damage their computers or delete the contents of their hard drives. Some versions cited Microsoft, the FCC, or some other authority. The messages also advised recipients to forward the message to all their friends. The mass forwarding and subsequent complaints were the payload.

The point, in any case, is that the Good Times virus was the first example of mass social engineering that spread by exploiting not particularly clever psychology and a specific kind of technical ignorance. The newspaper staffers of the day were very much ordinary new users in this regard, and they would run the story thinking they were serving their readers. To their own embarrassment, of course. You'd usually see a retraction a week or two later.

Austin Heap, the progenitor of Haystack, software he claimed was devised to protect the online civil liberties of Iranian dissidents, seems more likely to have failed to understand what he was doing than to have been conducting an elaborate hoax. Either way, Haystack represents a significant leap upward in successfully taking mainstream, highly respected publications for a technical ride. Evgeny Morozov's detailed media critique underestimates the impact of the recession and staff cuts on an already endangered industry. We will likely see many more mess-equals-technology-plus-journalism stories because so few technology specialists remain in the post-recession mainstream media.

I first heard Danny O'Brien's doubts about Haystack in June, and his chief concern was simple and easily understood: no one was able to get a copy of the software to test it for flaws. For anyone who knows anything about cryptography or security, that ought to have been damning right out of the gate. The lack of such detail is why experienced technology journalists, including Bruce Schneier, generally avoided commenting on it. There is a simple principle at work here: the *only* reason to trust technology that claims to protect its users' privacy and/or security is that it has been thoroughly peer-reviewed - banged on relentlessly by the brightest and best and they have failed to find holes.

As a counter-example, let's take Phil Zimmermann's PGP, email encryption software that really has protected the lives and identities of far-flung dissidents. In 1991, when PGP first escaped onto the Net, interest in cryptography was still limited to a relatively small, though very passionate, group of people. The very first thing Zimmermann wrote in the documentation was this: why should you trust this product? Just in case readers didn't understand the importance of that question, Zimmermann elaborated, explaining how fiendishly difficult it is to write encryption software that can withstand prolonged and deliberate attacks. He was very careful not to claim that his software offered perfect security, saying only that he had chosen the best algorithms he could from the open literature. He also distributed the source code freely for review by all and sundry (who have to this day failed to find substantive weaknesses). He concludes: "Anyone who thinks they have devised an unbreakable encryption scheme either is an incredibly rare genius or is naive and inexperienced." Even the software's name played down its capabilities: Pretty Good Privacy.

When I wrote about PGP in 1993, it was already changing the world by up-ending international cryptography regulations, blocking mooted US legislation that would have banned the domestic use of strong cryptography, and defying patent claims. But no one, not even the most passionate cypherpunks, claimed the two-year-old software was the perfect, the only, or even the best answer to the problem of protecting privacy in the digital world. Instead, PGP was part of a wider argument taking shape in many countries over the risks and rewards of allowing civilians to have secure communications.

Now to the claims made for Haystack in its FAQ:

However, even if our methods were compromised, our users' communications would be secure. We use state-of-the-art elliptic curve cryptography to ensure that these communications cannot be read. This cryptography is strong enough that the NSA trusts it to secure top-secret data, and we consider our users' privacy to be just as important. Cryptographers refer to this property as perfect forward secrecy.

Without proper and open testing of the entire system - peer review - they could not possibly know this. The strongest cryptographic algorithm is only as good as its implementation. And even then, as Clive Robertson writes in Financial Cryptography, technology is unlikely to be a complete solution.
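For context: "perfect forward secrecy" normally refers not to the strength of an algorithm but to the use of ephemeral, per-session keys, so that a later compromise of long-term keys cannot decrypt previously recorded traffic. Here is a minimal sketch of that idea in Python, using the third-party cryptography package - none of this reflects Haystack's unpublished design:

# Minimal forward-secrecy sketch: each session uses fresh ephemeral X25519
# keys that are discarded afterwards. Requires 'pip install cryptography'.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each party generates a new key pair for this session only.
alice_ephemeral = X25519PrivateKey.generate()
bob_ephemeral = X25519PrivateKey.generate()

# Both sides derive the same shared secret from the exchanged public keys.
shared = alice_ephemeral.exchange(bob_ephemeral.public_key())
assert shared == bob_ephemeral.exchange(alice_ephemeral.public_key())

# The shared secret is run through a KDF to produce the session key.
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"demo session").derive(shared)

# Once the ephemeral private keys are discarded, recorded ciphertext cannot
# be decrypted even if long-term identity keys later leak. Whether any given
# implementation gets this right is exactly what peer review is for.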

What a difference a sexy news hook makes. In 1993, the Clinton Administration's response to PGP was an FBI investigation that dogged Zimmermann for two years; in 2010, Hillary Clinton's State Department fast-tracked Haystack through the licensing requirements. Why such a happy embrace of Haystack rather than existing privacy technologies such as Freenet, Tor, or anonymous remailers and proxies remains a question for the reader.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 9, 2010

The big button caper

There's a moment early in the second season of the TV series Mad Men when one of the Sterling Cooper advertising executives looks out the window and notices, in a tone of amazement, that young people are everywhere. What he was seeing was, of course, the effect of the baby boom. The world really *was* full of young people.

"I never noticed it," I said to a friend the next day.

"Well, of course not," he said. "You were one of them."

Something like this will happen to today's children - they're going to wake up one day and think the world is awash in old people. This is a fairly obvious consequence of the demographic bulge of the Baby Boomers, which author Ken Dychtwald has compared to "a pig going through a python".

You would think that mobile phone manufacturers and network operators would be all over this: carrying a mobile phone is an obvious safety measure for an older, perhaps infirm or cognitively confused person. But apparently the concept is more difficult to grasp than you'd expect, and so Simon Rockman, the founder and former publisher of What Mobile and now working for the GSM Association, convened a senior mobile market conference on Tuesday.

Rockman's pitch is that the senior market is a business opportunity: unlike other market sectors it's not saturated, and older users are less likely to be expensive data users and are more loyal. The margins are better, he argues, even if average revenue per user is low.

The question is, how do you appeal to this market? To a large extent, seniors are pretty much like everyone else: they want gadgets that are attractive, even cool. They don't want the phone equivalent of support stockings. Still, many older people do have difficulties with today's ultra-tiny buttons, icons, and screens, iffy sound quality, and complex menu structures. Don't we all?

It took Ewan MacLeod, the editor of Mobile Industry Review, to point out the obvious. What is the killer app for most seniors in any device? Grandchildren, pictures of. MacLeod has a four-week-old son and a mother whose desire to see pictures apparently could only be fully satisfied by a 24-hour video feed. Industry inadequacy means that MacLeod is finding it necessary to write his own app to make sending and receiving pictures sufficiently simple and intuitive. This market, he pointed out, isn't even price-sensitive. Tell his mother she'll need to spend £60 on a device so she can see daily pictures of her grandkids, and she'll say, "OK." Tell her it will cost £500, and she'll say..."OK."

I bet you're thinking, "But the iPhone!" And to some extent you're right: the iPhone is sleek, sexy, modern, and appealing; it has a zoom function to enlarge its display fonts, and it is relatively easy to use. And so MacLeod got all the grandparents onto iPhones. But he's having to write his own app to easily organize and display the photos the phones receive: the available options are "Rubbish!"

But even the iPhone has problems (even if you're not left-handed). Ian Hosking, a senior research associate at the Cambridge Engineering Design Centre, demonstrated his visual impairment simulation software overlaid on the iPhone's display so the effect was easy to see. Lack of contrast means the iPhone's white-on-black type disappears unreadably with only a small amount of vision loss. Enlarging the font only changes the text in some fields. And that zoom feature, ah, yes, wonderful - except that enabling it requires you to double-tap and then navigate with three fingers. "So the visual has improved, but the dexterity is terrible."

Oops.

In all this you may have noticed something: that good design is good design, and a phone design that accommodates older people will also most likely be a more usable phone for everyone else. These are principles that have not changed since Donald Norman formulated them in his classic 1988 book The Design of Everyday Things. To be sure, there is some progress. Evelyne Pupeter-Fellner, co-founder of Emporia, for example, pointed out the elements of her company's designs that are quietly targeted at seniors: the emergency call system that automatically dials, in turn, a list of selected family members or friends until one answers; the ringing mechanism that lights up the button to press to answer. The radio you can insert the phone into that will turn itself down and answer the phone when it rings. The design that lets you attach it to a walker - or a bicycle. The single-function buttons. The Doro phones drew similar praise.

And yet it could all be so different - if we would only learn from Japan, where nearly 86 percent of seniors have - and use data on - mobile phones, according to Kei Shimada, founder of Infinita.

But in all the "beyond big buttons" discussion and David Doherty's proposition that health applications will be the second killer app, one omission niggled: the aging population is predominantly female, and the older the cohort the more that is true.

Who are least represented among technology designers and developers?

Older women.

I'd call that a pretty clear mismatch. Somewhere between those who design and those who consume is your problem.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 4, 2010

Return to the hacker crackdown

Probably many people had forgotten about the Gary McKinnon case until the new government reversed their decision to intervene in his extradition. Legal analysis is beyond our expertise, but we can outline some of the historical factors at work.

By 2001, when McKinnon did his breaking and entering into US military computers, hacking had been illegal in the UK for just over ten years - the Computer Misuse Act was passed in 1990 after the overturned conviction of Robert Schifreen and Steve Gold for accessing Prince Philip's Prestel mailbox.

Early 1990s hacking (earlier, the word meant technological cleverness) was far more benign than today's flat-out crimes of identity fraud, money laundering, and raiding bank accounts. The hackers of the era - most famously Kevin Mitnick - were more the cyberspace equivalent of teenaged joyriders: they wandered around the Net rattling doorknobs and playing tricks to get passwords, and occasionally copied some bit of trophy software for bragging rights. Mitnick, despite spending four and a half years in jail awaiting trial, was not known to profit from his forays.

McKinnon's claim that he was looking for evidence that the US government was covering up information about alternative energy and alien visitations seems to me wholly credible. There was and is a definite streak of conspiracy theorists - particularly about UFOs - among the hacker community.

People seemed more alarmed by those early-stage hackers than they are by today's cybercriminals: the fear of new technology was projected onto those who seemed to be its masters. The series of 1990 "Operation Sundevil" raids in the US, documented in Bruce Sterling's book The Hacker Crackdown, inspired the creation of the Electronic Frontier Foundation. Among other egregious confusions, law enforcement seized game manuals from Steve Jackson Games in Austin, Texas, calling them hacking instruction books.

The raids came alongside a controversial push to make hacking illegal around the world. It didn't help when police burst in at the crack of dawn to arrest bright teenagers and hold them and their families (including younger children) at gunpoint while their computers and notebooks were seized and their homes ransacked for evidence.

"I think that in the years to come this will be recognized as the time of a witch hunt approximately equivalent to McCarthyism - that some of our best and brightest were made to suffer this kind of persecution for the fact that they dared to be creative in a way that society didn't understand," 21-year-old convicted hacker Mark Abene ("Phiber Optik") told filmmaker Annaliza Savage for her 1994 documentary, Unauthorized Access (YouTube).

Phiber Optik was an early 1990s cause célèbre. A member of the hacker groups Legion of Doom and Masters of Deception, he had an exceptionally high media profile. In January 1990, he and other MoD members were raided on suspicion of having caused the AT&T crash of January 15, 1990, when more than half of the telephone network ceased functioning for nine hours. Abene and others were eventually charged in 1991, with law enforcement demanding $2.5 million in fines and 59 years in jail. Plea agreements reduced that to a year in prison and 600 hours of community service. The company eventually admitted the crash was due to its own flawed software upgrade.

There are many parallels between those early days of hacking and today's copyright wars. Entrenched large businesses (then AT&T; now the RIAA, MPAA, BPI, et al) perceive mostly young, smart Net users as dangerous enemies and pursue them with the full force of the law. Isolated, often young, targets are threatened with jail and/or exaggeratedly huge sums in damages to make examples of them and deter others. The upshot in the 1990s was an entrenched distrust of and contempt for law enforcement on the part of the hacker community, exacerbated by the fact that back then so few law enforcement officers understood anything about the technology they were dealing with. The equivalent now may be a permanent contempt for copyright law.

In his 1990 essay Crime and Puzzlement examining the issues raised by hacking, EFF co-founder John Perry Barlow wrote of Phiber Optik, whom he met on the WELL: "His cracking impulses seemed purely exploratory, and I've begun to wonder if we wouldn't also regard spelunkers as desperate criminals if AT&T owned all the caves."

When McKinnon was first arrested in March 2002 and then indicted in a Virginia court in October 2002 for cracking into various US military computers - with damage estimated at $800,000 - all this history was still fresh. Meanwhile, the sympathy and good will toward the US engendered by the 9/11 attacks had been dissipated by the Bush administration's reaction: the PATRIOT Act (passed October 2001) expanded US government powers to detain and deport foreign citizens, and the first prisoners arrived at Guantanamo in January 2002. Since then, the US has begun fingerprinting all foreign visitors and has seen many erosions of civil liberties. The 2005 changes to British law that made hacking into an extraditable offense were controversial for precisely these reasons.

As McKinnon's case has dragged on through extradition appeals this emotional background has not changed. McKinnon's diagnosis with Asperger's Syndrome in 2008 made him into a more fragile and sympathetic figure. Meanwhile, the really dangerous cybercriminals continue committing fraud, theft, and real damage, apparently safe from prosecution.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

May 28, 2010

Privacy theater

On Wednesday, in response to widespread criticism and protest, Facebook finally changed its privacy settings to be genuinely more user-friendly - and for once, the settings actually are. It is now reasonably possible to tell at a glance which elements of the information you have on the system are visible and to what class of people. To be sure, the classes available - friends, friends of friends, and everyone - are still broad, but it is a definite improvement. It would be helpful if Facebook provided a button so you could see what your profile looks like to someone who is not on your friends list (although of course you can see this by logging out of Facebook and then searching for your profile). If you're curious just how much of your information is showing, you might want to try out Outbook.

Those changes, however, only tackle one element of a four-part problem.

1: User interface. Fine-grained controls are, as the company itself has said, difficult to present in a simple way. This is what the company changed this week and, as already noted, the new design is a big improvement. It can still be improved, and it's up to users and governments to keep pressure on the company to do so.

2: Business model. Underlying all of this, however, is the problem that Facebook still has to make money. To some extent this is our own fault: if we don't want to pay money to use the service - and it's pretty clear we don't - then it has to be paid for some other way. The only marketable asset Facebook has is its user data. Hence Andrew Brown's comment that users are Facebook's product; advertisers are its customers. As others have commented, traditional media companies also sell their audience to their advertisers; but there's a qualitative difference in that traditional media companies also create their own content, which gives them other revenue streams.

3: Changing the defaults. As this site's graphic representation makes clear, since 2005 the changes in Facebook's default privacy settings have all gone one way: towards greater openness. We know from decades of experience that defaults matter because so many computer users never change them. It's why Microsoft has had to defend itself against antitrust actions regarding bundling Internet Explorer and Windows Media Player into its operating system. On Facebook, users should have to make an explicit decision to make their information public - opt in, rather than opt out. That would also be more in line with the EU's Data Protection Directive.

4: Getting users to understand what they're disclosing. Back in the early 1990s, AT&T ran a series of TV ads in the US targeting a competitor's having asked its customers for the names of their friends and family for marketing purposes. "I don't want to give those out," the people in the ads were heard to say. Yet people freely disclose exactly that sort of information on Facebook every day. Caspar Bowden, director of the Foundation for Information Policy Research, has argued persuasively that traffic analysis - seeing who is talking to whom and with what frequency - is far more revealing than the actual contents of messages.

What makes today's social networks different from other messaging systems (besides their scale) is that typically those - bulletin boards, conferencing systems, CompuServe, AOL, Usenet, today's Web message boards - were and are organized around topics of interest: libel law reform, tennis, whatever. Even blogs, whose earliest audiences are usually friends, become more broadly successful because of the topics they cover and the quality of that coverage. In the early days, that structure was due to the fact that most people online were strangers meeting for the first time. These days, it allows those with minority interests to find each other. But in social media the organizing principle is the social connections of individual people whose tenure on the service begins, by and large, by knowing each other. This vastly simplifies traffic analysis.
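A toy illustration of Bowden's point - the names and message log are invented, and no message contents appear at all; frequency of contact alone maps the strongest ties:

# Toy traffic analysis: only sender/recipient metadata is examined, never
# message bodies. The names and message log are invented.
from collections import Counter

metadata = [  # (sender, recipient) pairs: who messaged whom
    ("alice", "bob"), ("alice", "bob"), ("alice", "bob"),
    ("bob", "carol"), ("alice", "carol"), ("carol", "alice"),
]

# Count contacts per unordered pair; the heaviest edges are the closest ties.
ties = Counter(frozenset(pair) for pair in metadata)
for pair, count in ties.most_common():
    print("-".join(sorted(pair)), count)
# alice-bob 3, alice-carol 2, bob-carol 1: the shape of the social circle
# falls out of the metadata without reading a single message.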

A number of factors contributed to the success of Facebook. One was the privacy promises the company made (and has since revised). But another was certainly dissatisfaction with the wider Net. I've heard Facebook described as an effort to reinvent the Net, and there's some truth to that in that it presents itself as a safer space. That image is why people feel comfortable posting pictures of their kids. But a key element in Facebook's success has, I think, also been the brokenness of email and, to a lesser degree, instant messaging. As email became overrun with spam and other unwanted junk, and as it became harder to know which friend was using which incompatible IM service, many people gravitated to social networks as a way of keeping their inboxes as personal space.

Facebook is undoubtedly telling the truth when it says that the privacy complaints have, so far, made little difference to the size and engagement of its user base. It's extreme to say that Facebook victimizes its users, but it is true that the expectations of its active core of long-term users have been progressively betrayed. Facebook's users have no transparency about or control over what data Facebook shares with its advertisers. Making that visible would go a long way toward restoring users' trust.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 30, 2010

Child's play

In the TV show The West Wing (Season 6, Episode 17, "A Good Day") young teens tackle the president: why shouldn't they have the right to vote? There's probably no chance, but they made their point: as a society we trust kids very little and often fail to take them or their interests seriously.

That's why it was so refreshing to read in the 2008 Byron Review (http://www.dcsf.gov.uk/byronreview/actionplan/) the recommendation that we should consult and listen to children in devising programs to ensure their safety online. Byron made several thoughtful, intelligent analogies: we supervise as kids learn to cross streets; we post warning signs at swimming pools but also teach them to swim.

She also, more controversially, recommended that all computers sold for home use in the UK should have Kitemarked parental control software "which takes parents through clear prompts and explanations to help set it up and that ISPs offer and advertise this prominently when users set up their connection."

The general market has not adopted this recommendation; but it has been implemented with respect to the free laptops issued to low-income families under Becta's £300 million Home Access Laptop scheme, announced last year as part of efforts to bridge the digital divide. The recipients - 70,000 to 80,000 so far - have a choice of supplier, of ISP, and of hardware make and model. However, the laptops must meet a set of functional technical specifications, one of which is compliance with PAS 74:2008, the British Internet safety standard. That means anti-virus, access control, and filtering software: NetIntelligence.

Naturally, there are complaints; these fall precisely in line with the general problems with filtering software, which have changed little since 1996, when the passage of the Communications Decency Act inspired 17-year-old Bennett Haselton to start Peacefire to educate kids about the inner workings of blocking software - and how to bypass it. Briefly:

1. Kids are often better at figuring out ways around the filters than their parents are, giving parents a false sense of security.

2. Filtering software can't block everything parents expect it to, adding to that false sense of security.

3. Filtering software is typically overbroad, becoming a vehicle for censorship.

4. There is little or no accountability about what is blocked or the criteria for inclusion.

This case looks similar - at first. Various reports claim that as delivered NetIntelligence blocks social networking sites and even Google and Wikipedia, as well as Google's Chrome browser because the way Chrome installs allows the user to bypass the filters.

NetIntelligence says the Chrome issue is only temporary; the company expects a fix within three weeks. Marc Kelly, the company's channel manager, also notes that the laptops that were blocking sites like Google and Wikipedia were misconfigured by the supplier. "It was a manufacturer and delivery problem," he says; once the software has been reinstalled correctly, "The product does not block anything you do not want it to." Other technical support issues - trouble finding the password, for example - are arguably typical of new users struggling with unfamiliar software and inadequate technical support from their retailer.

Both Becta and NetIntelligence stress that parents can reconfigure or uninstall the software, even if some are confused about how to do it. First, they must activate the software by typing in the code the vendor provides; that gets them password access to change the blocking list or uninstall the software.

The list of blocked sites, Kelly says, comes from several sources: the Internet Watch Foundation's list and similar lists from other countries; a manual assessment team also reviews sites. Sites that feel they are wrongly blocked should email NetIntelligence support. The company has, he adds, tried to make it easier for parents to implement the policies they want; originally social networks were not broken out into their own category. Now, they are easily unblocked by clicking one button.
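As a sketch of the category approach Kelly describes - the domains, categories, and policy flags below are invented, not NetIntelligence's actual data - the one-button unblock amounts to flipping a single per-category switch:

# Hypothetical category-based filtering: a domain is blocked only if its
# category is currently switched on in the household policy.
CATEGORY_OF = {
    "socialnet.example": "social_networking",
    "torrents.example": "filesharing",
    "encyclopedia.example": "reference",
}
blocked_categories = {"social_networking": True, "filesharing": True,
                      "reference": False}

def is_blocked(domain):
    """Block a domain only when its category is switched on."""
    category = CATEGORY_OF.get(domain)   # unknown domains pass through
    return bool(category) and blocked_categories.get(category, False)

print(is_blocked("socialnet.example"))            # True
blocked_categories["social_networking"] = False   # the "one button"
print(is_blocked("socialnet.example"))            # False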

The simple reaction is to denounce filtering software and all who sail in her - censorship! - but the Internet is arguably now more complicated than that. Research Becta conducted on the pilot group found that 70 percent of the parents surveyed felt that the built-in safety features were very important. Even the most technically advanced of parents struggle to balance their legitimate concerns in protecting their children with the complex reality of their children's lives.

For example: will what today's children post to social networks damage their chances of entry into a good university or a job? What will they find? Not just pornography and hate speech; some parents object to creationist sites, some to scary science fiction, others to Fox News. Yesterday's harmless flame wars are today's more serious cyber-bullying and online harassment. We must teach kids to be more resilient, Byron said; but even then kids vary widely in their grasp of social cues, common sense, emotional make-up, and technical aptitude. Even experts struggle with these issues.

"We are progressively adding more information for parents to help them," says Kelly. "We want the people to keep the product at the end. We don't want them to just uninstall it - we want them to understand it and set the policies up the way they want them." Like all of us, Kelly thinks the ideal is for parents to engage with their children on these issues, "But those are the rules that have come along, and we're doing the best we can."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 12, 2010

The cost of money

Everyone except James Allan scrabbled in the bag Joe DiVanna brought with him to the Digital Money Forum (my share: a well-rubbed 1908 copper penny). To be fair, Allan had already left by then. But even if he hadn't he'd have disdained the bag. I offered him my pocketful of medium-sized change and he looked as disgusted as if it were a handkerchief full of snot. That's what living without cash for two years will do to you.

Listen, buddy, like the great George Carlin said, your immune system needs practice.

People in developed countries talk a good game about doing away with cash in favor of credit cards, debit cards, and Oyster cards, but the reality, as Michael Salmony pointed out, is that 80 percent of payments in Europe are...cash. Cash seems free to consumers (where cards have clearer charges), but costs European banks €84 billion a year. Less visibly, banks also benefit (when the shadow economy hoards high-value notes, it's an interest-free loan), and governments profit from seigniorage (when people buy but do not spend coins).

"Any survey about payment methods," Salmony said Wednesday, "reveals that in all categories cash is the preferred payment method." You can buy a carrot or a car; it costs you nothing directly; it's anonymous, fast, and efficient. "If you talk directly to supermarkets, they all agree that cash is brilliant - they have sorting machines, counting machines...It's optimized so well, much better than cards."

The "unbanked", of course, such as the London migrants Kavita Datta studies, have no other options. Talk about the digital divide, this is the digital money divide: the cashless society excludes people who can't show passports, can't prove their address, or are too poor to have anything to bank with.

"You can get a job without a visa, but not without a bank account," one migrant worker told her. Electronic payments, ain't they grand?

But go to Africa, Asia, or South America, and everything turns upside down. There, too, cash is king - but there, unlike here with banks and ATMs on every corner and a fully functioning system of credit cards and other substitutes, cash is a terrible burden. Of the 2.6 billion people living on less than $2 a day, said Ignacio Mas, fewer than 10 percent have access to formal financial services. Poor people do save, he said, but their lack of good options means they save in bad ways.

They may not have banks, but most do have mobile phones, and therefore digital money means no long multi-bus rides to pay bills. It means being able to send money home at low cost. It means saving money that can't be easily stolen. In Ghana 80 percent of the population have no access to financial services - but 80 percent are covered by MTN, which is partnering with the banks to fill the gap. In Pakistan, Tameer Microfinance Bank partnered with Telenor to launch Easypaisa, which did 150,000 transactions in its first month and expects a million by December. One million people produce milk in Pakistan; Nestle pays them all, painfully, by check every month. The opportunity in these countries to leapfrog traditional banking and head into digital payments is staggering, and our banks won't even care. The average account balance for Kenya's M-Pesa customers is...$3.

When we're not destroying our financial system, we have more choices. If we're going to replace cash, what do we replace it with and what do we need? Really smart people to figure out how to do it right - like Isaac Newton, said Thomas Levenson. (Really. Who knew Isaac Newton had a whole other life chasing counterfeiters?) Law and partnership protocols and banks to become service providers for peer-to-peer finance, said Chris Cook. "An iTunes moment," said Andrew Curry. The democratization of money, suggested conference organizer David Birch.

"If money is electronic and cashless, what difference does it make what currency we use?" Why not...kilowatt hours? You're always going to need to heat your house. Global warming doesn't mean never having to say you're cold.

Personally, I always thought that if our society completely collapsed, it would be an excellent idea to have a stash of cigarettes, chocolate, booze, and toilet paper. But these guys seemed more interested in the notion of Facebook units. Well, why not? A currency can be anything. Second Life has Linden dollars, and people sell virtual game world gold for real money on eBay.

I'd say for the same reason that most people still walk around with notes in their wallet and coins in their pocket: we need to take our increasing abstraction step by step. Many have failed with digital cash, despite excellent technology, because they asked people to put "real" money into strange units with no social meaning and no stored trust. Birch is right: storing value in an Oyster card is no different than storing value in Beenz. But if you say that money is now so abstract that it's a collective hallucination, then the corroborative details that give artistic verisimilitude to an otherwise bald and unconvincing currency really matter.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series.

March 5, 2010

The surveillance chronicles

There is a touching moment at the end of the new documentary Erasing David, which had an early screening last night for some privacy specialists. In it, Katie, the wife of the film's protagonist, filmmaker David Bond, muses on the contrast between the England she grew up in and the "ugly" one being built around her. Of course, many people become nostalgic for a kinder past when they reach a certain age, but Katie Bond is probably barely 30, and what she is talking about is the engorging Database State (PDF).

Anyone watching this week's House of Lords debate on the Digital Economy Bill probably knows how she feels. (The Open Rights Group has advice on appropriate responses.)

At the beginning, however, Katie's biggest concern is that her husband is proposing to "disappear" for a month leaving her alone with their toddler daughter and her late-stage pregnancy.

"You haven't asked," she points out firmly. "You're leaving me with all the child care." Plus, what if the baby comes? They agree in that case he'd better un-disappear pretty quickly.

And so David heads out on the road with a Blackberry, a rucksack, and an increasingly paranoid state of mind. Is he safe being video-recorded interviewing privacy advocates in Brussels? Did "they" plant a bug in his gear? Is someone about to pounce while he's sleeping under a desolate Welsh tree?

There are real trackers: Cerberus detectives Duncan Mee and Cameron Gowlett, who took up the challenge to find him given only his (rather common) name. They try an array of approaches, both high- and low-tech. Having found the Brussels video online, they head to St Pancras to check out arriving Eurostar trains. They set up a Web site to show where they think he is and send the URL to his Blackberry to see if they can trace him when he clicks on the link.

In the post-screening discussion, Mee added some new detail. When they found out, for example, that David was deleting his Facebook page (which he announced on the site and of which they'd already made a copy), they set up a dummy "secret replacement" and attempted to friend his entire list of friends. About a third of Bond's friends accepted the invitation. The detectives took up several party invitations thinking he might show.

"The Stasi would have had to have a roomful of informants," said Mee. Instead, Facebook let them penetrate Bond's social circle quickly on a tiny budget. Even so, and despite all that information out on the Internet, much of the detectives' work was far more social engineering than database manipulation, although there was plenty of that, too. David himself finds the material they compile frighteningly comprehensive.

In between pieces of the chase, the filmmakers include interviews with an impressive array of surveillance victims, politicians (David Blunkett, David Davis), and privacy advocates including No2ID's Phil Booth and Action on Rights for Children's Terri Dowty. (Surprisingly, no one from Privacy International, I gather because of scheduling issues.)

One section deals with the corruption of databases, the kind of thing that can make innocent people unemployable or, in the case of Operation Ore, destroy lives such as that of Simon Bunce. As Bunce explains in the movie, 98.2 percent of the Operation Ore credit card transactions were fraudulent.

Perhaps the most you-have-got-to-be-kidding moment is when former minister David Blunkett says that collecting all this information is "explosive" and that "Government needs to be much more careful" and not just assume that the public will assent. Where was all this people-must-agree stuff when he was relentlessly championing the ID card? Did he - my god! - learn something from having his private life exposed in the press?

As part of his preparations, Bond investigates: what exactly do all these organizations know about him? He sends out more than 80 subject access requests to government agencies, private companies, and so on. Amazon.com sends him a pile of paper the size of a phone book. Transport for London tells him that even though his car is exempt, his movements in and out of the charging zone are still recorded and kept. This is a very English moment: after bashing his head on his desk in frustration over the length of his wait on hold, when a woman eventually starts to say, "Sorry for keeping you..." he replies, "No problem".

Some of these companies know things about him he doesn't or has forgotten: the time he "seemed angry" on the phone to a customer service representative. "What was I angry about on November 21, 2006?" he wonders.

But probably the most interesting journey, after all, is Katie's. She starts with some exasperation: her husband won't sign this required form giving the very good nursery they've found the right to do anything it wants with their daughter's data. "She has no data," she pleads.

But she will have. And in the Britain she's growing up in, that could be dangerous. Because privacy isn't isolation and it isn't not being found. Privacy means being able to eat sand without fear.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


January 29, 2010

Game night

Why can't computer games get any serious love? The maverick Labour MP Tom Watson convened a meeting this week to ask just that. (Watson is also pushing for the creation of an advocacy group, Gamers' Voice (Facebook).) From the dates, the meeting is not in response to claims that playing computer games causes rickets.

Pause to go, "Huh?"

We all know what causes rickets in the UK. Winter at these crazy high latitudes causes rickets in the UK. Given the amount of atmosphere and cloud it has to get through in the darker months, sunlight can't muster enough oomph to make Vitamin D on the skins of the pasty, blue-white people they mostly have here. The real point of the clinical review paper that kicked off this round of media nonsense, Watson rants, is that half of all UK adults are deficient in Vitamin D in the winter and spring. Well, duh. Wearing sunscreen has made it worse. So do clothes. And this: to my vast astonishment on arrival here they don't put Vitamin D in the milk. But, hey, let's blame computer games!

And yet: games are taking over. In December, Chart-Track market research found that the UK games industry is now larger than its film industry. Yesterday's game-playing kids are today's game-playing parents. One day we'll all be gamers on this bus. Criminals pay more for stolen World of Warcraft accounts than for credit card accounts (according to Richard Bartle), and the real-money market for virtual game world props is worth billions (PDF). But the industry gets no government support. Hence Watson's meeting.

At this point, I must admit that net.wars, too, has been deficient: I hardly ever cover games. As a freelance, I can't afford to be hooked on them, so I don't play them, so I don't know enough to write about them. In the early-to-mid 1990s I did sink hours into Hitchhiker's Guide to the Galaxy, Minesweeper, Commander Keen, Lemmings, Wolfenstein 3D, Doom, Doom 2, and some of Duke Nukem. At some point, I decided it was a bad road. When I waste time unproductively I need to feel that I'm about to do something useful. I switched the mouse to the left hand, mostly for ergonomic reasons, and my slightly lower competence with it was sufficient to deter further exploration. The other factor: Quake made it obvious that I'd reached my theoretical limit.

I know games are different now. I've watched a 20-something friend play World of Warcraft and Grand Theft Auto; I've even traded deaths with him in one of those multiplayer games where your real-life best friends are your mortal enemies. Watching him play The Sims as a recalcitrant teenager (is there any other kind?) was the most fun. It seemed like Cosmic Justice to see him shriek in frustration at the computer because the adults in his co-op household were *refusing to wash the dishes*. Ha!

For people who have jobs, games are a (sometimes shameful) hobby; for people who are self-employed they are a dangerous menace. Games are amateur sports without the fresh air. And they are today's demon medium, replacing TV, comic books (my parents believed these rotted the brain), and printed multi-volume novels. All of that contributes to why games get relatively little coverage outside of specialist titles and writers such as Aleks Krotoski and are studied by rare academics like Douglas Thomas and Richard Bartle.

Except: it's arguable that the structure of games and the kind of thinking they require - logical, problem-solving, exploratory, experimental - does in fact inspire a kind of mental fitness that is a useful background skill for our computer-dominated world. There are, as Tom Chatfield, one of the evening's three panelists and an editor at Prospect, says in his new book Fun, Inc, many valuable things people can and do learn from games. (I once watched an inveterate game-playing teen extract himself from the maze at Hampton Court in 15 seconds flat.)

And in fact, that's the thought with which the seminal game cum virtual world was started: in writing MUD, Bartle wanted to give people the means to explore their identities by creating different ones.

It's also fun. And an escape from drab reality. And a challenge. And active, rather than passive, entertainment. The critic Sam Leith (who has compared World of Warcraft to Chartres Cathedral) pointed out that the violent shoot-'em-up games that get the media attention are a small, stereotyped sector of the market that deliberately inserts shocking violence to attract yet more attention and increase sales. Limiting the conversation to one stereotypical theme is the problem, not games themselves.

Philip Oliver, founder and CEO of the UK's largest independent games developer, Blitz Games, listed some cases in point: in their first 12 weeks of release his company sold 500,000 copies of its game based on the TV show The Biggest Loser and 3.8 million copies of its Burger King advertising game. And what about that wildly successful Wii Fit?

If you say, "That's different", there is the problem.

Still, if game players are all going to be stereotyped as violent players shooting things...I'm not sure who pointed out that the Houses of Parliament are a fabulous gothic castle in which to set a shoot-'em-up, but it's a great idea. Now, that would really be government support!

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk (but please turn off HTML).

January 9, 2010

Car talk

The most interesting thing I've heard all week was a snippet on CNBC in which a commentator talked about cars going out of style. The story was that in 2009 the US fleet of cars shrank by four million. That is, four million cars were scrapped without being replaced.

The commentator and the original story have a number of reasons: increasing urbanization, uncertainty about oil prices, frustration about climate change, and so on. But the really interesting trend is a declining interest in cars on the part of young people. (Presumably these are the same young people who don't watch enough TV.)

A pause to reminisce. In 1967, when I was 13, my father bought a grey Mercedes 230SL with a red interior. It should tell you something when I say that I don't like sports cars, have always owned Datsuns/Nissans (including a pickup truck and two Prairies), and am not really interested in cars that aren't mine - but I still remember the make and model number of this car from 42 years ago. I remember hoping he wouldn't trade it in before I turned 16 and was old enough to drive. (He did. Nerts.)

When, at 21, I eventually did get my own first car (a medium blue Nissan 710 station wagon with a white leather-like interior), it felt like I had finally achieved independence. Having a car meant that I could leave my parents' house any time I wanted. The power of that was shocking; it utterly changed how I felt about being in their home.

In London, I hardly drive. The public transportation is too good and the traffic too dense. There are exceptions, of course, but the fact is that it would be cheaper for me to book a taxi every time I needed a car than it is to own one. And yet, the image of being behind the wheel on the open road, going nowhere and everywhere retains its power.

People think of the US as inextricably linked to car culture, but the fact is that our national love affair with the car is quite recent and was imposed on us. The 1988 movie Who Framed Roger Rabbit? had it right: at one time even Los Angeles had a terrific public transportation system. But starting in 1922, General Motors, acting in concert with a number of oil companies, most notably Chevron, deliberately set out to buy up and close down thousands of municipal streetcar systems. The scheme was not popular: people did not want to have to buy cars.

CNBC's suggestion was that today's young people find their independence differently: through their cell phones and the Internet. The commentator has a point. As children, many baby boomers shared bedrooms with siblings. Use of the family phone was often restricted. The home was most emphatically not a place where a young adult could expect any privacy.

Today, kids go out less, first because their parents worry about their safety, later because their friends and social lives are on tap from the individual bedrooms they now tend to have. And even if they have to share the family computer and use it in a well-trafficked location, they can carve themselves out a private space inside their phones, by text if not by voice.

The Internet's potential to destroy or remake whole industries is much discussed: see also newspapers, magazines, long-distance telecommunications, music, film, and television. The "Google decade" so many commentators say is ending is, according to Slate, just the beginning of how Google, all by itself, will threaten industries: search portals, ad agencies, media companies, book publishers, telephone companies, Mapquest, soon smart phone manufacturers, and then the big man on campus, Microsoft.

But if there's one thing we know, it's that technology companies are bad bets because they can be and are challenged when the next wave comes along. Who thought ten years ago that Microsoft wouldn't kill everyone else in its field? Twenty years ago, IBM was the unbeatable gorilla.

The happening wave is mobile phones, and it isn't at all clear that Google will dominate, any more than Microsoft has succeeded in dominating the Internet. But the interesting thing is what mobile phones will kill. So far, they've made a dent in the watchmaking industry (because a lot of people carrying phones don't see why they need a watch, too). Similarly, smart phones have subsumed MP3 players and pocket televisions. Now, cars. And, if I had to guess, smart phones will be the most popular vehicles for ebooks, too, and for news. Tim O'Reilly, for example, says that ebooks really began to take off with the iPhone. Literary agents and editors may love the Kindle, but consumers reading while waiting for trains are more likely to choose their phones. Ray Kurzweil is very likely right on track with his cross-platform ereader software, Blio.

All this seems to me to validate the questions we pose whenever we're asked to subsidize the entertainment industry in its struggle to find its feet in this new world. Is it the right business model? Is it the right industry? Is it the right time?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

December 25, 2009

Second acts

Reviewing the big names of 2009 versus the big names of 1999 for ZDNet UK last week turned up some interesting trends there wasn't space to go into. Also worth noting: still unpublished is the reverse portion, looking at what the names who are Internet-famous in 2009 were doing in 1999. These were: Mark Zuckerberg (Facebook), Sergey Brin and Larry Page (Google), Rupert Murdoch, Barack Obama, and Jimmy Wales (Wikipedia).

One of the trends, of course, is the fact that there were so many women making technology headlines in 1999: Kim Polese (Marimba), Martha Lane Fox (Lastminute.com), Carly Fiorina (running - and arguably nearly destroying - HP), Donna Dubinsky (co-founder of Palm), and Eva Pascoe (a media darling for having started the first Internet café, London's Cyberia, and writing a newspaper column). It isn't easy now to come up with names of similar impact in 2009.

You can come up with various theories about this. For example: the shrinking pipeline reported ten years ago by both the ACM and the BCS has borne fruit, so that there are actually fewer women available to play the prominent parts these women did. As against that (as a female computer scientist friend points out) one of the two heads of Oracle is female.

The other obvious possibility is the opposite: that women in prominent roles in technology companies have become so commonplace that they don't command the splashy media attention they did ten years ago. I doubt this; if they're commonplace, you'd expect to see some of their names in common use. I will say, though, that I know quite a few start-ups founded or co-founded by women. It was interesting to learn, in looking up Eva Pascoe's current whereabouts, that part of her goal in starting Cyberia was to educate women about the Internet. She was, of course, right: at the time, particularly in Britain, the attitude was very much that computers were boys' toys and few women then had found the confidence to navigate the online world.

The other interesting thing is the varying fortunes of the technologies the names represent. Some, such as Napster (Shawn Fanning), Netscape (Marc Andreessen) and Cyberia, live on through their successors. Others have changed much less: HP (Fiorina) is still with us, and Palm (Dubinsky and Jeff Hawkins) may yet manage a comeback. Symbian has achieved pretty much everything Colly Myers hoped.

Several of the technologies present the earliest versions of the hot topics of 2009, most notably Napster, which kicked off the file-sharing wars. If I were a music industry executive, I'd be thinking now that I was a numb-nut not to make a deal with the original Napster: it was a company with a central server. Suing it out of existence begat the distributed Gnutella, the even more distributed eDonkey, and then the peer-to-peer BitTorrent and all the little Torrents. Every year, more material is available online with or without the entertainment industry's sanction. This year's destructive industry proposal, three strikes, will hurt all sorts of people if it becomes law - but it will not stop file-sharing.

Of course, Napster's - and contemporary MP3.com's - mistake was not being big enough. The Google Books case, one of the other big stories of the year, shows that size matters: had Brin and Page, still graduate students with an idea and some venture capital funding, tried scanning in library books in 1999, Google would be where Napster is now. Instead, of course, it's too big to fail.

The AOL/Time-Warner merger, for all that it has failed utterly, was the first warning of what has become a long-running debate about network neutrality. At the time, AOL was the biggest conduit for US consumer Internet access; merging with Time-Warner seemed to put dangerous control over that access in the hands of one of the world's largest owners of content. In the event, the marriage was a disastrous failure for both companies. But AOL, now divorced, may not be done yet: the "walled garden" approach to Internet content is finding new life with sites like Facebook. If, of course, it doesn't get run over by the juggernaut of 2009, Twitter.

If AOL does come back into style, it won't be the only older technology finding new life: the entire history of technology seems to be one of constant rediscovery. What, after all, is 2009's cloud computing but a reworking of what the 1960s called time-sharing?

Certainly, a revival of the walled garden would make life much easier for the deep packet inspectors who would like to snoop intensively on all of us. Phorm, Home Office, it doesn't much matter: computers weren't really fast enough to peek inside data packets in real time much before this year.

One recently resurfaced name from the Net's early history that I didn't flag in the ZDNet piece is Sanford ("Spamford") Wallace, who in the late 1990s was widely blacklisted for sending spam email. By 1999, he had supposedly quit the business. And yet, this year he was found liable for 14,214,753 violations of the CAN-SPAM Act and ordered to pay Facebook more than $711 million. How times do not change.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

October 30, 2009

Kill switch

There's an old sort-of joke that goes, "What's the best way to kill the Internet?" The number seven answer, according to Simson Garfinkel, writing for HotWired in 1997: "Buy ten backhoes." Ba-boom.

The US Senate, never folks to avoid improving a joke, came up with a new suggestion: install a kill switch. They published this little gem (as S.773) on April 1. It got a flurry of attention and was then forgotten until the last week or two. (It's interesting to look back at Garfinkel's list of 50 ways to kill the Net and notice that only two are government actions, and neither is installing a "kill switch".)

To be fair, "kill switch" is an emotive phrase for what they have in mind, which is that the president:

may declare a cybersecurity emergency and order the limitation or shutdown of Internet traffic to and from any compromised Federal Government or United States critical infrastructure information system or network.

Now, there's a lot of wiggle room in a vague definition like "critical infrastructure system". That could be the Federal government's own servers. Or the electrical grid, the telephone network, the banking system, the water supply, or even, arguably, Google. (It has 64+ percent of US search queries, and if you can't find things the Internet might as well be dead.) But what this particular desire of the Senate's sounds most like is those confused users who think they can catch a biological virus from their computers.

Still, for the media, calling the Senate's idea a "kill switch" is attention-getting political genius. We don't call the president's power to order the planes out of the sky, as happened on 9/11, a "crash switch", but imagine the outcry against it if we did.

Technically, the idea that there's a single off switch waiting to be implemented somewhere is, of course, ridiculous.

The idea is also administrative silliness: Obama, we hope, is kind of busy. The key to retaining sanity when you're busy is to get other people to do all the things they can without your input. We would hope that the people running the various systems powering the federal government's critical infrastructure could make their own, informed decisions - faster than Obama can - about when they need to take down a compromised server.

Despite wishful thinking, John Gilmore's famous aphorism, "The Net interprets censorship as damage and routes around it", doesn't really apply here. For one thing, even a senator knows - probably - that you can't literally shut down the entire Internet from a single switch sitting in the President's briefcase (presumably next to the nuclear attack button). Much of the Internet is, after all, outside the US; much of it is in private ownership. (Perhaps the Third Amendment could be invoked here?)

For another, Gilmore's comment really didn't apply to individual Internet-linked computer networks; Google's various bits of outages this year ought to prove that it's entirely possible for those to be down without affecting the network at large. No, the point was that if you try to censor the Net its people will stop you by putting up mirror servers and passing the censored information around until everyone has a copy. The British Chiropractic Association (quacklash!) and Trafigura are the latest organizations to find out what Gilmore knew in 1993. He also meant, I suppose, that the Internet protocols were designed for resilience and to keep trying by whatever alternate routes are available if data packets don't get through.

Earlier this week another old Net hand, Web inventor Tim Berners-Lee, gave some rather sage advice to the Web 2.0 conference. One key point: do not build your local laws into the global network. That principle would not, unfortunately, stop the US government from shutting off its own servers (to spite its face?), but it does nix the idea of, say, building the network infrastructure to the specification of any one particular group - the MPAA or the UK government, in defiance of the increasingly annoyed EU. In the same talk, Berners-Lee also noted (according to CNET): "I'm worried about anything large coming in to take control, whether it's large companies or government."

Threats like these were what he set up W3C to protect against. People talk with reverence of Berners-Lee's role as inventor, but many fewer understand that the really big effort has been the 20 years since the aha! moment of creation, during which Berners-Lee has spent his time and energy nurturing the Web and guiding its development. Without that, it could easily have been strangled by competing interests, both corporate and government. As, of course, it still could be, depending on the outcome of the debates over network neutrality rules.

Dozens of decisions like Berners-Lee's were made in creating the Internet. They have not made it impossible to kill - I'm not sure how many backhoes you'd need now, but I bet it's still a surprisingly finite number - but they have made it a resilient and robust network. A largely democratic medium, in fact, unlike TV and radio, at least so far. The Net was born free; the battles continue over whether it should be in chains.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or by email to netwars@skeptic.demon.co.uk.

October 23, 2009

The power of Twitter

It was the best of mobs, it was the worst of mobs.

The last couple of weeks have really seen the British side of Twitter flex its 140-character muscles. First, there was the next chapter of the British Chiropractic Association's ongoing legal action against science writer Simon Singh. Then there was the case of Jan Moir, who wrote a more than ordinarily Daily Mailish piece for the Daily Mail about the death of Boyzone's Stephen Gately. And finally, the shocking court injunction that briefly prevented the Guardian from reporting on a Parliamentary question for the first time in British history.

I am on record as supporting Singh, and I, too, cheered when, ten days ago, Singh was granted leave to appeal Justice Eady's ruling on the meaning of Singh's use of the word "bogus". Like everyone, I was agog when the BCA's press release called Singh "malicious". I can see the point in filing complaints with the Advertising Standards Authority over chiropractors' persistent claims, unsupported by the evidence, to be able to treat childhood illnesses like colic and ear infections.

What seemed to edge closer to a witch hunt was the gleeful take-up of George Monbiot's piece attacking the "hanging judge", Justice Eady. Disagree with Eady's ruling all you want, but it isn't hard to find libel lawyers who think his ruling was correct under the law. If you don't like his ruling, your correct target is the law. Attacking the judge won't help Singh.

The same is not true of Twitter's take-up of the available clues in the Guardian's original story about the gag to identify the Parliamentary Question concerned and unmask Carter-Ruck, the lawyers who served it and their client, Trafigura. Fueled by righteous and legitimate anger at the abrogation of a thousand years of democracy, Twitterers had the PQ found and published thousands of times in practically seconds. Yeah!

Of course, this phenomenon (as I'm so fond of saying) is not new. Every online social medium, going all the way back to early text-based conferencing systems like CIX, the WELL, and, of course, Usenet, when it was the Internet's town square (the function in fact that Twitter now occupies) has been able to mount this kind of challenge. Scientology versus the Net was probably the best and earliest example; for me it was the original net.war. The story was at heart pretty simple (and the skirmishes continue, in various translations into newer media, to this day). Scientology has a bunch of super-secrets that only the initiate, who have spent many hours in expensive Scientology training, are allowed to see. Scientology's attempts to keep those secrets off the Net resulted in their being published everywhere. The dust has never completely settled.

Three people can keep a secret if two of them are dead, said Benjamin Franklin. That was before the Internet. Scientology was the first to learn - nearly 15 years ago - that the best way to ensure the maximum publicity for something is to try to suppress it. It should not have been any surprise to the BCA, Trafigura, or Trafigura's lawyers. Had the BCA ignored Singh's article, far fewer people would know now about science's dim view of chiropractic. Trafigura might have hoped that a written PQ would get lost in the vastness that is Hansard; but they probably wouldn't have succeeded in any case.

The Jan Moir case, and the demonstration outside Carter-Ruck's offices, are, however, rather different. These are simply not the right targets. As David Allen Green (Jack of Kent) explains, there's no point in blaming the lawyers; show your anger to the client (Trafigura) or to Parliament.

The enraged tweets and Facebook postings about Moir's article helped send a record number of over 25,000 complaints to the Press Complaints Commission, whose Web site melted down under the strain. Yes, the piece was badly reasoned and loathsome, but isn't that what the Daily Mail lives for? Tweets and links create hits and discussion. The paper can only benefit. In fact, it's reasonable to suppose that in the Trafigura and Moir cases both the Guardian and the Daily Mail manipulated the Net perfectly to get what they wanted.

But the stupid part about let's-get-Moir is that she does not *matter*. Leave aside emotional reactions, and what you're left with is someone's opinion, however distasteful.

This concerted force would be more usefully turned to opposing the truly dangerous. See, for example, the AIDS denialism on parade by Fraser Nelson at The Spectator. The "come-get-us" tone suggests that they saw the attention New Humanist got for Caspar Melville's mistaken - and quickly corrected - endorsement of the film House of Numbers and said, "Let's get us some of that." There is no more scientific dispute about whether HIV causes AIDS than there is about climate change or evolutionary theory.

If we're going to behave like a mob, let's stick to targets that matter. Jan Moir's column isn't going to kill anybody. AIDS denialism will. So: we'll call Trafigura a win, chiropractic a half-win, and Moir a loser.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

October 16, 2009

Unsocial media

"No one under 30 will use email," the convenor objected.

There was a bunch of us, a pre-planning committee for an event, and we were talking about which technology we should have the soon-to-be appointed program committee use for discussions. Email! Convenient. Accessible by computer or phone. Easily archived, forwarded, quoted, or copied into any other online medium. Why are we even talking about this?

And that's when he said it.

Not so long ago, if you had email you were one of the cool kids, the avant-garde who saw the future and said it was electronic. Most of us spent years convincing our far-flung friends and relatives to get email so we didn't have to phone or - gasp - write a letter that required an envelope and a stamp. Being told that "email is for old people" is a lot like a 1960s "Never trust anyone over 30" hippie finding out that the psychedelic school bus he bought to live in to support the original 1970 Earth Day is a gas-guzzling danger to the climate and ought to be scrapped.

Well, what, then? (Aside: we used to have tons of magazines called things like Which PC? and What Micro? to help people navigate the complex maze of computer choices. Why is there no magazine called Which Social Medium??)

Facebook? Clunky interface. Not everyone wants to join. Poor threading. No easy way to export, search, or archive discussions. IRC or other live chat? No way to read discussion that took place before you joined the chat. Private blog with comments and RSS? Someone has to set the agenda. Twitter? Everything is public, and if you're not following all the right people the conversation is disjointed and missing links you can't retrieve. IM? Skype? Or a wiki? You get the picture.

This week, the Wall Street Journal claimed that "the reign of email is over" while saying only a couple of sentences later, "We all still use email, of course." Now that the Journal belongs to Rupert Murdoch, does no one check articles for sense?

Yes, we all still use email. It can be archived, searched, stored locally, read on any device, accessed from any location, replied to offline if necessary, and read and written thoughtfully. Reading that email is dead is like reading, in 2000, that because a bunch of companies went bust the Internet "fad" was over. No one then who had anything to do with the Internet believed that in ten years the Internet would be anything but vastly bigger than it was then. So: no one with any sense is going to believe that ten years from now we'll be sending and receiving less email than we are now. What very likely will be smaller, especially if industrial action continues, is the incumbent postal services.

What "No one under 30 uses email" really means is that it's not their medium of first choice. If you're including college students, the reason is obvious: email is the official stuff they get from their parents and universities. Facebook, MySpace, Twitter, and texting is how they talk to their friends. Come the day they join the workforce, they'll be using email every day just like the rest of us - and checking the post and their voicemail every morning, too.

But that still leaves the question: how do you organize anything if no one can agree on what communications technology to use? It's that question that the new Google Wave is trying to answer. It's too soon, really, to tell whether it can succeed. But at a guess, it lacks one of the fundamental things that makes email such a lowest common denominator: offline storage. Yes, I know everything is supposed to be in "the cloud" and even airplanes have wifi. But for anything that's business-critical you want your own archive where you can access it when the network fails; it's the same principle as backing up your data.

Reviews vary in their take on Wave. LifeHacker sees it as a collaborative tool. ZDNet UK editor Rupert Goodwins briefly called it Usenet 2.0, then retracted that and explained it instead using the phrase "unified comms".

That, really, is the key. Ideally, I shouldn't have to care whether you - or my fellow committee members - prefer to read email, participate in phone calls (via speech-to-text, text-to-speech synthesizers), discuss via Usenet, Skype, IRC, IM, Twitter, Web forums, blogs, or Facebook pages. Ideally, the medium you choose should be automatically translated into the medium I choose. A Babel medium. The odds that this will happen in an age when what companies most want is to glue you to their sites permanently so they can serve you advertising are very small.

Which brings us back to email. Invented in an era when the Internet was commercial-free. Designed around open standards, so that anyone can send and receive it using any reader they like. Used, in fact, to alert users to the updates they want to know about on their Facebook/IRC/Skype/Twitter/Web forum accounts. Yes, it's overrun with corporate CYA memos and spam. But it's still the medium of record - and it isn't going anywhere. Whereas: those 20-somethings will turn 30 one day soon.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk (but please turn off HTML).

September 25, 2009

Dead technology

The longevity of today's digital media is a common concern. Less than 20 years after the creation of a digital Domesday Book, a batch of researchers had to work to make it readable again, whereas the 900-year-old parchment original is still readable. Anyone maintaining an archive of digital content knows that all that material has to be kept up to date and transferred to new formats as machines and players change.

One friend of mine, a professional sound engineer, begged me to keep my magnetic media when I told him I was transferring it to digital formats. You can, he argued, always physically examine a magnetic tape and come up with some kind of reader for it; with digital media, you're completely stuck if you don't know how the data was organized.

Where was he in 1984, when I bought my sewing machine, a Singer Futura 2000? That machine was, as it turns out, one of the earliest electronic models on the market. I had no idea of that at the time; the particular feature I was looking for (the ability to lock the machine in reverse, so I could have both hands free when reverse-stitching) was available on very few models. This was the best of those few. No one said to me, "And it's electronic!" They said stuff like, "It has all these stitches!" Most of which, to be sure, hardly anyone is likely ever to use other than the one-step buttonhole and a zigzag stitch or two.

Cut to 2009, when one day I turn the machine on and discover the motor works but the machine won't select a stitch or drive that motor. "Probably the circuit board," says the first repair person I talk to. Words of doom.

The problem with circuit boards is - as everyone knows who's had a modern electronic machine fail - that a) they're expensive to replace; b) they're hard to find; c) they're even harder to get repaired. Still, people don't buy sewing machines to use for a year or five; they buy them for a lifetime. In fact, before cars and computers, washing machines and refrigerators, sewing machines were the first domestic machines; they were an expensive purchase, and they were expected to last.

You can repair - and buy parts for - a 150-year-old treadle Singer sewing machine. People still use them, particularly for heavy sewing jobs like leather, many-layered denim, or neoprene. You can also repair the US Singer machine my parents gave me as a present in the mid 1970s. That machine is what they now call "mechanical", by which they mean electric but not electronic. What you can't do is repair a machine from the 1980s: Singer stopped making the circuit boards. If you're very, very lucky, you might be able to find someone who can repair one.

But even that is difficult. One such skilled repairman told me that even though Singer itself had recommended him to me he was unable to get the company to give him the circuit diagrams so he could use his skill for the benefit of both his own customers (and therefore himself) and Singer itself. The concept of open-sourcing has not landed in the sewing machine market; sewing machines are as closed as modern cars with what seems like much less justification. (At least with a car you can argue that a ham-fisted circuit board repairman could cost you your life; hard to make that argument about a sewing machine.)

Of course, from Singer's point of view things are far worse than irreplaceable circuit boards that send a few resentful customers into the gathering feet of Husqvarna Viking or Bernina. Singer's problem is that the market for sewing machines has declined dramatically. In 1902, the owner of Eastleigh Sewing Centre told me, Singer was producing 5 million machines a year. Now, the entire industry of many more manufacturers sells about 500,000. Today's 30- and 40-year-olds never learned to use a sewing machine in school, nor were they taught by their mothers. If they now learn to use one, they're more likely to use a computerized machine (a level up from just "electronic"). What they learn is graphics: the fanciest modern machines can take a GIF or JPG and embroider it on a section of fabric held taut by a hoop.

You can't blame them. Store-bought, mass-market clothing, even when it's made out of former "luxury" fabrics like silk, is actually cheaper than anything you can make at home these days. Only a few things make sense for anyone but the most obsessive to sew at home any more: 1) textile-based craft items like stuffed dolls and quilts (and embroidered images); 2) items that would be prohibitively expensive to buy or impossible to find, like stage and re-enactment costumes; 3) items you want to be one-of-a-kind and personal, such as, perhaps, a wedding dress; 4) items that are straightforward to sew but expensive to buy, like curtains and other soft furnishings. The range of machines available now reflects that, so that you're stuck with either buying a beginner's machine or one intended for experts; the middle ground (like my Futura) has vanished. No one has the time to sew garments any more; no one, seemingly, even repairs torn clothing any more.

But damn, I hate throwing stuff out that's mostly functional.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

July 24, 2009

Security for the rest of us


Many governments, faced with the question of how to improve national security, would do the obvious thing: round up the usual suspects. These would be, of course, the experts - that is, the security services and law enforcement. This exercise would be a lot like asking the record companies and film studios to advise on how to improve copyright: what you'd get is more of the same.

This is why it was so interesting to discover that the US National Academies of Science was convening a workshop to consult on what research topics to consider funding, and began by appointing a committee that included privacy advocates and usability experts, folks like Microsoft researcher Butler Lampson, Susan Landau, co-author of books on privacy and wiretapping, and Donald Norman, author of the classic book The Design of Everyday Things. Choosing these people suggests that we might be approaching a watershed like that of the late 1990s, when the UK and the US governments were both forced to understand that encryption was not just for the military any more. The peace-time uses of cryptography to secure Internet transactions and protect mobile phone calls from casual eavesdropping are much broader than crypto's war-time use to secure military communications.

Similarly, security is now everyone's problem, both individually and collectively. The vulnerability of each individual computer is a negative network externality, as NYU economist Nicholas Economides pointed out. But, as many asked, how do you get people to understand remote risks? How do you make the case for added inconvenience? Each company we deal with makes the assumption that we can afford the time to "just click to unsubscribe" or remember one password, without really understanding the growing aggregate burden on us. Norman commented that door locks are a trade-off, too: we accept a little bit of inconvenience in return for improved security. But locks don't scale; they're acceptable as long as we only have to manage a small number of them.

In his 2006 book, Revolutionary Wealth, Alvin Toffler comments that most of us, without realizing it, have a hidden third, increasingly onerous job, "prosumer". Companies, he explained, are increasingly saving money by having us do their work for them. We retrieve and print out our own bills, burn our own CDs, provide unpaid technical support for ourselves and our families. One of Lorrie Cranor's students did the math to calculate the cost in lost time and opportunities if everyone in the US read, once a year, the privacy policy of each Web site they visit at least monthly. Most of these policies require college-level reading skills; figure 244 hours per year per person, $3,544 each...$781 billion nationally. Weren't computers supposed to free us of that kind of drudgery? As everything moves online, aren't we looking at a full-time job just managing our personal security?
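
To see how those numbers hang together, here is a minimal back-of-the-envelope sketch in Python. The hours and per-person dollar figures are the ones cited above; the count of US Internet users is my own illustrative assumption, not a figure from the study.

```python
# Rough arithmetic for the privacy-policy reading estimate cited above.
# The per-person numbers come from the column; the population figure is
# an assumption used only to show how the total scales.
HOURS_PER_PERSON = 244            # hours per year spent reading policies
COST_PER_PERSON = 3_544           # dollars of lost time per person per year
US_INTERNET_USERS = 220_000_000   # assumed number of affected people

implied_hourly_value = COST_PER_PERSON / HOURS_PER_PERSON
national_cost = COST_PER_PERSON * US_INTERNET_USERS

print(f"Implied value of an hour: ${implied_hourly_value:,.2f}")
print(f"National cost: ${national_cost / 1e9:,.0f} billion per year")
```

With roughly those inputs the total lands close to the $781 billion cited, which is the point: individually trivial compliance chores add up to an economy-sized bill.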

That, in fact, is one characteristic that many implementations of security share with welfare offices - and that is becoming pervasive: an utter lack of respect for the least renewable resource, people's time. There's a simple reason for that: the users of most security systems are deemed to be the people who impose it, not the people - us - who have to run the gamut.

There might be a useful comparison to information overload, a topic we used to see a lot about ten years back. When I wrote about that for ComputerActive in 1999, I discovered that everyone I knew had a particular strategy for coping with "technostress" (the editor's term). One dealt with it by never seeking out information and never phoning anyone. His sister refused to have an answering machine. One simply went to bed every day at 9pm to escape. Some refused to use mobile phones, others to have computers at home.

But back then, you could make that choice. How much longer will we be able to draw boundaries around ourselves by, for example, refusing to use online banking, file tax returns online, or participate in social networks? How much security will we be able to opt out of in future? How much do security issues add to technostress?

We've been wandering in this particular wilderness a long time. Angela Sasse, whose 1999 paper Users Are Not the Enemy talked about the problems with passwords at British Telecom, said frankly, "I'm very frustrated, because I feel nothing has changed. Users still feel security is just an obstacle there to annoy them."

In practice, the workshop was like the TV game Jeopardy: the point was to generate research questions that will go into a report, which will be reviewed and redrafted before its eventual release. Hopefully, eventually, it will all lead to a series of requests for proposals and some really good research. It is a glimmer of hope.

Unless, that is, the gloominess of the beginning presentations wins out. If you listened to Lampson, Cranor, and to Economides, you got the distinct impression that the best thing that could happen for security is that we rip out the Internet (built to be open, not secure), trash all the computers (all of whose operating systems were designed in the pre-Internet era), and start over from scratch. Or, like the old joke about the driver who's lost and asking for directions, "Well, I wouldn't start from here".

So, here's my question: how can we make security scale so that the burden stays manageable?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

July 17, 2009

Human factors

For the last several weeks I've been mulling over the phrase security fatigue. It started with a paper (PDF) co-authored by Angela Sasse, in which she examined the burden that complying with security policies imposes upon corporate employees. Her suggestion: that companies think in terms of a "compliance budget" that, like any other budget (money, space on a newspaper page), has to be managed and used carefully. And, she said, security burdens weigh differently on different people and at different times, and a compliance budget needs to comprehend that, too.

Some examples (mine, not hers). Logging onto six different machines with six different user IDs and passwords (each of which has to be changed once a month) is annoying but probably tolerable if you do it once every morning when you get to work and once in the afternoon when you get back from lunch. But if the machines all log you out every time you take your hands off the keyboard for two minutes, by the end of the day they will be lucky to survive your baseball bat. Similarly, while airport security is never fun, the burden of it is a lot less to a passenger traveling solo after a good night's sleep who reaches the checkpoints when they're empty than it is to the single parent with three bored and overtired kids under ten who arrives at the checkpoint after an overnight flight and has to wait in line for an hour. Context also matters: a couple of weeks ago I turned down a ticket to Court 1 at Wimbledon on men's semi-finals day because I couldn't face the effort it would take to comply with their security rules and screening. I grudgingly accept airport security as the trade-off for getting somewhere, but to go through the same thing for a supposedly fun day out?

It's relatively easy to see how the compliance budget concept could be worked out in practice in a controlled environment like a company. It's very difficult to see how it can be worked out for the public at large, not least because none of the many companies each of us deals with sees it as beneficial to cooperate with the others. You can't, for example, say to your online broker that you just can't cope with making another support phone call, can't they find some other way to unlock your account? Or tell Facebook that 61 privacy settings is too many because you're a member of six other social networks and Life is Too Short to spend a whole day configuring them all.

Bruce Schneier recently highlighted that last-referenced paper, from Joseph Bonneau and Soeren Preibusch at Cambridge's computer lab, alongside another by Leslie John, Alessandro Acquisti, and George Loewenstein from Carnegie-Mellon, to note a counterintuitive discovery: the more explicit you make privacy concerns the less people will tell you. "Privacy salience" (as Schneier calls it) makes people more cautious.

In a way, this is a good thing and goes to show what privacy advocates have been saying all along: people do care about privacy if you give them the chance. But if you're the owners of Facebook, a frequent flyer program, or Google it means that it is not in your business interest to spell out too clearly to users what they should be concerned about. All of these businesses rely on collecting more and more data about more and more people. Fortunately for them, as we know from research conducted by Lorrie Cranor (also at Carnegie-Mellon), people hate reading privacy policies. I don't think this is because people aren't interested in their privacy. I think this goes back to what Sasse was saying: it's security fatigue. For most people, security and privacy concerns are just barriers blocking the thing they came to do.

But choice is a good thing, right? Doesn't everyone want control? Not always. Go back a few years and you may remember some widely publicized research that pointed out that too many choices stall decision-making and make people feel...tired. A multiplicity of choices adds weight and complexity to the decision you're making: shouldn't you investigate all the choices, particularly if you're talking about which of 56 mutual funds to add to your 401(k)?

It seems obvious, therefore, that the more complex the privacy controls offered by social networks and other services the less likely people are to use them: too many choices, too little time, too much security fatigue. In minor cases in real life, we handle this by making a decision once and sticking to it as a kind of rule until we're forced to change: which brand of toothpaste, what time to leave for work, never buy any piece of clothing that doesn't have pockets. In areas where rules don't work, the best strategy is usually to constrain the choices until what you have left is a reasonable number to investigate and work with. Ecommerce sites notoriously get this backwards: they force you to explore group by group instead of allowing you to exclude choices you'll never use.

How do we implement security and privacy so that they're usable? This is one of the great unsolved, under-researched questions in security. I'm hoping to know more next week.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

February 20, 2009

Control freaks

It seems like every year or two some currently popular company revises its Terms of Service in some stupid way that gets all its users mad and then either 1) backs down or 2) watches a stampede for the exits. This year it's Facebook.

In announcing the reversal, founder Mark Zuckerberg writes that given its 175 million users, if Facebook were a country it would be the sixth most populous country in the world, and called the TOS a "governing document". While those numbers must sound nice on the business plan - wow! Facebook has more people than Pakistan! - in reality Facebook doesn't have 175 million users in the sense that Pakistan has 172 million inhabitants. I'm sure that Facebook, like every other Internet site or service, has a large percentage of accounts that are opened, used once or twice, and left for dead. Countries must plan governance and health care for all their residents; no one's a lapsed user of the country they live in.

Actually, the really interesting thing about 175 million people: that's how many live outside the countries they were born in. Facebook more closely matches the 3 percent of the world's population who are migrants.

It is nice that Zuckerberg is now trying to think of the TOS as collaborative, but the other significant difference is of course that Facebook is owned by a private company that is straining to find a business model before it stops being flavor of the month. (Which, given Twitter's explosive growth, could be any time now.) The Bill of Rights in progress has some good points (that sound very like the WELL's "You own your own words", written back in the 1980s). The WELL has stuck to its guns for 25 years, and any user can delete ("scribble") any posting at any time, but the WELL has something Facebook doesn't: subscription income. Until we know what Facebook's business model is - until *Facebook* knows what Facebook's business model is - it's impossible to put much faith in the durability of any TOS the company creates.

At the Guardian, Charles Arthur argues that Facebook should just offer a loyalty card because no one reads the fine print on those. That's social media for you: grocery shopping isn't designed for sharing information. Facebook and other Net companies get into this kind of trouble because they *are* social media, and it only takes a few obsessives to spread the word. If you do read the fine print of TOSs on other sites, you'll be even more suspicious.

But it isn't safe to assume - as many people seem to have - that Facebook is just making a land grab. Its missing-or-unknown business model is what makes us so suspicious. But the problem he's grappling with is a real one: when someone wants to delete their account and leave a social network, where is the boundary of their online self?

The WELL's history, however, does suggest that the issues Zuckerberg raises are real. The WELL's interface always allowed hosts and users to scribble postings; the function, according to Howard Rheingold in The Virtual Community and in my own experience, was and is very rarely used. But scribble only deletes one posting at a time. In 1990, a departing staffer wrote and deployed a mass scribble tool to seek out and destroy every posting he had ever made. Some weeks later, more famously, a long-time, prolific WELL user named Blair Newman turned it loose on his own work and then, shortly afterwards, committed suicide.

Any suicide leaves a hole in the lives of the people he knows, but on the WELL the holes are literal. A scribbled posting doesn't just disappear. Instead, the shell of the posting remains, with a placeholder marking it as scribbled in place of the former content. Also, scribbling a message makes even long-dead topics pop up as unread when you read a conference, so a mass scribble hits you in the face repeatedly. It doesn't happen often; the last I remember was about 10 years ago, when a newly appointed CEO of a public company decided to ensure that no trace remained of anything inappropriate he might ever have posted.

Of course, scribbling your own message doesn't edit other people's. While direct quoting is not common on the WELL - after all, the original posting is (usually) still right there, unlike email or Usenet - people refer to and comment on each other's postings all the time. So what's left is a weird echo, as if all copies of the Bible suddenly winked out of existence leaving only the concordances behind.
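
In software terms, scribbling is what is usually called a soft delete: the record keeps its slot in the conversation and only its content is blanked. Here is a minimal sketch of that idea in Python - an illustration with invented names and a made-up placeholder string, not the WELL's actual code:

```python
# A toy soft-delete ("scribble") sketch. Post, scribble(), and the
# "<scribbled>" placeholder are all illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Post:
    author: str
    content: str
    scribbled_at: Optional[datetime] = None

    def scribble(self) -> None:
        """Blank the content but keep the post's slot in the topic."""
        self.content = ""
        self.scribbled_at = datetime.now(timezone.utc)

    @property
    def visible_text(self) -> str:
        return "<scribbled>" if self.scribbled_at else self.content

# A mass scribble leaves a topic full of visible holes:
topic = [Post("alice", "original remark"), Post("bob", "reply quoting it")]
for post in topic:
    if post.author == "alice":
        post.scribble()

print([p.visible_text for p in topic])  # ['<scribbled>', 'reply quoting it']
```

The design choice matters: because the shell survives, readers can see that something was removed, and replies that quoted or referred to it still point at the gap - exactly the concordance-without-the-Bible effect described above.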

It is this problem that Zuckerberg is finding difficult. The broad outline so far posted seems right: you can delete the material you've posted, but messages you've sent to others remain in their inboxes. There are still details: what about comments you post to others' status updates or on their Walls? What about tags identifying you that other people have put in their photographs?

Of course, Zuckerberg's real problem is getting people to want to stay. Companies like to achieve this by locking them in, but ironically, just like in real life, reassuring people that they can leave is the better way.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 31, 2008

Machine dreams

Just how smart are humans anyway? Last week's Singularity Summit spent a lot of time talking about the exact point at which computer processing power would match that of the human brain, but that's only the first step. There's the software to make the hardware do stuff, and then there's the whole question of consciousness. At that point, you've strayed from computer science into philosophy and you might as well be arguing about angels on the heads of pins. Of course everyone hopes they'll be alive to see these questions settled, but in the meantime all we have is speculation and the snide observation that it's typical that a roomful of smart people would think that all problems can be solved by more intelligence.

So I've been trying to come up with benchmarks for what constitutes artificial intelligence, and the first thing I think is that the Turing test is probably too limited. In it, a judge has to determine which of two typing correspondents is the machine and which the human. That's fine as far as it goes, but one of the consistent threads that run through all this is a noticeable disdain for human bodies.

While our brain power is largely centralized, it still seems to me likely that both its grey matter and the rest of our bodies are an important part of the substrate. How we move through space, how our bodies react and feed our brains is part and parcel of how our minds work, however much we may wish to transcend biology. The fact that we can watch films of bonobos and chimpanzees and recognise our own behaviour in their interactions should show us that we're a lot closer to most animal species than we think - and a lot further from most machines.

For that sort of reason, the Turing test seems limited. A computer passes that test if, when paired against a human, the judge can't tell which is which. At the moment, it seems clear the winner is going to be spambots - some spam messages are already devised cleverly enough to fool even Net-savvy individuals into opening them sometimes. But they're hardly smart - they're just programmed that way. And a lot depends on the capability of the judge - some people even find Eliza convincing, though it's incredibly easy to send it off-course into responses that are clearly those of a machine. Find a judge who wants to believe and you're into the sort of game that self-styled psychics like to play.

Nor can we judge a superhuman intelligence by the intractable problems it solves. One of the more evangelistic speakers last weekend talked about being able to instantly create tall buildings via nanotechnology. (I was, I'm afraid, irresistibly reminded of that Bugs Bunny cartoon where Marvin pours water on beans to produce instant Martians to get rid of Bugs.) This is clearly just silly: you're talking about building a gigantic building out of molecules. I don't care how many billions of nanobots you have, the sheer scale means it's going to take time. And, as Kevin Kelly has written, no matter how smart a machine is, figuring out how to cure cancer or roll back aging won't be immediate either because you can't really speed up the necessary experiments. Biology takes time.

Instead, one indicator might be variability of response; that is, that feeding several machines the same input - or giving the same machine the same input at different times - produces different, equally valid interpretations. If, for example, you give a 10th grade class Jane Austen's Pride and Prejudice to read and report on, different students might with equal legitimacy describe it as a historical account of the economic forces affecting 18th century women, a love story, the template for romantic comedy, or even the story of the plain sister in a large family whose talents were consistently overlooked until her sisters got married.

In The Singularity Is Near, Ray Kurzweil laments that each human must read a text separately and that knowledge can't be quickly transferred from one to another the way a speech recognition program can be loaded into a new machine in seconds - but that's the point. Our strength is that our intelligences are all different, and we aren't empty vessels into which information is poured but stews in which new information causes varying chemical reactions.

You might argue that search engines can already do this, in that you don't get the same list of hits if you type the same keywords into Google versus Yahoo! versus Ask.com, and if you come back tomorrow you may get a different response from any one of them. That's true. It isn't the kind of input I had in mind, but fair enough.

The other benchmark that's occurred to me so far is that machines will be getting really smart when they get bored.

ZDNet UK editor Rupert Goodwins has a variant on this from when he worked at Sinclair Research. "If it went out one evening, drank too much, said the next morning, 'never again' and repeated the exercise immediately. Truly human." But see? There again: a definition of human intelligence that requires a body.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 17, 2008

Mind the gap

"Everyone in my office is either 50 or 25," said my neighbor, who is clearly not 25. "We call them 'knowledge-free'. I blame the Internet."

Well, the Internet is a handy thing to blame; it's there and today's generation of 20-somethings grew up with the Web - if you're 25 today you were 12 when Netscape went public. My parents, who were born in 1906 and 1913, would have blamed comic books; my older siblings, born between 1938 and 1943, might blame TV.

What are they "knowledge-free" about? The way she tells it, pretty much everything. They have grown up in a world where indoor temperature is the same year-round. Where bananas and peaches are native, year-round fruit that grows on supermarket shelves. Where World War II might as well be World of Warcraft II. Where dryers know when the clothes are dry, and anything worth seeing on TV will show up as a handily edited clip on YouTube. And where probably the biggest association with books is waiting for JK Rowling's next installment of Harry Potter.

Of course, every 50-something generation is always convinced that the day's 20-somethings are inadequate; it's a way of denying you were ever that empty-headed yourself. My generation - today's 50-somethings - and the decade or so ahead of us absolutely terrified our parents: let those dope-smoking, draft-dodging, "Never trust anyone over 30", free-lovers run things?

It's also true that she seems to know a different class of 20-somethings than I do; my 20-plus friends are all smart, funny, thoughtful, well educated, and interested in everything, even if they are curiously lacking in detailed knowledge of early 1970s movies. They read history books. They study science. They worry about the economy. They think about their carbon production and how much fossil fuel they consume. Whereas, the 20-odds in her office write and think about climate change and energy use apparently without ever connecting those global topics with the actual individual fact that they personally expect to wear the same clothes year-round in an indoor environment controlled to a constant temperature.

Just as computers helped facilitate but didn't cause the current financial crisis, the Internet is not the problem - if anything it ought to be the antidote. What causes this kind of disconnect is simply what happens when you grow up in a certain way; you think the conditions you grew up with are normal. When you're 25, 50 years is an impossibly long time to think about. When you're 55, centuries become graspable notions. All of which has something to do with the way the current economic crisis has developed.

If you compare - as the Washington Post and the Financial Times have - the current mess to the Great Depression, there's a certain logic to thinking that 80 years is just about exactly the right length of time for a given culture to recreate its past mistakes. That's four generations. The first lived through the original crisis; the second heard their parents talk about it; the third heard their grandparents talk about it; the fourth has no memory and hubris sets in.

In this case, part of the hubris that set in was the idea that the Glass-Steagall Act, enacted in 1933 to control the banks after the Great Depression, was no longer needed. The banking industry had of course been trying to get rid of the separation of deposit-taking banks and investment banks for years, and they finally succeeded in 1999. Clinton had no choice but to sign the repeal into law; the margin by which it passed both Houses was too large. There is no point in blaming only him, as Republicans trying to get McCain into office seem bent on doing.

That year was of course the year of maximum hubris anyway. The Internet bubble was at its height and so was the level of denial in the financial markets that it was a bubble. You can go on to blame the housing bubble brought about by easier access to mortgage money, cheap credit, credit default swaps, and all the other hideous weapons of financial mass destruction, but for me the repeal of Glass-Steagall is where it started. It was a clear sign that the foxes had won the chance to wreck the henhouse again. And fox - or human - or scorpion - nature being what it is, it was quite right to think that they would take it. As Benjamin Graham observed many years ago in The Intelligent Investor, bright young men have offered to work miracles - usually with other people's money - since time immemorial.

At that, maybe we're lucky if the 20-somethings in my neighbor's office are unconscious. Imagine if they were conscious. They would look at today's 50- and 60-somethings and say: you wrecked the environment, you will leave me no energy sources, social security, or health insurance in my old age, you have bankrupted the economy so I will never be able to own a house, and you got to have sex without worrying about dying from it. They'd be like the baby boomers were in the 1960s: mad as hell.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

August 22, 2008

Intimate exchanges

A couple of years ago I did an interview with Ed Iacobucci, CEO and founder of Dayjet, a new kind of airline. Dayjet has no published timetable; instead, prospective passengers (mostly company CEOs and other business types with little time to spare for driving between ill-served smaller cities in the American south) specify their departure point, their destination, and a window of time for Dayjet to get them there. Dayjet responds with a price based on the number of full seats in the plane. The airline, said Iacobucci, is software expressed as a service. And - and this is the key point here - constructing an intellectual property business in such a way meant he didn't have to worry about copying.

Cut to: the current battles over P2P. Danny O'Brien observed recently that with terabyte disk drives becoming luggable and the back catalogue of recorded music being "only" 4Tb, in the medium term the big threat to the music companies isn't P2P but file-swapping between directly connected hard drives, no Internet needed; no detection possible.

Cut to: the amazing career of Alan Ayckbourn and the Stephen Joseph Theatre in Scarborough, North Yorkshire.

Ayckbourn is often thought of as Britain's answer to Neil Simon, but the comparison is unfair to Ayckbourn. Simon is of course a highly skilled playwright and jokesmith, but his characters are in nothing like the despair that Ayckbourn's are, and he has none of the stagecraft. Partly, that may be because Ayckbourn has his own theatre to play with. Since 1959, when his first play was produced, Ayckbourn has written 71 plays (and still counting), and just about all of them were guaranteed production in advance at the Stephen Joseph Theatre, where Ayckbourn has been artistic director since 1974.

Many of them play with space and time. In How the Other Half Loves two dinners share stage space and two characters though they occur on different nights in different living rooms. In Communicating Doors characters shift through the same hotel room over four decades. In Taking Steps three stories of a house are squashed flat into a single stage set. He also has several sets of complementary plays, such as The Norman Conquests, a trilogy which sets each of the plays - the story of a weekend house party - in a different room.

It was in 1985, during a period of obsession with the plays Intimate Exchanges, that I decided that at some point I really had to see Alan Ayckbourn's work in its native habitat. Partly, this was due to the marvellous skill with which Lavinia Bertram and Robin Herford shifted among four roles each. Intimate Exchanges is scored for just two actors, and the plays' conceit is that they chronicle, via a series of two-person scenes, 16 variant consequences of a series of escalating choices. Bertram and Herford were the original cast, imported into London from Scarborough. So my thought was: if this is the kind of acting they have up there, one must go. (As bizarre as it seems to go from London to anywhere to go to the theater.)

This year, reading that Ayckbourn is about to retire as artistic director, it seemed like now or never. It's worth the trip: although many of Ayckbourn's plays work perfectly well on a traditional proscenium stage and he's had a lot of success in London's West End and on Broadway (and in fact around the world; he's the most performed playwright who isn't Shakespeare), the theatre-in-the-round adds intimacy. That's particularly true in this summer's trio of ghost plays: Haunting Julia (1994, a story of the aftermath of a suicide), Snake in the Grass (2002, a story of inheritance and blackmail), and Life and Beth (2008, a story of survival and widowhood). In all these stories, the closer you can get to the characters the better, and, compared to the proscenium stage, the SJT's round theatre is the equivalent of the cinematic close-up.

That intimacy may be a partial explanation of why so little of Ayckbourn's work has been adapted to movies - and when it has, the results have been so disappointing. Generally, they're either shallow caricatures (such as A Chorus of Disapproval) or wistful and humorless rather than robust and funny (like Alain Resnais' attempts, including Intimate Exchanges). There have been some good TV productions (The Norman Conquests, Season's Greetings (set in a hall surrounded by bits of a living room and dining room)), but these are mysteriously not available commercially.

That being the case, it's hard to understand the severity of the official Ayckbourn Web site's warning about bootleg copies. Given that they know the demand is there, and given the amount those 71 plays are making in royalties and licensing fees, why not buy up the rights to those productions and release them, or begin a project of recording current SJT productions and revivals with a view to commercial release? The SJT shop sells scripts. Why not DVDs?

Asking that risks missing the essential nature of theater, which, along with storytelling, is probably one of the earliest forms of intellectual property expressed as a service. A film is infinitely copiable; every live performance is different, if only subtly, because audience feedback varies. I still wish they'd do it, though.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

June 6, 2008

The Digital Revolution turns 15

"CIX will change your life," someone said to me in 1991 when I got a commission to review a bunch of online systems and got my first modem. At the time, I was spending most or all of every day sitting alone in my house putting words in a row for money.

The Net, Louis Rossetto predicted in 1993, when he founded Wired, would change everybody's lives. He compared it to a Bengali typhoon. And that was modest compared to others of the day, who compared it favorably to the discovery of fire.

Today, I spend most or all of every day sitting alone in my house putting words in a row for money.

But yes: my profession is under threat, on the one hand from shrinkage of the revenues necessary to support newspapers and magazines - which is indeed partly fuelled by competition from the Internet - and on the other hand from megacorporate publishers who routinely demand ownership of the copyrights freelances used to resell for additional income - a practice that the Internet was likely to largely kill off anyway. Few have ever gotten rich from journalism, but freelance rates haven't budged in years; staff journalists get very modest raises, and in return for those they are required to work more hours a week and produce more words.

That embarrassingly solipsistic view aside, more broadly, we're seeing the Internet begin to reshape the entertainment, telecommunications, retail, and software industries. We're seeing it provide new ways for people to organize politically and challenge the control of information. And we're seeing it and natural laziness kill off our history: writers and students alike rely on online resources at the expense of offline archives.

Wired was, of course, founded to chronicle the grandly capitalized Digital Revolution, and this month, 15 years on, Rossetto looked back to assess the magazine's successes and failures.

Rossetto listed three failures and three successes. The three failures: history has not ended; Old Media are not dead (yet); and governments and politics still thrive. The three successful predictions: the long boom; the One Machine, a man/machine planetary consciousness; that technology would change the way we relate to each other and cause us to reinvent social institutions.

I had expected to see the long boom in the list of failures, and not just because it was so widely laughed at when it was published. It's fair for Rossetto to say that the original 1997 feature was not invalidated by the 2000 stock market bust. It wasn't about that (although one couldn't resist snickering about it as the NASDAQ tanked). Instead, what the piece predicted was a global economic boom covering the period 1980 to 2020.

Wrote Peter Schwartz and Peter Leyden, "We are riding the early waves of a 25-year run of a greatly expanding economy that will do much to solve seemingly intractable problems like poverty and to ease tensions throughout the world. And we'll do it without blowing the lid off the environment."

Rossetto, assessing it now, says, "There's a lot of noise in the media about how the world is going to hell. Remember, the truth is out there, and it's not necessarily what the politicians, priests, or pundits are telling you."

I think: 1) the time to assess the accuracy of an article outlining the future to 2020 is probably around 2050; 2) the writers themselves called it a scenario that might guide people through traumatic upheavals to a genuinely better world rather than a prediction; 3) that nonetheless, it's clear that the US economy, which they saw as leading the way, has suffered badly in the 2000s with the spiralling deficit and rising consumer debt; 4) that media alarm about the environment, consumer debt, government deficits, and poverty is hardly a conspiracy to tell us lies; and 5) that they signally underestimated the extent to which existing institutions would adapt to cyberspace (the underlying flaw in Rossetto's assumption that governments would be disbanding by now).

For example, while timing technologies is about as futile as timing the stock market, it's worth noting that they expected electronic cash to gain acceptance in 1998 and to be the key technology to enable electronic commerce, which they guessed would hit $10 billion by 2000. Last year it was close to $200 billion. Writing around the same time, I predicted (here) that ecommerce would plateau at about 10 percent of retail; I assumed this was wrong, but it seems that it hasn't even reached 4 percent yet, though it's obvious that, particularly in the copyright industries, the influence of online commerce is punching well above its statistical weight.

No one ever writes modestly about the future. What sells - and gets people talking - are extravagant predictions, whether optimistic or pessimistic. Fifteen years is a tiny portion even of human history, itself a blip on the planet. Tom Standage, writing in his 1998 book The Victorian Internet, noted that the telegraph was a far more radically profound change for the society of its day than the Internet is for ours. A century from now, the Internet may be just as obsolete. Rossetto, like the rest of us, will have to wait until he's dead to find out if his ideas have lasting value.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

April 25, 2008

The shape of the mushroom


The digital universe is big. Really big. You just can't believe how mind-bogglingly big... Oh, never mind.

There's nothing like a good the-sky-is-falling scenario to keep a one-day conference awake, and today at the LSE was no exception.

"It's a catastrophe waiting to happen," said Leslie Willcocks, the head of the Information Systems and Innovation Group at the LSE, putting up a chart. What it showed: the typical data center's use of energy and processing power. Only 1.5 percent of the total energy usage powers processing; 80 percent of CPU is idle. Well. They weren't built to be efficient. They were built to be reliable.

But Willcocks wasn't gearing up to save the planet. Instead, his point was that all this wastage reflects a fetish for connectedness: "The assumption is you have to have reliable information on tap at all times." (Cue Humphrey Appleby: "I need to know everything. How else can I judge whether I need to know it?") Technology design, he argued, is being driven by the explosion in data. The US's 28 million servers today represent 2.5 percent of the US's electricity needs; in 2010 that will be 43 million. This massively inefficient use of energy is trying to fix what he called a far bigger problem: the "data explosion". And, concurrently, the inability to manage same.

In 2007, John Gantz, chief research officer at IDC, said, for the first time in human history the amount of information being created was larger than the amount of storage available. That sounds alarming at first, like the moment you contemplate the mortgage you're thinking of taking out to buy a house and realize that it is larger than the sum of all your financial assets. At second glance, the situation isn't quite so bad.

For one thing, a lot of information is transient. We aren't required to keep a copy of every TV signal - otherwise, imagine the number of copies we'd add every Christmas just for rebroadcasts of It's a Wonderful Life. But once you've added in the impact of regulatory compliance and legal requirements, along with good IT practice, consider the digital footprint of a single email message with a 1Mb attachment. By the time it's done being backed up, sent to four recipients, backed up, and sent to tape at both sending and receiving organizations it's consuming over 51.5Mb of storage.
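To make that multiplication visible, here is a minimal back-of-envelope sketch in Python. The stage names and copy counts are my own illustrative assumptions, not the IDC model behind the 51.5Mb figure quoted above; the only point is how quickly routine forwarding, backup, and archiving multiply a single megabyte.

```python
# Illustrative only: assumed stages and copy counts, not IDC's actual model.
ATTACHMENT_MB = 1.0

assumed_stages = [
    ("sender's mailbox", 1),
    ("four recipients' mailboxes", 4),
    ("disk backups at the sending organization", 5),
    ("disk backups at the receiving organization", 5),
    ("tape archives at both organizations", 10),
]

total_mb = ATTACHMENT_MB * sum(copies for _, copies in assumed_stages)
for stage, copies in assumed_stages:
    print(f"{stage}: {copies} copies")
print(f"Total under these assumptions: {total_mb:.0f} MB from a {ATTACHMENT_MB:.0f} MB attachment")
```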

And things are only going to get exponentially worse between now and 2011. The digital universe will grow by an order of magnitude in five years, from about 177EB in 2006 to 1,773EB in 2011. More than 90 percent of it is unstructured information. Even more alarming for businesses is that while individual consumers account for about 70 percent of the information created, enterprises have responsibility or liability for about 85 percent of it. Think Google buying YouTube and taking on its copyright liability, or NASA's problem with its astronauts' email.
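For what it's worth, the growth rate implied by those two quoted figures is easy to check; the numbers are the ones above, and the quick calculation is mine.

```python
# Implied compound annual growth of the "digital universe": ~177 EB (2006)
# to ~1,773 EB (2011), i.e. roughly tenfold in five years.
start_eb, end_eb, years = 177, 1_773, 5
cagr = (end_eb / start_eb) ** (1 / years) - 1
print(f"Implied growth: about {cagr:.1%} per year")  # ~58.5%
```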

"The information bomb has already happened," said Gantz. "I'm just describing the shape of the mushroom."

To be sure, video amps up the data flows. But it's not the most important issue. Take, for example, the electronification of the NHS. Discarding paper in favor of electronics saves one kind of space - there's a hospital in Bangkok that claims to have been able to open a whole new pediatric wing in the space saved by digitizing its radiography department - but consumes another. All those electronic patient records will have to be stored and backed up, then stored and backed up again in each new location they're sent to. Say it all over again with MP3s, digital radio, VOIP, games, telematics, toys...

No wonder we're all so tired.

And the problem the NHS is solving with barcoding - that people cannot find what they already have - is not so easily solved with information.

Azeem Azhar, seven months away from a job as head of innovation at Reuters, said that one thing he'd learned was that every good idea he had - had already been had by someone else in the organization at some point. As social networks enable people to focus less on documents than on expertise, he suggested, we may finally find a way around that problem.

The great thing about a conference like this is that for every solution someone can find a problem. The British Library, for example, is full of people who ought to know what to keep; that's what librarians do. But the British Library has its roots in an era when it could arrogantly assume it had the resources to keep everything. Ha. Though you sympathized with the trouble they have explaining stuff when an audience member asked why, given that the British Library has made digital copies, it should bother to keep the original, physical Magna Carta.

That question indicates a kind of data madness; the information we derive from studying the physical Magna Carta can't all be digitized. If looking at the digital simulacrum evokes wonder, it's precisely because we know that it is an image - a digital shadow - of the real thing. If the real thing ceases to exist, the shadow grows less meaningful.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 26, 2007

Tomorrow's world

"It's like 1994," Richard Bartle, the longest-serving virtual world creator, said this week. We were at the Virtual Worlds Forum. Sure enough: most of the panels were about how businesses could make money! in virtual worlds! Substitute Web! and Bartle was right.

"Virtual worlds are poised to revolutionize today's Web ecommerce," one speaker said enthusiastically. "They will restore to ecommerce the social and recreational aspect of shopping, the central element in the real world, which was stripped away when retailers went online."

There's gold in them thar cartoon hills.

But which hills? Second Life is, to be sure, the virtual world du jour, and it provides the most obviously exploitable platform for businesses. But in 1994 so did CompuServe. It was only three years later – ten years ago last month – that it had shrunk sufficiently for AOL to buy it as revenge. In turn, AOL is itself shrinking – its subscription revenues for the quarter ending June 30, 2007 were half those in the same quarter in 2006.

If there is one thing we know about Internet communities it's that they keep reforming in new technologies, often with many of the same people. Today's kids bop from world to world in groups, every few months. The people I've known on CIX or the WELL turn up on IRC, LiveJournal, Facebook, and IM. Sometimes you flee, as Corey Bridges said of social networks, because your friends list has become "crufted" up with people you don't like. You take your real friends somewhere else until, mutatis mutandis, it happens again. In the older text-based conferencing systems, same pattern: public conferences that filled up with too many annoying people sent old-timers to gated communities like mailing lists or closed conferences. And so it goes.

In a post pointed at by the VWF blog, Metaversed's Nick Wilson defines social virtual worlds and concludes that there are only eight of them – the rest are either not yet available to the general public, aimed at children, or simply development platforms. "The virtual worlds space," he concludes, "is not as large as many people think."

Probably anyone who's tried to come to grips with Second Life, number one on Wilson's list, without the benefit of friends to go there with knows that. Many parts of SL are resoundingly empty much of the time, and it seems inarguable that most of SL's millions of registered users try it out a few times and then leave their avatars as records in the database. Nonetheless, companies keep experimenting and find the results valuable. A batch of Italian IBMers even used the world to stage a strike last month. Naturally it crashed IBM's SL Business Center: the 1,850 strikers were spread around seven IBM locations, but you can only put about 50 avatars on an island before server lag starts to get you. Strikes: the original denial-of-service attacks.
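The arithmetic behind that crash is simple enough to sketch; the ~50-avatar figure is the column's, the rest is just division.

```python
# 1,850 striking avatars across seven IBM locations vs. a ~50-avatar-per-island
# comfort limit before server lag bites.
strikers, locations, lag_limit = 1_850, 7, 50
per_location = strikers / locations
print(f"~{per_location:.0f} avatars per location, about {per_location / lag_limit:.0f}x the comfortable limit")
```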

But questioning whether there's a whole lot of there there is a nice reminder that in another sense, it's 1999. Perfect World, a Chinese virtual world, went public at the end of July, and is currently valued at $1.6 billion. It is, of course, losing money. Meanwhile Microsoft has invested $240 million of the change rattling around the back of its sofas in Facebook to become its exclusive "advertising partner", giving that company an overall value of $15 billion. That should do nicely to ensure that Google or Yahoo! doesn't buy it outright, anyway. Rupert Murdoch bought MySpace only two years ago for $580 million – which would sound like a steal by comparison if it weren't for the fact that Murdoch has made many online plays and they've all so far been wrong.

Two big issues seem to be dominating discussions about "the virtual world space". One: how to make money. Two: how and whether to make worlds interoperable, so when you get tired of one you can pick up your avatar and reputation and take them somewhere new. It was in discussing this latter point that Bridges made the comment noted above: after a while in a particular world, shedding that world's character might be the one thing you really want to do. In real life, wherever you go, there you are. Freely exploring your possible selves is what Richard Bartle had in mind when he wrote the first MUD.

The first of those is, of course, the pesky thing only a venture capitalist or a journalist would ask. So far, in general game worlds make their money on subscriptions, and social worlds make their money selling non-existent items like land and maintenance fees thereupon (actually, says Linden Labs, "server resources"). But Asia seems already to be moving toward free play with the real money coming from in-game item sales: 80 million Koreans are buying products in and from Cyworld.

But the two questions are related. If your avatar only functions in a single world, the argument goes, that makes virtual worlds closed environments like the ones CompuServe and AOL failed with. That is of course true – but only after someone comes up with an open platform everyone can use. Unlike the Internet at large, though, it's hard to see who would benefit enough from building one to actually do it.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

August 24, 2007

Game gods

Virtual worlds have been with us for a long time. Depending who you listen to, they began in 1979, or 1982, or it may have been the shadows on the walls of Plato's cave. We'll go with the University of Essex MUD, on the grounds that its co-writer Richard Bartle can trace its direct influence on today's worlds.

At State of Play this week, it was clear that just as the issues surrounding the Internet in general have changed very little since about 1988, neither have the issues surrounding virtual worlds.

True, the stakes are higher now and, as Professor Yee Fen Lim noted, when real money starts to be involved people become protective.

Level 70 warrior accounts on World of Warcraft go for as little as $10 (though your level number cannot disguise your complete newbieness), but the unique magic sword you won in a quest may go for much more. The best-known pending case is Bragg versus Second Life over virtual property the world's owners confiscated when they realized that Bragg was taking advantage of a loophole in their system to buy "land" at exceptionally cheap prices. Lim had an interesting take on the Bragg case: as a legal concept, she argued, property is a right of control, even though Linden Labs itself defines its virtual property as rental of a processor. As computer science that's fine, but it's not law. Otherwise, she said, "Property is mere illusion."

Ultimately, the issues all come down to this: who owns the user experience? In subscription gaming worlds, the owners tend to keep very tight control of everything – they claim ownership in all intellectual property in the world, limit users' ability to create their own content, and block the sale of cheats as much as possible. In a free-form world like Second Life which may host games but is itself a platform rather than a game, users are much freer to do what they want but the EULAs or Terms of Service may be just as unfair.

Ultimately, no matter what the agreement says, today's privately owned virtual worlds all function under the same reality: the game gods can pull the plug at any time. They own and control the servers. Possession is nine-tenths of the law, and all that. Until someone implements open source world software on a P2P platform, this will always be the way. Linden Labs says, for what it's worth, that its long-term intention is to open-source its platform so that anyone may set up a world. This, too, has been done before, with The Palace.

One consequence of this is that there is no such thing as virtual privacy, a topic that everyone is aware of but no one's talking about. The piecemeal nature of the Net means that your friend's IRC channel doesn't know anything about your Web use, and Amazon.com doesn't track what you do on eBay. But virtual worlds log everything. If you buy a new shirt at a shop and then fly to a distant island to have sex with it, all that is logged. (Just try to ensure the shirt doesn't look like a child's shirt and you don't get into litigation over who owns the island…)

There are, as scholars say, legitimate reasons. Logging everything that happens is important in helping game developers pinpoint the source of crashes and eliminate bugs. Logs help settle disputes over who did what to whose magic sword. And in a court case, they may be important evidence (although how you can ensure that the logs haven't been adjusted to suit the virtual world provider, who is usually one of the parties to the litigation, I don't know).

As long as you think of virtual worlds as games, maybe this isn't that big a problem. After all, no one is forced to spend half their waking hours killing enough monsters in World of Warcraft to join a guild for a six-hour quest.

But something like Second Life aspires to be a lot more than that. The world is adding voice communication, which will be interesting: if you have to use your real voice, the relative anonymity conferred by the synthetic world is gone. Quite apart from bandwidth demands (lag is the bane of every SLer's existence), exploring what virtual life is like in the opposite gender isn't going to work. They're going to need voice synthesizers.

Much of the law in this area is coming out of Asia, where massively multi-player online games took off so early with such ferocity that, according to Judge Unggi Yoon, in a recent case a member of a losing team in one such game ran to the café where the winning team was playing and physically battered one of its members. Yoon, who explained some of the new laws, is an experienced online gamer, all the way back to playing Ultima Online in middle school. In his country, a law has recently come into force taxing virtual world transactions (it works like a VAT threshold – under $100 a month you don't owe anything). For Westerners, who are used to the idea that we make laws and export them rather than the other way around, this is quite a reality shift.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

April 27, 2007

My so-called second life

It's a passing fad. It's all hype. They've got good PR. Only sad, pathetic people with no real lives would be interested.

All things that were said about the Internet 12 years ago. All things being said now about Second Life. Wrong about the Internet. Wrong, too, about Second Life.

Hanging around a virtual world dressed as a cartoon character isn't normally my idea of a good time, but last weekend Wired News asked me to attend the virtual technology exposition going on inworld, and so I finally fired up Gwyndred Wuyts, who I'd created some weeks back.

Second Life is of course a logical continuation of the virtual worlds that went before it. The vending machines, avatars, attachments (props such as fancy items of clothing, laptops, or, I am given to understand, quite detailed, anatomically correct genitals), and money all have direct ancestors in previous virtual worlds such as Worlds Away (Fujitsu), The Palace, and Habitat (Lucasfilm). In fact, though, the prior art Second Life echoed most at first was CompuServe, which in 1990 had no graphics except ASCII art and little sense of humor – but was home to technology companies of all sizes, who spoke glowingly of the wonders of having direct contact with their customers. In 1990 every techie had a CompuServe ID.

Along came the Web, and those same companies gratefully retreated to it, where they could publish their view of the world and their support documents and edit out the abuse and backtalk. Now, in Second Life, the pendulum is swinging back: it's flattened hierarchies all over again.

"You have to treat everyone equally because you can't tell who anyone is. They could be the CEO of a big company," Odin Liam Wright (SL: Liam Kanno) told me this week. " In SL, he says, what you see is "more the psyche than the economic class or vocation or stature."

Having to take people as they present themselves without the advantage of familiar cues and networked references was a theme frequently exploited by Agatha Christie. Britain was then newly mobile, and someone moving to a village no longer came endorsed by letters from mutual friends. People could be anybody, her characters frequently complain.

Americans are raised to love this kind of social mobility. But its downside was on display yesterday in a panel on professionalism at the Information Security conference, where several speakers complained that the informal networks they used to use to check out their prospective security hires no longer exist. International mobility has made it worse: how do you assess a CV when both the credentials and the organizations issuing them are unknown to you?

Well, great: if the information security professionals don't know whom to trust, what hope is there for the rest of us?

Nonetheless, the speaker was wrong. The informal networks exist, just not where he's looking for them. When informal networks get overrun by the mainstream, they move elsewhere. In the late 1980s, Usenet was such a haven; by 1994, when September stopped ending and AOL moved in, everyone had retreated to gated communities (private forums, mailing lists, and so on). Right now, some of those informal networks are on Second Life, and the window is closing as the mainstream becomes more aware of the potential of the virtual world as a platform.

Previous worlds were popular and still died. But Second Life is different, first and foremost because of timing. People have broadband. They have computers powerful enough to handle the graphics and multiple applications. Their movement around the virtual world is limited only by their manual dexterity and the capacity of the servers to handle so many interacting simulations at once.

Second: experimentation. At this week's show, I picked up a (beta) headset that plugs Skype into Second Life (Second Talk). People (Cattle Puppy Productions) are providing inworld TV displays (and extracted video clips for the rest of us). Reallusion, one of the show's main sponsors, does facial animation it hopes will transform Second Life from a world of text-typing avatars into one of talking characters. You can pick up a portable office including virtual laptop, unpack it in a park, and write and post real blog entries. Why would you do this when you already have blogging software on your desktop? Because Second Life has the potential to roll everything – all the different forms of communication open on your desktop today – into a single platform. And if you grew up with computer games, it's a more familiar platform than the desktop metaphor generations of office workers required.

Third: advertising. The virtual show looked empty compared to a real-world show; it had 6,000-plus visitors over three days. The emptiness was by design to allow more visitors while minimizing lag. Nonetheless, Dell was there with a virtual configurator on which you could specify your new laptop. Elsewhere inworld, you can drive your new Toyota or Pontiac and read your Reuters news. Moving into Second Life is a way for old, apparently stuffy companies to reinvent their image for the notoriously hard-to-reach younger crowd who are media-savvy and ad-cynical. There is real gold in them thar virtual hills.

Finally, a real reason to upgrade my desktop.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 30, 2007

Re-emerging technologies

Along about the third day of this year's etech, O'Reilly's Emerging Technology conference, I found myself looking at Zimbra's email software that works offline; the display looked just like Ameol, an offline mail and news reader I've used for 14 years. (The similarity is only partial, though; Zimbra does synching with mobile devices and a bunch of other things that Ameol doesn't - but that Lotus Notes probably did).

"Reverse pioneering", Tim O'Reilly said the first day, while describing a San Francisco group who build things - including a beer-hauling railway car and carnival rides - out of old bicycle parts.

At some point, also, O'Reilly editor Dale Dougherty gave a talk on the publisher's two relatively new magazines Make and Craft. He illustrated it with pictures of: log cabin quilts, Jacquard looms, the Babbage Difference Engine, Hollerith tabulating machines, punch cards, and an ancient Singer treadle sewing machine. And oh look! Sewing patterns! And, I heard someone say quite seriously, what about tatting? Do you know anyone who does it who could teach me?

A day later, in Boston, I hear that knitting is taking East Coast geeks by storm. Apparently geek gatherings now produce as many baby hats as the average nursing home.

Not that I'm making fun of all this. After all, recovering old knowledge is a lot of what we do on the folk scene, and I have no doubt that today's geek culture will plunder these past technologies and, very like the Society for Creative Anachronism, which has a large geek (and also folk music community) crossover, mutate them into something newer and stranger. I'd guess that we're about two years away from a quilting bee in the lobby. Of course, the quilting thread will be conductive, and the quilt will glow in the dark with beaded LEDs so you can read under the covers, and version 2.0 will incorporate miniature solar panels (like those little mirrors in the Eastern stuff you used to get in the 1970s) that release heat at night like an electric blanket...and it will be all black.

Of course, this isn't really new even in geek terms. A dozen years ago, the MIT Media Lab held a fashion show to display its latest ideas for embroidering functional keyboards onto networked but otherwise standard Levi's denim jackets and dresses made of conductive fabrics. We don't seem to have come very far toward the future they were predicting then, in which we'd all be wearing T-shirts with sensors that measured our body heat and controlled the room thermostat accordingly (another idea for that quilt).

Instead, geeks, like everyone else, adopted the mobile phone, which has the advantage that you don't have to worry about how to cope with that important conference when your personal area network is in the dirty laundry.

But this is Generation C, as Matt Webb, from the two-man design consultancy Schulze and Webb, told us. Generation C likes complexity, connection, and control. GenC is not satisfied with technologies that expect us to respond as passive consumers. We ought to despise mobile phones, especially in the US: they are locked down, controlled by the manufacturers and network operators. Everything should come with an open applications programming interface and...and...a serial port. Hack your washing machine so it only shows the settings you use; hack your luggage so it phones home its GPS coordinates when it's lost.

The conference speaker who drew the most enthusiastic response was Danah Boyd, who had a simple message: people outside of Silicon Valley are different. Don't assume all your users are like you. They have different life stages. This seems so basic and obvious it's shocking to hear people cheer it.

It was during a talk on building technology to selectively jam RFID chips that I had a simple thought: every technology breeds its opposite. Radar to trap speeders begets radar scanners. Cryptography breeds cryptanalysis. Email breeds spam, which breeds spam filtering, which breeds spam smart enough to pass the Turing test.

The same is true of every social trend and phenomenon. John Perry Barlow used to say that years of living in the virtual world had made him appreciate the physical world far more. It's not much of a jump from that to all sorts of traditional crafts.

Don't get me wrong. I'm glad geeks want to knit, sew, and build wooden telescopes. Sewing used to be a relatively mainstream activity, and over the last couple of decades it's been progressively dumbed down. The patterns you buy today are far simpler (and less interesting) to construct than the ones you used to get in the 1970s. It would be terrific if geeks brought some complexity back to it.

But jeez, guys, you need to get out more. Not only is there an entire universe of people who are different from Silicon Valley, there's an entire industry of magazines and books about fabric arts. Next, you get to reinvent colors.

I blogged more serious stuff from etech at Blindside.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

February 9, 2007

Getting out the vote

Voter-verified paper audit trails won't save us. That was the single clearest bit of news to come out of this week's electronic voting events.

This is rather depressing, because for the last 15 years it's looked as though VVPAT (as they are euphoniously calling it) might be something everyone could compromise on: OK, we'll let you have your electronic voting machines as long as we can have a paper backup that can be recounted in case of dispute. But no. According to Rebecca Mercuri in London this week (and others who have been following this stuff on the ground in the US), what we thought a paper trail meant is definitely not what we're getting. This is why several prominent activist organisations have come out against the Holt bill HR811, introduced into Congress this week, despite its apparent endorsement of paper trails.

I don't know about you, but when I imagined a VVPAT, what I saw in my mind's eye was something like an IBM punch card dropping individually into some kind of display where a voter would press a key to accept or reject. Instead, vendors (who hate paper trails) are providing cheap, flimsy, thermal paper in a long roll with no obvious divisions to show where individual ballots are. The paper is easily damaged, it's not clear whether it will survive the 22 months it's supposed to be stored, and the mess is not designed to ease manual recounts. Basically, this is paper that can't quite aspire to the lofty quality of a supermarket receipt.

The upshot is that yesterday you got a programme full of computer scientists saying they want to vote with pencils and paper. Joseph Kiniry, from University College, Dublin, talked about using formal methods to create a secure system – and says he wants to vote on paper. Anne-Marie Oostveen told the story of the Dutch hacker group who bought up a couple of Nedap machines to experiment on and wound up publicly playing chess on them – and exposing their woeful insecurity – and concluded, "I want my pencil back." And so on.

The story is the same in every country. Electronic voting machines – or, more correctly, electronic ballot boxes – are proposed and brought in without public debate. Vendors promise the machines will be accurate, reliable, secure, and cheaper than existing systems. Why does anyone believe this? How can a voting computer possibly be cheaper than a piece of paper and a pencil? In fact, Jason Kitcat, a longtime activist in this area, noted that according to the Electoral Commission the costs of the 2003 pilots were astounding – in Sheffield £55 per electronic vote, and that's with suppliers waiving some charges they didn't expect either. Bear in mind, also, that the machines have an estimated life of only ten years.

Also the same: governments lack internal expertise on IT, basically because anyone who understands IT can make a lot more money in industry than in either government or the civil service.

And: everywhere vendors are secretive about the inner workings of their computers. You do not have to be a conspiracy theorist to see that privatizing democracy has serious risks.

On Tuesday, Southport LibDem MP John Pugh spoke of the present UK government's enchantment with IT. "The procurers who commission IT have a starry-eyed view of what it can do," he said. "They feel it's a very 'modern' thing." Vendors, also, can be very persuasive (I'd like to see tests on what they put in the ink in those brochures, personally). If, he said, Bill Gates were selling voting machines and came up against Tony Blair, "We would have a bill now."

Politicians are, probably, also the only class of people to whom quick counts appeal. The media, for example, ought to love slow counts that keep people glued to their TV sets, hitting the refresh button on their Web browsers, and buying newspapers throughout. Florida 2000 was a media bonanza. But it's got to be hard on the guys who can't sleep until they know whether they have a job next month.

I would propose the following principles to govern the choice of balloting systems:

- The mechanisms by which votes are counted should be transparent. Voters should be able to see that the vote they cast is the vote they intended to cast.

- Vendors should be contractually prohibited from claiming the right to keep secret their source code, the workings of their machines, or their testing procedures, and they should not be allowed to control the circumstances or personnel under which or by whom their machines are tested. (That's like letting the psychic set the controls of the million-dollar test.)

- It should always be possible to conduct a public recount of individual ballots.

Pugh made one other excellent point: paper-based voting systems are mature. "The old system was never perfect," he said, but over time "we've evolved a way of dealing with almost every conceivable problem." Agents have the right to visit every polling station and watch the count; recounts can consider every single spoiled ballot. By contrast, electronic voting presumes everything will go right.

Guys, it's a computer. Next!

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

January 26, 2007

Vote early, vote often...

It is a truth that ought to be universally acknowledged that the more you know about computer security the less you are in favor of electronic voting. We thought – optimists that we are – that the UK had abandoned the idea after all the reports of glitches from the US and the rather indeterminate results of a couple of small pilots a few years ago. But no: there are plans for further trials for the local elections in May.

It's good news, therefore, that London is to play host to two upcoming events to point out all the reasons why we should be cautious. The first, February 6, is a screening of the HBO movie Hacking Democracy, a sort of documentary thriller. The second, February 8, is a conference bringing together experts from several countries, most prominently Rebecca Mercuri, who was practically the first person to get seriously interested in the security problems surrounding electronic voting. Both events are being sponsored by the Open Rights Group and the Foundation for Information Policy Research, and will be held at University College London. Here is further information and links to reserve seats. Go, if you can. It's free.

Hacking Democracy (a popular download) tells the story of Bev Harris (blackboxvoting.org) and Andy Stephenson. Harris was minding her own business in Seattle in 2000 when the hanging chad hit the Supreme Court. She began to get interested in researching voting troubles, and then one day found online a copy of the software that runs the voting machines provided by Diebold, one of the two leading manufacturers of such things. (And, by the way, the company whose CEO vowed to deliver Ohio to Bush.) The movie follows this story and beyond, as Harris and Stephenson dumpster-dive, query election officials, and document a steady stream of glitches that all add up to the same point: electronic voting is not secure enough to protect democracy against fraud.

Harris and Stephenson are not, of course, the only people working in this area. Among computer experts such as Mercuri, David Chaum, David Dill, Deirdre Mulligan, Avi Rubin, and Peter Neumann, there's never been any question that there is a giant issue here. Much argument has been spilled over the question of how votes are recorded; less so around the technology used by the voter to choose preferences. One faction – primarily but not solely vendors of electronic voting equipment – sees nothing wrong with Direct Recording Electronic, machines that accept voter input all day and then just spit out tallies. The other group argues that you can't trust a computer to keep accurate counts, and that you have to have some way for voters to check that the vote they thought they cast is the vote that was actually recorded. A number of different schemes have been proposed for this, but the idea that's catching on across the US (and was originally promoted by Mercuri) is adding a printer that spits out a printed ballot the voter can see for verification. That way, if an audit is necessary there is a way to actually conduct one. Otherwise all you get is the machine telling you the same number over again, like a kid who has the correct answer to his math homework but mysteriously can't show you how he worked the problem.
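To make the architectural difference concrete, here is a minimal Python sketch - not any vendor's actual design, and the class names are invented - contrasting a bare DRE, whose "audit" can only re-report the totals it already stored, with a machine that also emits per-ballot, voter-verifiable records that can be recounted independently of its own arithmetic.

```python
from collections import Counter

class BareDRE:
    """Stores only running totals; an 'audit' just reads the same totals back."""
    def __init__(self):
        self.tally = Counter()
    def cast(self, choice):
        self.tally[choice] += 1
    def audit(self):
        return dict(self.tally)           # the machine's own claim, nothing more

class PaperTrailDRE(BareDRE):
    """Also emits one voter-verifiable record per ballot, recountable by hand."""
    def __init__(self):
        super().__init__()
        self.paper = []
    def cast(self, choice):
        super().cast(choice)
        self.paper.append(choice)         # the printed ballot the voter inspects
    def recount(self):
        return dict(Counter(self.paper))  # a count derived from the records themselves

machine = PaperTrailDRE()
for vote in ["A", "B", "A", "A"]:
    machine.cast(vote)
print(machine.audit())    # {'A': 3, 'B': 1} -- what the machine says
print(machine.recount())  # {'A': 3, 'B': 1} -- what the paper trail supports
```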

This is where it's difficult to understand the appeal of such systems in the UK. Americans may be incredulous – I was – but a British voter goes to the polls and votes on a small square of paper with a stubby, little pencil. Everything is counted by hand. The UK can do this because all elections are very, very simple. There is only one election – local council, Parliament – at a time, and you vote for one of only a few candidates. In the US, where a lemon is the size of an orange, an orange is the size of a grapefruit, and a grapefruit is the size of a soccer ball, elections are complicated and on any given polling day there are a lot of them. The famous California governor's recall that elected Arnold Schwarzenegger, for example, had hundreds of candidates; even a more average election in a less referendum-happy state than California may have a dozen races, each with six to ten candidates. And you know Americans: they want results NOW. Like staying up for two or three days watching the election returns is a bad thing.

It is of course true that election fraud has existed in all eras; you can "lose" a box of marked paper ballots off the back of a truck, or redraw districts according to political allegiance, or "clean" people off the electoral rolls. But those types of fraud are harder to cover up entirely. A flawed count in an electronic machine run by software the vendor allows no one to inspect just vanishes down George Orwell's memory hole.

What I still can't figure out is why politicians are so enthusiastic about all this. Yes, secure machines with well-designed user interfaces might get rid of the problem of "spoiled" and therefore often uncounted ballots. But they can't really believe – can they? – that fancy voting technology will mean we're more likely to elect them? Can it?

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

December 29, 2006

Resolutions for 2007

A person can dream, right?

- Scrap the UK ID card. Last week's near-buried Strategic Action Plan for the National Identity Scheme (PDF) included two big surprises. First, that the idea of a new, clean, all-in-one National Identity Register is being scrapped in favor of using systems already in use in government departments; second, that foreign residents in the UK will be tapped for their biometrics as early as 2008. The other thing that's new: the bald, uncompromising statement that it is government policy to make the cards compulsory.

No2ID has pointed out the problems with the proposal to repurpose existing systems, chiefly that they were not built to do the security the legislation promised. The notion is still that everyone will be re-enrolled with a clean, new database record (at one of 69 offices around the country), but we still have no details of what information will be required from each person or how the background checks will be carried out. And yet, this is really the key to the whole plan: the project to conduct background checks on all 60 million people in the UK and record the results. I still prefer my idea from 2005: have the ID card if you want, but lose the database.

The Strategic Action Plan includes the list of purposes of the card; we're told it will prevent illegal immigration and identity fraud, become a key "defence against crime and terrorism", "enhance checks as part of safeguarding the vulnerable", and "improve customer service".

Recall that none of these things was the stated purpose of bringing in an identity card when all this started, back in 2002. Back then, first it was to combat terrorism, then it was an "entitlement card" and the claim was that it would cut benefit fraud. I know only a tiny mind criticizes when plans are adapted to changing circumstances, but don't you usually expect the purpose of the plans to be at least somewhat consistent? (Though this changing intent is characteristic of the history of ID card proposals going back to the World Wars. People in government want identity cards, and try to sell them with the hot-button issue of the day, whatever it is.)

As far as customer service goes, William Heath has published some wonderful notes on the problem of trust in egovernment that are pertinent here. In brief: trust is in people, not databases, and users trust only systems they help create. But when did we become customers of government, anyway? Customers have a choice of supplier; we do not.

- Get some real usability into computing. In the last two days, I've had distressed communications from several people whose computers are, despite their reasonable and best efforts, virus-infected or simply non-functional. My favourite recent story, though, was the US Airways telesales guy who claimed that it was impossible to email me a ticket confirmation because according to the information in front of him it had already been sent automatically and bounced back, and they didn't keep a copy. I have to assume their software comes with a sign that says, "Do not press this button again."

Jakob Nielsen published a fun piece this week, a list of top ten movie usability bloopers. Throughout movies, computers only crash when they're supposed to, there is no spam, on-screen messages are always easily readable by the camera, and time travellers have no trouble puzzling out long-dead computer systems. But of course the real reason computers are usable in movies isn't some marketing plot by the computer industry but the same reason William Goldman gave for the weird phenomenon that movie characters can always find parking spaces in front of their destination: it moves the plot along. Though if you want to see the ultimate in hilarious consumer struggles with technology, go back to the 1948 version of Unfaithfully Yours (out on DVD!) starring Rex Harrison as a conductor convinced his wife is having an affair. In one of the funniest scenes in cinema, ever, he tries to follow printed user instructions to record a message on an early gramophone.

- Lose the DRM. As Charlie Demerjian writes, the high-def wars are over: piracy wins. The more hostile the entertainment industries make their products to ordinary use, the greater the motivation to crack the protective locks and mass-distribute the results. It's been reasonably argued that Prohibition in the US paved the way for organized crime to take root because people saw bootleggers as performing a useful public service. Is that the future anyone wants for the Internet?

Losing the DRM might also help with the second item on this list, usability. If Peter Gutmann is to be believed, Vista's usability will take a nosedive because of embedded copy protection requirements.

- Converge my phones. Please. Preferably so people all use just the one phone number, but all routing is least-cost to both them and me.

- One battery format to rule them all. Wouldn't life be so much easier if there were just one battery size and specification, and to make a bigger battery you'd just snap a bunch of them together?

Happy New Year!

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 6, 2006

A different kind of poll tax

Elections have always had two parts: the election itself, and the dickering beforehand (and occasionally afterwards) over who gets to vote. The latest move in that direction: at the end of September the House of Representatives passed the Federal Election Integrity Act of 2006 (H.R. 4844), which from 2010 will prohibit election officials from giving anyone a ballot who can't present a government-issued photo ID whose issuing requirements included proof of US citizenship. (This lets out driver's licenses, which everyone has, though I guess it would allow passports, which relatively few have.)

These days, there is a third element: specifying the technology that will tabulate the votes. Democracy depends on the voters' being able to believe that what determines the election is the voters' choices rather than the latter two.

The last of these has been written about a great deal in technology circles over the last decade. Few security experts are satisfied with the idea that we should trust computers to do "black box voting" where they count up and just let us know the results. Even fewer security experts are happy with the idea that so many politicians around the world want to embrace: Internet (and mobile phone) voting.

The run-up to this year's mid-term US elections has seen many reports of glitches. My favorite recent report comes from a test in Maryland, where it turned out that the machines under test did not communicate with each other properly when the touch screens were in use. If they don't communicate correctly, voters might be able to vote more than once. Attaching mice to the machines solves the problem – but the incident is exactly the kind of wacky glitch that's familiar from everyday computing life and that can take absurd amounts of time to resolve. Why does anyone think that this is a sensible way to vote? (Internet voting has all the same risks of machine glitches, and then a whole lot more.)

The 2000 US Presidential election isn't as famous for the removal from the electoral rolls in Florida of a few hundred thousand voters as it is for hanging chad – but read or watch on the subject. Of course, wrangling over who gets to vote didn't start then. Gerrymandering districts, fighting over giving the right to vote to women, slaves, felons, expatriates…

The latest twist in this fine, old activity is the push in the US towards requiring Voter ID. Besides the federal bill mentioned above, a couple of dozen states have passed ID requirements since 2000, though state courts in Missouri, Kentucky, Arizona, and California are already striking them down. The target here seems to be that bogeyman of modern American life, illegal immigrants.

Voter ID isn't as obvious a poll tax. After all, this is just about authenticating voters, right? Every voter a legal voter. But although these bills generally include a requirement to supply a voter ID free of charge to people too poor to pay for one, the supporting documentation isn't free: try getting a free copy of your birth certificate, for example. The combination of the costs involved in that aspect and the effort involved in getting the ID is a burden that falls disproportionately on the usual already disadvantaged groups (the same ones stopped from voting in the past by road blocks, insufficient provision of voting machines in some precincts, and indiscriminate cleaning of the electoral rolls). Effectively, voter ID creates an additional barrier between the voter and the act of voting. It may not be the letter of a poll tax, but it is the spirit of one.

This is in fact the sort of point that opponents are making.

There are plenty of other logistical problems, of course, such as: what about absentee voters? I registered in Ithaca, New York, in 1972. A few months before federal primaries, the Board of Elections there mails me a registration form; returning it gets me absentee ballots for the Democratic primaries and the elections themselves. I've never known whether my vote is truly anonymous; nor whether it's actually counted. I take those things on trust, just as, I suppose, the Board of Elections trusts that the person sending back these papers is not some stray British person who does my signature really well. To insert voter ID into that process would presumably require turning expatriate voters over to, say, the US Embassies, who are familiar with authentication and checking identity documents.

Given that most countries have few such outposts, the barriers to absentee voting would be substantially raised for many expatriates. Granted, we're a small portion of the problem. But there's a direct clash between the trend to embrace remote voting - the entire state of Oregon votes by mail – and the desire to authenticate everyone.

We can fix most of the voting technology problems by requiring voter-verifiable, auditable paper trails, as Rebecca Mercuri began pushing for all those years ago (and as most computer scientists now agree), and there seem to be substantial moves in that direction as states test the electronic equipment and scientists find more and more serious potential problems. Twenty-seven states now have laws requiring paper trails. But how we control who votes is the much more difficult and less talked-about frontier.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

May 12, 2006

Map quest

The other week, I drove through London with a musician friend who spent a lot of the trip telling me how much he loved his new dashboard-mounted GPS.

I could see his point. In my own folksinging days I averaged 50,000 miles a year across the US, and even with a box of maps in the back seat every day the moment invariably came when you discovered that the directions you'd been given were wrong, impenetrable, or missing. At that point one of two things would happen: either you would find the place after much trial and error and many wrong turns or you would get lost. Either way, you would arrive at the gig intemperate, irascible, and cranky, and they'd never hire you again. Me, that is. I'm sure you are sweet and kind and gentle and good and would never yell at someone you've just met for the first time that they miscounted and it's three traffic lights, not two.

By contrast, all my friend had to do was punch in the destination address, and after briefly communing with satellites the GPS directed us in a headmistressy English voice he called Agatha. Stuff like, "Turn left, 100 meters."

Of course, I don't actually have any sense of how far 100 meters is. I lean more toward "Turn left opposite that gas station up there." But Agatha doesn't know from landmarks or the things humans see. I imagine that will change as the resolution, graphics, and network connections improve. I don't, for example, see why eventually everyone shouldn't be equipped with a complete set of world maps and a display that can be set to show a customizable level of detail (up to full, real-time video) with a recognition program that would enable Agatha to say exactly that while recalculating routes using up-to-the-minute information about traffic jams and other impedimenta. (Doubtless some public-spirited hacker will create a speed trap avoidance add-on.) Today's kids, in fact, are so used to reading multiple screens with multiple scrolls of information on them that the GPS will probably migrate to lower-windshield with user-selectable information overlays. And glasses, watches, or clothing so that if, like the Prisoner, someone abducted you from your London flat you would be able to identify Your Village's location.

Back in today's world, Agatha is also not terribly bright about traffic. We were driving from Kew to Crouch End, and she routed us through…through…Central London. A brief digression. Back in 1972, before the M25 was built, although long after the North Circular Road was cobbled together out of existing streets, I remember a British folk band telling me that you had to allow an extra two hours any time you had to go through London. I accordingly regard driving inside the M25 with horror and an immediate desire to escape to a train. Yet Agatha was routing us down Marylebone Road.

You cannot tell me she knew it was Good Friday and that the streets would therefore be comparatively empty.

The received wisdom among people who know North London is that the most efficient way from K to C is to take the North Circular Road to Finchley (I think it was) and then do something complicated with London streets. On the way back, we tried a comparative test by turning off the GPS, getting directions to the NCR from the club organizer, and following the signs from there. (You would have to be as navigationally challenged as a blind woodpecker not to be able to find Kew from the NCR, and anyway I knew the way.) It was a quiet, peaceful way to drive and talk, without Agatha's constant interruptions. Or it would have been, except that my friend kept worrying whether we were on the right road, going the right way, speculating it was longer than the other way…

The problem is, of course, that GPS does not teach you geography, any more than the tube map does. Following the serial sequence of instructions never adds up to understanding how the pieces connect. Wherever you go, as the saying is, there you are.

To lament the loss of geographical understanding (to say nothing of the box of maps in the back seat) is, I suppose, not much different from lamenting that people other than Scrabble players can no longer do mental arithmetic because everyone has calculators or whining that no one has the mental capacity to recite The Odyssey any more. Technology changes, and we gladly hand over yet another task. Soon, knowing where Manhattan is in relation to Philadelphia or Finchley Road is in relation to Wembley will seem as quaint as knowing how to load an 8mm projector.

The world will look very different then: no one will ever be lost, since you will always be able to punch in a destination and recalculate. On the other hand, you'll never be really found, either, since pretty much all geography will be in offline storage. We folk travelers used to talk about how the whole country was our back yard. In the GPS world, your own back yard might as well be Minnesota.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).