December 15, 2017

Bitcoin for dummies

The writers of the sitcom The Big Bang Theory probably thought they were on safe ground in early November when (at a guess) they pegged the price of bitcoin at $5,000 for the episode that had its first airing in the US on November 30 (Season 11, episode 9, "The Bitcoin Entanglement"). By then, it had doubled. This week, it neared $17,500, according to Coindesk. In between, it's dropped as much as 25% in a single day.

All of which explains why I've had numerous conversations this week in which I tried to talk people out of feeling bad that they didn't buy bitcoin back when it was cheap. Mortgaging your house or opening up credit card debt in order to buy bitcoin, as CNBC reports some people are doing, is a disastrously bad idea.

Bitcoin is at the stage where a sense of proportion is in short supply. You've got Deutsche Bank claiming that a bitcoin crash would endanger global markets, the Bank of England saying it's no threat, and Andrew Weilbacher arguing in return that the euro will be far more destructive. The Bank of England likely has it right: bitcoin is too small - at its $17,000 peak the whole market is $300 billion - to cause a global crash, even at current prices and volatility. It can certainly crash personal economies quite effectively, though.

But why stop Weilbacher when he's having fun? "Bitcoin is poised to overtake current technology for the internet and finance, not considering all of the other blockchain protocols. If and when this technology passes more archaic versions, it will begin to take on the total market valuation of the internet - $19 trillion - and the financial industry as a whole," he writes. Stuff like this always makes me think of this quote from Wall Street giant (and Warren Buffett teacher) Benjamin Graham: "Bright young men have been promising to work miracles with other people's money since time immemorial."

The dot-com bust was a great example. At its height in 2000, when even the most insistent dot-com boosters were admitting the bubble was bursting, the most skeptical still believed that ten years later the internet would be much bigger. Many of those early internet companies never recovered, of course - but the internet still hasn't stopped growing.

So is bitcoin like an internet company or like the internet?

Bitcoin was conceived as two things: a cryptocurrency and a payment system. At the beginning people who mined or bought it were mostly curious and wanted to experiment. It was technically challenging, but cheap. A couple of years ago, we were hearing a lot about its potential for cutting costs out of financial transactions.

That dream is in trouble: the rapid rise in prices is killing bitcoin as a cost-cutter because as bitcoin's exchange rate goes up, so do its transaction costs. About 100,000 outlets worldwide accept payment in bitcoin, but there are also many private uses, particularly in areas where trust in government and the financial system is collapsing. The reality, though, is that very few people seriously use bitcoin as a currency and some of them are reconsidering. Steam, for example, announced on December 6 that it was ceasing to accept bitcoin payments partly because of pricing volatility but mostly because the fees are nearing $20 per transaction, 100 times what it cost when Steam started accepting it.

There's another problem, too: recent calculations say that the bitcoin transaction network is hideously energy-intensive, and even if miners derived all their power from renewables, continued price rises would make it unsustainable. Even if it is sustainable, Visa is vastly faster and vastly more energy-efficient.

Those involved in fintech have been saying for some time that whatever happens to bitcoin, the blockchain, which records transactions in secure but verifiable blocks, is really significant (although older industry guys call it a "distributed ledger" and wonder why all the fuss over a 30-year-old technology). I see no reason not to believe them. However, you can't invest in the blockchain by buying bitcoin. Instead, the people investing in exploiting this are banks, other financial institutions, and large and small technology companies. That being the case, the idea that the power of the system lies in its decentralized peer-to-peer nature that requires no central authority seems likely to die even faster than the same idea about the internet itself. Get your libertarian rhetoric while you can. And your crypto kittens.
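For readers wondering what the fuss is about, the core idea the fintech people mean - blocks that are "secure but verifiable" because each one commits to the hash of its predecessor - can be sketched in a few lines of Python. This is a toy ledger for illustration only, nothing like Bitcoin's actual data structures:

```python
import hashlib
import json

def block_hash(block):
    """Deterministically hash a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})
    return chain

def verify(chain):
    """Tampering with any earlier block breaks every later link."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
add_block(chain, ["alice pays bob 1"])
add_block(chain, ["bob pays carol 1"])
assert verify(chain)

chain[0]["transactions"] = ["alice pays mallory 1"]  # rewrite history...
assert not verify(chain)                             # ...and the chain no longer checks out
```

The "30-year-old technology" jibe is fair: hash-chained records long predate bitcoin. What bitcoin added was a way for mutually distrustful strangers to agree on which chain is the real one.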

Bitcoin is not scaling. That doesn't mean other cryptocurrencies can't, but it does make Derek Thompson, who, writing for The Atlantic, called bitcoin "a digital baseball card, without the faces or stats", even more likely to be right.

So, at present, most bitcoin owners are speculators hoping to cash out by selling to a greater fool. Over the time of bitcoin's existence, mining has moved from ordinary laptops to GPUs, to purpose-built ASICs. Today, most mining is controlled by a relative handful of players with giant clusters. If you are really insistent upon trying to make some money out of the bitcoin bubble, your best bet is the old picks and shovels approach. Needless to say, others have already thought of this.
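That mining arms race - laptops to GPUs to purpose-built ASICs - is driven by proof-of-work: miners race to find a nonce whose hash clears a difficulty target, and each extra bit of difficulty doubles the expected work. A simplified sketch in Python (real bitcoin mining uses double SHA-256 over a binary block header, not this toy string format):

```python
import hashlib

def mine(block_data, difficulty_bits=16):
    """Search for a nonce whose SHA-256 hash falls below the difficulty target."""
    target = 1 << (256 - difficulty_bits)  # smaller target = exponentially more work
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest
        nonce += 1

# ~2**16 hash attempts on average at 16 bits of difficulty; every added bit
# doubles that, which is why general-purpose hardware long ago stopped paying
nonce, digest = mine("some transactions", difficulty_bits=16)
```

Anyone can verify a winning nonce with a single hash; finding one takes brute force. That asymmetry is the whole game, and it rewards whoever can buy the most specialized hashing hardware - hence the handful of giant clusters.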

Bottom line: you may regret missed opportunities but they don't make you feel nearly as stupid as the ones you took but wish you hadn't.

Illustrations: Bitcoin logo.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 23, 2017


"We were kids working on the new stuff," said Kevin Werbach. "Now it's 20 years later and it still feels like that."

Werbach was opening last weekend's "radically interdisciplinary" (Geoffrey Garrett) After the Digital Tornado, at which a roomful of internet policy veterans tried to figure out how to fix the internet. As Jaron Lanier showed last week, there's a lot of this where-did-we-all-go-wrong happening.

The Digital Tornado in question was a working paper Werbach wrote in 1997, when he was at the Federal Communications Commission. In it, Werbach sought to pose questions for the future, such as what the role of regulation would be around...well, around now.

Some of the paper is prescient: "The internet is dynamic precisely because it is not dominated by monopolies or governments." Parts are quaint now. Then, the US had 7,000 dial-up ISPs and AOL was the dangerous giant. It seemed reasonable to think that regulation was unnecessary because public internet access had been solved. Now, with minor exceptions, the US's four ISPs have carved up the country among themselves to such an extent that most people have only one ISP to "choose" from.

To that, Gigi Sohn, the co-founder of Public Knowledge, named the early mistake from which she'd learned: "Competition is not a given." Now, 20% of the US population still have no broadband access. Notably, this discussion was taking place days before current FCC chair Ajit Pai announced he would end the network neutrality rules adopted in 2015 under the Obama administration.

Everyone had a pet mistake.

Tim Wu, regarding decisions that made sense for small companies but are damaging now they're huge: "Maybe some of these laws should have sunsetted after ten years."

A computer science professor bemoaned the difficulty of auditing protocols for fairness now that commercial terms and conditions apply.

Another wondered if our mental image of how competition works is wrong. "Why do we think that small companies will take over and stay small?"

Yochai Benkler argued that the old way of reining in market concentration, by watching behavior, no longer works; we understood scale effects but missed network effects.

Right now, market concentration looks like Google-Apple-Microsoft-Amazon-Facebook. Rapid change has meant that the past Big Tech we feared would break the internet has typically been overrun. Yet we can't count on that. In 1997, market concentration meant AOL and, especially, desktop giant Microsoft. Brett Frischmann paused to reminisce that in 1997 AOL's then-CEO Steve Case argued that Americans didn't want broadband. By 2007 the incoming giant was Google. Yet, "Farmville was once an enormous policy concern," Christopher Yoo reminded; so was Second Life. By 2007, Microsoft looked overrun by Google, Apple, and open source; today it remains the third largest tech company. The garage kids can only shove incumbents aside if the landscape lets them in.

"Be Facebook or be eaten by Facebook", said Julia Powles, reflecting today's venture capital reality.

Wu again: "A lot of mergers have been allowed that shouldn't have been." On his list, rather than AOL and Time-Warner, cause of much 1999 panic, was Facebook and Instagram, which the Office of Fair Trading approved because Facebook didn't have cameras and Instagram didn't have advertising. What went unrecognized: they were competitors in what Wu has dubbed the attention economy.

Both Bruce Schneier, who considered a future in which everything is a computer, and Werbach, who found early internet-familiar rhetoric hyping the blockchain, saw more oncoming gloom. Werbach noted two vectors: remediable catastrophic failures, and creeping recentralization. His examples of the DAO hack and the Parity wallet bug led him to suggest the concept of governance by design. "This time," Werbach said, adding his own entry onto the what-went-wrong list, "don't ignore the potential contributions of the state."

Karen Levy's "overlooked threat" of AI and automation is a far more intimate and intrusive version of Shoshana Zuboff's "surveillance capitalism"; it is already changing the nature of work in trucking. This resonated with Helen Nissenbaum's "standing reserves": an ecologist sees a forest; a logging company sees lumber-in-waiting. Zero hours contracts are an obvious human example of this, but look how much time we spend waiting for computers to load so we can do something.

Levy reminded that surveillance has a different meaning for vulnerable groups, linking back to Deirdre Mulligan's comparison of algorithmic decision-making in healthcare and the judiciary. The first is operated cautiously with careful review by trained professionals who have closely studied its limits; the second is off-the-shelf software applied willy-nilly by untrained people who change its use and lack understanding of its design or problems. "We need to figure out how to ensure that these systems are adopted in ways that address the fact that...there are policy choices all the way down," Mulligan said. Levy, later: "One reason we accept algorithms [in the judiciary] is that we're not the ones they're doing it to."

Yet despite all this gloom - cognitive dissonance alert - everyone still believes that the internet has been and will be positively transformative. Julia Powles noted, "The tornado is where we are. The dandelion is what we're fighting for - frail, beautiful...but the deck stacked against it." In closing, Lauren Scholz favored a return to basic ethical principles following a century of "fallen gods" including really big companies, the wisdom of crowds, and visionaries.

Sohn, too, remains optimistic. "I'm still very bullish on the internet," she said. "It enables everything important in our lives. That's why I've been fighting for 30 years to get people access to communications networks."

Illustrations: After the Digital Tornado's closing panel (left to right): Kevin Werbach, Karen Levy, Julia Powles, Lauren Scholz; tornado (Justin1569 at Wikipedia)


November 17, 2017


On Tuesday evening, virtual reality pioneer and musician Jaron Lanier, in London to promote his latest book, Dawn of the New Everything, suggested the internet took a wrong turn in the 1990s by rejecting the idea of combating spam by imposing a tiny - "homeopathic" - charge to send email. Think where we'd be now, he said. The mindset of paying for things would have been established early, and instead of today's "behavior modification empires" we'd have a system where people were paid for the content they produce.

Lanier went on to invoke the ghost of Ted Nelson who began his earliest work on Project Xanadu in 1960, before ARPAnet, the internet, and the web. The web fosters copying. Xanadu instead gave every resource a permanent and unique address, and linking instead of copying meant nothing ever lost its context.

The problem, as Nelson's 2011 autobiography Possiplex and a 1995 Wired article made plain, is that trying to get the thing to work was a heartbreaking journey filled with cycles of despair and hope, one increasingly orthogonal to where the rest of the world was going. While efforts continue, it's still difficult to comprehend, no matter how technically visionary and conceptually advanced it was. The web wins on simplicity.

But the web also won because it was free. Tim Berners-Lee is very clear about the importance he attaches to deciding not to patent the web and charge licensing fees. Lanier, whose personal stories about internetworking go back to the 1980s, surely knows this. When the web arrived, it had competition: Gopher, Archie, WAIS. Each had its limitations in terms of user interface and reach. The web won partly because it unified all their functions and was simpler - but also because it was freer than the others.

Suppose those who wanted minuscule payments for email had won? Lanier believes today's landscape would be very different. Most of today's machine learning systems, from IBM Watson's medical diagnostician to the various quick-and-dirty translation services, rely on mining an extensive existing corpus of human-generated material. In Watson's case, it's medical research, case studies, peer review, and editing; in the case of translation services it's billions of side-by-side human-translated pages that are available on the web (though later improvements have taken a new approach). Lanier is right that the AIs built by crunching found data are parasites on generations of human-created and curated knowledge. By his logic, establishing payment early as a fundamental part of the internet would have ensured that the humans who created all that data would be paid for their contributions when machine learning systems mined it. Clarity would result: instead of the "cruel" trope that AIs are rendering humans unnecessary, it would be obvious that AI progress relied on continued human input. For that we could all be paid rather than being made "wards of the state".

Consider a practical application. Microsoft's LinkedIn is in court opposing HiQ, a company that scrapes LinkedIn's data to offer employers services that LinkedIn might like to offer itself. The case, which was decided in HiQ's favor in August but is appeal-bound, pits user privacy (argued by EPIC) against innovation and competition (argued by EFF). Everyone speaks for the 500 million whose work histories are on LinkedIn, but no one speaks for our individual ownership of our own information.

Let's move to Lanier's alternative universe and say the charge had been applied. Spam dropped out of email early on. We developed the habit of paying for information. Publishers and the entertainment industry would have benefited much sooner, and if companies like Facebook and LinkedIn had started, their business models would have been based on payments for posters and charges for readers (he claims to believe that Facebook will change its business model in this direction in the coming years; it might, but if so I bet it keeps the advertising).

In that world, LinkedIn might be our broker or agent negotiating terms with HiQ on our behalf rather than in its own interests. When the web came along, Berners-Lee might have thought pay-to-click logical, and today internet search might involve deciding which paid technology to use. If, that is, people found it economic to put the information up in the first place. The key problem with Lanier's alternative universe: there were no micropayments. A friend suggests that China might be able to run this experiment now: Golden Shield has full control, and everyone uses WeChat and AliPay.

I don't believe technology has a manifest destiny, but I do believe humans love free and convenient, and that overwhelms theory. The globally spreading all-you-can-eat internet rapidly killed the existing paid information services after commercial access was allowed in 1994. I'd guess that the more likely outcome of charging for email would have been the rise of free alternatives to email - instant messaging, for example, which happened in our world to avoid spam. The motivation to merge spam with viruses and crack into people's accounts to send spam would have arisen earlier than it did, so security would have been an earlier disaster. As the fundamental wrong turn, I'd instead pick centralization.

Lanier noted the culminating irony: "The left built this authoritarian network. It needs to be undone."

The internet is still young. It might be possible, if we can agree on a path.

Illustrations: Jaron Lanier in conversation with Luke Robert Mason (Eva Pascoe).


October 27, 2017

The opposite of privilege

A couple of weeks ago, Cybersalon held an event to discuss modern trends in workplace surveillance. In the middle, I found myself reminding the audience, many of whom were too young to remember, that 20 or so years ago mobile phones were known locally as "poserphones" - because they had been expensive enough, recently enough, that they were still associated with rich businessmen who wanted to show off their importance.

The same poseurship today looks like this: "I'm so grand I don't carry a mobile phone." In a sort of rerun of the 1997 anti-internet backlash kicked off by Clifford Stoll's Silicon Snake-Oil, we're now seeing numerous articles and postings about how the techies of Silicon Valley are disconnecting themselves and removing technology from the local classrooms. Granted, this has been building for a while: in 2014 the New York Times reported that Steve Jobs didn't let his children use iPhones or iPads.

It's an extraordinary inversion in a very short time. However, the notable point is that the people profiled in these stories are people with the agency to make this decision and not suffer for it. In April, Congressman Jim Sensenbrenner (R-WI) claimed airily that "Nobody has to use the internet", a statement easily disputed. A similar argument can be made about related technology such as phones and tablets: it's perfectly reasonable to say you need downtime or that you want your kids to have a solid classical education with plenty of practice forming and developing long-form thinking. But the option to opt out depends on a lot of circumstances outside of most people's control. You can't, for example, disconnect your phone if your zero-hours contract specifies you will be dumped if you don't answer when they call, nor if you're in high-urgency occupations like law, medicine, or journalism; nor can you do it if you're the primary carer for anyone else. For a homeless person, their mobile phone may be their only hope of finding a job or a place to live.

Battery concerns being what they are, I've long had the habit of turning off wifi and GPS unless I'm actively using them. As Transport for London increasingly seeks to use passenger data to understand passenger flow through the network and within stations, people who do not carry data-generating devices are arguably anti-social because they are refusing to contribute to improving the quality of the service. This argument has been made in the past with reference to NHS data, suggesting that patients who declined to share their data didn't deserve care.

Today's employers, as Cybersalon highlighted and as speakers have previously pointed out at the annual Health Privacy Summit, may learn an unprecedented amount of intimate information about their employees via efforts like wellness programs and the data those capture from devices like Fitbits and smart watches. At Cornell, Karen Levy has written extensively about the because-safety black box monitoring coming to what historically has been the most independent of occupations, truck driving. At Middlesex Phoebe Moore is studying the impact of workplace monitoring on white collar workers. How do you opt out of monitoring if doing so means "opting out" of employment?

The latest in facial recognition can identify people in the backgrounds of photos, making it vastly harder to know which of the sidewalk-blockers around you snapping pictures of each other on their phones may capture and upload you as well, complete with time and location. Your voice may be captured by the waiting speech-driven device in your friend's car or home; ever tried asking someone to turn off Alexa-Siri-OKGoogle while you're there?

For these reasons, publicly highlighting your choice to opt out reads as, "Look how privileged I am", or some much more compact and much more offensive term. This will be even more true soon, when opting out will require vastly more effort than it does now and there will be vastly fewer opportunities to do it. Even today, someone walking around London has no choice about how many CCTV cameras capture them in motion. You can ride anonymously on the tube and buses as long as you are careful to buy, and thereafter always top up, your Oyster smart card with cash.

It's clear "normal" people are beginning to know this. This week, in a supermarket well outside of London, I was mocking a friend for paying for some groceries by tapping a credit card. "Cash," I said. "What's wrong with nice, anonymous cash?" "It took 20 seconds!" my friend said. The aging cashier regarded us benignly. "They can still track you by the mobile phones you're carrying," she said helpfully. Touché.

Illustrations: George Orwell's house at 22 Portobello road; Cybersalon (Phoebe Moore, center).


September 29, 2017


If it keeps growing, every company eventually reaches a moment where this message arrives: it's time to grow up. For Microsoft, IBM, and Intel it was antitrust suits. Google's had the EU's €2.4 billion fine. For Facebook and Twitter, it may be abuse and fake news.

This week, it was Uber's turn, when Transport for London declined to renew Uber's license to operate. Uber's response was to apologize and promise to "do more" while urging customers to sign its petition. At this writing, 824,000 have complied.

I can't see the company as a victim here. The "sharing economy" rhetoric of evil protectionist taxi regulators has taken knocks from the messy reality of the company's behavior and the Grade A jerkishness of its (now former) founding CEO, the controversial Travis Kalanick. The tone-deaf "Rides of Glory" blog post. The safety-related incidents that TfL complains the company failed to report because: PR. Finally, the clashes with myriad city regulators the company would prefer to bypass: currently, it's threatening to pull out of Quebec. Previously, both Uber and Lyft quit Austin, Texas for a year rather than comply with a law requiring driver fingerprinting. In a second London case, Uber is arguing that its drivers are not employees; SumOfUs begs to differ.

People who use Uber love Uber, and many speak highly of drivers they use regularly. In one part of their brains, Uber-loving friends advocate for social justice, privacy, and fair wages and working conditions; in the other, Uber is so cool, cheap, convenient, and clean, and the app tracks the cab in real time...and city transport is old, grubby, and slow. But we're not at the beginning of this internet thing any more, and we know a lot about what happens when a cute, cuddly company people love grows into a winner-takes-all behemoth the size of a nation-state.

A consideration beyond TfL's pay grade is that transport doesn't really scale, as Hubert Horan explains in his detailed analysis of the company's business model. As Horan explains, Uber can't achieve new levels of cost savings and efficiency (as Amazon and eBay did) because neither the fixed costs of providing the service nor network externalities create them. More simply, predatory competition - that is, venture capitalists providing the large sums that allow Uber to undercut and put out of business existing cab firms (and potentially public transport) - is not sustainable until all other options have been killed and Uber can raise its prices.

Earlier this year, at a conference on autonomous vehicles, TfL's representative explained the problems it faces. London will grow from 8.6 million to 10 million people by 2025. On the tube, central zone trains are already running at near the safe frequency limit and space prohibits both wider and longer trains. Congestion will increase: trucks, cars, cabs, buses, bicycles, and pedestrians. All these interests - plus the thousands of necessary staff - need to be balanced, something self-interested companies by definition do not do. In Silicon Valley, where public transport is relatively weak, it may not be clearly understood how deeply a city like London depends on it.

At Wired UK, Matt Burgess says Uber will be back. When Uber and Lyft exited Austin, Texas rather than submit to a new law requiring them to fingerprint drivers, within a year state legislators had intervened. But that was several scandals ago, which is why I think that this once, SorryWatch has it wrong: Uber's apology may be adequately drafted (as they suggest, minus the first paragraph), but the company's behaviour has been egregious enough to require clear evidence of active change. Uber needs a plan, not a PR campaign - and urging its customers to lobby for it does not suggest it's understood that.

At London Reconnections, John Bull explains the ins and outs of London's taxi regulation in fascinating detail. Bull argues that in TfL Uber has met a tech-savvy and forward-thinking regulator that is its own boss and too big to bully. Given that almost the only cost the company can squeeze is its drivers' compensation, what protections need to be in place? How does increasing hail-by-app taxi use fit into overall traffic congestion?

Uber is one of the very first of the new hybrid breed of cyber-physical companies. Bypassing regulators - asking forgiveness rather than permission - may have flown when the consequences were purely economic, but it can't be tolerated in the new era of convergence, in which the risks are physical. My iPhone can't stab me in my bed, as Bill Smart has memorably observed, but that's not true of these hybrids.

TfL will presumably focus on rectifying the four areas in its announcement. Beyond that, though, I'd like to see Uber pressed for some additional concessions. In particular, I think the company - and others like it - should be required to share their aggregate ride pattern data (not individual user accounts) with TfL to help the authority make better decisions for the benefit of all Londoners. As Tom Slee, the author of What's Yours Is Mine: Against the Sharing Economy, has put it, "Uber is not 'the future', it's 'a future'".

Illustrations: London skyline (by Mewiki); London black cab (Jimmy Barrett); Travis Kalanick (Dan Taylor).


November 30, 2012

Robot wars

Who'd want to be a robot right now, branded a killer before you've even really been born? This week, Huw Price, a philosophy professor, Martin Rees, an emeritus professor of cosmology and astrophysics, and Jaan Tallinn, co-founder of Skype and a serial speaker at the Singularity Summit, announced the founding of the Cambridge Project for Existential Risk. I'm glad they're thinking about this stuff.

Their intention is to build a Centre for the Study of Existential Risk. There are many threats listed in the short introductory paragraph explaining the project - biotechnology, artificial life, nanotechnology, climate change - but the one everyone seems to be focusing on is: yep, you got it, KILLER ROBOTS - that is, artificial general intelligences so much smarter than we are that they may not only put us out of work but reshape the world for their own purposes, not caring what happens to us. Asimov would weep: his whole purpose in creating his Three Laws of Robotics was to provide a device that would allow him to tell some interesting speculative, what-if stories and get away from the then standard fictional assumption that robots were eeeevil.

The list of advisors to the Cambridge project has some interesting names: Hermann Hauser, now in charge of a venture capital fund, whose long history in the computer industry includes founding Acorn and an attempt to create the first mobile-connected tablet (it was the size of a 1990s phone book, and you had to write each letter in an individual box to get it to recognize handwriting - just way too far ahead of its time); and Nick Bostrom of the Future of Humanity Institute at Oxford. The other names are less familiar to me, but it looks like a really good mix of talents, everything from genetics to the public understanding of risk.

The killer robots thing goes quite a way back. A friend of mine grew up in the time before television when kids would pay a nickel for the Saturday show at a movie theatre, which would, besides the feature, include a cartoon or two and the next chapter of a serial. We indulge his nostalgia by buying him DVDs of old serials such as The Phantom Creeps, which features an eight-foot, menacing robot that scares the heck out of people by doing little more than wave his arms at them.

Actually, the really eeeevil guy in that movie is the mad scientist, Dr Zorka, who not only creates the robot but also a machine that makes him invisible and another that induces mass suspended animation. The robot is really just drawn that way. But, like CSER, what grabs your attention is the robot.

I have a theory about this, developed over the last couple of months while working on a paper on complex systems, automation, and other computing trends: it's all to do with biology. We - and other animals - are pretty fundamentally wired to see anything that moves autonomously as more intelligent than anything that doesn't. In survival terms, that makes sense: the most poisonous plant can't attack you if you're standing out of reach of its branches. Something that can move autonomously can kill you - yet it is also more cuddly. Consider the Roomba versus a modern dishwasher. Counterintuitively, the Roomba is not the smarter of the two.

And so it was that on Wednesday, when Voice of Russia assembled a bunch of us for a half-hour radio discussion, the focus was on KILLER ROBOTS, not synthetic biology (which I think is a much more immediately dangerous field) or climate change (in which the scariest new development is the very sober, grown-up, businesslike this-is-getting-expensive report from the insurer Munich Re). The conversation was genuinely interesting, roaming from the mysteries of consciousness to the problems of automated trading and the 2010 flash crash. Pretty much everyone agreed that there really isn't sufficient evidence to predict a date at which machines might be intelligent enough to pose an existential risk to humans. You might be worried about self-driving cars, but they're likely to be safer than drunk humans.

There is a real threat from killer machines; it's just that it's not super-human intelligence or consciousness that's the threat here. Last week, Human Rights Watch and the International Human Rights Clinic published Losing Humanity: the Case Against Killer Robots, arguing that governments should act pre-emptively to ban the development of fully autonomous weapons. There is no way, that paper argues, for autonomous weapons (which the military wants so fewer of *our* guys have to risk getting killed) to distinguish reliably between combatants and civilians.

There were some good papers on this at this year's We Robot conference from Ian Kerr and Kate Szilagyi (PDF) and Markus Wegner.

From various discussions, it's clear that you don't need to wait for *fully* autonomous weapons to reach the danger point. In today's partially automated systems, the operator may be under pressure to make a decision in seconds, and "automation bias" means the human will most likely accept whatever the machine suggests - the military equivalent of clicking OK. The human in the loop isn't as much of a protection as we might hope against the humans designing these things. Dr Zorka, indeed.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

May 25, 2012

Camera obscura

There was a smoke machine running in the corner when I arrived at today's Digital Shoreditch, an afternoon considering digital identity, part of a much larger, multi-week festival. Briefly, I wondered if the organizers were making a point about privacy. Apparently not; they shut it off when the talks started.

The range of speakers served as a useful reminder that the debates we have in what I think of as the Computers, Freedom, and Privacy sector are rather narrowly framed around what we can practically build into software and services to protect privacy (and why so few people seem to care). We wrangle over what people post on Facebook (and what they shouldn't), or how much Google (or the NHS) knows about us and shares with other organizations.

But we don't get into matters of what kinds of lies we tell to protect our public image. Lindsey Clay, the managing director of Thinkbox, the marketing body for UK commercial TV, who kicked off an array of people talking about brands and marketing (though some of them in good causes), did a good, if unconscious, job of showing what privacy activists are up against: the entire mainstream of business is going the other way.

People lie in focus groups, she explained, sounding like Dr Gregory House, and showed a slide comparing actual TV viewer data from Sky to what those people said about what they watched. They claim to fast-forward; really, they watch ads and think about them. They claim to time-shift almost everything; really, they watch live. They claim to watch very little TV; really, they need to sign up for the SPOGO program Richard Pearey explained a little while later. (A tsk-tsk to Pearey: Tim Berners-Lee is a fine and eminent scientist, but he did not invent the Internet. He invented the *Web*.) For me, Clay is confusing "identity" with "image". My image claims to read widely instead of watching TV shows; my identity buys DVDs from Amazon.

Of course I find Clay's view of the Net dismaying - "TV provides the content for us to broadcast on our public identity channels," she said. This is very much the view of the world the Open Rights Group campaigns to up-end: consumers are creators, too, and surely we (consumers) have a lot more to talk about than just what was on TV last night.

Tony Fish, author of My Digital Footprint, following up shortly afterwards, presented a much more cogent view and some sound practical advice. Instead of trying to unravel the enduring conundrum of trust, identity, and privacy - which he claims dates back to before Aristotle - start by working out your own personal attitude to how you'd like your data treated.

I had a plan to talk about something similar, but Fish summed up the problem of digital identity rather nicely. No one model of privacy fits all people or all cases. The models and expectations we have take various forms - which he displayed as a nice set of Venn diagrams. Underlying that is the real model, in which we have no rights. Today, privacy is a setting and trust is the challenger. The gap between our expectations and reality is the creepiness factor.

Combine that with reading a book of William Gibson's non-fiction, and you get the reflection that the future we're living in is not at all like the one we - for some value of "we" that begins with those guys who did the actual building instead of just writing commentary about it - thought we might be building 20 years ago. At the time, we imagined that the future of digital identity would look something like mathematics, where the widespread use of crypto meant that authentication would proceed by a series of discrete transactions tailored to each role we wanted to play. A library subscriber would disclose different data from a driver stopped by a policeman, who would show a different set to the border guard checking passports. We - or more precisely, Phil Zimmermann and Carl Ellison - imagined a Web of trust, a peer-to-peer world in which we could all authenticate the people we know to each other.

Instead, partly because all the privacy stuff is so hard to use, even though it didn't have to be, we have a world where at any one time there are a handful of gatekeepers who are fighting for control of consumers and their computers in whatever the current paradigm is. In 1992, it was the desktop: Microsoft, Lotus, and Borland. In 1997, it was portals: AOL, Yahoo!, and Microsoft. In 2002, it was search: Google, Microsoft, and, well, probably still Yahoo!. Today, it's social media and the cloud: Google, Apple, and Facebook. In 2017, it will be - I don't know, something in the mobile world, presumably.

Around the time I began to sound like an anti-Facebook obsessive, an audience questioner made the smartest comment of the day: "In ten years Facebook may not exist." That's true. But most likely someone will have the data, probably the third-party brokers behind the scenes. In the fantasy future of 1992, we were our own brokers. If William Heath succeeds with personal data stores, maybe we still can be.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 24, 2012

A really fancy hammer with a gun

Is a robot more like a hammer, a monkey, or the Harley-Davidson on which he rode into town? Or try this one: what if the police program your really cute, funny robot butler (Tony Danza? Scarlett Johansson?) to ask you a question whose answer will incriminate you (and which it then relays). Is that a violation of the Fourth Amendment (protection against search and seizure) or the Fifth Amendment (you cannot be required to incriminate yourself)? Is it more like flipping a drug dealer or tampering with property? Forget science fiction, philosophy, and your inner biological supremacist; this is the sort of legal question that will be defined in the coming decade.

Making a start on this was the goal of last weekend's We Robot conference at the University of Miami Law School, organized by respected cyberlaw thinker Michael Froomkin. Robots are set to be a transformative technology, he argued to open proceedings, and cyberlaw began too late. Perhaps robotlaw is still a green enough field that we can get it right from the beginning. Engineers! Lawyers! Cross the streams!

What's the difference between a robot and a disembodied artificial intelligence? William Smart (Washington University, St Louis) summed it up nicely: "My iPad can't stab me in my bed." No: and as intimate as you may become with your iPad, you're unlikely to feel the same anthropomorphic betrayal you likely would if the knife is being brandished by that robot butler above, which runs your life while behaving impeccably like it's your best friend. Smart sounds unsusceptible. "They're always going to be tools," he said. "Even if they are sophisticated and autonomous, they are always going to be toasters. I'm wary of thinking in any terms other than a really, really fancy hammer."

Traditionally, we think of machines as predictable because they respond the same way to the same input, time after time. But Smart, working with Neil Richards (Washington University, St Louis), points out that sensors are sensitive to distinctions analog humans can't make. A half-degree difference in temperature or a tiny change in lighting is a different condition to a robot. To us, their behaviour will just look capricious, helping to foster that anthropomorphic response, wrongly attributing to them the moral agency necessary for guilt under the law: the "Android Fallacy".
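Smart and Richards's point can be reduced to a toy sketch: a controller whose behaviour flips on a half-degree difference no human observer can perceive. The threshold and actions here are invented for illustration, not drawn from any real robot.

```python
# Toy controller illustrating "capricious" robot behaviour:
# two inputs indistinguishable to a human produce different actions.
def robot_action(temperature_c: float) -> str:
    """Return the action for a given sensor reading.

    The 21.5 C threshold is illustrative only; the point is that
    the sensor resolves differences humans cannot.
    """
    return "open vent" if temperature_c >= 21.5 else "hold"

print(robot_action(21.4))  # hold
print(robot_action(21.6))  # open vent
```

To a bystander, the two readings are "the same room", so the robot appears to have changed its mind - the seed of the Android Fallacy.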

Smart and I may be outliers. The recent Big Bang Theory episode in which the can't-talk-to-women Rajesh, entranced with Siri, dates his iPhone is hilarious because in Raj's confusion we recognize our own ability to have "relationships" with almost anything by projecting human capacities such as cognition, intent, and emotions. You could call it a design flaw (if humans had a designer), and a powerful one: people send real wedding presents to TV characters, name Liquid Robotics' Wave Gliders, and characterize sending a six-legged landmine-defusing robot that has lost a leg or two back to work as "cruel" (Kate Darling, MIT Media Lab).

What if our rampant affection for these really fancy hammers leads us to want to give them rights? Darling asked. Or, asked Sinziana Gutiu (University of Ottawa), will sex robots like Roxxxy teach us wrong expectations of humans? (When the discussion briefly compared sex robots to pets, a Twitterer quipped, "If robots are pets is sex with them bestiality?")

Few are likely to fall in love with the avatars in the automated immigration kiosks proposed at the University of Arizona (Kristen Thomasen, University of Ottawa), with two screens, one with a robointerrogator and the other flashing images and measuring responses. Automated law enforcement, already with us in nascent form, raises a different set of issues (Lisa Shay). Historically, enforcement has never been perfect; laws only have to be "good enough" to achieve their objective, whether that's slowing traffic or preventing murder. These systems pose the same problem as electronic voting: how do we audit their decisions? In military applications, disclosure may tip off the enemy, as Woodrow Hartzog (Samford University) noted. Yet here - and especially in medicine, where liability will be a huge issue - our traditional legal structures decide whom to punish by retracing the reasoning that led to the eventual decision. But even today's systems are already too complex.

When Hartzog asks if anyone really knows how Google or a smartphone tracks us, it reminds me of a recent conversation with Ross Anderson, the Cambridge University security engineer. In 50 years, he said, we have gone from a world whose machines could all be understood by a bright ten-year-old with access to a good library to a world with far greater access to information but full of machines whose inner workings are beyond a single person's understanding. And so: what does due process look like when only seven people understand algorithms that have consequences for the fates of millions of people? Bad enough to have the equivalent of a portable airport scanner looking for guns in New York City; what about house arrest because your butler caught you admiring Timothy Olyphant's gun on Justified?

"We got privacy wrong the last 15 years," Froomkin exclaimed, putting that together. "Without a strong 'home as a fortress right' we risk a privacy future with an interrogator-avatar-kiosk from hell in every home."

The problem with robots isn't robots. The problem is us. As usual, Pogo had it right.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 6, 2012

I spy

"Men seldom make passes | At girls who wear glasses," Dorothy Parker incorrectly observed in 1937. (How would she know? She didn't wear any.) You have to wonder what she would have made of Google Goggles which, despite the marketing-friendly alliterative name, are neither a product (yet) nor a new idea.

I first experienced the world according to a heads-up display in 1997 during a three-day conference (TXT) on wearable computing at MIT ($). The eyes-on demonstration was a game of pool with the headset augmenting my visual field with overlays showing cueing angles. (Could be the next level of Olympic testing: checking athletes for contraband contact lenses and earpieces in those sports where coaching is not allowed.)

At that conference, a lot of ideas were discussed and demonstrated: temperature-controlling T-shirts, garments that could send back details of a fallen soldier's condition, and so on. Much in evidence were folks like Thad Starner, who scanned my business card and handed it back to me and whose friends commented on the way he'd shift his eyes to his email mid-conversation, and Steve Mann, who turned himself into a cyborg experiment as long ago as the 1980s. Checking their respective Web pages, I see that Mann hasn't updated the evolution of wearables graphic since the late 1990s, by which time the headset looked like an ordinary pair of sunglasses; in 2002, when airport security forced him to divest his gear, he had trouble adjusting to life without it. Starner is on leave to work at...Project Glass, the home of Google Goggles.

The problem when a technological dream spans decades is that between conception and prototype things change. In 1997, that conference seemed to think wearable computing - keyboards embroidered in conductive thread, garments made of cloth woven from copper-covered strands, souped-up eyeglasses, communications-enabled watches, and shoes providing power from the energy generated in walking - surely was a decade or less away.

The assumptions were not particularly contentious. People wear wrist watches and jewelry, right? So they'll wear things with the same fashion consciousness, but functional. Like, it measures and displays your heart rhythms (a woman danced wearing a light-flashing pendant that sped up with her heart rate), or your moods (high-tech mood rings), or acts as the controller for your personal area network.

Today, a lot of people don't *wear* wrist watches any more.

For the wearables guys, it's good progress. The functionality that required 12 pounds of machinery draped about your person - I see from my pieces linked above and my contemporaneous notes that the rig I tried felt like wearing a very heavy, inflexible sandwich board - is an iPhone or Android. Even my old Palm Centro comes close. As Jack Schofield writes in the Guardian, the headset is really all that's left that we don't have. And Google has a lot of competition.

What interests me is let's say these things do take off in a big way. What then? Where will the information come from to display on those headsets? Who will be the gatekeepers? If we - some of us - want to see every building decorated with outsized female nudes, will we have to opt in for porn?

My speculation here is surely not going to be futuristic enough, because like most people I'm locked into current trends. But let's say that glasses bolt onto the mobile/Internet ecosystems we have in place. It is easy to imagine that, if augmented reality glasses do take off, they will be an important gateway to the next generation of information services. Because if all the glasses are is a different way of viewing your mobile phone, then they're essentially today's earpieces - surely not sufficient motivation for people with good vision to wear glasses. So, will Apple glasses require an iTunes account and an iOS device to gain access to a choice of overlays to turn on and off that you receive from the iTunes store in real time? Similarly, Google/Android/Android marketplace. And Microsoft/Windows Mobile/Bing or something. And whoever.

So my questions are things like: will the hardware and software be interoperable? Will the dedicated augmented reality consumer need to have several pairs? Will it be like, "Today I'm going mountain climbing. I've subscribed to the Ordnance Survey premium service and they have their own proprietary glasses, so I'll need those. And then I need the Google set with the GPS enhancement to get me there in the car and find a decent restaurant afterwards." And then your kids are like, "No, the restaurants are crap on Google. Take the Facebook pair, so we can ask our friends." (Well, not Facebook, because the kids will be saying, "Facebook is for *old* people." Some cool, new replacement that adds gaming.)

What's that you say? These things are going to collapse in price so everyone can afford 12 pairs? Not sure. Prescription glasses just go on getting more expensive. I blame the involvement of fashion designers branding frames, but the fact is that people are fussy about what they wear on their faces.

In short, will augmented reality - overlays on the real world - be a new commons or a series of proprietary, necessarily limited, world views?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 25, 2011

Paul Revere's printing press

There is nothing more frustrating than watching smart, experienced people reinvent known principles. Yesterday's Westminster Forum on cybersecurity was one such occasion. I don't blame them, or not exactly: it's just maddening that we have made so little progress, while the threats keep escalating. And it is from gatherings like this one that government policy is made.

Rephrasing Bill Clinton's campaign slogan, "It's the people, stupid," said Philip Virgo, chairman of the security panel of the IT Livery Company, to kick off the day, a sentiment echoed repeatedly by nearly every other speaker. Yes, it's the people - who trust when they shouldn't, who attach personal devices to corporate networks, who disclose passwords when they shouldn't, who are targeted by today's Facebook-friending social engineers. So how many experts on people were on the program? None. Psychologists? No. Nor any usability experts or people whose jobs revolve around communication, either. (Or women, but I'm prepared to regard that as a separate issue.)

Smart, experienced guys, sure, who did a great job of outlining problems and a few possible solutions. Somewhere toward the end of the proceedings, someone allowed in passing that yes, it's not a good idea to require people to use passwords that are too complex to remember easily. This is the state of their art? It's 12 years since Angela Sasse and Anne Adams covered this territory in Users Are Not the Enemy. Sasse has gone on to help found the field of security economics, which seeks to quantify the cost of poorly designed security - not just in data breaches and DoS attacks but in the lost productivity of frustrated, overburdened users. Sasse argues that the problem isn't so much the people as user-hostile systems and technology.

"As user-friendly as a cornered rat," Virgo says he wrote of security software back in 1983. Anyone who's looked at configuring a firewall lately knows things haven't changed that much. In a world of increasingly mass-market software and devices, security software has remained resolutely elitist: confusing error messages, difficult configuration, obscure technology. How many users know what to do when their browser says a Web site certificate is invalid? Or how to answer anti-virus software that asks whether you want to authorise HIPS/RegMod-007?

"The current approach is not working," said William Beer, director of information security and cybersecurity for PricewaterhouseCoopers. "There is too much focus on technology, and not enough focus from business and government leaders." How about academics and consumers, too?

There is no doubt, though, that the threats are escalating. Twenty years ago, the biggest worry was that a teenaged kid would write a virus that spread fast and furious in the hope of getting on the evening news. Today, an organized criminal underground uses personal information to target a small group of users inside RSA, leveraging that into a threat to major systems worldwide. (Trend Micro CTO Andy Dancer said the attack began in the real world with a single user befriended at their church. I can't find verification, however.)

The big issue, said Martin Smith, CEO of The Security Company, is that "There's no money in getting the culture right." What's to sell if there's no technical fix? Like when your plane is held to ransom by the pilot, or when all it takes to publish 250,000 US diplomatic cables is one alienated, low-ranked person with a DVD burner and a picture of Lady Gaga? There's a parallel here to pharmaceuticals: one reason we have few weapons to combat rampaging drug resistance is that for decades developing new antibiotics was not seen as a profitable path.

Granted, you don't, as Dancer said afterwards, want to frame security as an issue of "fixing the people" (but we already know better than that). Nor is it fair to ban company employees from social media lest some attacker pick it up and use it to create a false sense of trust. Banning the latest new medium, said former GCHQ head John Bassett, is just the instinctive reaction in a disturbance; in 1775 Boston the "problem" was Paul Revere's printing press stirring up trouble.

Nor do I, personally, want to live in a trust-free world. I'm happy to assume the server next to me is compromised, but "Trust no one" is a lousy way to live.

Since perfect security is not possible, Dancer advised, organizations should plan for the worst. Good advice. When did I first hear it? Twenty years ago, and most months since, from Peter Neumann in his RISKS Forum. It is depressing and frustrating that we are still having this conversation as if it were new - and that we will have it all over again over the next decade as smart meters roll out to 26 million British households by 2020, opening up the electrical grid to attacks that are already being predicted and studied.

Neumann - and Dancer - is right. There is no perfect security because it's in no one's interest to create it. Plan for the worst.

As Gene Spafford put it in 1989: "The only truly secure system is one that is powered off, cast in a block of concrete, and sealed in a lead-lined room protected by armed guards - and even then I have my doubts."

For everything else, there's a stolen Mastercard.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 11, 2011

The sentiment of crowds

Context is king.

Say to a human, "I'll meet you at the place near the thing where we went that time," and they'll show up at the right place. That's from the 1987 movie Broadcast News: Aaron (Albert Brooks) says it; cut to Jane (Holly Hunter), awaiting him at a table.

But what if Jane were a computer and what she wanted to know from Aaron's statement was not where to meet but how Aaron felt about it? This is the challenge facing sentiment analysis.

At Wednesday's Sentiment Analysis Symposium, the key question of context came up over and over again as the biggest challenge to the industry of people who claim that they can turn Tweets, blog postings, news stories, and other mass data sources into intelligence.

So context: Jane can parse "the place", "the thing", and "that time" because she has expert knowledge of her past with Aaron. It's an extreme example, but all human writing makes assumptions about the knowledge and understanding of the reader. Humans even use those assumptions to implement privacy in a public setting: Stephen Fry could retweet Aaron's words and still only Jane would find the cafe. If Jane is a large organization seeking to understand what people are saying about it, and Aaron is 6 million people posting on Twitter, it can use sentiment analyzer tools to get a numerical answer. And numbers always inspire confidence...

My first encounter with sentiment analysis was this summer during Young Rewired State, when a team wanted to create a mood map of the UK comparing geolocated tweets to indices of multiple deprivation. This third annual symposium shows that here is a rapidly engorging industry, part PR, part image consultancy, and part artificial intelligence research project.

I was drawn to it out of curiosity, but also because it all sounds slightly sinister. What do sentiment analyzers understand when I say an airline lounge at Heathrow Terminal 4 "brings out my inner Sheldon"? What is at stake is not precise meaning - humans argue over the exact meaning of even the greatest communicators - but extracting good-enough meaning from high-volume data streams written by millions of not-monkeys.

What could possibly go wrong? This was one of the day's most interesting questions, posed by the consultant Meta Brown to representatives of the Red Cross, the polling organization Harris Interactive, and PayPal. Failure to consider the data sources and the industry you're in, said the Red Cross's Banafsheh Ghassemi. Her example was the period just after Hurricane Irene, when analyzing social media sentiment would find it negative. "It took everyday disaster language as negative," she said. In addition, because the Red Cross's constituency is primarily older, social media are less indicative than emails and call center records. For many organizations, she added, social media tend to skew negative.

Earlier this year, Harris Interactive's Carol Haney, who has had to kill projects when they failed to produce sufficiently accurate results for the client, told a conference, "Sentiment analysis is the snake oil of 2011." Now, she said, "I believe it's still true to some extent. The customer has a commercial need for a dial pointing at a number - but that's not really what's being delivered. Over time you can see trends and significant change in sentiment, and when that happens I feel we're returning value to a customer because it's not something they received before and it's directionally accurate and giving information." But very small changes over short time scales are an unreliable basis for making decisions.

"The difficulty in social media analytics is you need a good idea of the questions you're asking to get good results," says Shlomo Argamon, whose research work seems to raise more questions than answers. Look at companies that claim to measure influence. "What is influence? How do you know you're measuring that or to what it correlates in the real world?" he asks. Even the notion that you can classify texts into positive and negative is a "huge simplifying assumption".
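Argamon's "huge simplifying assumption" is easy to make concrete. The sketch below is a toy lexicon-based polarity scorer of the kind many commercial tools build on; the word lists and the example sentence are invented for illustration, not taken from any real product.

```python
# Toy lexicon-based polarity scorer: the simplifying assumption
# that texts divide into positive and negative, reduced to word counts.
POSITIVE = {"good", "great", "love", "safe", "helped"}
NEGATIVE = {"bad", "disaster", "flood", "damage", "urgent"}

def polarity(text: str) -> int:
    """Score text: positive (>0), negative (<0), or neutral (0)."""
    words = text.lower().split()
    return (sum(w in POSITIVE for w in words)
            - sum(w in NEGATIVE for w in words))

# "Everyday disaster language" reads as negative even when the
# message is good news - the Red Cross's Hurricane Irene problem.
print(polarity("flood damage assessed, everyone safe"))  # 1 - 2 = -1
```

A human reads that sentence as reassuring; the counter reads it as negative, which is exactly why context, data source, and the question being asked matter more than the arithmetic.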

Argamon has been working on technology to discern from written text the gender and age - and perhaps other characteristics - of the author, a joint effort with his former PhD student Ken Bloom. When he says this, I immediately want to test him with obscure texts.

Is this stuff more or less creepy than online behavioral advertising? Han-Sheong Lai explained that PayPal uses sentiment analysis to try to glean the exact level of frustration of the company's biggest clients when they threaten to close their accounts. How serious are they? How much effort should the company put into dissuading them? Meanwhile, Verint's job is to analyze those "This call may be recorded" calls. Verint's tools turn speech to text, and create color voiceprint maps showing the emotional high points. Click and hear the anger.

"Technology alone is not the solution," said Philip Resnik, summing up the state of the art. But, "It supports human insight in ways that were not previously possible." His talk made me ask: if humans obfuscate their data - for example, by turning off geolocation - will this industry respond by finding ways to put it all back again so the data will be more useful?

"It will be an arms race," he agrees. "Like spam."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 28, 2011

Crypto: the revenge

I recently had occasion to try out Gnu Privacy Guard, the Free Software Foundation's version of PGP, Phil Zimmermann's legendary Pretty Good Privacy software. It was the first time I'd encrypted an email message since about 1995, and I was both pleasantly surprised and dismayed.

First, the good. Public key cryptography is now implemented exactly the way it should have been all along: once you've installed it and generated a keypair, encrypting a message is ticking a box or picking a menu item inside your email software. Even key management is handled by a comprehensible, well-designed graphical interface. Several generations of hard work have created this and also ensured that the various versions of PGP, OpenPGP, and GPG are interoperable, so you don't have to worry about who's using what. Installation was straightforward and the documentation is good.

Now, the bad. That's where the usability stops. There are so many details you can get wrong to mess the whole thing up that if this stuff were a form of contraception desperate parents would be giving babies away on street corners.

Item: the subject line doesn't get encrypted. There is nothing you can do about this except put a lot of thought into devising a subject line that will compel people to read the message but that simultaneously does not reveal anything of value to anyone monitoring your email. That's a neat trick.

Item: watch out for attachments, which are easily accidentally sent in the clear; you need to encrypt them separately before bundling them into the message.

Item: while there is a nifty GPG plug-in for Thunderbird - Enigmail - Outlook, being commercial software, is less easily supported. GPG's GpgOL module works only with 2003 (SP2 and above) and 2007, and not on 64-bit Windows. The problem is that it's hard enough to get people to change *one* habit, let alone several.

Item: lacking appropriate browser plug-ins, you also have to tell them to stop using Webmail if the service they're used to won't support IMAP or POP3, because they won't be able to send encrypted mail or read what others send them over the Web.
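For reference, the basic workflow the items above complicate is itself quite short. This is a minimal command-line sketch using standard GPG options; the correspondent's address and the filenames are illustrative, and graphical front ends wrap the same operations.

```shell
# Generate a keypair (interactive; follow the prompts)
gpg --gen-key

# Import a correspondent's public key and verify its fingerprint
# out of band before trusting it
gpg --import alice.asc
gpg --fingerprint alice@example.org

# Encrypt and sign a message for that recipient (ASCII-armored
# output suitable for pasting into email)
gpg --armor --sign --encrypt --recipient alice@example.org message.txt

# Attachments must be encrypted separately before attaching
gpg --armor --encrypt --recipient alice@example.org report.pdf
```

Note what the commands do not cover: the subject line, the traffic data, and the keyserver lookup all remain in the clear, which is the point of the items above.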

Let's say you're running a field station in a hostile area. You can likely get users to persevere despite these points by telling them that this is their work system, for use in the field. Most people will put up with some inconvenience if they're being paid to do so and/or it's temporary and/or you scare them sufficiently. But that strategy violates one of the basic principles of crypto-culture, which is that everyone should be encrypting everything so that sensitive traffic doesn't stand out. They are of course completely right, just as they were in 1993, when the big political battles over crypto were being fought.

Item: when you connect to a public keyserver to check or download someone's key, that connection is in the clear, so anyone surveilling you can see who you intend to communicate with.

Item: you're still at risk with regard to traffic data. This is what RIPA and data retention are all about. What's more significant? Being able to read a message that says, "Can you buy milk?" or the information that the sender and receiver of that message correspond 20 times a day? Traffic data reveals the pattern of personal relationships; that's why law enforcement agencies want it. PGP/GPG won't hide that for you; instead, you'll need to set up a proxy or use Tor to mix up your traffic and also protect your Web browsing, instant messaging, and other online activities. As Tor's own people admit, it slows performance, although they're working on it (PDF).
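The point about traffic data can be made with a few lines of code. Even with every message body encrypted, the sender-recipient log alone exposes the relationship; the log entries here are invented:

```python
# Why traffic data matters even when content is encrypted: metadata
# alone (who wrote to whom, and how often) reveals the relationship.
# The log entries are invented for illustration.
from collections import Counter

# (sender, recipient) pairs from a day's mail log; message contents
# are unknown, but the pattern is not.
log = [("alice", "bob")] * 20 + [("alice", "carol")] * 2

links = Counter(log)
print(links.most_common(1))  # [(('alice', 'bob'), 20)]
```

No decryption was needed to learn who matters to whom - which is exactly why law enforcement prizes this data, and why PGP/GPG alone can't protect it.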

All this says we're still a long way from a system that the mass market will use. And that's a damn shame, because we genuinely need secure communications. Like a lot of people in the mid-1990s, I'd have thought that by now encrypted communications would be the norm. And yet not only is SSL, which protects personal details in transit to ecommerce and financial services sites, the only really mass-market use, but it's in trouble. Partly, this is because of the technical issues raised in the linked article - too many certification authorities, too many points of failure - but it's also partly because hardly anyone understands how to check that a certificate is valid or knows what to do when warnings pop up that it's expired or issued for a different name. The underlying problem is that many of the people who like crypto see it as both a cool technology and a cause. For most of us, it's just more fussy software. The big advance since the mid 1990s is that at least now the *developers* will use it.
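The expiry check that confuses most users is, mechanically, simple; the problem is that almost nobody does it. Here is a minimal sketch using Python's standard-library ssl module, with a made-up certificate dictionary in the shape getpeercert() returns - the names and dates are hypothetical:

```python
# A minimal sketch of the expiry check browsers perform. The
# certificate fields below are invented examples in the format
# ssl.SSLSocket.getpeercert() returns.
import ssl
import time

cert = {
    "subject": ((("commonName", "www.example.com"),),),
    "notAfter": "Jun  9 12:00:00 2031 GMT",   # hypothetical expiry
}

def is_expired(cert, now=None):
    """True if the certificate's notAfter date has passed."""
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return (time.time() if now is None else now) > expires

print(is_expired(cert))   # False, until June 2031
```

A real validity check also requires verifying the signature chain and matching the hostname against the subject - the parts where the too-many-authorities problem bites.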

Maybe mobile phones will be the thing that makes crypto work the way it should. See, for example, Dave Birch's current thinking on the future of identity. We've been arguing about how to build an identity infrastructure for 20 years now. Crypto is clearly the mechanism. But we still haven't solved the how.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 30, 2011

Trust exercise

When do we need our identity to be authenticated? Who should provide the service? Whom do we trust? And, to make it sustainable, what is the business model?

These questions have been debated ever since the early 1990s, when the Internet and the technology needed to enable the widespread use of strong cryptography arrived more or less simultaneously. Answering them is a genuinely hard problem (or it wouldn't be taking so long).

A key principle that emerged from the crypto-dominated discussions of the mid-1990s is that authentication mechanisms should be role-based and limited by "need to know"; information would be selectively unlocked and in the user's control. The policeman stopping my car at night needs to check my blood alcohol level and the validity of my driver's license, car registration, and insurance - but does not need to know where I live unless I'm in violation of one of those rules. Cryptography, properly deployed, can be used to protect my information, authenticate the policeman, and then authenticate the violation result that unlocks more data.

Today's stored-value cards - London's Oyster travel card, or Starbucks' payment/wifi cards - when used anonymously do capture some of what the crypto folks had in mind. But the crypto folks also imagined that anonymous digital cash or identification systems could be supported by selling standalone products people installed. This turned out to be wholly wrong: many tried, all failed. Which leads to today, where banks, telcos, and technology companies are all trying to figure out who can win the pool by becoming the gatekeeper - our proxy. We want convenience, security, and privacy, probably in that order; they want security and market acceptance, also probably in that order.

The assumption is we'll need that proxy because large institutions - banks, governments, companies - are still hung up on identity. So although the question should be whom do we - consumers and citizens - trust, the question that ultimately matters is whom do *they* trust? We know they don't trust *us*. So will it be mobile phones, those handy devices in everyone's pockets that are online all the time? Banks? Technology companies? Google has launched Google Wallet, and Facebook has grand aspirations for its single sign-on.

This was exactly the question Barclaycard's Tom Gregory asked at this week's Centre for the Study of Financial Innovation round-table discussion (PDF). It was, of course, a trick, but he got the answer he wanted: out of banks, technology companies, and mobile network operators, most people picked banks. Immediate flashback.

The government representatives who attended Privacy International's 1997 Scrambling for Safety meeting assumed that people trusted banks and that therefore they should be the Trusted Third Parties providing key escrow. Brilliant! It was instantly clear that the people who attended those meetings didn't trust their banks as much as all that.

One key issue is that, as Simon Deane-Johns writes in his blog posting about the same event, "identity" is not a single, static thing; it is dynamic and shifts constantly as we add to the collection of behaviors and data representing it.

As long as we equate "identity" with "a person's name" we're in the same kind of trouble the travel security agencies are when they try to predict who will become a terrorist on a particular flight. Like the browser fingerprint, we are more uniquely identifiable by the collection of our behaviors than we are by our names, as detectives who search for missing persons know. The target changes his name, his jobs, his home, and his wife - but if his obsession is chasing after trout he's still got a fishing license. Even if a link between a Starbucks card and its holder's real-world name is never formed, the more data its use feeds into the system, the more clearly recognizable as an individual he becomes. The exact tag really doesn't matter in terms of understanding his established identity.
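The fingerprint analogy is easy to make concrete: a hash over a sorted bag of traits identifies a profile regardless of the name attached to it, much as browser fingerprinting hashes user-agent, fonts, and plugins. The traits below are invented for illustration:

```python
# A sketch of the point that a collection of behaviors identifies a
# person more reliably than a name. Trait names and values are
# invented for illustration.
import hashlib

def fingerprint(traits: dict) -> str:
    # Order-independent: sort the key/value pairs before hashing.
    canonical = "|".join(f"{k}={v}" for k, v in sorted(traits.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

alias_one = {"name": "J. Smith", "hobby": "trout fishing",
             "coffee": "double tall latte", "route": "bus 42"}
alias_two = dict(alias_one, name="John Jones")   # new name, same habits

# The names differ, but most of the profile - and so most of the
# identifying signal - is unchanged.
shared = set(alias_one.items()) & set(alias_two.items())
print(len(shared))   # 3 of the 4 traits survive the name change
```

Change the tag and the hash changes, but the overlapping behaviors still link the two aliases to one individual - the detective's fishing-license trick, automated.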

What I like about Deane-Johns' idea -

"the solution has to involve the capability to generate a unique and momentary proof of identity by reference to a broad array of data generated by our own activity, on the fly, which is then useless and can be safely discarded"

is two things. First, it has potential as a way to make impersonation and identity fraud much harder. Second is that implicit in it is the possibility of two-way authentication, something we've clearly needed for years. Every large organization still behaves as though its identity is beyond question whereas we - consumers, citizens, employees - need to be thoroughly checked. Any identity infrastructure that is going to be robust in the future must be built on the understanding that with today's technology anyone and anything can be impersonated.
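Deane-Johns' proposal can be sketched, very loosely, with standard-library primitives: derive a single-use proof from a pool of recent activity data plus a fresh nonce, verify it once, and throw it away. This is an illustration of the shape of the idea, not a real protocol, and the activity records are invented:

```python
# A loose sketch of a momentary, single-use proof of identity derived
# from recent activity data. Illustration only; not a real protocol.
import hashlib
import hmac
import os

activity = ["09:02 oyster tap, Angel",
            "09:40 latte, Starbucks #117",
            "12:15 fishing-forum login"]   # invented recent behavior

def momentary_proof(activity, nonce):
    pool = "\n".join(activity).encode()
    return hmac.new(nonce, pool, hashlib.sha256).hexdigest()

nonce = os.urandom(16)            # fresh per challenge, never reused
proof = momentary_proof(activity, nonce)

# The verifier, holding the same activity pool and nonce, recomputes:
assert hmac.compare_digest(proof, momentary_proof(activity, nonce))
# A different nonce (or different history) yields a different proof,
# so an intercepted proof is useless afterwards and can be discarded.
assert proof != momentary_proof(activity, os.urandom(16))
```

Because the proof is bound to a moment and a challenge, replaying a stolen one fails - which is what makes impersonation harder than with a static name-and-password identity.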

As an aside, it was remarkable how many people at this week's meeting were more concerned about having their Gmail accounts hacked than their bank accounts. My reasoning is that the stakes are higher: I'd rather lose my email reputation than my house. Their reasoning is that the banking industry is more responsive to customer problems than technology companies. That truly represents a shift from 1997, when technology companies were smaller and more responsive.

More to come on these discussions...


August 19, 2011

Back to school

Is a university education worth paying for? the Guardian asked this week on the day A-level results came out. This question is doing the rounds. The Atlantic figures the next big US economic crash will be created by defaults on student loans. The Chicago Tribune panics about students' living expenses. The New York Times frets that you need a Master's degree to rise above minimum wage in a paper hat and calculates the return on investment of that decision. CNN Money mulls the debt load of business school.

The economic value of a degree is a good question with many variables, and one I was lucky not to have to answer from 1971 to 1975, when my parents paid Cornell $3,000, rising to $5,000, a year in tuition fees, plus living expenses. What's happened since is staggering (and foreseen). In 2011-2012, the equivalent tuition fee is $41,325. Plus living expenses. A four-year degree now costs more than most people pay for a house. A friend sending his kid to Columbia estimates the cost, all-in, for nine months per year at $60,000 (Manhattan is expensive). Times four. Eight, if his other kid chooses a similar school. And in ten years we may think these numbers are laughable, too: university endowments have fallen in value like everyone else's savings; the recession means both government grants and alumni donations are down; and costs are either fixed or continue to rise.

At Oxford, the tuition fees vary according to what you're studying. A degree comparable to mine starts at £3,375 for EU students and tops out at £12,700 for overseas students. Overseas students are also charged a "college fee" of nearly £6,000. Next year, it seems most universities will be charging home students the government-allowed maximum of £9,000. Even though these numbers look cheap to an American, I understand the sticker shock: as recently as 1998 university tuition was free. My best suggestion to English 13-year-olds is to get your parents to move to Scotland as soon as possible.

These costs, coupled with the recession, led PayPal co-founder Peter Thiel to suggest that the US is in the grip of an about-to-burst education bubble.

Business school was always a numbers proposition: every prospective student has always weighed up the costs of tuition and a two-year absence from their paid jobs against the improved career prospects they hoped to acquire. But those pursuing university degrees were always more of a mixed bag, big enough to include those who wanted to put off becoming adults and those who liked learning and wanted to be surrounded by smart people while they did it.

Is the Net the solution, as some suggest? A Russian at a party once explained her country's intellectual achievements to me: anyone, no matter how poor, could take pride in learning and improving their mind. Why couldn't we do the same? Certainly, the Net is a fantastic resource for the pursuit of learning for its own sake, particularly in the sciences. MIT led the way in putting its course materials online, and even without paying journal subscriptions there are full libraries ready for perusal.

It's a lovely thought, but I suspect it works best for those who are surrounded by or at least come from a culture that respects intellectual pursuits and that kind of self-disciplined application. My parents came from immigrant families and fervently believed in education as a way to a better life. Even though they themselves lacked formal education past high school they read a great deal of high-quality material throughout their lives; their house was full of newspapers, books, and magazines on almost every topic. My parents certainly saw a degree as a kind of economic passport, but that clearly wasn't the only reason they valued education. My mother was so ashamed that she hadn't finished high school that she spent her late 60s getting a GED and completing a college degree. At that age, she certainly wasn't doing a degree for its economic benefits.

The Net is a trickier education venue if you really do value learning solely in economic terms and what you need is the credential. If it's to become a substitute for today's university system, a number of things will have to change. Home higher education in at least some fields will need to go through the same process as home schooling has in order to establish itself as a viable alternative. Employers will need to find ways for people to prove their knowledge and ability. Universities will have to open up to the idea of admitting home-study students for a single, final year (distance learning specialists like the Open University ought to have a leg up here). Prestigious institutions will survive; cheap institutions will survive. At the biggest risk are the middle ones with good-but-not-great reputations and high costs.

Popular culture likes to depict top universities as elite clubs filled with arrogant, entitled snobs. The danger is that this will become true. If it does, as long as those universities continue to fill the ranks of politicians, CEOs, and the rest of the "great and good", that group will become ever more remote from the people they govern and employ. Bad news, all round.


July 22, 2011

Face to face

When, six weeks or so back, Facebook implemented facial recognition without asking anyone much in advance, Tim O'Reilly expressed the opinion that it is impossible to turn back the clock and pretend that facial recognition doesn't exist or can be stopped. We need, he said, to stop trying to control the existence of these technologies and instead concentrate on controlling the uses to which collected data might be put.

Unless we're prepared to ban face recognition technology outright, having it available in consumer-facing services is a good way to get society to face up to the way we live now. Then the real work begins, to ask what new social norms we need to establish for the world as it is, rather than as it used to be.

This reminds me of the argument that we should be teaching creationism in schools in order to teach kids critical thinking: it's not the only, or even best, way to achieve the object. If the goal is public debate about technology and privacy, Facebook isn't a good choice to conduct it.

The problem with facial recognition, unlike a lot of other technologies, is that it's retroactive, like a compromised private cryptography key. Once the key is known you haven't just unlocked the few messages you're interested in but everything ever encrypted with that key. Suddenly deployed accurate facial recognition means the passers-by in holiday photographs, CCTV images, and old TV footage of demonstrations are all much more easily matched to today's tagged, identified social media sources. It's a step change, and it's happening very quickly after a long period of doesn't-work-as-hyped. So what was a low-to-moderate privacy risk five years ago is suddenly much higher risk - and one that can't be withdrawn with any confidence by deleting your account.

There's a second analogy here between what's happening with personal data and what's happening to small businesses with respect to hacking and financial crime. "That's where the money is," the bank robber Willie Sutton explained when asked why he robbed banks. But banks are well defended by large security departments. Much simpler to target weaker links, the small businesses whose money is actually being stolen. These folks do not have security departments and have not yet assimilated Benjamin Woolley's 1990s observation that cyberspace is where your money is. The democratization of financial crime has a more direct personal impact because the targets are closer to home: municipalities, local shops, churches, all more geared to protecting cash registers and collection plates than to securing computers, routers, and point-of-sale systems.

The analogy to personal data is that until relatively recently most discussions of privacy invasion similarly focused on celebrities. Today, most people can be studied as easily as famous, well-documented people if something happens to make them interesting: the democratization of celebrity. And there are real consequences. Canada, for example, is doing much more digging at the border, banning entry based on long-ago misdemeanors. We can warn today's teens that raiding a nearby school may someday limit their freedom to travel; but today's 40-somethings can't make an informed choice retroactively.

Changing this would require the US to decide at a national level to delete such data; we would have to trust them to do it; and other nations would have to agree to do the same. But the motivation is not there. Judith Rauhofer, at the online behavioral advertising workshop she organised a couple of weeks ago, addressed exactly this point when she noted that increasingly the mantra of governments bent on surveillance is, "This data exists. It would be silly not to use it."

The corollary, and the reason O'Reilly is not entirely wrong, is that governments will also say, "This *technology* exists. It would be silly not to use it." We can ban social networks from deploying new technologies, but we will still be stuck with them when it comes to governments and law enforcement. In this, government and business interests align perfectly.

So what, then? Do we stop posting anything online on the basis of the old spy motto "Never volunteer information", thereby ending our social participation? Do we ban the technology (which does nothing to stop the collection of the data)? Do we ban collecting the data (which does nothing to stop the technology)? Do we ban both and hope that all the actors are honest brokers rather than shifty folks trading our data behind our backs? What happens if thieves figure out how to use online photographs to break into systems protected by facial recognition?

One common suggestion is that social norms should change in the direction of greater tolerance. That may happen in some aspects, although Anders Sandberg has an interesting argument that transparency may in fact make people more judgmental. But if the problem of making people perfect were so easily solved we wouldn't have spent thousands of years on it with very little progress.

I don't like the answer "It's here, deal with it." I'm sure we can do better than that. But these are genuinely tough questions. The start, I think, has to be building as much user control into technology design (and its defaults) as we can. That's going to require a lot of education, especially in Silicon Valley.


July 8, 2011

The grey hour

There is a fundamental conundrum that goes like this. Users want free information services on the Web. Advertisers will support those services if users will pay in personal data rather than money. Are privacy advocates spoiling a happy agreement or expressing a widely held concern that just hasn't found expression yet? Is it paternalistic and patronizing to say that the man on the Clapham omnibus doesn't understand the value of what he's giving up? Is it an expression of faith in human nature to say that on the contrary, people on the street are smart, and should be trusted to make informed choices in an area where even the experts aren't sure what the choices mean? Or does allowing advertisers free rein mean the Internet will become a highly distorted, discriminatory, immersive space where the most valuable people get the best offers in everything from health to politics?

None of those questions are straw men. The middle two are the extreme end of the industry point of view as presented at the Online Behavioral Advertising Workshop sponsored by the University of Edinburgh this week. That extreme shouldn't be ignored; Kimon Zorbas from the Internet Advertising Bureau, who voiced those views, also genuinely believes that regulating behavioral advertising is a threat to European industry. Can you prove him wrong? If you're a politician intent on reelection, hear that pitch, and can't document harm, do you dare to risk it?

At the other extreme end are the views of Jeff Chester, from the Center for Digital Democracy, who laid out his view of the future both here and at CFP a few weeks ago. If you read the reports the advertising industry produces for its prospective customers, they're full of neuroscience and eyeball tracking. Eventually, these practices will lead, he argues, to a highly discriminatory society: the most "valuable" people will get the best offers - not just in free tickets to sporting events but the best access to financial and health services. Online advertising contributed to the subprime loan crisis and the obesity crisis, he said. You want harm?

It's hard to assess the reality of Chester's argument. I trust his research into the documents advertising companies produce for their prospective customers. What isn't clear is whether the neuroscience these companies claim actually works. Certainly, one participant here says real neuroscientists heap scorn on the whole idea - and I am old enough to remember the mythology surrounding subliminal advertising.

Accordingly, the discussion here seems to me less of a single spectrum and more like a triangle, with the defenders of online behavioral advertising at one point, Chester and his neuroscience at another, and perhaps Judith Rauhofer, the workshop's organizer, at a third, with a lot of messy confusion in the middle. Upcoming laws, such as the revision of the EU ePrivacy Directive and various other regulatory efforts, will have to create some consensual order out of this triangular chaos.

The fourth episode of Joss Whedon's TV series Dollhouse, "The Gray Hour", had that week's characters enclosed inside a vault. They have an hour - the time it takes for the security system to reboot - to accomplish their mission of theft. Is this online behavioral advertising's grey hour? Their opportunity to get ahead before we realize what's going on?

A persistent issue is definitely technology design.

One of Rauhofer's main points is that the latest mantra is, "This data exists, it would be silly not to take advantage of it." This is her answer to one of those middle points, that we should not be regulating collection but simply the use of data. This view makes sense to me: no one can abuse data that has not been collected. What does a privacy policy mean when the company that is actually collecting the data and compiling profiles is completely hidden?

One help would be teaching computer science students ethics and responsible data practices. The science fiction writer Charlie Stross noted the other day that the average age of entrepreneurs in the US is roughly ten years younger than in the EU. The reason: health insurance. Isn't it possible that starting up at a more mature age leads to a different approach to the social impact of what you're selling?

No one approach will solve this problem within the time we have to solve it. On the technology side, defaults matter. Researcher Chris Soghoian's "software choice architect" is rarely the software developer; more usually it's the legal or marketing department. The three biggest browser manufacturers most funded by advertising not-so-mysteriously have the least privacy-friendly default settings. Advertising is becoming an arms race: first cookies, then Flash cookies, now online behavioral advertising, browser fingerprinting, geolocation, comprehensive profiling.

The law also matters. Peter Hustinx, lecturing last night, believes existing principles are right; they just need stronger enforcement and better application.

Consumer education would help - but for that to be effective we need far greater transparency from all these - largely American - companies.

What harm can you show has happened? Zorbas challenged. Rauhofer's reply: you do not have to prove harm when your house is bugged and constantly wiretapped. "That it's happening is the harm."


June 24, 2011

Bits of the realm

Money is a collective hallucination. Or, more correctly, money is an abstraction that allows us to exchange - for example - writing words for food, heat, or a place to live. Money means the owner of the local grocery store doesn't have to decide how many pounds of flour and Serrano ham 1,000 words are worth, and I don't have to argue copyright terms while paying my mortgage.

But, as I was reading lately in The Coming Collapse of the Dollar and How to Profit From It by James Turk, the owner of GoldMoney, that's all today's currencies are: abstractions. Fiat currencies. The real thing disappeared when the US left the gold standard in 1971. Accordingly none of the currencies I regularly deal with - pounds, dollars, euros - are backed by anything more than their respective governments' "full faith and credit". Is this like Tinker Bell? If I stop believing will they cease to exist? Certainly some people think so, and that's why, as James Surowiecki wrote in The New Yorker in 2004, some people believe that gold is the One True Currency.

"I've never bought gold," my father said in the late 1970s. "When it's low, it's too expensive. When it's high, I wish I'd bought it when it was low." Gold was then working its way up to its 1980 high of $850 an ounce. Until 2004 it did nothing but decline. Yesterday, it closed at $1518.

That's if you view the world from the vantage point of the dollar. If gold is your sun and other currencies revolve around it like imaginary moths, nothing's happened. An ounce just buys a lot more dollars now than it did and someday will be tradable for wagonloads of massively devalued fiat currencies. You don't buy gold; you convert your worthless promises into real stored value.

Personally, I've never seen the point of gold. It has relatively few real-world uses. You can't eat it, wear it, or burn it for heat and light. But it does have the useful quality of being a real thing, and when you could swap dollars for gold held in the US government's vault, dollars, too, were real things.

The difficulty with Bitcoins is that they have neither physical reality nor a long history (even if that history is one of increasing abstraction). Using them requires people to make the jump from the national currency they know straight into bits of code backed by a bunch of mathematics they don't understand.

Alternative currencies have been growing for some time now - probably the first was Ithaca Hours, which are accepted by many downtown merchants in my old home town of Ithaca, NY. What gives Ithaca Hours their value is that you trade them with people you know and can trust to support the local economy. Bitcoins up-end that: you trade them with strangers who can't find out who you are. The big advantage, as Bitcoin Consultancy co-founder Amir Taaki explains on Slashdot, is that their transaction costs are very, very low.

The idea of cryptographic cash is not new, though the peer-to-peer implementation is. Anonymous digital cash was first mooted by David Chaum in the 1980s; his company, DigiCash, began life in 1990 and by 1993 had launched ecash. At the time, it was widely believed that electronic money was an inevitable development. And so it likely is, especially if you believe e-money specialist Dave Birch, who would like nothing more than to see physical cash die a painful death.

But the successful electronic transaction systems are those that build on existing currencies and structures. PayPal, founded in 1998, achieved its success by enabling online use of existing bank accounts and credit cards. M-Pesa and other world-changing mobile phone schemes are enabling safe and instant transactions in the developing world. Meanwhile, DigiCash went bankrupt in 1999, and every other digital cash attempt of the 1990s also failed.

For comparison, ten-year-old GoldMoney's latest report says it's holding $1.9 billion in precious metals and currencies for its customers - still tiny by global standards. The most interesting thing about GoldMoney, however, is not the gold bug aspect but its reinvention of gold as electronic currency: you can pay other GoldMoney customers in electronic shavings of gold (minimum one-tenth of a gram) at a fraction of international banking costs.

"Humans will trade anything," writes Danny O'Brien in his excellent discussion of Bitcoins. Sure: we trade favors, baseball cards, frequent flyer miles, and information. But Birch is not optimistic about Bitcoin's long-term chances, and neither am I, though for different reasons. I believe that people are very conservative about what they will take in trade for the money they've worked hard to earn. Warren Buffett and his mentor, Benjamin Graham, typically offer this advice about investing: don't buy things you don't understand. By that rule, Bitcoins fail. Geeks are falling on them like any exciting, new start-up, but I'll guess that most people would rather bet on horses than take Bitcoins. There's a limit to how abstract we like our money to be.


May 27, 2011

Mixed media

In a fight between technology and the law, who wins? This question has been debated since Net immemorial. Techies often seem to be sure that law can't win against practical action. And often this has been true: the release of PGP defeated the International Traffic in Arms Regulations that banned the export of strong cryptography; Tor lets people all over the world bypass local Net censorship rules; and, in the UK, over the last few weeks Twitter has been causing superinjunctions to collapse.

On the other hand, technology by itself is often not enough. The final defeat of the ITAR had at least as much to do with the expansion of ecommerce and the consequent need for secured connections as it did with PGP. Tor is a fine project, but it is not a mainstream technology. And Twitter is a commercial company that can be compelled to disclose what information it has about its users (though granted, this may be minimal) or close down accounts.

Last week, two events took complementary approaches to this question. The first, Big Tent UK, hosted by Google, Privacy International, and Index on Censorship, featured panels and discussions loosely focused on how law can control technology. The second, OpenTech, focused loosely on how technology can change our understanding of the world, if not up-end the law itself. At the latter event, projects like Lisa Evans' effort to understand government spending relied on government-published data, while others, such as OpenStreetMap and OpenCorporates, seek to create open-source alternatives to existing proprietary services.

There's no question that doing things - or, in my case, egging on people who are doing things - is more fun than purely intellectual debate. I particularly liked the open-source hardware projects presented at OpenTech, some of which are, as presenter Paul Downey said, trying to disrupt a closed market. See, for example, Riversimple's effort to offer an open-source design for a hydrogen-powered car. Downey whipped through perhaps a dozen projects, all based on the notion that if something can be represented by lines on a PowerPoint slide you can send it to a laser cutter.

But here again I suspect the law will interfere at some point. Not only will open-source cars have to obey safety regulations, but all hardware designs will come up against the same intellectual property issues that have been dogging the Net from all directions. We've noted before Simon Bradshaw's work showing that copyright as applied to three-dimensional objects will be even more of a rat's nest than it has been when applied to "simple" things like books, music, and movies.

At Big Tent UK, copyright was given a rest for once in favor of discussions of privacy, the limits of free speech, and revolution. As is so often the case with this type of discussion, it wasn't long before someone - British TV producer Peter Bazalgette - invoked George Orwell. Bizarrely, he aimed "Orwellian" at Privacy International executive director Simon Davies, who a minute before had proposed that the solution to at least some of the world's ongoing privacy woes would be for regulators internationally to collaborate on doing their jobs. Oddly, in an audience full of leading digital rights activists and entrepreneurs, no one admitted to representing the Information Commissioner's Office.

Yet given these policy discussions as his prelude, the MP Jeremy Hunt (Con-South West Surrey), the secretary of state for Culture, Olympics, Media, and Sport, focused instead on technical progress. We need two things for the future, he said: speed and mobility. Here he cited Bazalgette's great-great-grandfather's contribution to building the sewer system as a helpful model for today. Tasked with deciding the size of pipes to specify for London's then-new sewer system, Joseph Bazalgette doubled the size of pipe necessary to serve the area of London with the biggest demand; we still use those same pipes. We should, said Hunt, build bandwidth in the same foresighted way.

The modern-day Bazalgette, instead, wants the right to be forgotten: people, he said, should have the right to delete any information that they voluntarily surrender. Much like Justine Roberts, the founder of Mumsnet, who participated in the free speech panel, he seemed not to understand the consequences of what he was asking for: the right to delete is not easily implemented when people are embedded in a three-dimensional web of information. Roberts, for her part, complained that the "slightly hysterical response" to any suggestion of moderating free speech in the interests of child safety inhibits real discussion.

The Big Tent panels on revolution and conflict would have fit either event; they included Wael Ghonim, who ran a Facebook page that fomented pro-democracy demonstrations in Egypt, and representatives of PAX and Unitar, projects that use the postings of "citizen journalists" and public image streams, respectively, to provide early warnings of developing conflict.

In the end, we need both technology and law, a viewpoint best encapsulated by Index on Censorship chief executive John Kampfner, who said he was worried by claims that the Internet is a force for good. "The Internet is a medium, a tool," he said. "You can choose to use it for moral good or moral ill."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

January 14, 2011

Face time

The history of the Net has featured many absurd moments, but this week was some sort of peak of the art. In the same week I read that a) based on a $450 million round of investment from Goldman Sachs, Facebook is now valued at $50 billion, higher than Boeing's market capitalization, and b) Facebook's founder, Mark Zuckerberg, is so tired of the stress of running the service that he plans to shut it down on March 15. As I seem to recall a CS Lewis character remarking irritably, "Why don't they teach logic in these schools?" If you have a company worth $50 billion and you don't much like running it any more, you sell the damn thing and retire. It's not like Zuckerberg even needs to wait to be Time's Man of the Year.

While it's safe to say that Facebook isn't going anywhere soon, it's less clear what its long-term future might be, and the users who panicked at the thought of the service's disappearance would do well to plan ahead. Because: if there's one thing we know about the history of the Net's social media it's that the party keeps moving. Facebook's half-a-billion-strong user base is, to be sure, bigger than anything else assembled in the history of the Net. But I think the future as seen by Douglas Rushkoff, writing for CNN last week, is more likely: Facebook, he argued, based on its arguably inflated valuation, is at the beginning of its end, as MySpace was when Rupert Murdoch bought it in 2005 for $580 million. (Though this says as much about Murdoch's Net track record as it does about MySpace: Murdoch bought the text-based Delphi at its peak moment, in late 1993.)

Back in 1999, at the height of the dot-com boom, the New Yorker published an article (abstract; full text requires subscription) comparing the then-spiking stock price of AOL with that of the Radio Corporation of America back in the 1920s, when radio was the hot, new democratic medium. RCA was selling radios that gave people unprecedented access to news and entertainment (including stock quotes); AOL was selling online accounts that gave people unprecedented access to news, entertainment, and their friends. The comparison, as the article noted, wasn't perfect, but the comparison chart the article was written around was, as the author put it, "jolly". It still looks jolly now, recreated some months later for this analysis of the comparison.

There is more to every company than just its stock price, and there is more to AOL than its subscriber numbers. But the interesting chart to study - if I had the ability to create such a chart - would be the successive waves of rising, peaking, and falling numbers of subscribers of the various forms of social media. In more or less chronological order: bulletin boards, Usenet, Prodigy, GEnie, Delphi, CompuServe, AOL...and now MySpace, which this week announced extensive job cuts.

At its peak, AOL had 30 million subscribers; at the end of September 2010 it had 4.1 million in the US. As subscriber revenues continue to shrink, the company is changing its emphasis to producing content that will draw in readers from all over the Web - that is, it's increasingly dependent on advertising, like many companies. But the broader point is that at its peak a lot of people couldn't conceive that it would shrink to this extent, because of the basic principle of human congregation: people go where their friends are. When the friends gradually start to migrate to better interfaces, more convenient services, or simply sites their more annoying acquaintances haven't discovered yet, others follow. That doesn't necessarily mean death for the service they're leaving: AOL, like CIX, The WELL, and LiveJournal before it, may well find a stable size at which it remains sufficiently profitable to stay alive, perhaps even comfortably so. But it does mean it stops being the growth story of the day.

As several financial commentators have pointed out, the Goldman investment is good for Goldman no matter what happens to Facebook, and may not be ring-fenced enough to keep Facebook private. My guess is that even if Facebook has reached its peak it will be a long, slow ride down the mountain and between then and now at least the early investors will make a lot of money.

But long-term? Facebook is barely five years old. According to figures leaked by one of the private investors, its price-earnings ratio is 141. The good news is that if you're rich enough to buy shares in it you can probably afford to lose the money.

As far as I'm aware, little research has been done studying the Net's migration patterns. From my own experience, I can say that my friends lists on today's social media include many people I've known on other services (and not necessarily in real life) as the old groups reform in a new setting. Facebook may believe that because the profiles on its service are so complex, including everything from status updates and comments to photographs and games, users will stay locked in. Maybe. But my guess is that the next online party location will look very different. If email is for old people, it won't be long before Facebook is, too.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

January 7, 2011

Scanning the TSA

There are, Bruce Schneier said yesterday at the Electronic Privacy Information Center mini-conference on the TSA (video should be up soon), four reasons why airport security deserves special attention, even though it directly affects a minority of the population. First: planes are a favorite terrorist target. Second: they have unique failure characteristics - that is, the plane crashes and everybody dies. Third: airlines are national symbols. Fourth: planes fly to countries where terrorists are.

There's a fifth he didn't mention but that Georgetown lawyer Pablo Molina and We Won't Fly founder James Babb did: TSAism is spreading. Random bag searches on the DC Metro and the New York subways. The TSA talking about expanding its reach to shopping malls and hotels. And something I found truly offensive, giant LED signs posted along the Maryland highways announcing that if you see anything suspicious you should call the (toll-free) number below. Do I feel safer now? No, and not just because at least one of the incendiary devices sent to Maryland state offices yesterday apparently contained a note complaining about those very signs.

Without the sign, if you saw someone heaving stones at the cars you'd call the police. With it, you peer nervously at the truck in front of you. Does that driver look trustworthy? This is, Schneier said, counter-productive because what people report under that sort of instruction is "different, not suspicious".

But the bigger flaw is cover-your-ass backward thinking. If someone tries to bomb a plane with explosives in a printer cartridge, missing a later attempt using the exact same method will get you roasted for your stupidity. And so we have a ban on flying with printer cartridges over 500g and, during December, restrictions on postal mail, something probably few people in the US even knew about.

Jim Harper, a policy scholar with the Cato Institute and a member of the Department of Homeland Security's Data Privacy and Integrity Advisory Committee, outlined even more TSA expansion. There are efforts to create mobile lie detectors that measure physiological factors like eye movements and blood pressure.

Technology, Lillie Coney observed, has become "like butter - few things are not improved if you add it."

If you're someone charged with blocking terrorist attacks you can see the appeal: no one wants to be the failure who lets a bomb onto a plane. Far, far better if it's the technology that fails. And so expensive scanners roll through the nation's airports despite the expert assessment - on this occasion, from Schneier and Ed Luttwak, a senior associate with the Center for Strategic and International Studies - that the scanners are ineffective, invasive, and dangerous. As Luttwak said, the machines pull people's attention, eyes, and brains away from the most essential part of security: watching and understanding the passengers' behavior.

"[The machine] occupies center stage, inevitably," he said, "and becomes the focus of an activity - not aviation security, but the operation of a scanner."

Equally offensive in a democracy, many speakers argued, is the TSA's secrecy and lack of accountability. Even Meera Shankar, the Indian ambassador, could not get much of a response to her complaint from the TSA, Luttwak said. "God even answered Job." The agency sent no representative to this meeting, which included Congressmen, security experts, policy scholars, lawyers, and activists.

"It's the violation of the entire basis of human rights," said the Stanford and Oxford lawyer Chip Pitts around the time that the 112th Congress was opening up with a bipartisan reading of the US Constitution. "If you are treated like cattle, you lose the ability to be an autonomous agent."

As Libertarian National Committee executive director Wes Benedict said, "When libertarians and Ralph Nader agree that a program is bad, it's time for our government to listen up."

So then, what are the alternatives to spending $360 billion - the running total for the Department of Homeland Security since 2001 - not including the lost productivity and opportunity costs to the US's 100 million flyers?

Well, first of all, stop being weenies. The number of speakers who reminded us that the US was founded by risk-takers was remarkable. More people, Schneier noted, are killed in cars every month than died on 9/11. Nothing, Ralph Nader said, is spent on the 58,000 Americans who die in workplace accidents every year or the many thousands more who are killed by pollution or medical malpractice.

"We need a comprehensive valuation of how to deploy resources in a rational manner that will be effective, minimally invasive, efficient, and obey the Constitution and federal law," Nader said.

So: dogs are better at detecting explosives than scanners. Intelligent profiling can whittle down the mass of suspects to a more manageable group than "everyone" in a giant game of airport werewolf. Instead, at the moment we have magical thinking, always protecting ourselves from the last attack.

"We're constantly preparing for the rematch," said Lillie Coney. "There is no rematch, only tomorrow and the next day." She was talking as much about Katrina and New Orleans as 9/11: there will always, she said, be some disaster, and the best help in those situations is going to come from individuals and the people around them. Be prepared: life is risky.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

December 17, 2010

Sharing values

And then they came for Google...

The notion that the copyright industries' war on file-sharing would eventually rise to the Google level of abstraction used to be a sort of joke. It was the kind of thing the owners of torrent search sites (and before them, LimeWire and Gnutella nodes) said as an extreme way of showing how silly the whole idea was that file-sharing could be stamped out by suing people. It was the equivalent in airport terms of saying, "What are they going to do? Have us all fly naked?"

This week, it came true. You can see why: the British Phonographic Industry's annual report cites research it commissioned from Harris Interactive showing that 58 percent of "illegal downloaders" used Google to find free music. (Of course, not all free music consists of unauthorized copies, but we'll get to that in a minute.)

The rise of Google in particular (it has something like 90 percent of the UK market, somewhat less in the US) and search engines in general as the main gateway through which people access the Internet made it, I think, inevitable that at some point the company would become a focus for the music industry. And Google is responding, announcing on December 2 that it would favor authorized content in its search listings and prevent "terms closely related to piracy" from appearing in Autocomplete.

Is this censorship? Perhaps, but I find it hard to get too excited about, partly because Autocomplete is the annoying boor who's always finishing my sentences wrongly, partly because having to type "torrent" doesn't seem like much of a hardship, and partly because I don't believe this action will make much of a difference. Still, as Google's design shifts more toward the mass market, such subtle changes will create ever-larger effects.

I would be profoundly against demonizing file-sharing technology by making it technically impossible to use Google to find torrent/cyber locker/forum sites - because such sites are used for many other things that have nothing to do with distributing music - but that's not what's being talked about here. It's worth noting, however, that this is (yet another) example of Google's double standards when it comes to copyright. Obliging the music industry's request costs them very little and also creates the opportunity to nudge its own YouTube a little further up the listings. Compare and contrast, however, to the company's protracted legal battle over its having digitized and made publicly available millions of books without the consent of the rights holders.

If I were the music industry I think I'd be generally encouraged by the BPI's report. It shows that paid, authorized downloads are really beginning to take off; digital now accounts for nearly 25 percent of UK record industry revenues. Harris Interactive found that approximately 7.7 million people in the UK continue to download music "illegally". Jupiter Research estimated the foregone revenues at £219 million. The BPI's arithmetic estimates that paid, authorized downloads represent about a quarter of all downloads. Seems to me that's all moving in the right direction - without, mind you, assistance from the draconian Digital Economy Act.

The report also notes the rise of unauthorized, low-cost pay sites that siphon traffic away from authorized pay services. These are, to my view, the equivalent of selling counterfeit CDs, and I have no problem with regarding them as legitimately lost sales or seeing them shut down.

Is the BPI's glass half-empty or half-full? I think it's filling up, just like we told them it would. They are progressively competing successfully with free, and they'd be a lot further along that path if they had started sooner.

As a former full-time musician with many friends still in the trade, I find it hard to argue that encouraging people towards services that pay the artist at the expense of those that don't is a bad principle. What I really care about is that it should be as easy to find Andy Cohen playing "Oh, Glory" as it is to find Lady Gaga singing anything. And that's an area where the Internet is the best hope for parity we've ever had; as a folksinger friend of mine said a couple of years back, "The music business never did anything for us."

I've been visiting Cohen this week, and he's been explicating the German sociologist Ferdinand Tönnies' distinction between gesellschaft (society) and gemeinschaft (community), with the music business as gesellschaft and folk music as gemeinschaft.

"Society has rules, communities have customs," he said last night. "When a dispute over customs has to be adjudicated, that's the border of society." Playing music for money comes under society's rules - that is, copyright. But for Cohen, a professional musician for more than 40 years with multiple CDs, music is community.

We've been driving around Memphis visiting his friends, all of whom play themselves, some easily, some with difficulty. Music is as much a part of their active lives as breathing. This is a fundamental disconnect from the music industry, which sees us all as consumers and every unpaid experience of music as a lost sale. This is what "sharing music" really means: playing and singing together - wherever.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 24, 2010

Lost in a Haystack

In the late 1990s you could always tell when a newspaper had just gotten online because it would run a story about the Good Times virus.

Pause for historical detail: the Good Times virus (and its many variants) was an email hoax. An email message with the subject heading "Good Times" or, later, "Join the Crew", or "Penpal Greetings", warned recipients that opening email messages with that header would damage their computers or delete the contents of their hard drives. Some versions cited Microsoft, the FCC, or some other authority. The messages also advised recipients to forward the message to all their friends. The mass forwarding and subsequent complaints were the payload.

The point, in any case, is that the Good Times virus was the first example of mass social engineering that spread by exploiting not particularly clever psychology and a specific kind of technical ignorance. The newspaper staffers of the day were very much ordinary new users in this regard, and they would run the story thinking they were serving their readers. To their own embarrassment, of course. You'd usually see a retraction a week or two later.

Austin Heap, the progenitor of Haystack, software he claimed was devised to protect the online civil liberties of Iranian dissidents, seems more likely to have failed to understand what he was doing than to have been conducting an elaborate hoax. Either way, Haystack represents a significant leap upward in successfully taking mainstream, highly respected publications for a technical ride. Evgeny Morozov's detailed media critique underestimates the impact of the recession and staff cuts on an already endangered industry. We will likely see many more mess-equals-technology-plus-journalism stories because so few technology specialists remain in the post-recession mainstream media.

I first heard Danny O'Brien's doubts about Haystack in June, and his chief concern was simple and easily understood: no one was able to get a copy of the software to test it for flaws. For anyone who knows anything about cryptography or security, that ought to have been damning right out of the gate. The lack of such detail is why experienced technology journalists, including Bruce Schneier, generally avoided commenting on it. There is a simple principle at work here: the *only* reason to trust technology that claims to protect its users' privacy and/or security is that it has been thoroughly peer-reviewed - banged on relentlessly by the brightest and best and they have failed to find holes.

As a counter-example, let's take Phil Zimmermann's PGP, email encryption software that really has protected the lives and identities of far-flung dissidents. In 1991, when PGP first escaped onto the Net, interest in cryptography was still limited to a relatively small, though very passionate, group of people. The very first thing Zimmermann wrote in the documentation was this: why should you trust this product? Just in case readers didn't understand the importance of that question, Zimmermann elaborated, explaining how fiendishly difficult it is to write encryption software that can withstand prolonged and deliberate attacks. He was very careful not to claim that his software offered perfect security, saying only that he had chosen the best algorithms he could from the open literature. He also distributed the source code freely for review by all and sundry (who have to this day failed to find substantive weaknesses). He concludes: "Anyone who thinks they have devised an unbreakable encryption scheme either is an incredibly rare genius or is naive and inexperienced." Even the software's name played down its capabilities: Pretty Good Privacy.

When I wrote about PGP in 1993, PGP was already changing the world by up-ending international cryptography regulations, blocking mooted US legislation that would have banned the domestic use of strong cryptography, and defying patent claims. But no one, not even the most passionate cypherpunks, claimed the two-year-old software was the perfect, the only, or even the best answer to the problem of protecting privacy in the digital world. Instead, PGP was part of a wider argument taking shape in many countries over the risks and rewards of allowing civilians to have secure communications.

Now to the claims made for Haystack in its FAQ:

However, even if our methods were compromised, our users' communications would be secure. We use state-of-the-art elliptic curve cryptography to ensure that these communications cannot be read. This cryptography is strong enough that the NSA trusts it to secure top-secret data, and we consider our users' privacy to be just as important. Cryptographers refer to this property as perfect forward secrecy.

Without proper and open testing of the entire system - peer review - they could not possibly know this. The strongest cryptographic algorithm is only as good as its implementation. And even then, as Clive Robertson writes in Financial Cryptography, technology is unlikely to be a complete solution.

What a difference a sexy news hook makes. In 1993, the Clinton Administration's response to PGP was an FBI investigation that dogged Zimmermann for two years; in 2010, Hillary Clinton's State Department fast-tracked Haystack through the licensing requirements. Why such a happy embrace of Haystack rather than existing privacy technologies such as Freenet, Tor, or other anonymous remailers and proxies remains a question for the reader.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 9, 2010

The big button caper

There's a moment early in the second season of the TV series Mad Men when one of the Sterling Cooper advertising executives looks out the window and notices, in a tone of amazement, that young people are everywhere. What he was seeing was, of course, the effect of the baby boom. The world really *was* full of young people.

"I never noticed it," I said to a friend the next day.

"Well, of course not," he said. "You were one of them."

Something like this will happen to today's children - they're going to wake up one day and think the world is awash in old people. This is a fairly obvious consequence of the demographic bulge of the Baby Boomers, which author Ken Dychtwald has compared to "a pig going through a python".

You would think that mobile phone manufacturers and network operators would be all over this: carrying a mobile phone is an obvious safety measure for an older, perhaps infirm or cognitively confused person. But apparently the concept is more difficult to grasp than you'd expect, and so Simon Rockman, the founder and former publisher of What Mobile and now working for the GSM Association, convened a senior mobile market conference on Tuesday.

Rockman's pitch is that the senior market is a business opportunity: unlike other market sectors it's not saturated; older users are less likely to be expensive data users and more loyal. The margins are better, he argues, even if average revenue per user is low.

The question is, how do you appeal to this market? To a large extent, seniors are pretty much like everyone else: they want gadgets that are attractive, even cool. They don't want the phone equivalent of support stockings. Still, many older people do have difficulties with today's ultra-tiny buttons, icons, and screens, iffy sound quality, and complex menu structures. Don't we all?

It took Ewan MacLeod, the editor of Mobile Industry Review, to point out the obvious. What is the killer app for most seniors in any device? Grandchildren, pictures of. MacLeod has a four-week-old son and a mother whose desire to see pictures apparently could only be fully satisfied by a 24-hour video feed. Industry inadequacy means that MacLeod is finding it necessary to write his own app to make sending and receiving pictures sufficiently simple and intuitive. This market, he pointed out, isn't even price-sensitive. Tell his mother she'll need to spend £60 on a device so she can see daily pictures of her grandkids, and she'll say, "OK." Tell her it will cost £500, and she'll say..."OK."

I bet you're thinking, "But the iPhone!" And to some extent you're right: the iPhone is sleek, sexy, modern, and appealing; it has a zoom function to enlarge its display fonts, and it is relatively easy to use. And so MacLeod got all the grandparents onto iPhones. But he's having to write his own app to easily organize and display the photos the phones receive: the available options are "Rubbish!"

But even the iPhone has problems (even if you're not left-handed). Ian Hosking, a senior research associate at the Cambridge Engineering Design Centre, used his visual impairment simulation software to make them easy to see. Lack of contrast means the iPhone's white-on-black type disappears unreadably with only a small amount of vision loss. Enlarging the font only changes the text in some fields. And that zoom feature, ah, yes, wonderful - except that enabling it requires you to double-tap and then navigate with three fingers. "So the visual has improved, but the dexterity is terrible."

In all this you may have noticed something: that good design is good design, and a phone design that accommodates older people will also most likely be a more usable phone for everyone else. These are principles that have not changed since Donald Norman formulated them in his classic 1988 book The Design of Everyday Things. To be sure, there is some progress. Evelyne Pupeter-Fellner, co-founder of Emporia, for example, pointed out the elements of her company's designs that are quietly targeted at seniors: the emergency call system that automatically dials, in turn, a list of selected family members or friends until one answers; the ringing mechanism that lights up the button to press to answer. The radio you can insert the phone into that will turn itself down and answer the phone when it rings. The design that lets you attach it to a walker - or a bicycle. The single-function buttons. Doro's phones drew similar praise.

And yet it could all be so different - if we would only learn from Japan, where nearly 86 percent of seniors have - and use data on - mobile phones, according to Kei Shimada, founder of Infinita.

But in all the "beyond big buttons" discussion and David Doherty's proposition that health applications will be the second killer app, one omission niggled: the aging population is predominantly female, and the older the cohort the more that is true.

Who are least represented among technology designers and developers?

Older women.

I'd call that a pretty clear mismatch. Somewhere between those who design and those who consume lies the problem.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 4, 2010

Return to the hacker crackdown

Probably many people had forgotten about the Gary McKinnon case until the new government reversed its predecessor's decision not to intervene in his extradition. Legal analysis is beyond our expertise, but we can outline some of the historical factors at work.

By 2001, when McKinnon did his breaking and entering into US military computers, hacking had been illegal in the UK for just over ten years - the Computer Misuse Act was passed in 1990 after the overturned conviction of Robert Schifreen and Steve Gold for accessing Prince Philip's Prestel mailbox.

Early 1990s hacking (earlier, the word meant technological cleverness) was far more benign than today's flat-out crimes of identity fraud, money laundering, and raiding bank accounts. The hackers of the era - most famously Kevin Mitnick - were more the cyberspace equivalent of teenaged joyriders: they wandered around the Net rattling doorknobs and playing tricks to get passwords, and occasionally copied some bit of trophy software for bragging rights. Mitnick, despite spending four and a half years in jail awaiting trial, was not known to profit from his forays.

McKinnon's claim that he was looking for evidence that the US government was covering up information about alternative energy and alien visitations seems to me wholly credible. There was and is a definite streak of conspiracy theorists - particularly about UFOs - among the hacker community.

People seemed more alarmed by those early-stage hackers than they are by today's cybercriminals: the fear of new technology was projected onto those who seemed to be its masters. The series of 1990 "Operation Sundevil" raids in the US, documented in Bruce Sterling's book The Hacker Crackdown, inspired the creation of the Electronic Frontier Foundation. Among other egregious confusions, law enforcement seized game manuals from Steve Jackson Games in Austin, Texas, calling them hacking instruction books.

The raids came alongside a controversial push to make hacking illegal around the world. It didn't help when police burst in at the crack of dawn to arrest bright teenagers and hold them and their families (including younger children) at gunpoint while their computers and notebooks were seized and their homes ransacked for evidence.

"I think that in the years to come this will be recognized as the time of a witch hunt approximately equivalent to McCarthyism - that some of our best and brightest were made to suffer this kind of persecution for the fact that they dared to be creative in a way that society didn't understand," 21-year-old convicted hacker Mark Abene ("Phiber Optik") told filmmaker Annaliza Savage for her 1994 documentary, Unauthorized Access (YouTube).

Phiber Optik was an early 1990s cause célèbre. A member of the hacker groups Legion of Doom and Masters of Deception, he had an exceptionally high media profile. In January 1990, he and other MoD members were raided on suspicion of having caused the AT&T crash of January 15, 1990, when more than half of the telephone network ceased functioning for nine hours. Abene and others were eventually charged in 1991, with law enforcement demanding $2.5 million in fines and 59 years in jail. Plea agreements reduced that to a year in prison and 600 hours of community service. The company eventually admitted the crash was due to its own flawed software upgrade.

There are many parallels between these early days of hacking and today's copyright wars. Entrenched large businesses (then AT&T; now RIAA, MPAA, BPI, et al) perceive mostly young, smart Net users as dangerous enemies and pursue them with the full force of the law. Isolated, often young, targets are threatened with jail and/or exaggeratedly huge sums in damages to make examples of them and deter others. The upshot in the 1990s was an entrenched distrust of and contempt for law enforcement on the part of the hacker community, exacerbated by the fact that back then so few law enforcement officers understood anything about the technology they were dealing with. The equivalent now may be a permanent contempt for copyright law.

In his 1990 essay Crime and Puzzlement examining the issues raised by hacking, EFF co-founder John Perry Barlow wrote of Phiber Optik, whom he met on the WELL: "His cracking impulses seemed purely exploratory, and I've begun to wonder if we wouldn't also regard spelunkers as desperate criminals if AT&T owned all the caves."

When McKinnon was first arrested in March 2002 and then indicted in a Virginia court in October 2002 for cracking into various US military computers - with damage estimated at $800,000 - all this history was still fresh. Meanwhile, the sympathy and good will toward the US engendered by the 9/11 attacks had been dissipated by the Bush administration's reaction: the PATRIOT Act (passed October 2001) expanded US government powers to detain and deport foreign citizens, and the first prisoners arrived at Guantanamo in January 2002. Since then, the US has begun fingerprinting all foreign visitors and has seen many erosions to civil liberties. The 2005 changes to British law that made hacking into an extraditable offense were controversial for precisely these reasons.

As McKinnon's case has dragged on through extradition appeals this emotional background has not changed. McKinnon's diagnosis with Asperger's Syndrome in 2008 made him into a more fragile and sympathetic figure. Meanwhile, the really dangerous cybercriminals continue committing fraud, theft, and real damage, apparently safe from prosecution.


May 28, 2010

Privacy theater

On Wednesday, in response to widespread criticism and protest, Facebook finally changed its privacy settings to be genuinely more user-friendly - and for once, the settings actually are. It is now reasonably possible to tell at a glance which elements of the information you have on the system are visible and to what class of people. To be sure, the classes available - friends, friends of friends, and everyone - are still broad, but it is a definite improvement. It would be helpful if Facebook provided a button so you could see what your profile looks like to someone who is not on your friends list (although of course you can see this by logging out of Facebook and then searching for your profile). If you're curious just how much of your information is showing, you might want to try out Outbook.

Those changes, however, only tackle one element of a four-part problem.

1: User interface. Fine-grained controls are, as the company itself has said, difficult to present in a simple way. This is what the company changed this week and, as already noted, the new design is a big improvement. It can still be improved, and it's up to users and governments to keep pressure on the company to do so.

2: Business model. Underlying all of this, however, is the problem that Facebook still has to make money. To some extent this is our own fault: if we don't want to pay money to use the service - and it's pretty clear we don't - then it has to be paid for some other way. The only marketable asset Facebook has is its user data. Hence Andrew Brown's comment that users are Facebook's product; advertisers are its customers. As others have commented, traditional media companies also sell their audience to their advertisers; but there's a qualitative difference in that traditional media companies also create their own content, which gives them other revenue streams.

3. Changing the defaults. As this site's graphic representation makes clear, since 2005 the changes in Facebook's default privacy settings have all gone one way: towards greater openness. We know from decades of experience that defaults matter because so many computer users never change them. It's why Microsoft has had to defend itself against antitrust actions regarding bundling Internet Explorer and Windows Media Player into its operating system. On Facebook, users should have to make an explicit decision to make their information public - opt in, rather than opt out. That would also be more in line with the EU's Data Protection Directive.

4: Getting users to understand what they're disclosing. Back in the early 1990s, AT&T ran a series of TV ads in the US targeting a competitor that had asked its customers for the names of their friends and family for marketing purposes. "I don't want to give those out," the people in the ads were heard to say. Yet people freely disclose exactly that sort of information on Facebook every day. Caspar Bowden, as director of the Foundation for Information Policy Research, argued persuasively that traffic analysis - seeing who is talking to whom and with what frequency - is far more revealing than the actual contents of messages.

What makes today's social networks different from other messaging systems (besides their scale) is that typically those - bulletin boards, conferencing systems, CompuServe, AOL, Usenet, today's Web message boards - were and are organized around topics of interest: libel law reform, tennis, whatever. Even blogs, whose earliest audiences are usually friends, become more broadly successful because of the topics they cover and the quality of that coverage. In the early days, that structure was due to the fact that most people online were strangers meeting for the first time. These days, it allows those with minority interests to find each other. But in social media the organizing principle is the social connections of individual people whose tenure on the service begins, by and large, by knowing each other. This vastly simplifies traffic analysis.
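Bowden's point is easy to demonstrate: even a bare log of who-contacted-whom, with no message content at all, exposes the strongest social ties. A minimal sketch in Python, using invented names and a made-up log purely for illustration:

```python
from collections import Counter

# Hypothetical message log: (sender, recipient) pairs only - no content.
log = [
    ("alice", "bob"), ("alice", "bob"), ("alice", "carol"),
    ("bob", "alice"), ("dave", "alice"), ("alice", "bob"),
]

# Counting each undirected pair reveals the strongest ties:
# here, alice-bob clearly dominates.
pairs = Counter(frozenset(edge) for edge in log)

for pair, count in pairs.most_common():
    print(sorted(pair), count)
```

On a social network, where the service holds this graph for every user by design, the same counting requires no detective work at all.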

A number of factors contributed to the success of Facebook. One was the privacy promises the company made (and has since revised). But another was certainly dissatisfaction with the wider Net. I've heard Facebook described as an effort to reinvent the Net, and there's some truth to that in that it presents itself as a safer space. That image is why people feel comfortable posting pictures of their kids. But a key element in Facebook's success has, I think, also been the brokenness of email and, to a lesser degree, instant messaging. As these became overrun with spam, rather than grapple with the junk or the uncertainty of knowing which friend was using which incompatible IM service, many people gravitated to social networks as a way of keeping their inboxes as personal space.

Facebook is undoubtedly telling the truth when it says that the privacy complaints have, so far, made little difference to the size and engagement of its user base. It's extreme to say that Facebook victimizes its users, but it is true that the expectations of its active core of long-term users have been progressively betrayed. Facebook's users have no transparency about or control over what data Facebook shares with its advertisers. Making that visible would go a long way toward restoring users' trust.


April 30, 2010

Child's play

In the TV show The West Wing (Season 6, Episode 17, "A Good Day") young teens tackle the president: why shouldn't they have the right to vote? There's probably no chance, but they made their point: as a society we trust kids very little and often fail to take them or their interests seriously.

That's why it was so refreshing to read in 2008's Byron Review the recommendation that we should consult and listen to children in devising programs to ensure their safety online. Byron made several thoughtful, intelligent analogies: we supervise as kids learn to cross streets; we post warning signs at swimming pools but also teach children to swim.

She also, more controversially, recommended that all computers sold for home use in the UK should have Kitemarked parental control software "which takes parents through clear prompts and explanations to help set it up and that ISPs offer and advertise this prominently when users set up their connection."

The general market has not adopted this recommendation; but it has been implemented with respect to the free laptops issued to low-income families under Becta's £300 million Home Access Laptop scheme, announced last year as part of efforts to bridge the digital divide. The recipients - 70,000 to 80,000 so far - have a choice of supplier, of ISP, and of hardware make and model. However, the laptops must meet a set of functional technical specifications, one of which is compliance with PAS 74:2008, the British Internet safety standard. That means anti-virus, access control, and filtering software: NetIntelligence.

Naturally, there are complaints; these fall precisely in line with the general problems with filtering software, which have changed little since 1996, when the passage of the Communications Decency Act inspired 17-year-old Bennett Haselton to start Peacefire to educate kids about the inner workings of blocking software - and how to bypass it. Briefly:

1. Kids are often better at figuring out ways around the filters than their parents are, giving parents a false sense of security.

2. Filtering software can't block everything parents expect it to, adding to that false sense of security.

3. Filtering software is typically overbroad, becoming a vehicle for censorship.

4. There is little or no accountability about what is blocked or the criteria for inclusion.

This case looks similar - at first. Various reports claim that as delivered NetIntelligence blocks social networking sites and even Google and Wikipedia, as well as Google's Chrome browser because the way Chrome installs allows the user to bypass the filters.

NetIntelligence says the Chrome issue is only temporary; the company expects a fix within three weeks. Marc Kelly, the company's channel manager, also notes that the laptops that were blocking sites like Google and Wikipedia were misconfigured by the supplier. "It was a manufacturer and delivery problem," he says; once the software has been reinstalled correctly, "The product does not block anything you do not want it to." Other technical support issues - trouble finding the password, for example - are arguably typical of new users struggling with unfamiliar software and inadequate technical support from their retailer.

Both Becta and NetIntelligence stress that parents can reconfigure or uninstall the software, even if some are confused about how to do it. First, they must activate the software by typing in the code the vendor provides; that gets them password access to change the blocking list or uninstall the software.

The list of blocked sites, Kelly says, comes from several sources: the Internet Watch Foundation's list and similar lists from other countries; a manual assessment team also reviews sites. Sites that feel they are wrongly blocked should email NetIntelligence support. The company has, he adds, tried to make it easier for parents to implement the policies they want; originally social networks were not broken out into their own category. Now, they are easily unblocked by clicking one button.

The simple reaction is to denounce filtering software and all who sail in her - censorship! - but the situation is arguably more complicated than that. Research Becta conducted on the pilot group found that 70 percent of the parents surveyed felt that the built-in safety features were very important. Even the most technically advanced of parents struggle to balance their legitimate concerns in protecting their children with the complex reality of their children's lives.

For example: will what today's children post to social networks damage their chances of entry into a good university or a job? What will they find? Not just pornography and hate speech; some parents object to creationist sites, some to scary science fiction, others to Fox News. Yesterday's harmless flame wars are today's more serious cyber-bullying and online harassment. We must teach kids to be more resilient, Byron said; but even then kids vary widely in their grasp of social cues, common sense, emotional make-up, and technical aptitude. Even experts struggle with these issues.

"We are progressively adding more information for parents to help them," says Kelly. "We want the people to keep the product at the end. We don't want them to just uninstall it - we want them to understand it and set the policies up the way they want them." Like all of us, Kelly thinks the ideal is for parents to engage with their children on these issues, "But those are the rules that have come along, and we're doing the best we can."


March 12, 2010

The cost of money

Everyone except James Allan scrabbled in the bag Joe DiVanna brought with him to the Digital Money Forum (my share: a well-rubbed 1908 copper penny). To be fair, Allan had already left by then. But even if he hadn't he'd have disdained the bag. I offered him my pocketful of medium-sized change and he looked as disgusted as if it were a handkerchief full of snot. That's what living without cash for two years will do to you.

Listen, buddy, like the great George Carlin said, your immune system needs practice.

People in developed countries talk a good game about doing away with cash in favor of credit cards, debit cards, and Oyster cards, but the reality, as Michael Salmony pointed out, is that 80 percent of payments in Europe are still made in cash. Cash seems free to consumers (where cards have clearer charges), but costs European banks €84 billion a year. Less visibly, banks also benefit (when the shadow economy hoards high-value notes it's an interest-free loan), and governments profit from seigniorage (when people buy but do not spend coins).

"Any survey about payment methods," Salmony said Wednesday, "reveals that in all categories cash is the preferred payment method." You can buy a carrot or a car; it costs you nothing directly; it's anonymous, fast, and efficient. "If you talk directly to supermarkets, they all agree that cash is brilliant - they have sorting machines, counting machines...It's optimized so well, much better than cards."

The "unbanked", of course, such as the London migrants Kavita Datta studies, have no other options. Talk about the digital divide, this is the digital money divide: the cashless society excludes people who can't show passports, can't prove their address, or are too poor to have anything to bank with.

"You can get a job without a visa, but not without a bank account," one migrant worker told her. Electronic payments, ain't they grand?

But go to Africa, Asia, or South America, and everything turns upside down. There, too, cash is king - but there, unlike here with banks and ATMs on every corner and a fully functioning system of credit cards and other substitutes, cash is a terrible burden. Of the 2.6 billion people living on less than $2 a day, said Ignacio Mas, fewer than 10 percent have access to formal financial services. Poor people do save, he said, but their lack of good options means they save in bad ways.

They may not have banks, but most do have mobile phones, and therefore digital money means no long multi-bus rides to pay bills. It means being able to send money home at low cost. It means saving money that can't be easily stolen. In Ghana 80 percent of the population have no access to financial services - but 80 percent are covered by MTN, which is partnering with the banks to fill the gap. In Pakistan, Tameer Microfinance Bank partnered with Telenor to launch Easypaisa, which did 150,000 transactions in its first month and expects a million by December. One million people produce milk in Pakistan; Nestle pays them all painfully by check every month. The opportunity in these countries to leapfrog traditional banking and head into digital payments is staggering, and our banks won't even care. The average account balance of Kenya's M-Pesa customers is...$3.

When we're not destroying our financial system, we have more choices. If we're going to replace cash, what do we replace it with and what do we need? Really smart people to figure out how to do it right - like Isaac Newton, said Thomas Levenson. (Really. Who knew Isaac Newton had a whole other life chasing counterfeiters?) Law and partnership protocols and banks to become service providers for peer-to-peer finance, said Chris Cook. "An iTunes moment," said Andrew Curry. The democratization of money, suggested conference organizer David Birch.

"If money is electronic and cashless, what difference does it make what currency we use?" Why not...kilowatt hours? You're always going to need to heat your house. Global warming doesn't mean never having to say you're cold.

Personally, I always thought that if our society completely collapsed, it would be an excellent idea to have a stash of cigarettes, chocolate, booze, and toilet paper. But these guys seemed more interested in the notion of Facebook units. Well, why not? A currency can be anything. Second Life has Linden dollars, and people sell virtual game world gold for real money on eBay.

I'd say for the same reason that most people still walk around with notes in their wallet and coins in their pocket: we need to take our increasing abstraction step by step. Many have failed with digital cash, despite excellent technology, because they asked people to put "real" money into strange units with no social meaning and no stored trust. Birch is right: storing value in an Oyster card is no different than storing value in Beenz. But if you say that money is now so abstract that it's a collective hallucination, then the corroborative details that give artistic verisimilitude to an otherwise bald and unconvincing currency really matter.


March 5, 2010

The surveillance chronicles

There is a touching moment at the end of the new documentary Erasing David, which had an early screening last night for some privacy specialists. In it, Katie, the wife of the film's protagonist, filmmaker David Bond, muses on the contrast between the England she grew up in and the "ugly" one being built around her. Of course, many people become nostalgic for a kinder past when they reach a certain age, but Katie Bond is probably barely 30, and what she is talking about is the engorging Database State (PDF).

Anyone watching this week's House of Lords debate on the Digital Economy Bill probably knows how she feels. (The Open Rights Group has advice on appropriate responses.)

At the beginning, however, Katie's biggest concern is that her husband is proposing to "disappear" for a month leaving her alone with their toddler daughter and her late-stage pregnancy.

"You haven't asked," she points out firmly. "You're leaving me with all the child care." Plus, what if the baby comes? They agree in that case he'd better un-disappear pretty quickly.

And so David heads out on the road with a Blackberry, a rucksack, and an increasingly paranoid state of mind. Is he safe being video-recorded interviewing privacy advocates in Brussels? Did "they" plant a bug in his gear? Is someone about to pounce while he's sleeping under a desolate Welsh tree?

There are real trackers: Cerberus detectives Duncan Mee and Cameron Gowlett, who took up the challenge to find him given only his (rather common) name. They try an array of approaches, both high- and low-tech. Having found the Brussels video online, they head to St Pancras to check out arriving Eurostar trains. They set up a Web site to show where they think he is and send the URL to his Blackberry to see if they can trace him when he clicks on the link.
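That last trick needs no special tooling: any web server records the IP address of whoever requests a page, and that address can then be mapped to an ISP or a rough location. A minimal sketch of the idea in Python - hypothetical URL and setup, not the detectives' actual site:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A bait page that records the address of anyone who clicks the link.
class BaitHandler(BaseHTTPRequestHandler):
    hits = []  # (ip, path) for every request received

    def do_GET(self):
        BaitHandler.hits.append((self.client_address[0], self.path))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Nothing to see here.")

    def log_message(self, *args):  # silence default stderr logging
        pass

# Start the bait server on a spare local port.
server = HTTPServer(("127.0.0.1", 0), BaitHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate the target clicking the link sent to his Blackberry.
url = f"http://127.0.0.1:{server.server_address[1]}/where-is-david"
urllib.request.urlopen(url)
print(BaitHandler.hits)  # [('127.0.0.1', '/where-is-david')]
server.shutdown()
```

In the real case the server would sit on the public Internet, and the logged address would belong to whatever network the target clicked from.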

In the post-screening discussion, Mee added some new detail. When they found out, for example, that David was deleting his Facebook page (which he announced on the site and of which they'd already made a copy), they set up a dummy "secret replacement" and attempted to friend his entire list of friends. About a third of Bond's friends accepted the invitation. The detectives took up several party invitations thinking he might show.

"The Stasi would have had to have a roomful of informants," said Mee. Instead, Facebook let them penetrate Bond's social circle quickly on a tiny budget. Even so, and despite all that information out on the Internet, much of the detectives' work was far more social engineering than database manipulation, although there was plenty of that, too. David himself finds the material they compile frighteningly comprehensive.

In between pieces of the chase, the filmmakers include interviews with an impressive array of surveillance victims, politicians (David Blunkett, David Davis), and privacy advocates including No2ID's Phil Booth and Action on Rights for Children's Terri Dowty. (Surprisingly, no one from Privacy International, I gather because of scheduling issues.)

One section deals with the corruption of databases, the kind of thing that can make innocent people unemployable or, in the case of Operation Ore, destroy lives such as that of Simon Bunce. As Bunce explains in the movie, 98.2 percent of the Operation Ore credit card transactions were fraudulent.

Perhaps the most you-have-got-to-be-kidding moment is when former minister David Blunkett says that collecting all this information is "explosive" and that "Government needs to be much more careful" and not just assume that the public will assent. Where was all this people-must-agree stuff when he was relentlessly championing the ID card? Did he - my god! - learn something from having his private life exposed in the press?

As part of his preparations, Bond investigates: what exactly do all these organizations know about him? He sends out more than 80 subject access requests to government agencies, private companies, and so on. One company sends him a pile of paper the size of a phone book. Transport for London tells him that even though his car is exempt, his movements in and out of the charging zone are still recorded and kept. This is a very English moment: after bashing his head on his desk in frustration over the length of his wait on hold, when a woman eventually starts to say, "Sorry for keeping you..." he replies, "No problem".

Some of these companies know things about him he doesn't or has forgotten: the time he "seemed angry" on the phone to a customer service representative. "What was I angry about on November 21, 2006?" he wonders.

But probably the most interesting journey, after all, is Katie's. She starts with some exasperation: her husband won't sign this required form giving the very good nursery they've found the right to do anything it wants with their daughter's data. "She has no data," she pleads.

But she will have. And in the Britain she's growing up in, that could be dangerous. Because privacy isn't isolation and it isn't not being found. Privacy means being able to eat sand without fear.


January 29, 2010

Game night

Why can't computer games get any serious love? The maverick Labour MP Tom Watson convened a meeting this week to ask just that. (Watson is also pushing for the creation of an advocacy group, Gamers' Voice (Facebook).) From the dates, the meeting is not in response to claims that playing computer games causes rickets.

Pause to go, "Huh?"

We all know what causes rickets in the UK. Winter at these crazy high latitudes causes rickets in the UK. Given the amount of atmosphere and cloud it has to get through in the darker months, sunlight can't muster enough oomph to make Vitamin D on the skins of the pasty, blue-white people they mostly have here. The real point of the clinical review paper that kicked off this round of media nonsense, Watson rants, is that half of all UK adults are deficient in Vitamin D in the winter and spring. Well, duh. Wearing sunscreen has made it worse. So do clothes. And this: to my vast astonishment on arrival here they don't put Vitamin D in the milk. But, hey, let's blame computer games!

And yet: games are taking over. In December Chart-Track market research found that the UK games industry is now larger than its film industry. Yesterday's game-playing kids are today's game-playing parents. One day we'll all be gamers on this bus. Criminals pay more for stolen World of Warcraft accounts than for credit card accounts (according to Richard Bartle), and the real-money market for virtual game world props is worth billions (PDF). But the industry gets no government support. Hence Watson's meeting.

At this point, I must admit that net.wars, too, has been deficient: I hardly ever cover games. As a freelance, I can't afford to be hooked on them, so I don't play them, so I don't know enough to write about them. In the early-to-mid 1990s I did sink hours into Hitchhiker's Guide to the Galaxy, Minesweeper, Commander Keen, Lemmings, Wolfenstein 3D, Doom, Doom 2, and some of Duke Nukem. At some point, I decided it was a bad road. When I waste time unproductively I need to feel that I'm about to do something useful. I switched the mouse to the left hand, mostly for ergonomic reasons, and my slightly lower competence with it was sufficient to deter further exploration. The other factor: Quake made it obvious that I'd reached my theoretical limit.

I know games are different now. I've watched a 20-something friend play World of Warcraft and Grand Theft Auto; I've even traded deaths with him in one of those multiplayer games where your real-life best friends are your mortal enemies. Watching him play The Sims as a recalcitrant teenager (is there any other kind?) was the most fun. It seemed like Cosmic Justice to see him shriek in frustration at the computer because the adults in his co-op household were *refusing to wash the dishes*. Ha!

For people who have jobs, games are a (sometimes shameful) hobby; for people who are self-employed they are a dangerous menace. Games are amateur sports without the fresh air. And they are today's demon medium, replacing TV, comic books (my parents believed these rotted the brain), and printed multi-volume novels. All of that contributes to why games get relatively little coverage outside of specialist titles and writers such as Aleks Krotoski and are studied by rare academics like Douglas Thomas and Richard Bartle.

Except: it's arguable that the structure of games and the kind of thinking they require - logical, problem-solving, exploratory, experimental - does in fact inspire a kind of mental fitness that is a useful background skill for our computer-dominated world. There are, as Tom Chatfield, one of the evening's three panelists and an editor at Prospect, says in his new book Fun, Inc, many valuable things people can and do learn from games. (I once watched an inveterate game-playing teen extract himself from the maze at Hampton Court in 15 seconds flat.)

And in fact, that's the thought with which the seminal game cum virtual world was started: in writing MUD, Bartle wanted to give people the means to explore their identities by creating different ones.

It's also fun. And an escape from drab reality. And a challenge. And active, rather than passive, entertainment. The critic Sam Leith (who has compared World of Warcraft to Chartres Cathedral) pointed out that the violent shoot-'em-up games that get the media attention are a small, stereotyped sector of the market that deliberately insert shocking violence recursively to get media attention and increase sales. Limiting the conversation to one stereotypical theme is the problem, not games themselves.

Philip Oliver, founder and CEO of Blitz Games, one of the UK's largest independent games developers, listed some cases in point: in their first 12 weeks of release his company sold 500,000 copies of its The Biggest Loser TV tie-in and 3.8 million copies of its Burger King advertising game. And what about that wildly successful Wii Fit?

If you say, "That's different", that is exactly the problem.

Still, if game players are all going to be stereotyped as violent players shooting things...I'm not sure who pointed out that the Houses of Parliament are a fabulous gothic castle in which to set a shoot-'em-up, but it's a great idea. Now, that would really be government support!


January 9, 2010

Car talk

The most interesting thing I've heard all week was a snippet on CNBC in which a commentator talked about cars going out of style. The story was that in 2009 the US fleet of cars shrank by four million. That is, four million cars were scrapped without being replaced.

The commentator and the original story have a number of reasons: increasing urbanization, uncertainty about oil prices, frustration about climate change, and so on. But the really interesting trend is a declining interest in cars on the part of young people. (Presumably these are the same young people who don't watch enough TV.)

A pause to reminisce. In 1967, when I was 13, my father bought a grey Mercedes 230SL with a red interior. It should tell you something when I say that I don't like sports cars, have always owned Datsuns/Nissans (including a pickup truck and two Prairies), and am not really interested in cars that aren't mine but I still remember the make and model number of this car from 42 years ago. I remember hoping he wouldn't trade it in before I turned 16 and was old enough to drive. (He did. Nerts.)

When, at 21, I eventually did get my own first car (a medium blue Nissan 710 station wagon with a white leather-like interior), it felt like I had finally achieved independence. Having a car meant that I could leave my parents' house any time I wanted. The power of that was shocking; it utterly changed how I felt about being in their home.

In London, I hardly drive. The public transportation is too good and the traffic too dense. There are exceptions, of course, but the fact is that it would be cheaper for me to book a taxi every time I needed a car than it is to own one. And yet, the image of being behind the wheel on the open road, going nowhere and everywhere retains its power.

People think of the US as inextricably linked to car culture, but the fact is that our national love affair with the car is quite recent and was imposed on us. The 1988 movie Who Framed Roger Rabbit? had it right: at one time even Los Angeles had a terrific public transportation system. But starting in 1922, General Motors, acting in concert with a number of oil companies, most notably Chevron, deliberately set out to buy up and close down thousands of municipal streetcar systems. The scheme was not popular: people did not want to have to buy cars.

CNBC's commentator suggested that today's young people find their independence differently: through their cell phones and the Internet. He has a point. As children, many baby boomers shared bedrooms with siblings. Use of the family phone was often restricted. The home was most emphatically not a place where a young adult could expect any privacy.

Today, kids go out less, first because their parents worry about their safety, later because their friends and social lives are on tap from the individual bedrooms they now tend to have. And even if they have to share the family computer and use it in a well-trafficked location, they can carve themselves out a private space inside their phones, by text if not by voice.

The Internet's potential to destroy or remake whole industries is much discussed: see also newspapers, magazines, long-distance telecommunications, music, film, and television. The "Google decade" so many commentators say is ending is, according to Slate, just the beginning of how Google, all by itself, will threaten industries: search portals, ad agencies, media companies, book publishers, telephone companies, Mapquest, soon smart phone manufacturers, and then the big man on campus, Microsoft.

But if there's one thing we know, it's that technology companies are bad bets because they can be and are challenged when the next wave comes along. Who thought ten years ago that Microsoft wouldn't kill everyone else in its field? Twenty years ago, IBM was the unbeatable gorilla.

The happening wave is mobile phones, and it isn't at all clear that Google will dominate, any more than Microsoft has succeeded in dominating the Internet. But the interesting thing is what mobile phones will kill. So far, they've made a dent in the watchmaking industry (because a lot of people carrying phones don't see why they need a watch, too). Similarly, smart phones have subsumed MP3 players and pocket televisions. Now, cars. And, if I had to guess, smart phones will be the most popular vehicles for ebooks, too, and for news. Tim O'Reilly, for example, says that ebooks really began to take off with the iPhone. Literary agents and editors may love the Kindle, but consumers reading while waiting for trains are more likely to choose their phones. Ray Kurzweil is very likely right on track with his cross-platform ereader software, Blio.

All this seems to me to validate the questions we pose whenever we're asked to subsidize the entertainment industry in its struggle to find its feet in this new world. Is it the right business model? Is it the right industry? Is it the right time?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to

December 25, 2009

Second acts

Reviewing the big names of 2009 versus the big names of 1999 for ZDNet UK last week turned up some interesting trends there wasn't space to go into. Also worth noting: still unpublished is the reverse portion, looking at what the names who are Internet-famous in 2009 were doing in 1999. These were: Mark Zuckerberg (Facebook), Sergey Brin and Larry Page (Google), Rupert Murdoch, Barack Obama, and Jimmy Wales (Wikipedia).

One of the trends, of course, is the fact that there were so many women making technology headlines in 1999: Kim Polese (Marimba), Martha Lane Fox (, Carly Fiorina (running - and arguably nearly destroying - HP), Donna Dubinsky (co-founder of Palm), and Eva Pascoe (a media darling for having started the first Internet café, London's Cyberia, and writing a newspaper column). It isn't easy now to come up with names of similar impact in 2009.

You can come up with various theories about this. For example: the shrinking pipeline reported ten years ago by both the ACM and the BCS has borne fruit, so that there are actually fewer women available to play the prominent parts these women did. As against that (as a female computer scientist friend points out) one of the two heads of Oracle is female.

The other obvious possibility is the opposite: that women in prominent roles in technology companies have become so commonplace that they don't command the splashy media attention they did ten years ago. I doubt this; if they're commonplace, you'd expect to see some of their names in common use. I will say, though, that I know quite a few start-ups founded or co-founded by women. It was interesting to learn, in looking up Eva Pascoe's current whereabouts, that part of her goal in starting Cyberia was to educate women about the Internet. She was, of course, right: at the time, particularly in Britain, the attitude was very much that computers were boys' toys and few women then had found the confidence to navigate the online world.

The other interesting thing is the varying fortunes of the technologies the names represent. Some, such as Napster (Shawn Fanning), Netscape (Marc Andreessen) and Cyberia, live on through their successors. Others have changed much less: HP (Fiorina) is still with us, and Palm (Dubinsky and Jeff Hawkins) may yet manage a comeback. Symbian has achieved pretty much everything Colly Myers hoped.

Several of the technologies present the earliest versions of the hot topics of 2009, most notably Napster, which kicked off the file-sharing wars. If I were a music industry executive, I'd be thinking now that I was a numb-nut not to make a deal with the original Napster: it was a company with a central server. Suing it out of existence begat the distributed Gnutella, the even more distributed eDonkey, and then the peer-to-peer BitTorrent and all the little Torrents. Every year, more material is available online with or without the entertainment industry's sanction. This year's destructive industry proposal, three strikes, will hurt all sorts of people if it becomes law - but it will not stop file-sharing.

Of course, Napster's - and its contemporaries' - mistake was not being big enough. The Google Books case, one of the other big stories of the year, shows that size matters: had Brin and Page, still graduate students with an idea and some venture capital funding, tried scanning in library books in 1999, Google would be where Napster is now. Instead, of course, it's too big to fail.

The AOL/Time-Warner merger, for all that it has failed utterly, was the first warning of what has become a long-running debate about network neutrality. At the time, AOL was the biggest conduit for US consumer Internet access; merging with Time-Warner seemed to put dangerous control over that access in the hands of one of the world's largest owners of content. In the event, the marriage was a disastrous failure for both companies. But AOL, now divorced, may not be done yet: the "walled garden" approach to Internet content is finding new life with sites like Facebook. If, of course, it doesn't get run over by the juggernaut of 2009, Twitter.

If AOL does come back into style, it won't be the only older technology finding new life: the entire history of technology seems to be one of constant rediscovery. What, after all, is 2009's cloud computing but a reworking of what the 1960s called time-sharing?

Certainly, a revival of the walled garden would make life much easier for the deep packet inspectors who would like to snoop intensively on all of us. Phorm, Home Office, it doesn't much matter: computers weren't really fast enough to peek inside data packets in real time much before this year.

One recently resurfaced name from the Net's early history that I didn't flag in the ZDNet piece is Sanford ("Spamford") Wallace, who in the late 1990s was widely blacklisted for sending spam email. By 1999, he had supposedly quit the business. And yet, this year he was found liable for 14,214,753 violations of the CAN-SPAM anti-spam act and told to pay Facebook more than $711 million. How times do not change.
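The scale of that judgment is easier to grasp per message. A quick back-of-the-envelope check (the per-message figure is my derivation, not a number from the court):

```python
# Per-message damages implied by the Facebook v. Wallace judgment.
# Both input figures are as reported in the column.
violations = 14_214_753   # spam messages (violations) found
judgment = 711e6          # dollars, "more than $711 million"

per_message = judgment / violations
print(f"about ${per_message:.2f} per message")  # roughly $50 each
```

Fifty dollars per spam message, fourteen million times over, is how the total balloons past $711 million.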

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to

October 30, 2009

Kill switch

There's an old sort-of joke that goes, "What's the best way to kill the Internet?" The number seven answer, according to Simson Garfinkel, writing for HotWired in 1997: "Buy ten backhoes." Ba-boom.

The US Senate, never folks to avoid improving a joke, came up with a new suggestion: install a kill switch. They published this little gem (as S.773) on April 1. It got a flurry of attention and was then forgotten until the last week or two. (It's interesting to look back at Garfinkel's list of 50 ways to kill the Net and notice that only two are government actions, and neither is installing a "kill switch".)

To be fair, "kill switch" is an emotive phrase for what they have in mind, which is that the president:

may declare a cybersecurity emergency and order the limitation or shutdown of Internet traffic to and from any compromised Federal Government or United States critical infrastructure information system or network

Now, there's a lot of wiggle room in a vague definition like "critical infrastructure system". That could be the Federal government's own servers. Or the electrical grid, the telephone network, the banking system, the water supply, or even, arguably, Google. (It has 64+ percent of US search queries, and if you can't find things the Internet might as well be dead.) But what this particular desire of the Senate's sounds most like is those confused users who think they can catch a biological virus from their computers.

Still, for the media, calling the Senate's idea a "kill switch" is attention-getting political genius. We don't call the president's power to order the planes out of the sky, as happened on 9/11, a "crash switch", but imagine the outcry against it if we did.

Technically, the idea that there's a single off switch waiting to be implemented somewhere is, of course, ridiculous.

The idea is also administrative silliness: Obama, we hope, is kind of busy. The key to retaining sanity when you're busy is to get other people to do all the things they can without your input. We would hope that the people running the various systems powering the federal government's critical infrastructure could make their own, informed decisions - faster than Obama can - about when they need to take down a compromised server.

Despite wishful thinking, John Gilmore's famous aphorism, "The Net interprets censorship as damage and routes around it", doesn't really apply here. For one thing, even a senator knows - probably - that you can't literally shut down the entire Internet from a single switch sitting in the President's briefcase (presumably next to the nuclear attack button). Much of the Internet is, after all, outside the US; much of it is in private ownership. (Perhaps the Third Amendment could be invoked here?)

For another, Gilmore's comment really didn't apply to individual Internet-linked computer networks; Google's various outages this year ought to prove that it's entirely possible for those to be down without affecting the network at large. No, the point was that if you try to censor the Net its people will stop you by putting up mirror servers and passing the censored information around until everyone has a copy. The British Chiropractic Association (quacklash!) and Trafigura are the latest organizations to find out what Gilmore knew in 1993. He also meant, I suppose, that the Internet protocols were designed for resilience and to keep trying by whatever alternate routes are available if data packets don't get through.

Earlier this week another old Net hand, Web inventor Tim Berners-Lee, gave some rather sage advice to the Web 2.0 conference. One key point: do not build your local laws into the global network. That principle would not, unfortunately, stop the US government from shutting off its own servers (to spite its face?), but it does nix the idea of, say, building the network infrastructure to the specification of any one particular group - the MPAA or the UK government, in defiance of the increasingly annoyed EU. In the same talk, Berners-Lee also noted (according to CNET): "I'm worried about anything large coming in to take control, whether it's large companies or government."

Threats like these were what he set up W3C to protect against. People talk with reverence of Berners-Lee's role as inventor, but many fewer understand that the really big effort is the 20 years since the aha! moment of creation, during which Berners-Lee has spent his time and energy nurturing the Web and guiding its development. Without that, it could easily have been strangled by competing interests, both corporate and government. As, of course, it still could be, depending on the outcome of the debates over network neutrality rules.

Dozens of decisions like Berners-Lee's were made in creating the Internet. They have not made it impossible to kill - I'm not sure how many backhoes you'd need now, but I bet it's still a surprisingly finite number - but they have made it a resilient and robust network. A largely democratic medium, in fact, unlike TV and radio, at least so far. The Net was born free; the battles continue over whether it should be in chains.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to

October 23, 2009

The power of Twitter

It was the best of mobs, it was the worst of mobs.

The last couple of weeks have really seen the British side of Twitter flex its 140-character muscles. First, there was the next chapter of the British Chiropractic Association's ongoing legal action against science writer Simon Singh. Then there was the case of Jan Moir, who wrote a more than ordinarily Daily Mailish piece for the Daily Mail about the death of Boyzone's Stephen Gately. And finally, the shocking court injunction that, for the first time in British history, briefly prevented the Guardian from reporting on a Parliamentary question.

I am on record as supporting Singh, and I, too, cheered when, ten days ago, Singh was granted leave to appeal Justice Eady's ruling on the meaning of Singh's use of the word "bogus". Like everyone, I was agog when the BCA's press release called Singh "malicious". I can see the point in filing complaints with the Advertising Standards Authority over chiropractors' persistent claims, unsupported by the evidence, to be able to treat childhood illnesses like colic and ear infections.

What seemed to edge closer to a witch hunt was the gleeful take-up of George Monbiot's piece attacking the "hanging judge", Justice Eady. Disagree with Eady's ruling all you want, but it isn't hard to find libel lawyers who think his ruling was correct under the law. If you don't like his ruling, your correct target is the law. Attacking the judge won't help Singh.

The same is not true of Twitter's take-up of the available clues in the Guardian's original story about the gag to identify the Parliamentary Question concerned and unmask Carter-Ruck, the lawyers who served it, and their client, Trafigura. Fueled by righteous and legitimate anger at the abrogation of a thousand years of democracy, Twitterers had the PQ found and published thousands of times practically within seconds. Yeah!

Of course, this phenomenon (as I'm so fond of saying) is not new. Every online social medium, going all the way back to early text-based conferencing systems like CIX, the WELL, and, of course, Usenet, when it was the Internet's town square (the function in fact that Twitter now occupies) has been able to mount this kind of challenge. Scientology versus the Net was probably the best and earliest example; for me it was the original net.war. The story was at heart pretty simple (and the skirmishes continue, in various translations into newer media, to this day). Scientology has a bunch of super-secrets that only the initiate, who have spent many hours in expensive Scientology training, are allowed to see. Scientology's attempts to keep those secrets off the Net resulted in their being published everywhere. The dust has never completely settled.

Three people can keep a secret if two of them are dead, said Benjamin Franklin. That was before the Internet. Scientology was the first to learn - nearly 15 years ago - that the best way to ensure the maximum publicity for something is to try to suppress it. It should not have been any surprise to the BCA, Trafigura, or Trafigura's lawyers. Had the BCA ignored Singh's article, far fewer people would know now about science's dim view of chiropractic. Trafigura might have hoped that a written PQ would get lost in the vastness that is Hansard; but they probably wouldn't have succeeded in any case.

The Jan Moir case, and the demonstration outside Carter-Ruck's offices are, however, rather different. These are simply not the right targets. As David Allen Green (Jack of Kent) explains, there's no point in blaming the lawyers; show your anger to the client (Trafigura) or to Parliament.

The enraged tweets and Facebook postings about Moir's article helped send a record 25,000-plus complaints to the Press Complaints Commission, whose Web site melted down under the strain. Yes, the piece was badly reasoned and loathsome, but isn't that what the Daily Mail lives for? Tweets and links create hits and discussion. The paper can only benefit. In fact, it's reasonable to suppose that in the Trafigura and Moir cases both the Guardian and the Daily Mail manipulated the Net perfectly to get what they wanted.

But the stupid part about let's-get-Moir is that she does not *matter*. Leave aside emotional reactions, and what you're left with is someone's opinion, however distasteful.

This concerted force would be more usefully turned to opposing the truly dangerous. See, for example, the AIDS denialism on parade by Fraser Nelson at The Spectator. The "come-get-us" tone suggests that they saw the attention New Humanist got for Caspar Melville's mistaken - and quickly corrected - endorsement of the film House of Numbers and said, "Let's get us some of that." There is no more scientific dispute about whether HIV causes AIDS than there is about climate change or evolutionary theory.

If we're going to behave like a mob, let's stick to targets that matter. Jan Moir's column isn't going to kill anybody. AIDS denialism will. So: we'll call Trafigura a win, chiropractic a half-win, and Moir a loser.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to

October 16, 2009

Unsocial media

"No one under 30 will use email," the convenor objected.

There was a bunch of us, a pre-planning committee for an event, and we were talking about which technology we should have the soon-to-be appointed program committee use for discussions. Email! Convenient. Accessible by computer or phone. Easily archived, forwarded, quoted, or copied into any other online medium. Why are we even talking about this?

And that's when he said it.

Not so long ago, if you had email you were one of the cool kids, the avant-garde who saw the future and said it was electronic. Most of us spent years convincing our far-flung friends and relatives to get email so we didn't have to phone or - gasp - write a letter that required an envelope and a stamp. Being told that "email is for old people" is a lot like a 1960s "Never trust anyone over 30" hippie finding out that the psychedelic school bus he bought to live in to support the original 1970 Earth Day is a gas-guzzling danger to the climate and ought to be scrapped.

Well, what, then? (Aside: we used to have tons of magazines called things like Which PC? and What Micro? to help people navigate the complex maze of computer choices. Why is there no magazine called Which Social Medium??)

Facebook? Clunky interface. Not everyone wants to join. Poor threading. No easy way to export, search, or archive discussions. IRC or other live chat? No way to read discussion that took place before you joined the chat. Private blog with comments and RSS? Someone has to set the agenda. Twitter? Everything is public, and if you're not following all the right people the conversation is disjointed and missing links you can't retrieve. IM? Skype? Or a wiki? You get the picture.

This week, the Wall Street Journal claimed that "the reign of email is over" while saying only a couple of sentences later, "We all still use email, of course." Now that the Journal belongs to Rupert Murdoch, does no one check articles for sense?

Yes, we all still use email. It can be archived, searched, stored locally, read on any device, accessed from any location, replied to offline if necessary, and read and written thoughtfully. Reading that email is dead is like reading, in 2000, that because a bunch of companies went bust the Internet "fad" was over. No one then who had anything to do with the Internet believed that in ten years the Internet would be anything but vastly bigger than it was then. So: no one with any sense is going to believe that ten years from now we'll be sending and receiving less email than we are now. What very likely will be smaller, especially if industrial action continues, are the incumbent postal services.

What "No one under 30 uses email" really means is that it's not their medium of first choice. If you're including college students, the reason is obvious: email is the official stuff they get from their parents and universities. Facebook, MySpace, Twitter, and texting are how they talk to their friends. Come the day they join the workforce, they'll be using email every day just like the rest of us - and checking the post and their voicemail every morning, too.

But that still leaves the question: how do you organize anything if no one can agree on what communications technology to use? It's that question that the new Google Wave is trying to answer. It's too soon, really, to tell whether it can succeed. But at a guess, it lacks one of the fundamental things that makes email such a lowest common denominator: offline storage. Yes, I know everything is supposed to be in "the cloud" and even airplanes have wifi. But for anything that's business-critical you want your own archive where you can access it when the network fails; it's the same principle as backing up your data.

Reviews vary in their take on Wave. LifeHacker sees it as a collaborative tool. ZDNet UK editor Rupert Goodwins briefly called it Usenet 2.0, then retracted that and explained it instead using the phrase "unified comms".

That, really, is the key. Ideally, I shouldn't have to care whether you - or my fellow committee members - prefer to read email, participate in phone calls (via speech-to-text, text-to-speech synthesizers), discuss via Usenet, Skype, IRC, IM, Twitter, Web forums, blogs, or Facebook pages. Ideally, the medium you choose should be automatically translated into the medium I choose. A Babel medium. The odds that this will happen in an age when what companies most want is to glue you to their sites permanently so they can serve you advertising are very small.

Which brings us back to email. Invented in an era when the Internet was commercial-free. Built on open standards, so that anyone can send and receive it using any reader they like. Used, in fact, to alert users to updates they want to know about to their accounts on Facebook/IRC/Skype/Twitter/Web forums. Yes, it's overrun with corporate CYA memos and spam. But it's still the medium of record - and it isn't going anywhere. Whereas: those 20-somethings will turn 30 one day soon.
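That openness is visible in the message format itself: an email is just structured plain text that any client can produce or parse. A minimal sketch using only Python's standard library (the addresses are invented placeholders):

```python
# An email message is plain structured text (the RFC 5322 format):
# any reader on any device can display it, archive it, or search it.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "organizer@example.com"   # placeholder address
msg["To"] = "committee@example.com"     # placeholder address
msg["Subject"] = "Committee discussion"
msg.set_content("Plain text that any reader, on any device, can display.")

# The wire form is just text: storable locally, quotable, forwardable.
raw = msg.as_string()
print("Subject: Committee discussion" in raw)  # True
```

No proprietary client, server, or account is needed to make sense of that text, which is exactly why email survives every "email is dead" headline.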

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to (but please turn off HTML).

September 25, 2009

Dead technology

The longevity of today's digital media is a common concern. Less than 20 years after the creation of a digital Domesday Book, a batch of researchers had to work to make it readable again, whereas the 900-year-old parchment original is still readable. Anyone maintaining an archive of digital content knows that all that material has to be kept up to date and transferred to new formats as machines and players change.

One friend of mine, a professional sound engineer, begged me to keep my magnetic media when I told him I was transferring them to digital formats. You can, he argued, always physically examine a magnetic tape and come up with some kind of reader for it; with digital media, you're completely stuck if you don't know how the data was organized.

Where was he in 1984, when I bought my sewing machine, a Singer Futura 2000? That machine was, as it turns out, one of the earliest electronic models on the market. I had no idea of that at the time; the particular feature I was looking for (the ability to lock the machine in reverse, so I could have both hands free when reverse-stitching) was available on very few models. This was the best of those few. No one said to me, "And it's electronic!" They said stuff like, "It has all these stitches!" Most of which, to be sure, hardly anyone is likely ever to use other than the one-step buttonhole and a zigzag stitch or two.

Cut to 2009, when one day I turn the machine on and discover the motor works but the machine won't select a stitch or drive that motor. "Probably the circuit board," says the first repair person I talk to. Words of doom.

The problem with circuit boards is - as everyone knows who's had a modern electronic machine fail - that a) they're expensive to replace; b) they're hard to find; c) they're even harder to get repaired. Still, people don't buy sewing machines to use for a year or five; they buy them for a lifetime. In fact, before cars and computers, washing machines and refrigerators, sewing machines were the first domestic machines; they were an expensive purchase, and they were expected to last.

You can repair - and buy parts for - a 150-year-old treadle Singer sewing machine. People still use them, particularly for heavy sewing jobs like leather, many-layered denim, or neoprene. You can also repair the US Singer machine my parents gave me as a present in the mid 1970s. That machine is what they now call "mechanical", by which they mean electric but not electronic. What you can't do is repair a machine from the 1980s: Singer stopped making the circuit boards. If you're very, very lucky, you might be able to find someone who can repair one.

But even that is difficult. One such skilled repairman told me that even though Singer itself had recommended him to me, he was unable to get the company to give him the circuit diagrams so he could use his skill for the benefit of both his own customers (and therefore himself) and Singer itself. The concept of open-sourcing has not landed in the sewing machine market; sewing machines are as closed as modern cars with what seems like much less justification. (At least with a car you can argue that a ham-fisted circuit board repairman could cost you your life; hard to make that argument about a sewing machine.)

Of course, from Singer's point of view things are far worse than irreplaceable circuit boards that send a few resentful customers into the welcoming arms of Husqvarna Viking or Bernina. Singer's problem is that the market for sewing machines has declined dramatically. In 1902, the owner of Eastleigh Sewing Centre told me, Singer was producing 5 million machines a year. Now, the entire industry of many more manufacturers sells about 500,000. Today's 30- and 40-year-olds never learned to use a sewing machine in school, nor were they taught by their mothers. If they now learn to use one, they're more likely to use a computerized machine (a level up from just "electronic"). What they learn is graphics: the fanciest modern machines can take a GIF or JPG and embroider it on a section of fabric held taut by a hoop.

You can't blame them. Store-bought, mass-market clothing, even when it's made out of former "luxury" fabrics like silk, is actually cheaper than anything you can make at home these days. Only a few things make sense for anyone but the most obsessive to sew at home any more: 1) textile-based craft items like stuffed dolls and quilts (and embroidered images); 2) items that would be prohibitively expensive to buy or impossible to find, like stage and re-enactment costumes; 3) items you want to be one-of-a-kind and personal, such as, perhaps, a wedding dress; 4) items that are straightforward to sew but expensive to buy, like curtains and other soft furnishings. The range of machines available now reflects that, so that you're stuck with either buying a beginner's machine or one intended for experts; the middle ground (like my Futura) has vanished. No one has the time to sew garments any more; no one, seemingly, even repairs torn clothing any more.

But damn, I hate throwing stuff out that's mostly functional.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to

July 24, 2009

Security for the rest of us

Many governments, faced with the question of how to improve national security, would do the obvious thing: round up the usual suspects. These would be, of course, the experts - that is, the security services and law enforcement. This exercise would be a lot like asking the record companies and film studios to advise on how to improve copyright: what you'd get is more of the same.

This is why it was so interesting to discover that the US National Academies of Science was convening a workshop to consult on what research topics to consider funding, and began by appointing a committee that included privacy advocates and usability experts, folks like Microsoft researcher Butler Lampson, Susan Landau, co-author of books on privacy and wiretapping, and Donald Norman, author of the classic book The Design of Everyday Things. Choosing these people suggests that we might be approaching a watershed like that of the late 1990s, when the UK and the US governments were both forced to understand that encryption was not just for the military any more. The peace-time uses of cryptography to secure Internet transactions and protect mobile phone calls from casual eavesdropping are much broader than crypto's war-time use to secure military communications.

Similarly, security is now everyone's problem, both individually and collectively. The vulnerability of each individual computer is a negative network externality, as NYU economist Nicholas Economides pointed out. But, as many asked, how do you get people to understand remote risks? How do you make the case for added inconvenience? Each company we deal with assumes that we can afford the time to "just click to unsubscribe" or remember one more password, without really understanding the growing aggregate burden on us. Norman commented that door locks are a trade-off, too: we accept a little bit of inconvenience in return for improved security. But locks don't scale; they're acceptable as long as we only have to manage a small number of them.

In his 2006 book, Revolutionary Wealth, Alvin Toffler comments that most of us, without realizing it, have a hidden third, increasingly onerous job: "prosumer". Companies, he explained, are increasingly saving money by having us do their work for them. We retrieve and print out our own bills, burn our own CDs, provide unpaid technical support for ourselves and our families. One of Lorrie Cranor's students did the math to calculate the cost in lost time and opportunities if everyone in the US read, once a year, the privacy policy of each Web site they visit at least once a month. Most of these policies require college-level reading skills; figure 244 hours per year per person, worth $3,544 each, or $781 billion nationally. Weren't computers supposed to free us of that kind of drudgery? As everything moves online, aren't we looking at a full-time job just managing our personal security?
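Those numbers are easy to sanity-check. A back-of-the-envelope sketch (mine, not from the study itself; the hourly value and the population figure are my assumptions, back-derived so the column's totals hang together) shows how 244 hours a year per person scales up to hundreds of billions nationally:

```python
# Sanity-check of the privacy-policy reading-cost estimate quoted above.
# The 244 hours/year and $3,544/person figures come from the column;
# the population figure is my assumption (roughly the US online adult
# population the study would have covered).

HOURS_PER_PERSON = 244       # hours/year spent reading privacy policies
VALUE_PER_PERSON = 3_544     # dollars/year per person, per the column

# Implied value placed on an hour of reading time:
hourly_value = VALUE_PER_PERSON / HOURS_PER_PERSON   # about $14.50/hour

# Assumed population (my assumption):
POPULATION = 220e6

# Aggregate national cost:
national_cost = VALUE_PER_PERSON * POPULATION        # close to $781 billion

print(f"implied hourly value: ${hourly_value:.2f}")
print(f"national cost: ${national_cost / 1e9:.0f} billion")
```

The point of the arithmetic is how fast a "minor" per-person burden compounds: a modest hourly value times a modest daily chore times a whole population is a number the size of a national industry.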

That, in fact, is one characteristic that many implementations of security share with welfare offices, and it is becoming pervasive: an utter lack of respect for the least renewable resource, people's time. There's a simple reason for that: the users of most security systems are deemed to be the people who impose them, not the people - us - who have to run the gamut.

There might be a useful comparison to information overload, a topic we used to hear a lot about ten years back. When I wrote about that for ComputerActive in 1999, I discovered that everyone I knew had a particular strategy for coping with "technostress" (the editor's term). One dealt with it by never seeking out information and never phoning anyone. His sister refused to have an answering machine. One simply went to bed every day at 9pm to escape. Some refused to use mobile phones, others to have computers at home.

But back then, you could make that choice. How much longer will we be able to draw boundaries around ourselves by, for example, refusing to use online banking, file tax returns online, or participate in social networks? How much security will we be able to opt out of in future? How much do security issues add to technostress?

We've been wandering in this particular wilderness a long time. Angela Sasse, whose 1999 paper Users Are Not the Enemy talked about the problems with passwords at British Telecom, said frankly, "I'm very frustrated, because I feel nothing has changed. Users still feel security is just an obstacle there to annoy them."

In practice, the workshop was like the TV game Jeopardy: the point was to generate research questions that will go into a report, which will be reviewed and redrafted before its eventual release. Hopefully, eventually, it will all lead to a series of requests for proposals and some really good research. It is a glimmer of hope.

Unless, that is, the gloominess of the opening presentations wins out. If you listened to Lampson, Cranor, and Economides, you got the distinct impression that the best thing that could happen for security is that we rip out the Internet (built to be open, not secure), trash all the computers (all of whose operating systems were designed in the pre-Internet era), and start over from scratch. Or, like the old joke about the driver who's lost and asking for directions: "Well, I wouldn't start from here".

So, here's my question: how can we make security scale so that the burden stays manageable?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to

July 17, 2009

Human factors

For the last several weeks I've been mulling over the phrase security fatigue. It started with a paper (PDF) co-authored by Angela Sasse, in which she examined the burden that complying with security policies imposes upon corporate employees. Her suggestion: that companies think in terms of a "compliance budget" that, like any other budget (money, space on a newspaper page), has to be managed and used carefully. And, she said, security burdens weigh differently on different people and at different times, and a compliance budget needs to comprehend that, too.

Some examples (mine, not hers). Logging onto six different machines with six different user IDs and passwords (each of which has to be changed once a month) is annoying but probably tolerable if you do it once every morning when you get to work and once in the afternoon when you get back from lunch. But if the machines all log you out every time you take your hands off the keyboard for two minutes, by the end of the day they will be lucky to survive your baseball bat. Similarly, while airport security is never fun, the burden of it is a lot less to a passenger traveling solo after a good night's sleep who reaches the checkpoints when they're empty than it is to the single parent with three bored and overtired kids under ten who arrives at the checkpoint after an overnight flight and has to wait in line for an hour. Context also matters: a couple of weeks ago I turned down a ticket to Court 1 at Wimbledon on men's semi-finals day because I couldn't face the effort it would take to comply with their security rules and screening. I grudgingly accept airport security as the trade-off for getting somewhere, but to go through the same thing for a supposedly fun day out?

It's relatively easy to see how the compliance budget concept could be worked out in practice in a controlled environment like a company. It's very difficult to see how it can be worked out for the public at large, not least because none of the many companies each of us deals with sees it as beneficial to cooperate with the others. You can't, for example, say to your online broker that you just can't cope with making another support phone call, can't they find some other way to unlock your account? Or tell Facebook that 61 privacy settings is too many because you're a member of six other social networks and Life is Too Short to spend a whole day configuring them all.

Bruce Schneier recently highlighted that last-referenced paper, from Joseph Bonneau and Soeren Preibusch at Cambridge's computer lab, alongside another by Leslie John, Alessandro Acquisti, and George Loewenstein from Carnegie-Mellon, to note a counterintuitive discovery: the more explicit you make privacy concerns the less people will tell you. "Privacy salience" (as Schneier calls it) makes people more cautious.

In a way, this is a good thing, and goes to show what privacy advocates have been saying all along: people do care about privacy if you give them the chance. But if you're the owners of Facebook, a frequent flyer program, or Google, it means that it is not in your business interest to spell out too clearly to users what they should be concerned about. All of these businesses rely on collecting more and more data about more and more people. Fortunately for them, as we know from research conducted by Lorrie Cranor (also at Carnegie-Mellon), people hate reading privacy policies. I don't think this is because people aren't interested in their privacy. I think this goes back to what Sasse was saying: it's security fatigue. For most people, security and privacy concerns are just barriers blocking the thing they came to do.

But choice is a good thing, right? Doesn't everyone want control? Not always. Go back a few years and you may remember some widely publicized research that pointed out that too many choices stall decision-making and make people feel...tired. A multiplicity of choices adds weight and complexity to the decision you're making: shouldn't you investigate all the choices, particularly if you're talking about which of 56 mutual funds to add to your 401(k)?

It seems obvious, therefore, that the more complex the privacy controls offered by social networks and other services the less likely people are to use them: too many choices, too little time, too much security fatigue. In minor cases in real life, we handle this by making a decision once and sticking to it as a kind of rule until we're forced to change: which brand of toothpaste, what time to leave for work, never buy any piece of clothing that doesn't have pockets. In areas where rules don't work, the best strategy is usually to constrain the choices until what you have left is a reasonable number to investigate and work with. Ecommerce sites notoriously get this backwards: they force you to explore group by group instead of allowing you to exclude choices you'll never use.

How do we implement security and privacy so that they're usable? This is one of the great unsolved, under-researched questions in security. I'm hoping to know more next week.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to

February 20, 2009

Control freaks

It seems like every year or two some currently popular company revises its Terms of Service in some stupid way that gets all its users mad, and then either 1) backs down or 2) watches a stampede for the exits. This year it's Facebook.

In announcing the reversal, founder Mark Zuckerberg writes that given its 175 million users, if Facebook were a country it would be the sixth most populous country in the world, and called the TOS a "governing document". While those numbers must sound nice on the business plan - wow! Facebook has more people than Pakistan! - in reality Facebook doesn't have 175 million users in the sense that Pakistan has 172 million inhabitants. I'm sure that Facebook, like every other Internet site or service, has a large percentage of accounts that are opened, used once or twice, and left for dead. Countries must plan governance and health care for all their residents; no one's a lapsed user of the country they live in.

Actually, the really interesting thing about 175 million people: that's how many live outside the countries they were born in. Facebook more closely matches the 3 percent of the world's population who are migrants.

It is nice that Zuckerberg is now trying to think of the TOS as collaborative, but the other significant difference is of course that Facebook is owned by a private company that is straining to find a business model before it stops being flavor of the month. (Which, given Twitter's explosive growth, could be any time now.) The Bill of Rights in progress has some good points (that sound very like the WELL's "You own your own words", written back in the 1980s). The WELL has stuck to its guns for 25 years, and any user can delete ("scribble") any posting at any time, but the WELL has something Facebook doesn't: subscription income. Until we know what Facebook's business model is - until *Facebook* knows what Facebook's business model is - it's impossible to put much faith in the durability of any TOS the company creates.

At the Guardian, Charles Arthur argues that Facebook should just offer a loyalty card, because no one reads the fine print on those. But that's the difference social media makes: grocery shopping isn't designed for sharing information. Facebook and other Net companies get into this kind of trouble precisely because they *are* social media, and it only takes a few obsessives to spread the word. If you do read the fine print of the TOS on other sites, you'll be even more suspicious.

But it isn't safe to assume - as many people seem to have - that Facebook is just making a land grab. Its missing-or-unknown business model is what makes us so suspicious. But the problem Zuckerberg is grappling with is a real one: when someone wants to delete their account and leave a social network, where is the boundary of their online self?

The WELL's history, however, does suggest that the issues Zuckerberg raises are real. The WELL's interface always allowed hosts and users to scribble postings; the function, according to Howard Rheingold in The Virtual Community, and in my own experience, was and is very rarely used. But scribble only deletes one posting at a time. In 1990, a departing staffer wrote and deployed a mass scribble tool to seek out and destroy every posting he had ever made. Some weeks later, more famously, a long-time, prolific WELL user named Blair Newman turned it loose on his own work and then, shortly afterwards, committed suicide.

Any suicide leaves a hole in the lives of the people he knows, but on the WELL the holes are literal. A scribbled posting doesn't just disappear. Instead, the shell of the posting remains, with the message "" in place of the former content. Also, after a message is scribbled, even long-dead topics pop up when you read a conference, so a mass scribble hits you in the face repeatedly. It doesn't happen often; the last I remember was about 10 years ago, when a newly appointed CEO of a public company decided to ensure that no trace remained of anything inappropriate he might ever have posted.

Of course, scribbling your own message doesn't edit other people's. While direct quoting is not common on the WELL - after all, the original posting is (usually) still right there, unlike email or Usenet - people refer to and comment on each other's postings all the time. So what's left is a weird echo, as if all copies of the Bible suddenly winked out of existence leaving only the concordances behind.

It is this problem that Zuckerberg is finding difficult. The broad outline so far posted seems right: you can delete the material you've posted, but messages you've sent to others remain in their inboxes. There are still details: what about comments you post to others' status updates or on their Walls? What about tags identifying you that other people have put in their photographs?

Of course, Zuckerberg's real problem is getting people to want to stay. Companies like to achieve this by locking them in, but ironically, just like in real life, reassuring people that they can leave is the better way.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to (but please turn off HTML).

October 31, 2008

Machine dreams

Just how smart are humans anyway? Last week's Singularity Summit spent a lot of time talking about the exact point at which computer processing power would match that of the human brain, but that's only the first step. There's the software to make the hardware do stuff, and then there's the whole question of consciousness. At that point, you've strayed from computer science into philosophy and you might as well be arguing about angels on the heads of pins. Of course everyone hopes they'll be alive to see these questions settled, but in the meantime all we have is speculation and the snide observation that it's typical that a roomful of smart people would think that all problems can be solved by more intelligence.

So I've been trying to come up with benchmarks for what constitutes artificial intelligence, and the first thing I think is that the Turing test is probably too limited. In it, a judge has to determine which of two typing correspondents is the machine and which the human. That's fine as far as it goes, but one of the consistent threads that run through all this is a noticeable disdain for human bodies.

While our brain power is largely centralized, it still seems to me likely that both its grey matter and the rest of our bodies are an important part of the substrate. How we move through space, how our bodies react and feed our brains is part and parcel of how our minds work, however much we may wish to transcend biology. The fact that we can watch films of bonobos and chimpanzees and recognise our own behaviour in their interactions should show us that we're a lot closer to most animal species than we think - and a lot further from most machines.

For that sort of reason, the Turing test seems limited. A computer passes that test if, when paired against a human, the judge can't tell which is which. At the moment, it seems clear the winner is going to be spambots - some spam messages are already devised cleverly enough to fool even Net-savvy individuals into opening them sometimes. But they're hardly smart - they're just programmed that way. And a lot depends on the capability of the judge - some people even find Eliza convincing, though it's incredibly easy to send it off-course into responses that are clearly those of a machine. Find a judge who wants to believe and you're into the sort of game that self-styled psychics like to play.

Nor can we judge a superhuman intelligence by the intractable problems it solves. One of the more evangelical speakers last weekend talked about being able to instantly create tall buildings via nanotechnology. (I was, I'm afraid, irresistibly reminded of that Bugs Bunny cartoon where Marvin pours water on beans to produce instant Martians to get rid of Bugs.) This is clearly just silly: you're talking about building a gigantic building out of molecules. I don't care how many billions of nanobots you have, the sheer scale means it's going to take time. And, as Kevin Kelly has written, no matter how smart a machine is, figuring out how to cure cancer or roll back aging won't be immediate either, because you can't really speed up the necessary experiments. Biology takes time.

Instead, one indicator might be variability of response; that is, that feeding several machines the same input - or giving the same machine the same input at different times - produces different, equally valid interpretations. If, for example, you give a 10th grade class Jane Austen's Pride and Prejudice to read and report on, different students might with equal legitimacy describe it as a historical account of the economic forces affecting 18th century women, a love story, the template for romantic comedy, or even the story of the plain sister in a large family whose talents were consistently overlooked until her sisters got married.

In The Singularity Is Near, Ray Kurzweil laments that each human must read a text separately and that knowledge can't be quickly transferred from one to another the way a speech recognition program can be loaded into a new machine in seconds - but that's the point. Our strength is that our intelligences are all different, and we aren't empty vessels into which information is poured but stews in which new information causes varying chemical reactions.

You might argue that search engines can already do this, in that you don't get the same list of hits if you type the same keywords into Google versus Yahoo!, and if you come back tomorrow you may get a different response from either of them. That's true. It isn't the kind of input I had in mind, but fair enough.

The other benchmark that's occurred to me so far is that machines will be getting really smart when they get bored.

ZDNet UK editor Rupert Goodwins has a variant on this from when he worked at Sinclair Research. "If it went out one evening, drank too much, said the next morning, 'never again' and repeated the exercise immediately. Truly human." But see? There again: a definition of human intelligence that requires a body.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to (but please turn off HTML).

October 17, 2008

Mind the gap

"Everyone in my office is either 50 or 25," said my neighbor, who is clearly not 25. "We call them 'knowledge-free'. I blame the Internet."

Well, the Internet is a handy thing to blame; it's there and today's generation of 20-somethings grew up with the Web - if you're 25 today you were 12 when Netscape went public. My parents, who were born in 1906 and 1913, would have blamed comic books; my older siblings, born between 1938 and 1943, might blame TV.

What are they "knowledge-free" about? The way she tells it, pretty much everything. They have grown up in a world where indoor temperature is the same year-round. Where bananas and peaches are native, year-round fruit that grows on supermarket shelves. Where World War II might as well be World of Warcraft II. Where dryers know when the clothes are dry, and anything worth seeing on TV will show up as a handily edited clip on YouTube. And where probably the biggest association with books is waiting for JK Rowling's next installment of Harry Potter.

Of course, every 50-something generation is always convinced that the day's 20-somethings are inadequate; it's a way of denying you were ever that empty-headed yourself. My generation - today's 50-somethings - and the decade or so ahead of us absolutely terrified our parents: let those dope-smoking, draft-dodging, "Never trust anyone over 30", free-lovers run things?

It's also true that she seems to know a different class of 20-somethings than I do; my 20-plus friends are all smart, funny, thoughtful, well educated, and interested in everything, even if they are curiously lacking in detailed knowledge of early 1970s movies. They read history books. They study science. They worry about the economy. They think about their carbon production and how much fossil fuel they consume. Whereas the 20-somethings in her office write and think about climate change and energy use apparently without ever connecting those global topics with the fact that they personally expect to wear the same clothes year-round in an indoor environment controlled to a constant temperature.

Just as computers helped facilitate but didn't cause the current financial crisis, the Internet is not the problem - if anything it ought to be the antidote. What causes this kind of disconnect is simply what happens when you grow up in a certain way: you think the conditions you grew up with are normal. When you're 25, 50 years is an impossibly long time to think about. When you're 55, centuries become graspable notions. All of which has something to do with the way the current economic crisis has developed.

If you compare - as the Washington Post and the Financial Times have - the current mess to the Great Depression, there's a certain logic to thinking that 80 years is just about exactly the right length of time for a given culture to recreate its past mistakes. That's four generations. The first lived through the original crisis; the second heard their parents talk about it; the third heard their grandparents talk about it; the fourth has no memory and hubris sets in.

In this case, part of the hubris that set in was the idea that the Glass-Steagall Act, enacted in 1933 to control the banks after the Great Depression, was no longer needed. The banking industry had of course been trying for years to get rid of the separation of deposit-taking banks and investment banks, and it finally succeeded in 1999. Clinton had little choice but to sign the repeal into law; the margin by which it passed both Houses was too large to veto. There is no point in blaming only him, as Republicans trying to get McCain into office seem bent on doing.

That year was of course the year of maximum hubris anyway. The Internet bubble was at its height and so was the level of denial in the financial markets that it was a bubble. You can go on to blame the housing bubble brought about by easier access to mortgage money, cheap credit, credit default swaps, and all the other hideous weapons of financial mass destruction, but for me the repeal of Glass-Steagall is where it started. It was a clear sign that the foxes had won the chance to wreck the henhouse again. And fox - or human - or scorpion - nature being what it is, it was quite right to think that they would take it. As Benjamin Graham observed many years ago in The Intelligent Investor, bright young men have offered to work miracles - usually with other people's money - since time immemorial.

At that, maybe we're lucky if the 20-somethings in my neighbor's office are unconscious. Imagine if they were conscious. They would look at today's 50- and 60-somethings and say: you wrecked the environment, you will leave me no energy sources, social security, or health insurance in my old age, you have bankrupted the economy so I will never be able to own a house, and you got to have sex without worrying about dying from it. They'd be like the baby boomers were in the 1960s: mad as hell.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to (but please turn off HTML).

August 22, 2008

Intimate exchanges

A couple of years ago I did an interview with Ed Iacobucci, CEO and founder of Dayjet, a new kind of airline. Dayjet has no published timetable; instead, prospective passengers (mostly company CEOs and other business types with little time to spare for driving between ill-served smaller cities in the American south) specify their departure point, their destination, and a window of time for Dayjet to get them there. Dayjet responds with a price based on the number of full seats in the plane. The airline, said Iacobucci, is software expressed as a service. And - and this is the key point here - constructing an intellectual property business in such a way meant he didn't have to worry about copying.

Cut to: the current battles over P2P. Danny O'Brien observed recently that with terabyte disk drives becoming luggable and the back catalogue of recorded music being "only" 4TB, in the medium term the big threat to the music companies isn't P2P but file-swapping between directly connected hard drives: no Internet needed, no detection possible.

Cut to: the amazing career of Alan Ayckbourn and the Stephen Joseph Theatre in Scarborough, North Yorkshire.

Ayckbourn is often thought of as Britain's answer to Neil Simon, but the comparison is unfair to Ayckbourn. Simon is of course a highly skilled playwright and jokesmith, but his characters are in nothing like the despair that Ayckbourn's are, and he has nothing like Ayckbourn's stagecraft. Partly, that may be because Ayckbourn has his own theatre to play with. Since 1959, when his first play was produced, Ayckbourn has written 71 plays (and still counting), and just about all of them were guaranteed production in advance at the Stephen Joseph Theatre, where Ayckbourn has been artistic director since 1974.

Many of them play with space and time. In How the Other Half Loves two dinners share stage space and two characters though they occur on different nights in different living rooms. In Communicating Doors characters shift through the same hotel room over four decades. In Taking Steps three stories of a house are squashed flat into a single stage set. He also has several sets of complementary plays, such as The Norman Conquests, a trilogy which sets each of the plays - the story of a weekend house party - in a different room.

It was in 1985, during a period of obsession with the play cycle Intimate Exchanges, that I decided that at some point I really had to see Alan Ayckbourn's work in its native habitat. Partly, this was due to the marvellous skill with which Lavinia Bertram and Robin Herford shifted among four roles each. Intimate Exchanges is scored for just two actors, and the plays' conceit is that they chronicle, via a series of two-person scenes, 16 variant consequences of a series of escalating choices. Bertram and Herford were the original cast, imported into London from Scarborough. So my thought was: if this is the kind of acting they have up there, one must go. (As bizarre as it seems to travel from London to anywhere else to go to the theater.)

This year, reading that Ayckbourn is about to retire as artistic director, it seemed like now or never. It's worth the trip: although many of Ayckbourn's plays work perfectly well on a traditional proscenium stage and he's had a lot of success in London's West End and on Broadway (and in fact around the world; he's the most performed playwright who isn't Shakespeare), the theatre-in-the-round adds intimacy. That's particularly true in this summer's trio of ghost plays: Haunting Julia (1994, a story of the aftermath of a suicide), Snake in the Grass (2002, a story of inheritance and blackmail), and Life and Beth (2008, a story of survival and widowhood). In all these stories, the closer you can get to the characters the better, and compared to the proscenium stage the SJT's round theatre is the equivalent of the cinematic close-up.

That intimacy may be a partial explanation of why so little of Ayckbourn's work has been adapted to movies - and when it has, the results have been so disappointing. Generally, they're either shallow caricatures (such as A Chorus of Disapproval) or wistful and humorless rather than robust and funny (like Alain Resnais' attempts, including Intimate Exchanges). There have been some good TV productions (The Norman Conquests, Season's Greetings (set in a hall surrounded by bits of a living room and dining room)), but these are mysteriously not available commercially.

That being the case, it's hard to understand the severity of the official Ayckbourn Web site's warning about bootleg copies. Given that they know the demand is there, and given the amount those 71 plays are making in royalties and licensing fees, why not buy up the rights to those productions and release them, or begin a project of recording current SJT productions and revivals with a view to commercial release? The SJT shop sells scripts. Why not DVDs?

Asking that risks missing the essential nature of theater, which, along with storytelling, is probably one of the earliest forms of intellectual property expressed as a service. A film is infinitely copiable; every live performance is different, if only subtly, because audience feedback varies. I still wish they'd do it, though.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to (but please turn off HTML).

June 6, 2008

The Digital Revolution turns 15

"CIX will change your life," someone said to me in 1991 when I got a commission to review a bunch of online systems and got my first modem. At the time, I was spending most or all of every day sitting alone in my house putting words in a row for money.

The Net, Louis Rossetto predicted in 1993, when he founded Wired, would change everybody's lives. He compared it to a Bengali typhoon. And that was modest compared to others of the day, who compared it favorably to the discovery of fire.

Today, I spend most or all of every day sitting alone in my house putting words in a row for money.

But yes: my profession is under threat, on the one hand from shrinkage of the revenues necessary to support newspapers and magazines - which is indeed partly fuelled by competition from the Internet - and on the other hand from megacorporate publishers who routinely demand ownership of the copyrights freelances used to resell for additional income - a practice that the Internet was likely to largely kill off anyway. Few have ever gotten rich from journalism, but freelance rates haven't budged in years; staff journalists get very modest raises, and in return are required to work more hours a week and produce more words.

That embarrassingly solipsistic view aside, more broadly, we're seeing the Internet begin to reshape the entertainment, telecommunications, retail, and software industries. We're seeing it provide new ways for people to organize politically and challenge the control of information. And we're seeing it and natural laziness kill off our history: writers and students alike rely on online resources at the expense of offline archives.

Wired was, of course, founded to chronicle the grandly capitalized Digital Revolution, and this month, 15 years on, Rossetto looked back to assess the magazine's successes and failures.

Rossetto listed three failures and three successes. The three failures: history has not ended; Old Media are not dead (yet); and governments and politics still thrive. The three successful predictions: the long boom; the One Machine, a man/machine planetary consciousness; that technology would change the way we relate to each other and cause us to reinvent social institutions.

I had expected to see the long boom in the list of failures, and not just because it was so widely laughed at when it was published. Rossetto is right that the original 1997 feature was not invalidated by the 2000 stock market bust. It wasn't about that (although one couldn't resist snickering about it as the NASDAQ tanked). Instead, what the piece predicted was a global economic boom covering the period 1980 to 2020.

Wrote Peter Schwartz and Peter Leyden, "We are riding the early waves of a 25-year run of a greatly expanding economy that will do much to solve seemingly intractable problems like poverty and to ease tensions throughout the world. And we'll do it without blowing the lid off the environment."

Rossetto, assessing it now, says, "There's a lot of noise in the media about how the world is going to hell. Remember, the truth is out there, and it's not necessarily what the politicians, priests, or pundits are telling you."

I think: 1) the time to assess the accuracy of an article outlining the future to 2020 is probably around 2050; 2) the writers themselves called it a scenario that might guide people through traumatic upheavals to a genuinely better world rather than a prediction; 3) that nonetheless, it's clear that the US economy, which they saw as leading the way, has suffered badly in the 2000s with the spiralling deficit and rising consumer debt; 4) that media alarm about the environment, consumer debt, government deficits, and poverty is hardly a conspiracy to tell us lies; and 5) that they signally underestimated the extent to which existing institutions would adapt to cyberspace (the underlying flaw in Rossetto's assumption that governments would be disbanding by now).

For example, while timing technologies is about as futile as timing the stock market, it's worth noting that they expected electronic cash to gain acceptance in 1998 and to be the key technology to enable electronic commerce, which they guessed would hit $10 billion by 2000. Last year it was close to $200 billion. Writing around the same time, I predicted (here) that ecommerce would plateau at about 10 percent of retail; I assumed this was wrong, but it seems that it hasn't even reached 4 percent yet, though it's obvious that, particularly in the copyright industries, the influence of online commerce is punching well above its statistical weight.

No one ever writes modestly about the future. What sells - and gets people talking - are extravagant predictions, whether optimistic or pessimistic. Fifteen years is a tiny portion even of human history, itself a blip on the planet. Tom Standage, writing in his 1998 book The Victorian Internet, noted that the telegraph was a far more radically profound change for the society of its day than the Internet is for ours. A century from now, the Internet may be just as obsolete. Rossetto, like the rest of us, will have to wait until he's dead to find out if his ideas have lasting value.

April 25, 2008

The shape of the mushroom

The digital universe is big. Really big. You just can't believe how mind-bogglingly big... Oh, never mind.

There's nothing like a good the-sky-is-falling scenario to wake up a one-day conference, and today at the LSE was no exception.

"It's a catastrophe waiting to happen," said Leslie Willcocks, the head of the Information Systems and Innovation Group at the LSE, putting up a chart. What it showed: the typical data center's use of energy and processing power. Only 1.5 percent of the total energy usage powers processing; 80 percent of CPU is idle. Well. They weren't built to be efficient. They were built to be reliable.

But Willcocks wasn't gearing up to save the planet. Instead, his point was that all this wastage reflects a fetish for connectedness: "The assumption is you have to have reliable information on tap at all times." (Cue Humphrey Appleby: "I need to know everything. How else can I judge whether I need to know it?") Technology design, he argued, is being driven by the explosion in data. The US's 28 million servers today represent 2.5 percent of the US's electricity needs; in 2010 that will be 43 million. This massively inefficient use of energy is trying to fix what he called a far bigger problem: the "data explosion". And, concurrently, the inability to manage same.

In 2007, John Gantz, chief research officer at IDC, said that, for the first time in human history, the amount of information being created was larger than the amount of storage available. That sounds alarming at first, like the moment you contemplate the mortgage you're thinking of taking out to buy a house and realize that it is larger than the sum of all your financial assets. At second glance, the situation isn't quite so bad.

For one thing, a lot of information is transient. We aren't required to keep a copy of every TV signal - otherwise, imagine the number of copies we'd add every Christmas just for rebroadcasts of It's a Wonderful Life. But once you've added in the impact of regulatory compliance and legal requirements, along with good IT practice, consider the digital footprint of a single email message with a 1MB attachment. By the time it's done being backed up, sent to four recipients, backed up again, and sent to tape at both the sending and receiving organizations, it's consuming over 51.5MB of storage.
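The multiplication is easy to sketch. This is a back-of-the-envelope model, not IDC's actual ledger: the function name and the copy counts below are illustrative assumptions.

```python
# Back-of-the-envelope model of how one message's footprint multiplies.
# The copy counts are illustrative assumptions, not IDC's actual ledger.

def email_footprint_mb(attachment_mb, recipients, copies_per_mailbox):
    """Total storage one message consumes across all mailboxes.

    copies_per_mailbox counts the live copy plus the disk backups and
    tape archives retained for each mailbox, the sender's included.
    """
    mailboxes = 1 + recipients  # the sender plus each recipient
    return attachment_mb * mailboxes * copies_per_mailbox

# A 1MB attachment to four recipients with three retained copies per
# mailbox already costs 15MB - before forwards, resends, or mailing-list
# expansions push it toward IDC's 51.5MB figure.
print(email_footprint_mb(1, 4, 3))  # → 15
```

The point isn't the exact total; it's that the footprint scales with every extra recipient and every extra retained copy at once.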

And things are only going to get exponentially worse between now and 2011. The digital universe will grow by an order of magnitude in five years, from about 177EB in 2006 to 1,773EB in 2011. More than 90 percent of it is unstructured information. Even more alarming for businesses is that while individual consumers account for about 70 percent of the information created, enterprises have responsibility or liability for about 85 percent of it. Think Google buying YouTube and taking on its copyright liability, or NASA's problem with its astronauts' email.
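For scale, a tenfold rise over five years works out to a compound annual growth rate of roughly 58 percent; this quick sanity check of the order-of-magnitude claim is my arithmetic, not IDC's.

```python
# Unpacking "an order of magnitude in five years": growing from 177EB
# (2006) to 1,773EB (2011) implies a compound annual growth rate of
# nearly 60 percent.

growth_factor = 1773 / 177           # total growth over the five years
cagr = growth_factor ** (1 / 5) - 1  # annualized rate
print(f"{cagr:.1%}")                 # → 58.5%
```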

"The information bomb has already happened," said Gantz. "I'm just describing the shape of the mushroom."

To be sure, video amps up the data flows. But it's not the most important issue. Take, for example, the electronification of the NHS. Discarding paper in favor of electronics saves one kind of space - there's a hospital in Bangkok that claims to have been able to open a whole new pediatric wing in the space saved by digitizing its radiography department - but consumes another. All those electronic patient records will have to be stored, backed up, and stored and backed up again in each new location they're sent to. Say it all over again with MP3s, digital radio, VOIP, games, telematics, toys...

No wonder we're all so tired.

And the problem the NHS is solving with barcoding - that people cannot find what they already have - is not so easily solved with information.

Azeem Azhar, seven months away from a job as head of innovation at Reuters, said that one thing he'd learned was that every good idea he had had already been had by someone else in the organization at some point. As social networks enable people to focus less on documents than on expertise, he suggested, we may finally find a way around that problem.

The great thing about a conference like this is that for every solution someone can find a problem. The British Library, for example, is full of people who ought to know what to keep; that's what librarians do. But the British Library has its roots in an era when it could arrogantly assume it had the resources to keep everything. Ha. Though you sympathized with the trouble they have explaining stuff when an audience member asked why, given that the British Library has made digital copies, it should bother to keep the original, physical Magna Carta.

That question indicates a kind of data madness; the information we derive from studying the physical Magna Carta can't all be digitized. If looking at the digital simulacrum evokes wonder, it's precisely because we know that it is an image - a digital shadow - of the real thing. If the real thing ceases to exist, the shadow grows less meaningful.

October 26, 2007

Tomorrow's world

"It's like 1994," Richard Bartle, the longest-serving virtual world creator, said this week. We were at the Virtual Worlds Forum. Sure enough: most of the panels were about how businesses could make money! in virtual worlds! Substitute Web! and Bartle was right.

"Virtual worlds are poised to revolutionize today's Web ecommerce," one speaker said enthusiastically. "They will restore to ecommerce the social and recreational aspect of shopping, the central element in the real world, which was stripped away when retailers went online."

There's gold in them thar cartoon hills.

But which hills? Second Life is, to be sure, the virtual world du jour, and it provides the most obviously exploitable platform for businesses. But in 1994 so did CompuServe. It was only three years later – ten years ago last month – that it had shrunk sufficiently for AOL to buy it as revenge. In turn, AOL is itself shrinking – its subscription revenues for the quarter ending June 30, 2007 were half those in the same quarter in 2006.

If there is one thing we know about Internet communities it's that they keep reforming in new technologies, often with many of the same people. Today's kids bop from world to world in groups, every few months. The people I've known on CIX or the WELL turn up on IRC, LiveJournal, Facebook, and IM. Sometimes you flee, as Corey Bridges said of social networks, because your friends list has become "crufted" up with people you don't like. You take your real friends somewhere else until, mutatis mutandis, the cycle begins again. In the older text-based conferencing systems, the pattern was the same: public conferences filled with too many annoying people sent old-timers to gated communities like mailing lists or closed conferences. And so it goes.

In a post pointed at by the VWF blog, Metaversed's Nick Wilson defines social virtual worlds and concludes that there are only eight of them – the rest are not yet available to the general public, are children's worlds, or are simply development platforms. "The virtual worlds space," he concludes, "is not as large as many people think."

Probably anyone who's tried to come to grips with Second Life, number one on Wilson's list, without the benefit of friends to go there with knows that. Many parts of SL are resoundingly empty much of the time, and it seems inarguable that most of SL's millions of registered users try it out a few times and then leave their avatars as records in the database. Nonetheless, companies keep experimenting and find the results valuable. A batch of Italian IBMers even used the world to stage a strike last month. Naturally it crashed IBM's SL Business Center: the 1,850 strikers were spread around seven IBM locations, but you can only put about 50 avatars on an island before server lag starts to get you. Strikes: the original denial-of-service attacks.
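The strike arithmetic makes the lag point concrete; this trivial check just uses the column's own numbers.

```python
# 1,850 striking avatars spread across seven IBM locations still
# averages roughly 264 per location - about five times the ~50-avatar
# point at which server lag reportedly sets in.

strikers, locations, lag_threshold = 1850, 7, 50
per_location = strikers / locations
print(round(per_location))                  # → 264
print(round(per_location / lag_threshold))  # → 5
```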

But questioning whether there's a whole lot of there there is a nice reminder that in another sense, it's 1999. Perfect World, a Chinese virtual world, went public at the end of July, and is currently valued at $1.6 billion. It is, of course, losing money. Meanwhile Microsoft has invested $240 million of the change rattling around the back of its sofas in Facebook to become its exclusive "advertising partner", giving that company an overall value of $15 billion. That should do nicely to ensure that Google or Yahoo! doesn't buy it outright, anyway. Rupert Murdoch bought MySpace only two years ago for $580 million – which sounds like a steal by comparison if it weren't for the fact that Murdoch has made many online plays and they've all so far been wrong.

Two big issues seem to be dominating discussions about "the virtual world space". One: how to make money. Two: how and whether to make worlds interoperable, so when you get tired of one you can pick up your avatar and reputation and take them somewhere new. It was in discussing this latter point that Bridges made the comment noted above: after a while in a particular world shedding that world's character might be the one thing you really want to do. In real life, wherever you go, there you are. Freely exploring your possible selves is what Richard Bartle had in mind when he wrote the first MUD.

The first of those is, of course, the pesky question only a venture capitalist or a journalist would ask. So far, in general, game worlds make their money on subscriptions, and social worlds make their money selling non-existent items like land and maintenance fees thereupon (actually, says Linden Labs, "server resources"). But Asia seems already to be moving toward free play with the real money coming from in-game item sales: 80 million Koreans are buying products in and from Cyworld.

But the two questions are related. If your avatar only functions in a single world, the argument goes, that makes virtual worlds closed environments like the ones CompuServe and AOL failed with. That is of course true – but only after someone comes up with an open platform everyone can use. Unlike the Internet at large, though, it's hard to see who would benefit enough from building one to actually do it.

August 24, 2007

Game gods

Virtual worlds have been with us for a long time. Depending who you listen to, they began in 1979, or 1982, or it may have been the shadows on the walls of Plato's cave. We'll go with the University of Essex MUD, on the grounds that its co-writer Richard Bartle can trace its direct influence on today's worlds.

At State of Play this week, it was clear that just as the issues surrounding the Internet in general have changed very little since about 1988, neither have the issues surrounding virtual worlds.

True, the stakes are higher now and, as Professor Yee Fen Lim noted, when real money starts to be involved people become protective.

Level 70 warrior accounts on World of Warcraft go for as little as $10 (though your level number cannot disguise your complete newbieness), but the unique magic sword you won in a quest may go for much more. The best-known pending case is Bragg versus Second Life over virtual property the world's owners confiscated when they realized that Bragg was taking advantage of a loophole in their system to buy "land" at exceptionally cheap prices. Lim had an interesting take on the Bragg case: as a legal concept, she argued, property is a right of control, even though Linden Labs itself defines its virtual property as rental of a processor. As computer science that's fine, but it's not law. Otherwise, she said, "Property is mere illusion."

Ultimately, the issues all come down to this: who owns the user experience? In subscription gaming worlds, the owners tend to keep very tight control of everything – they claim ownership in all intellectual property in the world, limit users' ability to create their own content, and block the sale of cheats as much as possible. In a free-form world like Second Life which may host games but is itself a platform rather than a game, users are much freer to do what they want but the EULAs or Terms of Service may be just as unfair.

Ultimately, no matter what the agreement says, today's privately owned virtual worlds all function under the same reality: the game gods can pull the plug at any time. They own and control the servers. Possession is nine-tenths of the law, and all that. Until someone implements open source world software on a P2P platform, this will always be the way. Linden Labs says, for what it's worth, that its long-term intention is to open-source its platform so that anyone may set up a world. This, too, has been done before, with The Palace.

One consequence of this is that there is no such thing as virtual privacy, a topic that everyone is aware of but no one's talking about. The piecemeal nature of the Net means that your friend's IRC channel doesn't know anything about your Web use, and doesn't track what you do on eBay. But virtual worlds log everything. If you buy a new shirt at a shop and then fly to a distant island to have sex in it, all that is logged. (Just try to ensure the shirt doesn't look like a child's shirt and you don't get into litigation over who owns the island…)

There are, as scholars say, legitimate reasons. Logging everything that happens is important in helping game developers pinpoint the source of crashes and eliminate bugs. Logs help settle disputes over who did what to whose magic sword. And in a court case, they may be important evidence (although how you can ensure that the logs haven't been adjusted to suit the virtual world provider, who is usually one of the parties to the litigation, I don't know).

As long as you think of virtual worlds as games, maybe this isn't that big a problem. After all, no one is forced to spend half their waking hours killing enough monsters in World of Warcraft to join a guild for a six-hour quest.

But something like Second Life aspires to be a lot more than that. The world is adding voice communication, which will be interesting: if you have to use your real voice, the relative anonymity conferred by the synthetic world is gone. Quite apart from bandwidth demands (lag is the bane of every SLer's existence), exploring what virtual life is like in the opposite gender isn't going to work. They're going to need voice synthesizers.

Much of the law in this area is coming out of Asia, where massively multi-player online games took off so early with such ferocity that, according to Judge Unggi Yoon, in a recent case a member of a losing team in one such game ran to the café where the winning team was playing and physically battered one of its members. Yoon, who explained some of the new laws, is an experienced online gamer, all the way back to playing Ultima Online in middle school. In his country, a law has recently come into force taxing virtual world transactions (it works like a VAT threshold – under $100 a month you don't owe anything). For Westerners, who are used to the idea that we make laws and export them rather than the other way around, this is quite a reality shift.
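The threshold mechanism Yoon described can be sketched in a few lines. This is a hypothetical illustration: the column gives only the $100-a-month threshold, so the 10 percent rate and the choice to tax the full amount once over the threshold are my assumptions.

```python
# Hypothetical sketch of a threshold-style levy on virtual-world sales:
# below $100 a month nothing is owed; above it, tax applies to the
# whole amount. The 10 percent rate is an assumed figure, used here
# only for illustration.

def virtual_sales_tax(monthly_sales_usd, rate=0.10, threshold=100):
    if monthly_sales_usd < threshold:
        return 0.0
    return monthly_sales_usd * rate

print(virtual_sales_tax(80))   # → 0.0 (under the threshold)
print(virtual_sales_tax(500))  # → 50.0
```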

April 27, 2007

My so-called second life

It's a passing fad. It's all hype. They've got good PR. Only sad, pathetic people with no real lives would be interested.

All things that were said about the Internet 12 years ago. All things being said about Second Life today. Wrong about the Internet. Wrong, too, about Second Life.

Hanging around a virtual world dressed as a cartoon character isn't normally my idea of a good time, but last weekend Wired News asked me to attend the virtual technology exposition going on inworld, and so I finally fired up Gwyndred Wuyts, who I'd created some weeks back.

Second Life is of course a logical continuation of the virtual worlds that went before it. The vending machines, avatars, attachments (props such as fancy items of clothing, laptops, or, I am given to understand, quite detailed, anatomically correct genitals), and money all have direct ancestors in previous virtual worlds such as Worlds Away (Fujitsu), The Palace, and Habitat (Lucasfilm). In fact, though, the prior art Second Life echoed most at first was CompuServe, which in 1990 had no graphics except ASCII art and little sense of humor – but was home to technology companies of all sizes, who spoke glowingly of the wonders of having direct contact with their customers. In 1990 every techie had a CompuServe ID.

Along came the Web, and those same companies gratefully retreated to it, where they could publish their view of the world and their support documents and edit out the abuse and backtalk. Now, in Second Life, the pendulum is swinging back: it's flattened hierarchies all over again.

"You have to treat everyone equally because you can't tell who anyone is. They could be the CEO of a big company," Odin Liam Wright (SL: Liam Kanno) told me this week. In SL, he says, what you see is "more the psyche than the economic class or vocation or stature."

Having to take people as they present themselves without the advantage of familiar cues and networked references was a theme frequently exploited by Agatha Christie. Britain was then newly mobile, and someone moving to a village no longer came endorsed by letters from mutual friends. People could be anybody, her characters frequently complain.

Americans are raised to love this kind of social mobility. But its downside was on display yesterday in a panel on professionalism at the Information Security conference, where several speakers complained that the informal networks they used to use to check out their prospective security hires no longer exist. International mobility has made it worse: how do you assess a CV when both the credentials and the organizations issuing them are unknown to you?

Well, great: if the information security professionals don't know whom to trust, what hope is there for the rest of us?

Nonetheless, the speaker was wrong. The informal networks exist, just not where he's looking for them. When informal networks get overrun by the mainstream, they move elsewhere. In the late 1980s, Usenet was such a haven; by 1994, when September stopped ending and AOL moved in, everyone had retreated to gated communities (private forums, mailing lists, and so on). Right now, some of those informal networks are on Second Life, and the window is closing as the mainstream becomes more aware of the potential of the virtual world as a platform.

Previous worlds were popular and still died. But Second Life is different, first and foremost because of timing. People have broadband. They have computers powerful enough to handle the graphics and multiple applications. Their movement around the virtual world is limited only by their manual dexterity and the capacity of the servers to handle so many interacting simulations at once.

Second: experimentation. At this week's show, I picked up a (beta) headset that plugs Skype into Second Life (Second Talk). People (Cattle Puppy Productions) are providing inworld TV displays (and extracted video clips for the rest of us). Reallusion, one of the show's main sponsors, does facial animation it hopes will transform Second Life from a world of text-typing avatars into one of talking characters. You can pick up a portable office including virtual laptop, unpack it in a park, and write and post real blog entries. Why would you do this when you already have blogging software on your desktop? Because Second Life has the potential to roll everything – all the different forms of communication open on your desktop today – into a single platform. And if you grew up with computer games, it's a more familiar platform than the desktop metaphor generations of office workers required.

Third: advertising. The virtual show looked empty compared to a real-world show; it had 6,000-plus visitors over three days. The emptiness was by design to allow more visitors while minimizing lag. Nonetheless, Dell was there with a virtual configurator on which you could specify your new laptop. Elsewhere inworld, you can drive your new Toyota or Pontiac and read your Reuters news. Moving into Second Life is a way for old, apparently stuffy companies to reinvent their image for the notoriously hard-to-reach younger crowd who are media-savvy and ad-cynical. There is real gold in them thar virtual hills.

Finally, a real reason to upgrade my desktop.

March 30, 2007

Re-emerging technologies

Along about the third day of this year's etech conference, I saw a demonstration of Zimbra, email software that works offline; the display looked just like Ameol, an offline mail and news reader I've used for 14 years. (The similarity is only partial, though; Zimbra does synching with mobile devices and a bunch of other things that Ameol doesn't - but that Lotus Notes probably did).

"Reverse pioneering", Tim O'Reilly said the first day, while describing a San Francisco group who build things - including a beer-hauling railway car and carnival rides - out of old bicycle parts.

At some point, also, O'Reilly editor Dale Dougherty gave a talk on the publisher's two relatively new magazines Make and Craft. He illustrated it with pictures of: log cabin quilts, Jacquard looms, the Babbage Difference Engine, Hollerith tabulating machines, punch cards, and an ancient Singer treadle sewing machine. And oh look! Sewing patterns! And, I heard someone say quite seriously, what about tatting? Do you know anyone who does it who could teach me?

A day later, in Boston, I hear that knitting is taking East Coast geeks by storm. Apparently geek gatherings now produce as many baby hats as the average nursing home.

Not that I'm making fun of all this. After all, recovering old knowledge is a lot of what we do on the folk scene, and I have no doubt that today's geek culture will plunder these past technologies and, very like the Society for Creative Anachronism, which has a large geek (and also folk music community) crossover, mutate them into something newer and stranger. I'd guess that we're about two years away from a quilting bee in the lobby. Of course, the quilting thread will be conductive, and the quilt will glow in the dark with beaded LEDs so you can read under the covers, and version 2.0 will incorporate miniature solar panels (like those little mirrors in the Eastern stuff you used to get in the 1970s) that release heat at night like an electric blanket...and it will be all black.

Of course, this isn't really new even in geek terms. A dozen years ago, the MIT Media Lab held a fashion show to display its latest ideas for embroidering functional keyboards onto networked but otherwise standard Levi's denim jackets and dresses made of conductive fabrics. We don't seem to have come very far toward the future they were predicting then, in which we'd all be wearing T-shirts with sensors that measured our body heat and controlled the room thermostat accordingly (another idea for that quilt).

Instead, geeks, like everyone else, adopted the mobile phone, which has the advantage that you don't have to worry about how to cope with that important conference when your personal area network is in the dirty laundry.

But this is Generation C, as Matt Webb, from the two-man design consultancy Schulze and Webb, told us. Generation C likes complexity, connection, and control. GenC is not satisfied with technologies that expect us to respond as passive consumers. We ought to despise mobile phones, especially in the US: they are locked down, controlled by the manufacturers and network operators. Everything should come with an open applications programming interface and...and...a serial port. Hack your washing machine so it only shows the settings you use; hack your luggage so it phones home its GPS coordinates when it's lost.

The conference speaker who drew the most enthusiastic response was Danah Boyd, who had a simple message: people outside of Silicon Valley are different. Don't assume all your users are like you. They have different life stages. This seems so basic and obvious it's shocking to hear people cheer it.

It was during a talk on building technology to selectively jam RFID chips that I had a simple thought: every technology breeds its opposite. Radar to trap speeders begets radar scanners. Cryptography breeds cryptanalysis. Email breeds spam, which breeds spam filtering, which breeds spam smart enough to pass the Turing test.

The same is true of every social trend and phenomenon. John Perry Barlow used to say that years of living in the virtual world had made him appreciate the physical world far more. It's not much of a jump from that to all sorts of traditional crafts.

Don't get me wrong. I'm glad geeks want to knit, sew, and build wooden telescopes. Sewing used to be a relatively mainstream activity, and over the last couple of decades it's been progressively dumbed down. The patterns you buy today are far simpler (and less interesting) to construct than the ones you used to get in the 1970s. It would be terrific if geeks brought some complexity back to it.

But jeez, guys, you need to get out more. Not only is there an entire universe of people who are different from Silicon Valley, there's an entire industry of magazines and books about fabric arts. Next, you get to reinvent colors.

I blogged more serious stuff from etech at Blindside.

February 9, 2007

Getting out the vote

Voter-verified paper audit trails won't save us. That was the single clearest bit of news to come out of this week's electronic voting events.

This is rather depressing, because for the last 15 years it's looked as though VVPAT (as they are euphoniously calling it) might be something everyone could compromise on: OK, we'll let you have your electronic voting machines as long as we can have a paper backup that can be recounted in case of dispute. But no. According to Rebecca Mercuri in London this week (and others who have been following this stuff on the ground in the US), what we thought a paper trail meant is definitely not what we're getting. This is why several prominent activist organisations have come out against the Holt bill HR811, introduced into Congress this week, despite its apparent endorsement of paper trails.

I don't know about you, but when I imagined a VVPAT, what I saw in my mind's eye was something like an IBM punch card dropping individually into some kind of display where a voter would press a key to accept or reject. Instead, vendors (who hate paper trails) are providing cheap, flimsy, thermal paper in a long roll with no obvious divisions to show where individual ballots are. The paper is easily damaged, it's not clear whether it will survive the 22 months it's supposed to be stored, and the mess is not designed to ease manual recounts. Basically, this is paper that can't quite aspire to the lofty quality of a supermarket receipt.

The upshot is that yesterday you got a programme full of computer scientists saying they want to vote with pencils and paper. Joseph Kiniry, from University College, Dublin, talked about using formal methods to create a secure system – and says he wants to vote on paper. Anne-Marie Oostveen told the story of the Dutch hacker group who bought up a couple of Nedap machines to experiment on and wound up publicly playing chess on them – and exposing their woeful insecurity – and concluded, "I want my pencil back." And so on.

The story is the same in every country. Electronic voting machines – or, more correctly, electronic ballot boxes – are proposed and brought in without public debate. Vendors promise the machines will be accurate, reliable, secure, and cheaper than existing systems. Why does anyone believe this? How can a voting computer possibly be cheaper than a piece of paper and a pencil? In fact, Jason Kitcat, a longtime activist in this area, noted that according to the Electoral Commission the costs of the 2003 pilots were astounding – in Sheffield £55 per electronic vote, and that's with suppliers waiving some charges they didn't expect either. Bear in mind, also, that the machines have an estimated life of only ten years.

Also the same: governments lack internal expertise on IT, basically because anyone who understands IT can make a lot more money in industry than in either government or the civil service.

And: everywhere vendors are secretive about the inner workings of their computers. You do not have to be a conspiracy theorist to see that privatizing democracy has serious risks.

On Tuesday, Southport LibDem MP John Pugh spoke of the present UK government's enchantment with IT. "The procurers who commission IT have a starry-eyed view of what it can do," he said. "They feel it's a very 'modern' thing." Vendors, also, can be very persuasive (I'd like to see tests on what they put in the ink in those brochures, personally). If, he said, Bill Gates were selling voting machines and came up against Tony Blair, "We would have a bill now."

Politicians are, probably, also the only class of people to whom quick counts appeal. The media, for example, ought to love slow counts that keep people glued to their TV sets, hitting the refresh button on their Web browsers, and buying newspapers throughout. Florida 2000 was a media bonanza. But it's got to be hard on the guys who can't sleep until they know whether they have a job next month.

I would propose the following principles to govern the choice of balloting systems:

- The mechanisms by which votes are counted should be transparent. Voters should be able to see that the vote they cast is the vote they intended to cast.

- Vendors should be contractually prohibited from claiming the right to keep secret their source code, the workings of their machines, or their testing procedures, and they should not be allowed to control the circumstances or personnel under which or by whom their machines are tested. (That's like letting the psychic set the controls of the million-dollar test.)

- It should always be possible to conduct a public recount of individual ballots.

Pugh made one other excellent point: paper-based voting systems are mature. "The old system was never perfect," he said, but over time "we've evolved a way of dealing with almost every conceivable problem." Agents have the right to visit every polling station and watch the count, recounts can consider every single spoiled ballot. By contrast, electronic voting presumes everything will go right.

Guys, it's a computer. Next!

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to (but please turn off HTML).

January 26, 2007

Vote early, vote often...

It is a truth that ought to be universally acknowledged that the more you know about computer security the less you are in favor of electronic voting. We thought – optimists that we are – that the UK had abandoned the idea after all the reports of glitches from the US and the rather indeterminate results of a couple of small pilots a few years ago. But no: there are plans for further trials for the local elections in May.

It's good news, therefore, that London is to play host to two upcoming events to point out all the reasons why we should be cautious. The first, February 6, is a screening of the HBO movie Hacking Democracy, a sort of documentary thriller. The second, February 8, is a conference bringing together experts from several countries, most prominently Rebecca Mercuri, who was practically the first person to get seriously interested in the security problems surrounding electronic voting. Both events are being sponsored by the Open Rights Group and the Foundation for Information Policy Research, and will be held at University College London. Here is further information and links to reserve seats. Go, if you can. It's free.

Hacking Democracy (a popular download) tells the story of Bev Harris and Andy Stephenson. Harris was minding her own business in Seattle in 2000 when the hanging chad hit the Supreme Court. She began to get interested in researching voting troubles, and then one day found online a copy of the software that runs the voting machines provided by Diebold, one of the two leading manufacturers of such things. (And, by the way, the company whose CEO vowed to deliver Ohio to Bush.) The movie follows this story and beyond, as Harris and Stephenson dumpster-dive, query election officials, and document a steady stream of glitches that all add up to the same point: electronic voting is not secure enough to protect democracy against fraud.

Harris and Stephenson are not, of course, the only people working in this area. Among computer experts such as Mercuri, David Chaum, David Dill, Deirdre Mulligan, Avi Rubin, and Peter Neumann, there's never been any question that there is a giant issue here. Much ink has been spilled over the question of how votes are recorded; less so around the technology used by the voter to choose preferences. One faction – primarily but not solely vendors of electronic voting equipment – sees nothing wrong with Direct Recording Electronic, machines that accept voter input all day and then just spit out tallies. The other group argues that you can't trust a computer to keep accurate counts, and that you have to have some way for voters to check that the vote they thought they cast is the vote that was actually recorded. A number of different schemes have been proposed for this, but the idea that's catching on across the US (and was originally promoted by Mercuri) is adding a printer that spits out a printed ballot the voter can see for verification. That way, if an audit is necessary there is a way to actually conduct one. Otherwise all you get is the machine telling you the same number over again, like a kid who has the correct answer to his math homework but mysteriously can't show you how he worked the problem.
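For what it's worth, the arithmetic a printed ballot makes possible is trivially simple – which is rather the point. Here's a sketch (candidate names and counts invented for illustration) of what "actually conducting an audit" means: compare the machine's claimed tally for a precinct against a hand count of the paper ballots, and flag any candidate where the two disagree.

```python
from collections import Counter

def audit_precinct(machine_tally, paper_ballots):
    """Compare a machine-reported tally against a hand count of the
    voter-verified paper ballots from the same precinct. Returns a
    dict of {candidate: (machine_count, hand_count)} discrepancies;
    an empty dict means the two counts agree."""
    hand_count = Counter(paper_ballots)
    candidates = set(machine_tally) | set(hand_count)
    return {
        c: (machine_tally.get(c, 0), hand_count.get(c, 0))
        for c in candidates
        if machine_tally.get(c, 0) != hand_count.get(c, 0)
    }

# Hypothetical precinct: the machine claims one more vote for Jones
# than the paper ballots actually show.
machine = {"Smith": 412, "Jones": 389}
paper = ["Smith"] * 412 + ["Jones"] * 388
print(audit_precinct(machine, paper))  # {'Jones': (389, 388)}
```

Without the paper ballots, `paper_ballots` simply doesn't exist: the only input you have is the machine's own number, and the audit degenerates into asking the machine whether it agrees with itself.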

This is where it's difficult to understand the appeal of such systems in the UK. Americans may be incredulous – I was – but a British voter goes to the polls and votes on a small square of paper with a stubby, little pencil. Everything is counted by hand. The UK can do this because all elections are very, very simple. There is only one election – local council, Parliament – at a time, and you vote for one of only a few candidates. In the US, where a lemon is the size of an orange, an orange is the size of a grapefruit, and a grapefruit is the size of a soccer ball, elections are complicated and on any given polling day there are a lot of them. The famous California governor's recall that elected Arnold Schwarzenegger, for example, had well over a hundred candidates; even a more average election in a less referendum-happy state than California may have a dozen races, each with six to ten candidates. And you know Americans: they want results NOW. Like staying up for two or three days watching the election returns is a bad thing.

It is of course true that election fraud has existed in all eras; you can "lose" a box of marked paper ballots off the back of a truck, or redraw districts according to political allegiance, or "clean" people off the electoral rolls. But those types of fraud are harder to cover up entirely. A flawed count in an electronic machine run by software the vendor allows no one to inspect just vanishes down George Orwell's memory hole.

What I still can't figure out is why politicians are so enthusiastic about all this. Yes, secure machines with well-designed user interfaces might get rid of the problem of "spoiled" and therefore often uncounted ballots. But they can't really believe – can they? – that fancy voting technology will mean we're more likely to elect them? Can it?

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to (but please turn off HTML).

December 29, 2006

Resolutions for 2007

A person can dream, right?

- Scrap the UK ID card. Last week's near-buried Strategic Action Plan for the National Identity Scheme (PDF) included two big surprises. First, that the idea of a new, clean, all-in-one National Identity Register is being scrapped in favor of using systems already in use in government departments; second, that foreign residents in the UK will be tapped for their biometrics as early as 2008. The other thing that's new: the bald, uncompromising statement that it is government policy to make the cards compulsory.

No2ID has pointed out the problems with the proposal to repurpose existing systems, chiefly that they were not built to provide the security the legislation promised. The notion is still that everyone will be re-enrolled with a clean, new database record (at one of 69 offices around the country), but we still have no details of what information will be required from each person or how the background checks will be carried out. And yet, this is really the key to the whole plan: the project to conduct background checks on all 60 million people in the UK and record the results. I still prefer my idea from 2005: have the ID card if you want, but lose the database.

The Strategic Action Plan includes the list of purposes of the card; we're told it will prevent illegal immigration and identity fraud, become a key "defence against crime and terrorism", "enhance checks as part of safeguarding the vulnerable", and "improve customer service".

Recall that none of these things was the stated purpose of bringing in an identity card when all this started, back in 2002. Back then, first it was to combat terrorism, then it was an "entitlement card" and the claim was that it would cut benefit fraud. I know only a tiny mind criticizes when plans are adapted to changing circumstances, but don't you usually expect the purpose of the plans to be at least somewhat consistent? (Though this changing intent is characteristic of the history of ID card proposals going back to the World Wars. People in government want identity cards, and try to sell them with the hot-button issue of the day, whatever it is.)

As far as customer service goes, William Heath has published some wonderful notes on the problem of trust in egovernment that are pertinent here. In brief: trust is in people, not databases, and users trust only systems they help create. But when did we become customers of government, anyway? Customers have a choice of supplier; we do not.

- Get some real usability into computing. In the last two days, I've had distressed communications from several people whose computers are, despite their reasonable and best efforts, virus-infected or simply non-functional. My favourite recent story, though, was the US Airways telesales guy who claimed that it was impossible to email me a ticket confirmation because according to the information in front of him it had already been sent automatically and bounced back, and they didn't keep a copy. I have to assume their software comes with a sign that says, "Do not press this button again."

Jakob Nielsen published a fun piece this week, a list of top ten movie usability bloopers. Throughout movies, computers only crash when they're supposed to, there is no spam, on-screen messages are always easily readable by the camera, and time travellers have no trouble puzzling out long-dead computer systems. But of course the real reason computers are usable in movies isn't some marketing plot by the computer industry but the same reason William Goldman gave for the weird phenomenon that movie characters can always find parking spaces in front of their destination: it moves the plot along. Though if you want to see the ultimate in hilarious consumer struggles with technology, go back to the 1948 version of Unfaithfully Yours (out on DVD!) starring Rex Harrison as a conductor convinced his wife is having an affair. In one of the funniest scenes in cinema, ever, he tries to follow printed user instructions to record a message on an early gramophone.

- Lose the DRM. As Charlie Demerjian writes, the high-def wars are over: piracy wins. The more hostile the entertainment industries make their products to ordinary use, the greater the motivation to crack the protective locks and mass-distribute the results. It's been reasonably argued that Prohibition in the US paved the way for organized crime to take root because people saw bootleggers as performing a useful public service. Is that the future anyone wants for the Internet?

Losing the DRM might also help with the second item on this list, usability. If Peter Gutmann is to be believed, Vista will take a nosedive in that direction because of embedded copy protection requirements.

- Converge my phones. Please. Preferably so people all use just the one phone number, but all routing is least-cost to both them and me.

- One battery format to rule them all. Wouldn't life be so much easier if there were just one battery size and specification, and to make a bigger battery you'd just snap a bunch of them together?

Happy New Year!

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to (but please turn off HTML).

October 6, 2006

A different kind of poll tax

Elections have always had two parts: the election itself, and the dickering beforehand (and occasionally afterwards) over who gets to vote. The latest move in that direction: at the end of September the House of Representatives passed the Federal Election Integrity Act of 2006 (H.R. 4844), which from 2010 will prohibit election officials from giving anyone a ballot who can't present a government-issued photo ID whose issuing requirements included proof of US citizenship. (This lets out driver's licenses, which everyone has, though I guess it would allow passports, which relatively few have.)

These days, there is a third element: specifying the technology that will tabulate the votes. Democracy depends on the voters' being able to believe that what determines the election is the voters' choices rather than the latter two.

The last of these has been written about a great deal in technology circles over the last decade. Few security experts are satisfied with the idea that we should trust computers to do "black box voting" where they count up and just let us know the results. Even fewer security experts are happy with the idea that so many politicians around the world want to embrace: Internet (and mobile phone) voting.

The run-up to this year's mid-term US elections has seen many reports of glitches. My favorite recent report comes from a test in Maryland, where it turned out that the machines under test did not communicate with each other properly when the touch screens were in use. If they don't communicate correctly, voters might be able to vote more than once. Attaching mice to the machines solves the problem – but the incident is exactly the kind of wacky glitch that's familiar from everyday computing life and that can take absurd amounts of time to resolve. Why does anyone think that this is a sensible way to vote? (Internet voting has all the same risks of machine glitches, and then a whole lot more.)

The 2000 US Presidential election isn’t as famous for the removal from the electoral rolls in Florida of a few hundred thousand voters as it is for hanging chad – but it's worth reading or watching more on the subject. Of course, wrangling over who gets to vote didn't start then. Gerrymandering districts, fighting over giving the right to vote to women, slaves, felons, expatriates…

The latest twist in this fine, old activity is the push in the US towards requiring Voter ID. Besides the federal bill mentioned above, a couple of dozen states have passed ID requirements since 2000, though state courts in Missouri, Kentucky, Arizona, and California are already striking them down. The target here seems to be that bogeyman of modern American life, illegal immigrants.

Voter ID isn't obviously a poll tax. After all, this is just about authenticating voters, right? Every voter a legal voter. But although these bills generally include a requirement to supply a voter ID free of charge to people too poor to pay for one, the supporting documentation isn't free: try getting a free copy of your birth certificate, for example. The combination of the costs involved in that aspect, plus the effort involved in getting the ID, is a burden that falls disproportionately on the usual already disadvantaged groups (the same ones stopped from voting in the past by road blocks, insufficient provision of voting machines in some precincts, and indiscriminate cleaning of the electoral rolls). Effectively, voter ID creates an additional barrier between the voter and the act of voting. It may not be the letter of a poll tax, but it is the spirit of one.

This is in fact the sort of point that opponents are making.

There are plenty of other logistical problems, of course, such as: what about absentee voters? I registered in Ithaca, New York, in 1972. A few months before federal primaries, the Board of Elections there mails me a registration form; returning it gets me absentee ballots for the Democratic primaries and the elections themselves. I've never known whether my vote is truly anonymous; nor whether it's actually counted. I take those things on trust, just as, I suppose, the Board of Elections trusts that the person sending back these papers is not some stray British person who does my signature really well. To insert voter ID into that process would presumably require turning expatriate voters over to, say, the US Embassies, who are familiar with authentication and checking identity documents.

Given that most countries have few such outposts, the barriers to absentee voting would be substantially raised for many expatriates. Granted, we're a small portion of the problem. But there's a direct clash between the trend to embrace remote voting - the entire state of Oregon votes by mail – and the desire to authenticate everyone.

We can fix most of the voting technology problems by requiring voter-verifiable, auditable, paper trails, as Rebecca Mercuri began pushing for all those years ago (and most computer scientists now agree with), and there seem to be substantial moves in that direction as states test the electronic equipment and scientists find more and more serious potential problems. Twenty-seven states now have laws requiring paper trails. But how we control who votes is the much more difficult and less talked-about frontier.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to (but please turn off HTML).

May 12, 2006

Map quest

The other week, I drove through London with a musician friend who spent a lot of the trip telling me how much he loved his new dashboard-mounted GPS.

I could see his point. In my own folksinging days I averaged 50,000 miles a year across the US, and even with a box of maps in the back seat every day the moment invariably came when you discovered that the directions you'd been given were wrong, impenetrable, or missing. At that point one of two things would happen: either you would find the place after much trial and error and many wrong turns or you would get lost. Either way, you would arrive at the gig intemperate, irascible, and cranky, and they'd never hire you again. Me, that is. I'm sure you are sweet and kind and gentle and good and would never yell at someone you've just met for the first time that they miscounted and it's three traffic lights, not two.

By contrast, all my friend had to do was punch in the destination address, and after briefly communing with satellites the GPS directed us in a headmistressy English voice he called Agatha. Stuff like, "Turn left, 100 meters."

Of course, I don't actually have any sense of how far 100 meters is. I lean more toward "Turn left opposite that gas station up there." But Agatha doesn't know from landmarks or the things humans see. I imagine that will change as the resolution, graphics, and network connections improve. I don't, for example, see why eventually everyone shouldn't be equipped with a complete set of world maps and a display that can be set to show a customizable level of detail (up to full, real-time video) with a recognition program that would enable Agatha to say exactly that while recalculating routes using up-to-the-minute information about traffic jams and other impedimenta. (Doubtless some public-spirited hacker will create a speed trap avoidance add-on.) Today's kids, in fact, are so used to reading multiple screens with multiple scrolls of information on them that the GPS will probably migrate to lower-windshield with user-selectable information overlays. And glasses, watches, or clothing so that if, like the Prisoner, someone abducted you from your London flat you would be able to identify Your Village's location.

Back in today's world, Agatha is also not terribly bright about traffic. We were driving from Kew to Crouch End, and she routed us through…through…Central London. A brief digression. Back in 1972, before the M25 was built, although long after the North Circular Road was cobbled together out of existing streets, I remember a British folk band telling me that you had to allow an extra two hours any time you had to go through London. I accordingly regard driving inside the M25 with horror and an immediate desire to escape to a train. Yet Agatha was routing us down Marylebone Road.

You cannot tell me she knew it was Good Friday and that the streets would therefore be comparatively empty.

The received wisdom among people who know North London is that the most efficient way from K to C is to take the North Circular Road to Finchley (I think it was) and then do something complicated with London streets. On the way back, we tried a comparative test by turning off the GPS, getting directions to the NCR from the club organizer, and following the signs from there. (You would have to be as navigationally challenged as a blind woodpecker not to be able to find Kew from the NCR, and anyway I knew the way.) It was a quiet, peaceful way to drive and talk, without Agatha's constant interruptions. Or it would have been, except that my friend kept worrying whether we were on the right road, going the right way, speculating it was longer than the other way…

The problem is, of course, that GPS does not teach you geography, any more than the tube map does. Following the serial sequence of instructions never adds up to understanding how the pieces connect. Wherever you go, as the saying is, there you are.

To lament the loss of geographical understanding (to say nothing of the box of maps in the back seat) is, I suppose, not much different from lamenting that people other than Scrabble players can no longer do mental arithmetic because everyone has calculators or whining that no one has the mental capacity to recite The Odyssey any more. Technology changes, and we gladly hand over yet another task. Soon, knowing where Manhattan is in relation to Philadelphia or Finchley Road is in relation to Wembley will seem as quaint as knowing how to load an 8mm projector.

The world will look very different then: no one will ever be lost, since you will always be able to punch in a destination and recalculate. On the other hand, you'll never be really found, either, since pretty much all geography will be in offline storage. We folk travelers used to talk about how the whole country was our back yard. In the GPS world, your own back yard might as well be Minnesota.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to (but please turn off HTML).