
November 30, 2012

Robot wars

Who'd want to be a robot right now, branded a killer before you've even really been born? This week, Huw Price, a philosophy professor, Martin Rees, an emeritus professor of cosmology and astrophysics, and Jaan Tallinn, co-founder of Skype and a serial speaker at the Singularity Summit, announced the founding of the Cambridge Project for Existential Risk. I'm glad they're thinking about this stuff.

Their intention is to build a Centre for the Study of Existential Risk. There are many threats listed in the short introductory paragraph explaining the project - biotechnology, artificial life, nanotechnology, climate change - but the one everyone seems to be focusing on is: yep, you got it, KILLER ROBOTS - that is, artificial general intelligences so much smarter than we are that they may not only put us out of work but reshape the world for their own purposes, not caring what happens to us. Asimov would weep: his whole purpose in creating his Three Laws of Robotics was to provide a device that would allow him to tell some interesting speculative, what-if stories and get away from the then standard fictional assumption that robots were eeeevil.

The list of advisors to the Cambridge project has some interesting names: Hermann Hauser, now in charge of a venture capital fund, whose long history in the computer industry includes founding Acorn and an attempt to create the first mobile-connected tablet (it was the size of a 1990s phone book, and you had to write each letter in an individual box to get it to recognize handwriting - just way too far ahead of its time); and Nick Bostrom of the Future of Humanity Institute at Oxford. The other names are less familiar to me, but it looks like a really good mix of talents, everything from genetics to the public understanding of risk.

The killer robots thing goes quite a way back. A friend of mine grew up in the time before television when kids would pay a nickel for the Saturday show at a movie theatre, which would, besides the feature, include a cartoon or two and the next chapter of a serial. We indulge his nostalgia by buying him DVDs of old serials such as The Phantom Creeps, which features an eight-foot, menacing robot that scares the heck out of people by doing little more than wave his arms at them.

Actually, the really eeeevil guy in that movie is the mad scientist, Dr Zorka, who not only creates the robot but also a machine that makes him invisible and another that induces mass suspended animation. The robot is really just drawn that way. But, as with CSER, what grabs your attention is the robot.

I have a theory about this, developed over the last couple of months while working on a paper on complex systems, automation, and other computing trends: it's all to do with biology. We - and other animals - are pretty fundamentally wired to see anything that moves autonomously as more intelligent than anything that doesn't. In survival terms, that makes sense: the most poisonous plant can't attack you if you're standing out of reach of its branches. Something that can move autonomously can kill you - yet is also more cuddly. Consider the Roomba versus a modern dishwasher. Counterintuitively, the Roomba is not the smarter of the two.

And so it was that on Wednesday, when Voice of Russia assembled a bunch of us for a half-hour radio discussion, the focus was on KILLER ROBOTS, not synthetic biology (which I think is a much more immediately dangerous field) or climate change (in which the scariest new development is the very sober, grown-up, businesslike this-is-getting-expensive report from the insurer Munich Re). The conversation was genuinely interesting, roaming from the mysteries of consciousness to the problems of automated trading and the 2010 flash crash. Pretty much everyone agreed that there really isn't sufficient evidence to predict a date at which machines might be intelligent enough to pose an existential risk to humans. You might be worried about self-driving cars, but they're likely to be safer than drunk humans.

There is a real threat from killer machines; it's just that it's not super-human intelligence or consciousness that's the threat here. Last week, Human Rights Watch and the International Human Rights Clinic published Losing Humanity: the Case Against Killer Robots, arguing that governments should act pre-emptively to ban the development of fully autonomous weapons. There is no way, that paper argues, for autonomous weapons (which the military wants so fewer of *our* guys have to risk getting killed) to distinguish reliably between combatants and civilians.

There were some good papers on this at this year's We Robot conference from Ian Kerr and Kate Szilagyi (PDF) and Markus Wegner.

From various discussions, it's clear that you don't need to wait for *fully* autonomous weapons to reach the danger point. In today's partially automated systems, the operator may be under pressure to make a decision in seconds, and "automation bias" means the human will most likely accept whatever the machine suggests - the military equivalent of clicking OK. The human in the loop isn't as much of a protection as we might hope against the humans designing these things. Dr Zorka, indeed.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 19, 2012

Finding the gorilla

"A really smart machine will think like an animal," predicted Temple Grandin at last weekend's Singularity Summit. To an animal, she argued, a human on a horse often looks like a very different category of object than a human walking. That seems true; and yet animals also live in a sensory-driven world entirely unlike that of machines.

A day later, Melanie Mitchell, a professor of computer science at Portland State University, argued that analogies are key to human intelligence, producing landmark insights like comparing a brain to a computer (von Neumann) or evolutionary competition to economic competition (Darwin). This is true, although that initial analogy is often insufficient and may even be entirely wrong. A really significant change in our understanding of the human brain came with research by psychologists like Elizabeth Loftus showing that whereas computers retain data exactly as it was (barring mechanical corruption), humans improve, embellish, forget, modify, and partially lose stored memories; our memories are malleable and unreliable in the extreme. (For a worked example, see The Good Wife, season 1, episode 6.)

Yet Mitchell is obviously right when she says that much of our humor is based on analogies. It's a staple of modern comedy, for example, for a character to respond on a subject *as if* it were another subject (chocolate as if it were sex, a pencil dropping on Earth as if it were sex, and so on). The more incongruous the analogy, the better: when Watson asks - in the video clip she showed - for the category "Chicks dig me", it's funny because we know that as a machine a) Watson doesn't really understand what it's saying, and b) Watson is pretty much the polar opposite of the kind of thing that "chicks" are generally imagined to "dig".

"You are going to need my kind of mind on some of these Singularity projects," said Grandin, meaning visual thinkers, rather than the mathematical and verbal thinkers who "have taken over". She went on to contend that visual thinkers are better able to see details and relate them to each other. Her example: the emergency generators at Fukushima located below the level of a plaque 30 feet up on the seawall warning that flood water could rise that high. When she talks - passionately - about installing mechanical overrides in the artificial general intelligences Singularitarians hope will be built one day soonish, she seems to be channelling Peter G. Neumann, who talks often about the computer industry's penchant for repeating the security mistakes of decades past.

An interesting sideline about the date of the Singularity: Oxford's Stuart Armstrong has studied these date predictions and concluded pretty much that, in the famed words of William Goldman, no one knows anything. Based on his study of 257 predictions collected by the Singularity Institute and published on its Web site, he concluded that most theories about these predictions are wrong. The dates chosen typically do not correlate with the age or expertise of the predictor or the date of the prediction. I find this fascinating: there's something like an 80 percent consensus that the Singularity will happen in five to 100 years.

Grandin's discussion of visual thinkers made me wonder whether they would be better or worse at spotting the famed invisible gorilla than most people. Spoiler alert: if you're not familiar with this psychology test, go now and watch the clip before proceeding. You want to say better - after all, spotting visual detail is what visual thinkers excel at - but what if the demands of counting passes are more all-consuming for them than for other types of thinkers? The psychologist Daniel Kahneman, participating by video link, talked about other kinds of bias but not this one. Would visual thinkers be more or less likely to engage in the common human pastime of believing we know something based on too little data and then ignoring new data?

This is, of course, the opposite of what today's Bayesian systems do: they make a guess and then refine it as more data arrives. So many of the developments we're seeing now rely on crunching masses of data (often characterized as "big" but often not *really* all that big) to find subtle patterns that humans never spot. Linda Avey, co-founder of the personal genome profiling service 23andMe, and John Wilbanks are both trying to provide services that will allow individuals to take control of and understand their personal medical data. Avey in particular seems poised to link in somehow to the data generated by seekers in the several-year-old quantified self movement.

This approach is so far yielding some impressive results. Peter Norvig, the director of research at Google, recounted both the company's work on recognizing cats and its work on building Google Translate. The latter's patchy quality seems more understandable when you learn that it was built by matching documents issued in multiple languages against each other and building up statistical probabilities. The former seems more like magic, although Slate points out that the computers did not necessarily pick out the same patterns humans would.
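Out of curiosity, here is a toy sketch of the statistical matching behind the Translate half of that story - the sentence pairs, counts, and probabilities are invented, and this is nothing like Google's actual pipeline:

    # A toy sketch of statistical translation, not Google's pipeline: count which
    # words co-occur across (invented) aligned sentence pairs and turn the counts
    # into rough translation probabilities.
    from collections import defaultdict

    parallel = [
        ("the cat sleeps", "le chat dort"),
        ("the dog sleeps", "le chien dort"),
        ("the cat eats", "le chat mange"),
    ]

    cooccur = defaultdict(lambda: defaultdict(int))
    for en, fr in parallel:
        for e in en.split():
            for f in fr.split():
                cooccur[e][f] += 1

    def translation_probs(word):
        """Relative co-occurrence frequencies for one source-language word."""
        counts = cooccur[word]
        total = sum(counts.values())
        return {f: round(c / total, 2) for f, c in counts.items()}

    print(translation_probs("cat"))     # "chat" ties with the ubiquitous "le"
    print(translation_probs("sleeps"))  # "dort" ties with "le" for the same reason

Even at toy scale you can see the character of the result: the statistics latch onto whatever happens to co-occur, not onto the distinctions a human translator would make, and real systems have to grind through vastly larger corpora to sharpen the estimates.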

Well, why should they? Do I pick out the patterns they're interested in? The story continues...

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 5, 2012

The doors of probability

Mike Lynch has long been the most interesting UK technology entrepreneur. In 2000, he became Britain's first software billionaire. In 2011 he sold his company, Autonomy, to Hewlett-Packard for $10 billion. A few months ago, Hewlett-Packard let him escape back into the wild of Cambridge. We've been waiting ever since for hints of what he'll do next; on Monday, he showed up at NESTA to talk about his adventures with Wired UK editor David Rowan.

Lynch made his name and his company by understanding that the rule formulated in 1750 by the English vicar and mathematician Thomas Bayes could be applied to getting machines to understand unstructured data. These days, Bayes is an accepted part of the field of statistics, but for a couple of centuries anyone who embraced his ideas would have been unwise to admit it. That began to change in the 1980s, when people began to realize their value.

"The work [Bayes] did offered a bridge between two worlds," Lynch said on Monday: the post-Renaissance world of science, and the subjective reality of our daily lives. "It leads to some very strange ideas about the world and what meaning is."

As Sharon Bertsch McGrayne explains in The Theory That Would Not Die, Bayes was offering a solution to the inverse probability problem. You have a pile of encrypted code, or a crashed airplane, or a search query: all of these are effects; your problem is to find the most likely cause. (Yes, I know: to us the search query is the cause and the page of search results is the effect; but consider it from the computer's point of view.) Bayes' idea was to start with a 50/50 random guess and refine it as more data changes the probabilities in one direction or another. When you type "turkey" into a search engine it can't distinguish between the country and the bird; when you add "recipe" you increase the probability that the right answer is instructions on how to cook one.
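For the curious, here is a minimal sketch of that kind of update - the likelihood numbers are invented for illustration, and no real search engine works from figures this crude:

    # A minimal sketch of Bayes' rule applied to the "turkey" query; the
    # likelihoods are invented for illustration.

    def bayes_update(prior, likelihoods):
        """Posterior P(cause | evidence) from prior P(cause) and P(evidence | cause)."""
        unnormalised = {cause: prior[cause] * likelihoods[cause] for cause in prior}
        total = sum(unnormalised.values())
        return {cause: value / total for cause, value in unnormalised.items()}

    prior = {"country": 0.5, "bird": 0.5}          # the 50/50 starting guess
    word_likelihoods = {
        "turkey": {"country": 0.5, "bird": 0.5},   # tells us nothing
        "recipe": {"country": 0.05, "bird": 0.6},  # strongly favours the bird
    }

    posterior = prior
    for word in ("turkey", "recipe"):
        posterior = bayes_update(posterior, word_likelihoods[word])
        print(word, {k: round(v, 2) for k, v in posterior.items()})
    # "turkey" leaves the guess at 50/50; adding "recipe" pushes it to roughly
    # 8/92 in favour of cooking instructions.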

Note, however, that search engines work on structured data: tags, text content, keywords, and metadata all going into building an index they can run over to find the hits. What Lynch is talking about is the stuff that humans can understand - raw emails, instant messages, video, audio - that until now has stymied the smartest computers.

Most of us don't really like to think in probabilities. We assume every night that the sun will rise in the morning; we call a mug a mug and not "a round display of light and shadow with a hole in it" in case it's really a doughnut. We also don't go into much detail in making most decisions, no matter how much we justify them afterwards with reasoned explanations. Even decisions that are in fact probabilistic - such as those of the electronic line-calling device Hawk-Eye used in tennis and cricket - we prefer to display as though they were infallible. We could, as Cardiff professor Harry Collins argued, take the opportunity to educate people about probability: the on-screen virtual reality animation could include an estimate of the margin for error, or the probability that the system is right (much the way IBM did in displaying Watson's winning Jeopardy answers). But apparently it's more entertaining - and sparks fewer arguments from the players - to pretend there is no fuzz in the answer.

Lynch believes we are just at the beginning of the next phase of computing, in which extracting meaning from all this unstructured data will bring about profound change.

"We're into understanding analog," he said. "Fitting computers to use instead of us to them." In addition, like a lot of the papers and books on algorithms I've been reading recently, he believes we're moving away from the scientific tradition of understanding a process to get an outcome and into taking huge amounts of data about outcomes and from it extracting valid answers. In medicine, for example, that would mean changing from the doctor who examines a patient, asks questions, and tries to understand the cause of what's wrong with them in the interests of suggesting a cure. Instead, why not a black box that says, "Do these things" if the outcome means a cured patient? "Many people think it's heresy, but if the treatment makes the patient better..."

At the beginning, Lynch said, the Autonomy founders thought the company could be worth £2 to £3 million. "That was our idea of massive back then."

Now, with his old Autonomy team, he is looking to invest in new technology companies. The goal, he said, is to find new companies built on fundamental technology whose founders are hungry and strongly believe that they are right - but are still able to listen and learn. The business must scale, requiring little or no human effort to service increased sales. With that recipe he hopes to find the germs of truly large companies - not the put-in-£10 million, sell-out-at-£80 million strategy he sees as most common, but multi-billion pound companies. The key is finding that fundamental technology, something where it's possible to pick a winner.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


August 31, 2012

Remembering the moon

"I knew my life was going to be lived in space," a 50-something said to me in 2009 on the anniversary of the moon landings, trying to describe the impact they had on him as a 12-year-old. I understood what he meant: on July 20, 1969, a late summer Sunday evening in my time zone, I was 15 and allowed to stay up late to watch; awed at both the achievement and the fact that we could see it live, we took Polaroid pictures (!) of the TV image showing Armstrong stepping onto the Moon's surface.

The science writer Tom Wilkie remarked once that the real impact of those early days of the space program was the image of the Earth from space, that it kicked off a new understanding of the planet as a whole, fragile ecosystem. The first Earth Day was just nine months later. At the time, it didn't seem like that. "We landed on the moon" became a sort of yardstick; how could we put a man on the moon yet be unable to fix a bicycle? That sort of thing.

To those who've grown up always knowing we landed on the moon in ancient times (that is, before they were born), it's hard to convey what a staggering moment of hope and astonishment that was. For one thing, it seemed so improbable and it happened so fast. In 1962, President Kennedy promised to put a man on the moon by the end of the decade - and it happened, even though he was assassinated. For another, it was the science fiction we all read as teens come to life. Surely the next steps would be other planets, greater access for the rest of us. Wouldn't I, in my lifetime, eventually be able also to look out the window of a vehicle in motion and see the Earth getting smaller?

Probably not. Many years later, I was on the receiving end of a rant from an English friend about the wasteful expense of sending people into space when unmanned spacecraft could do so much more for so much less money. He was, of course, right, and it's not much of a surprise that the death of the first human to set foot on the Moon, Neil Armstrong, so nearly coincided with the success of the Mars rover Curiosity. What Curiosity also reminds us, or should, is that although we admire Armstrong as a hero, the fact is that landing on the Moon wasn't so much his achievement as that of the probably thousands of engineers, programmers, and scientists who developed and built the technology necessary to get him there. As a result, the thing that makes me saddest about Armstrong's death on August 25 is the loss of his human memory of the experience of seeing and touching that off-Earth orbiting body.

The science fiction writer Charlie Stross has a lecture transcript I particularly like about the way the future changes under your feet. The space program - and, in the UK and France, Concorde - seemed like a beginning at the time, but has so far turned out to be an end. Sometime between 1950 and 1970, Stross argues, progress was redefined from being all about the speed of transport to being all about the speed of computers or, more precisely, Moore's Law. In the 1930s, when the moon-walkers were born, the speed of transport was doubling in less than a decade; but it only doubled in the 40 years from the late 1960s to 2007, when he wrote this talk. The acceleration had slowed dramatically.

Apply this precedent to Moore's Law, Intel co-founder Gordon Moore's observation that the number of transistors that could fit on an integrated circuit doubled about every 24 months, increasing computing speed and power proportionately. Despite what we all think today, and despite the obsessive belief among Singularitarians that computers will surpass the computational power of humans oh, any day now, but certainly by 2030, Stross was happy to argue that "Computers and microprocessors aren't the future. They're yesterday's future, and tomorrow will be about something else." His suggestion: bandwidth, bringing things like lifelogging and ubiquitous computing so that no one ever gets lost. If we'd had that in 1969, the astronauts would have been sending back first-person total-immersion visual and tactile experiences that would now be in NASA's library for us all to experience as if at first hand, instead of just the external image we all know.

The science fiction I grew up with assumed that computers would remain rare (if huge) expensive items operated by the elite and knowledgeable (except, perhaps, for personal robots). Space flight, and personal transport, on the other hand, would be democratized. Partly, let's face it, that's because space travel and robots make compelling images and stories, particularly for movies, while sitting and typing...not so much. I didn't grow up imagining my life being mediated and expanded by computer use; I, like countless generations before me, grew up imagining the places I might go and the things I might see. Armstrong and the other astronauts were my proxies. One day in the not-too-distant future, we will have no humans left who remember what it was actually like to look up and see the Earth in the sky while standing on a distant rock. There only ever have been, Wikipedia tells me, 12, all born in the 1930s.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


August 17, 2012

Bottom dwellers

This week Google announced it would downgrade in its search results sites with an exceptionally high number of valid copyright notices filed against them. As the EFF points out, the details of exactly how this will work are scarce and there is likely to be a big, big problem with false positives - that is, sites that are downgraded unfairly. You have only to look at the recent authorial pile-on that took down the legitimate ebook lending site LendInk to see what can happen when someone gets hold of the wrong end of the copyright stick.

Unless we know how the inclusion of Google's copyright notice stats will work, how do we know what will be affected, how, and for how long? There is no transparency to let a site know what's happening to it, and no appeals process. Given the many abuses of the Digital Millennium Copyright Act, under which such copyright notices are issued, it's hard to know how fair such a system will be. Though, granted: the company could have simply done it and not told us. How would we know?

The timing of this move is interesting because it comes only a few months after Google began advocating for the notion that search engine results are, like newspaper editorial matter, a form of free speech under the First Amendment. The company went as far as to commission the legal scholar Eugene Volokh to write a white paper outlining the legal arguments. These basically revolve around the idea that a search algorithm is merely a new form of editorial judgment; Google returns search results in the order in which, in its opinion, they will be most helpful to users.

In response, Tim Wu, author of The Master Switch, argued in the New York Times that conceding the right of free speech to computerized decisions brings serious problems with it in the long run. Supposing, for example, that antitrust authorities want to regulate Google to ensure that it doesn't use its dominance in search to unfairly advantage its other online properties - YouTube, Google Books, Google Maps, and so on. If search results are free speech, that type of regulation becomes unconstitutional. On BoingBoing, Cory Doctorow responded that one should regulate the bad speech without denying it is speech. Earlier, in the Guardian, Doctorow argued that Google's best gambit was making the argument about editorial integrity; publications make esthetic judgments, but Google famously loves to live by numbers.

This part of the argument is one that we're going to be seeing a lot of over the next few decades, because it boils down to this bit of Philip K. Dick territory: should machines programmed by humans have free speech rights? And if so, under what circumstances? If Google search results are free speech, is the same true of the output of credit-scoring algorithms or speed cameras? A magazine editor can, if asked, explain the reasoning process by which material was commissioned for, placed in, or rejected by her magazine; Google is notoriously secretive about the workings of its algorithms. We do not even know the criteria Google uses to judge the quality of its search results.

These are all questions we're going to have to answer as a society; and they are questions that may be answered very differently in countries without a First Amendment. My own first inclination is to require some kind of transparency in return: for every generation of separation between human and result, there must be an additional layer of explanation detailing how the system is supposed to work. The more people the results affect, the bigger the requirement for transparency. Something like that.

The more immediate question, of course, is whether Google's move will have an impact on curbing unauthorized file-sharing. My guess is not that much; few file-sharers of my acquaintance use Google for the purpose of finding files to download.

Yet, in an otherwise sensible piece about the sentencing of Surfthechannel.com owner Anton Vickerman to four years in prison in the Guardian, Dan Sabbagh winds up praising Google's decision with a bunch of errors. First of all, he blames the music industry's problems on mistakes "such as failing to introduce copy protection". As the rest of us know, the music industry only finally dropped copy protection in 2009 - because consumers hate it. Arguably, copy protection delayed the adoption of legal, paid services by years. He also calls the decision to sell all-you-can-eat subscriptions to music back catalogues a mistake; on what grounds is not made clear.

Finally, he argues, "Had Google [relegated pirate sites' results] a decade ago, it might not have been worthwhile for Vickerman to set up his site at all."

Ten years ago? In 2002, Napster had been gone for less than a year. Gnutella and BitTorrent were measuring their age in months. iTunes was a year old. The Pirate Bay wouldn't exist for some months more. Google was two years away from going public. The mistake then wasn't failing to downgrade sites oft accused of copyright infringement. The mistake then was not building legal, paid downloading services and getting them up and running as fast as possible.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


May 25, 2012

Camera obscura

There was a smoke machine running in the corner when I arrived at today's Digital Shoreditch, an afternoon considering digital identity, part of a much larger, multi-week festival. Briefly, I wondered if the organizers were making a point about privacy. Apparently not; they shut it off when the talks started.

The range of speakers served as a useful reminder that the debates we have in what I think of as the Computers, Freedom, and Privacy sector are rather narrowly framed around what we can practically build into software and services to protect privacy (and why so few people seem to care). We wrangle over what people post on Facebook (and what they shouldn't), or how much Google (or the NHS) knows about us and shares with other organizations.

But we don't get into matters of what kinds of lies we tell to protect our public image. Lindsey Clay, the managing director of Thinkbox, the marketing body for UK commercial TV, who kicked off an array of people talking about brands and marketing (though some of them in good causes), did a good, if unconscious, job of showing what privacy activists are up against: the entire mainstream of business is going the other way.

People lie in focus groups, she explained, sounding like Dr Gregory House and showing a slide comparing actual TV viewer data from Sky to what those people said about what they watched. They claim to fast-forward; really, they watch ads and think about them. They claim to time-shift almost everything; really, they watch live. They claim to watch very little TV; really, they need to sign up for the SPOGO program Richard Pearey explained a little while later. (A tsk-tsk to Pearey: Tim Berners-Lee is a fine and eminent scientist, but he did not invent the Internet. He invented the *Web*.) For me, Clay is confusing "identity" with "image". My image claims to read widely instead of watching TV shows; my identity buys DVDs from Amazon.

Of course I find Clay's view of the Net dismaying - "TV provides the content for us to broadcast on our public identity channels," she said. This is very much the view of the world the Open Rights Group campaigns to up-end: consumers are creators, too, and surely we (consumers) have a lot more to talk about than just what was on TV last night.

Tony Fish, author of My Digital Footprint, following up shortly afterwards, presented a much more cogent view and some sound practical advice. Instead of trying to unravel the enduring conundrum of trust, identity, and privacy - which he claims dates back to before Aristotle - start by working out your own personal attitude to how you'd like your data treated.

I had a plan to talk about something similar, but Fish summed up the problem of digital identity rather nicely. No one model of privacy fits all people or all cases. The models and expectations we have take various forms - which he displayed as a nice set of Venn diagrams. Underlying that is the real model, in which we have no rights. Today, privacy is a setting and trust is the challenger. The gap between our expectations and reality is the creepiness factor.

Combine that with reading a book of William Gibson's non-fiction, and you get the reflection that the future we're living in is not at all like the one we - for some value of "we" that begins with those guys who did the actual building instead of just writing commentary about it - thought we might be building 20 years ago. At the time, we imagined that the future of digital identity would look something like mathematics, where the widespread use of crypto meant that authentication would proceed by a series of discrete transactions tailored to each role we wanted to play. A library subscriber would disclose different data from a driver stopped by a policeman, who would show a different set to the border guard checking passports. We - or more precisely, Phil Zimmermann and Carl Ellison - imagined a Web of trust, a peer-to-peer world in which we could all authenticate the people we know to each other.
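For a sense of what that role-tailored disclosure was supposed to look like, here is a minimal sketch with invented names, attributes, and policy - a real system would rest on cryptographic credentials, not plain dictionaries:

    # A minimal sketch of role-based selective disclosure: one identity, but each
    # verifier sees only the attributes its role needs. Everything here is invented
    # for illustration; real systems would use cryptographic credentials.

    FULL_IDENTITY = {
        "name": "Alice Example",
        "date_of_birth": "1960-01-01",
        "address": "1 Acacia Avenue",
        "passport_number": "X1234567",
        "library_card": "LIB-0042",
        "driving_licence": "ALICE600101",
    }

    # Which attributes each role is entitled to ask for.
    DISCLOSURE_POLICY = {
        "library": ["library_card"],
        "traffic_police": ["name", "driving_licence"],
        "border_guard": ["name", "date_of_birth", "passport_number"],
    }

    def disclose(identity, role):
        """Return only the attributes the policy allows this role to see."""
        allowed = DISCLOSURE_POLICY.get(role, [])
        return {key: identity[key] for key in allowed if key in identity}

    print(disclose(FULL_IDENTITY, "library"))         # just the library card
    print(disclose(FULL_IDENTITY, "traffic_police"))  # name and licence, no passport

That was the dream, anyway.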

Instead, partly because all the privacy stuff is so hard to use, even though it didn't have to be, we have a world where at any one time there are a handful of gatekeepers who are fighting for control of consumers and their computers in whatever the current paradigm is. In 1992, it was the desktop: Microsoft, Lotus, and Borland. In 1997, it was portals: AOL, Yahoo!, and Microsoft. In 2002, it was search: Google, Microsoft, and, well, probably still Yahoo!. Today, it's social media and the cloud: Google, Apple, and Facebook. In 2017, it will be - I don't know, something in the mobile world, presumably.

Around the time I began to sound like an anti-Facebook obsessive, an audience questioner made the smartest comment of the day: "In ten years Facebook may not exist." That's true. But most likely someone will have the data, probably the third-party brokers behind the scenes. In the fantasy future of 1992, we were our own brokers. If William Heath succeeds with personal data stores, maybe we still can be.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


May 11, 2012

Self-drive

When I first saw that Google had obtained a license for its self-driving car in the state of Nevada I assumed that the license it had been issued was a driver's license. It's disappointing to find out that what they meant was that the car had been issued with license plates so it can operate on public roads. Bah: all operational cars have license plates, but none have driver's licenses. Yet.

The Guardian has been running a poll, asking readers if they'd ride in the car or not. So far, 84 percent say yes. I would, too, I think. With a manual override and a human prepared to step in for oh, the first ten years or so.

I'm sure that Google, being a large company in a highly litigious society, has put the self-driving car through far more rigorous tests than any a human learner undergoes. Nonetheless, I think it ought to be required to get a driver's license, not just license plates. It should have to pass the driving test like everyone else. And then buy insurance, which is where we'll find out what the experts think. Will the rates for a self-driving car be more or less than for a newly licensed male aged 18 to 25?

To be fair, I've actually been to Nevada, and I know how empty most of those roads are. Even without that, I'd certainly rather ride in Google's car than on a roller coaster. I'd rather share the road with Google's car than with a drunk driver. I'd rather ride in Google's car than trust the next Presidential election to electronic voting machines.

That last may seem illogical. After all, riding in a poorly driven car can kill you. A gamed electronic voting machine can only steal your votes. The same problems with debugging software and checking its integrity apply to both. Yet many of us have taken quite long flights on fly-by-wire planes and ridden on driverless trains without giving it much thought.

But a car is *personal*. So much so that we tolerate 1.2 million deaths annually worldwide from road traffic; in 2011 alone, more than ten times as many people died on American roads as were killed in the 9/11 World Trade Center attack. Yet everyone thinks they're an above-average driver and feels safest when they're controlling their own car. Will a self-driving car be that delusional?

The timing was interesting because this week I have also been reading a 2009 book I missed, The Case for Working With Your Hands or Why Office Work is Bad for Us and Fixing Things Feels Good. The author, Matthew Crawford, argues that manual labour, which so many middle class people have been brought up to despise, is more satisfying - and has better protection against outsourcing - than anything today's white collar workers learn in college. I've been saying for years that if I had teenagers I'd be telling them to learn a trade like automechanics, plumbing, electrical work, nursing, or even playing live music - anything requiring skill and knowledge and that can't easily be outsourced to another country in the global economy. I'd say teaching, but see last week's.

Dumb down plumbing all you want with screw-together PVC pipes and joints, but someone still has to come to your house to work on it. Even today's modern cars, with their sealed subsystems and electronic read-outs, need hands-on care once in a while. I suppose Google's car arrives back at home base and sends in a list of fix-me demands for its human minders to take care of.

When Crawford talks about the satisfaction of achieving something in the physical world, he's right, up to a point. In an interview for the Guardian in 1995 (TXT), John Perry Barlow commented to me that, "The more time I spend in cyberspace, the more I love the physical world, and any kind of direct, hard-linked interaction with it. I never appreciated the physical world anything like this much before." Now, Barlow, more than most people, knows a lot about fixing things: he spent 17 years running a debt-laden Wyoming ranch and, as he says in that piece, he spent most of it fixing things that couldn't be fixed. But I'm going to argue that it's the contrast and the choice that makes physical work seem so attractive.

Yes, it feels enormously different to know that I have personally driven across the US many times, the most notable of which was a three-and-a-half-day sprint from Connecticut to Los Angeles in the fall of 1981 (pre-GPS, I might add, without needing to look at a map). I imagine being driven across would be more like taking the train even though you can stop anywhere you like: you see the same scenery, more or less, but the feeling of personal connection would be lost. Very much like the difference between knowing the map and using GPS. Nonetheless, how do I travel across the US these days? Air. How does Barlow make his living? Being a "cognitive dissident". And Crawford writes books. At some point, we all seem to want to expand our reach beyond the purely local, physical world. Finding that balance - and employment for 9 billion people - will be one of this century's challenges.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


April 24, 2012

A really fancy hammer with a gun

Is a robot more like a hammer, a monkey, or the Harley-Davidson on which he rode into town? Or try this one: what if the police program your really cute, funny robot butler (Tony Danza? Scarlett Johansson?) to ask you a question whose answer will incriminate you (and which it then relays)? Is that a violation of the Fourth Amendment (protection against search and seizure) or the Fifth Amendment (you cannot be required to incriminate yourself)? Is it more like flipping a drug dealer or tampering with property? Forget science fiction, philosophy, and your inner biological supremacist; this is the sort of legal question that will be defined in the coming decade.

Making a start on this was the goal of last weekend's We Robot conference at the University of Miami Law School, organized by respected cyberlaw thinker Michael Froomkin. Robots are set to be a transformative technology, he argued to open proceedings, and cyberlaw began too late. Perhaps robotlaw is still a green enough field that we can get it right from the beginning. Engineers! Lawyers! Cross the streams!

What's the difference between a robot and a disembodied artificial intelligence? William Smart (Washington University, St Louis) summed it up nicely: "My iPad can't stab me in my bed." No: and as intimate as you may become with your iPad you're unlikely to feel the same anthropomorphic betrayal you likely would if the knife is being brandished by that robot butler above, which runs your life while behaving impeccably like it's your best friend. Smart sounds unsusceptible. "They're always going to be tools," he said. "Even if they are sophisticated and autonomous, they are always going to be toasters. I'm wary of thinking in any terms other than a really, really fancy hammer."

Traditionally, we think of machines as predictable because they respond the same way to the same input, time after time. But Smart, working with Neil Richards (Washington University, St Louis), points out that sensors are sensitive to distinctions analog humans can't make. A half-degree difference in temperature or a tiny change in lighting is a different condition to a robot. To us, their behaviour will just look capricious, helping to foster an anthropomorphic response that wrongly attributes to them the moral agency necessary for guilt under the law: the "Android Fallacy".
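A tiny invented example makes the point: two readings a human would call identical land on opposite sides of a control threshold, so the machine behaves differently even though nothing random is going on.

    # An invented illustration of the Smart/Richards point: a deterministic rule
    # plus a sensor distinction humans can't perceive looks like caprice.

    THRESHOLD_C = 20.0  # hypothetical set point

    def heater_on(sensor_reading_c: float) -> bool:
        return sensor_reading_c < THRESHOLD_C

    for reading in (19.8, 20.2):  # half a degree apart: indistinguishable to us
        print(f"{reading:.1f} C -> heater {'ON' if heater_on(reading) else 'OFF'}")
    # 19.8 C -> heater ON
    # 20.2 C -> heater OFF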

Smart and I may be outliers. The recent Big Bang Theory episode in which the can't-talk-to-women Rajesh, entranced with Siri, dates his iPhone is hilarious because in Raj's confusion we recognize our own ability to have "relationships" with almost anything by projecting human capacities such as cognition, intent, and emotions. You could call it a design flaw (if humans had a designer), and a powerful one: people send real wedding presents to TV characters, name Liquid Robotics' Wave Gliders, and characterize sending a six-legged land mine-defusing robot that's lost a leg or two to continue work as "cruel". (Kate Darling, MIT Media Lab).

What if our rampant affection for these really fancy hammers leads us to want to give them rights? Darling asked. Or, asked Sinziana Gutiu (University of Ottawa), will sex robots like Roxxxy teach us wrong expectations of humans? (When the discussion briefly compared sex robots to pets, a Twitterer quipped, "If robots are pets is sex with them bestiality?")

Few are likely to fall in love with the avatars in the automated immigration kiosks proposed at the University of Arizona (Kristen Thomasen, University of Ottawa) with two screens, one with a robointerrogator and the other flashing images and measuring responses. Automated law enforcement, already with us in nascent form, raises a different set of issues (Lisa Shay). Historically, enforcement has never been perfect; laws only have to be "good enough" to achieve their objective, whether that's slowing traffic or preventing murder. These systems pose the same problem as electronic voting: how do we audit their decisions? In military applications, disclosure may tip off the enemy, as Woodrow Hartzog (Samford University) pointed out. Yet here - and especially in medicine, where liability will be a huge issue - our traditional legal structures decide whom to punish by retracing the reasoning that led to the eventual decision. But even today's systems are already too complex.

When Hartzog asks if anyone really knows how Google or a smartphone tracks us, it reminds me of a recent conversation with Ross Anderson, the Cambridge University security engineer. In 50 years, he said, we have gone from a world whose machines could all be understood by a bright ten-year-old with access to a good library to a world with far greater access to information but full of machines whose inner workings are beyond a single person's understanding. And so: what does due process look like when only seven people understand algorithms that have consequences for the fates of millions of people? Bad enough to have the equivalent of a portable airport scanner looking for guns in New York City; what about house arrest because your butler caught you admiring Timothy Olyphant's gun on Justified?

"We got privacy wrong the last 15 years." Froomkin exclaimed, putting that together. "Without a strong 'home as a fortress right' we risk a privacy future with an interrogator-avatar-kiosk from hell in every home."

The problem with robots isn't robots. The problem is us. As usual, Pogo had it right.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


April 6, 2012

I spy

"Men seldom make passes | At girls who wear glasses," Dorothy Parker incorrectly observed in 1937. (How would she know? She didn't wear any). You have to wonder what she could have made of Google Goggles which, despite the marketing-friendly alliterative name, are neither a product (yet) nor a new idea.

I first experienced the world according to a heads-up display in 1997 during a three-day conference (TXT) on wearable computing at MIT ($). The eyes-on demonstration was a game of pool with the headset augmenting my visual field with overlays showing cuing angles. (Could be the next level of Olympic testing: checking athletes for contraband contact lenses and earpieces for those in sports where coaching is not allowed.)

At that conference, a lot of ideas were discussed and demonstrated: temperature-controlling T-shirts, garments that could send back details of a fallen soldier's condition, and so on. Much in evidence were folks like Thad Starner, who scanned my business card and handed it back to me and whose friends commented on the way he'd shift his eyes to his email mid-conversation, and Steve Mann, who turned himself into a cyborg experiment as long ago as the 1980s. Checking their respective Web pages, I see that Mann hasn't updated the evolution of wearables graphic since the late 1990s, by which time the headset looked like an ordinary pair of sunglasses; in 2002, when airport security forced him to divest his gear, he had trouble adjusting to life without it. Starner is on leave to work at...Project Glass, the home of Google Goggles.

The problem when a technological dream spans decades is that between conception and prototype things change. In 1997, that conference seemed to think wearable computing - keyboards embroidered in conductive thread, garments made of cloth woven from copper-covered strands, souped-up eyeglasses, communications-enabled watches, and shoes providing power from the energy generated in walking - surely was a decade or less away.

The assumptions were not particularly contentious. People wear wrist watches and jewelry, right? So they'll wear things with the same fashion consciousness, but functional. Like, it measures and displays your heart rhythms (a woman danced wearing a light-flashing pendant that sped up with her heart rate), or your moods (high-tech mood rings), or acts as the controller for your personal area network.

Today, a lot of people don't *wear* wrist watches any more.

For the wearables guys, it's good progress. The functionality that required 12 pounds of machinery draped about your person - I see from my pieces linked above and my contemporaneous notes that the rig I tried felt like wearing a very heavy, inflexible sandwich board - now fits in an iPhone or Android phone. Even my old Palm Centro comes close. As Jack Schofield writes in the Guardian, the headset is really all that's left that we don't have. And Google has a lot of competition.

What interests me is this: let's say these things do take off in a big way. What then? Where will the information come from to display on those headsets? Who will be the gatekeepers? If we - some of us - want to see every building decorated with outsized female nudes, will we have to opt in for porn?

My speculation here is surely not going to be futuristic enough, because like most people I'm locked into current trends. But let's say that glasses bolt onto the mobile/Internet ecologies we have in place. It is easy to imagine that, if augmented reality glasses do take off, they will be an important gateway to the next generation of information services. Because if all the glasses are is a different way of viewing your mobile phone, then they're essentially today's ear pieces - surely not sufficient motivation for people with good vision to wear glasses. So, will Apple glasses require an iTunes account and an iOS device to gain access to a choice of overlays to turn on and off that you receive from the iTunes store in real time? Similarly, Google/Android/Android marketplace. And Microsoft/Windows Mobile/Bing or something. And whoever.

So my questions are things like: will the hardware and software be interoperable? Will the dedicated augmented reality consumer need to have several pairs? Will it be like, "Today I'm going mountain climbing. I've subscribed to the Ordnance Survey premium service and they have their own proprietary glasses, so I'll need those. And then I need the Google set with the GPS enhancement to get me there in the car and find a decent restaurant afterwards." And then your kids are like, "No, the restaurants are crap on Google. Take the Facebook pair, so we can ask our friends." (Well, not Facebook, because the kids will be saying, "Facebook is for *old* people." Some cool, new replacement that adds gaming.)

What's that you say? These things are going to collapse in price so everyone can afford 12 pairs? Not sure. Prescription glasses just go on getting more expensive. I blame the involvement of fashion designers branding frames, but the fact is that people are fussy about what they wear on their faces.

In short, will augmented reality - overlays on the real world - be a new commons or a series of proprietary, necessarily limited, world views?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


March 30, 2012

The ghost of cash

"It's not enough to speak well of digital money," Geronimo Emili said on Wednesday. "You must also speak negatively of cash." Emili has a pretty legitimate gripe. In his home country, Italy, 30 percent of the economy is black and the gap between the amount of tax the government collects and the amount it's actually owed is €180 billion. Ouch.

This sets off a bit of inverted nationalist competition between him and the Greek lawyer Maria Giannakaki, there to explain a draft Greek law mandating direct payment of VAT from merchants' tills to eliminate fraud: which country is worse? Emili is sure it's Italy.

"We invented banks," he said. "But we love cash." Italy's cash habit costs the country €10 billion a year - and 40 percent of Europe's bank robberies.

This exchange took place at this year's Digital Money Forum, an annual event that pulls together people interested in everything from the latest mobile technology to the history of Anglo-Saxon coinage. Their shared interest: what makes money work? If you, like most of this group, want to see physical cash eliminated, this is the key question.

Why Anglo-Saxon coinage? Rory Naismith explains that the 8th century began the shift from valuing coins merely for their metal content to assigning them a premium for their official status. It was the beginning of the abstraction of money: coins, paper, the elimination of the gold standard, numbers in cyberspace. Now, people like Emili and this event's convenor, David Birch, argue it's time to accept money's fully abstract nature and admit the truth: it's a collective hallucination, a "promise of a promise".

These are not just the ravings of hungry technology vendors: Birch, Emili, and others argue that the costs of cash fall disproportionately on the world's poor, and that cash is the key vector for crime and tax evasion. Our impressions of the costs are distorted because the costs of electronic payments, credit cards, and mobile wallets are transparent, while cash is free at the point of use.

When I say to Birch that eliminating cash also means eliminating the ability to transact anonymously, he says, "That's a different conversation." But it isn't, if eliminating crime and tax evasion are your drivers. Over the event's two days, only Bitcoin offers anonymity, but it's doomed to its niche market, for whatever reason. (I think it's too complicated; Dutch financial historian Simon Lelieveldt says it will fail because it has no central bank.)

I pause to be annoyed by the claim that cash is filthy and spreads disease. This is Microsoft-level FUD, and not worthy of smart people claiming to want to benefit the poor and eliminate crime. In fact, I got riled enough to offer to lick any currency (or coins; I'm not proud) presented. I performed as promised on a fiver and a Danish note. And you know, they *kept* that money?

In 1680, says Birch, "Pre-industrial money was failing to serve an industrial revolution." Now, he is convinced, "We are in the early part of the post-industrial revolution, and we're shoehorning industrial money in to fit it. It can't last." This is pretty much what John Perry Barlow said about copyright in 1993, and he was certainly right.

But is Birch right? What kind of medium is cash? Is it a medium of exchange, like newspapers, trading stored value instead of information, or is it a format, like video tape? If it's the former, why shouldn't cash survive, even if only as a niche market? Media rarely die altogether - but formats come and go with such speed that even the more extreme predictions at this event - such as that of Sandra Alzetta, who said that her company expects half its transactions to be mobile by 2020 - seem quite modest. Her company is Visa International, by the way.

I'd say cash is a medium of exchange, and today's coins and notes are its format. Past formats have included shells, feathers, gold coins, and goats; what about a format for tomorrow that is printed or minted on demand at ATMs? I ask the owner of the grocery shop around the corner if his life would be better if cash were eliminated, and he shrugs no. "I'd still have to go out and get the stuff."

What's needed is low-cost alternatives that fit in cultural contexts. Lydia Howland, whose organization IDEO works to create human-centered solutions to poverty, finds the same needs in parts of Britain that exist in countries like Kenya, where M-Pesa is succeeding in bringing access to banking and remote payments to people who have never had access to financial services before.

"Poor people are concerned about privacy," she said on Wednesday. "But they have so much anonymity in their lives that they pay a premium for every financial service." Also, because they do so much offline, there is little understanding of how they work or live. "We need to create a society where a much bigger base has a voice."

During a break, I try to sketch the characteristics of a perfect payment mechanism: convenient; transparent to the user; universally accepted; universally accessible and usable; resistant to tracking, theft, counterfeiting, and malware; and hard to steal on a large scale. We aren't there yet.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 23, 2012

The year of the future

If there's one thing everyone seemed to agree on yesterday at Nominet's annual Internet policy conference, it's that this year, 2012, is a crucial one in the development of the Internet.

The discussion had two purposes. One is to feed into Nominet's policy-making as the body in charge of .uk, in which capacity it's currently grappling with questions such as how to respond to law enforcement demands to disappear domains. The other, which is the kind of exercise net.wars particularly enjoys and that was pioneered at the Computers, Freedom, and Privacy conference (next one spring 2013, in Washington, DC), is to peer into the future and try to prepare for it.

Vint Cerf, now Google's Chief Internet Evangelist, outlined some of that future, saying that this year, 2012, will see more dramatic changes to the Internet than anything since 1983. He had a list:

- The deployment of better authentication in the form of DNSSec;

- New certification regimes to limit damage in the event of more cases like 2011's Diginotar hack;

- Internationalized domain names;

- The expansion of new generic top-level domains;

- The switch to IPv6 Internet addressing, which happens on June 6;

- Smart grids;

- The Internet of things: cars, light bulbs, surfboards (!), and anything else that can be turned into a sensor by implanting an RFID chip.

Cerf paused to throw in an update on his long-running project, the interplanetary Internet, which he's been thinking about since 1998 (TXT).

"It's like living in a science fiction novel," he said yesterday as he explained about overcoming intense network lag by using high-density laser pulses. The really cool bit: repurposing space craft whose scientific missions have been completed to become part of the interplanetary backbone. Not space junk: network nodes-in-waiting.

The contrast to Ed Vaizey, the minister for culture, communications and the creative industries at the Department of Culture, Media, and Sport, couldn't have been more marked. He summed up the Internet's governance problem as the "three Ps": pornography, privacy, and piracy. It's nice rhetorical alliteration, but desperately narrow. Vaizey's characterization of 2012 as a critical year rests on the need to consider the UK's platform for the upcoming Internet Governance Forum leading to 2014's World Information Technology Forum. When Vaizey talks about regulating with a "light touch", does he mean the same things we do?

I usually place the beginning of the who-governs-the-Internet argument at 1997, the first time the engineers met rebellion when they made a technical decision (revamping the domain name system). Until then, if the pioneers had an enemy it was governments, memorably warned off by John Perry Barlow's 1996 Declaration of the Independence of Cyberspace. After 1997, it was no longer possible to ignore the new classes of stakeholders, commercial interests and consumers.

I'm old enough as a Netizen - I've been online for more than 20 years - to find it hard to believe that the Internet Governance Forum and its offshoots do much to change the course of the Internet's development: while they're talking, Google's self-drive cars rack up 200,000 miles on San Francisco's busy streets with just one accident (the car was rear-ended; not their fault) and Facebook sucks in 800 million users (if it were a country, it would be the world's third most populous nation).

But someone has to take on the job. It would be morally wrong for governments, banks, and retailers to push us all to transact with them online if they cannot promise some level of service and security for at least those parts of the Internet that they control. And let's face it: most people expect their governments to step in if they're defrauded and criminal activity is taking place, offline or on, which is why I thought Barlow's declaration absurd at the time.

Richard Allan, director of public policy for Facebook EMEA - or should we call him Lord Facebook? - had a third reason why 2012 is a critical year: at the heart of the Internet Governance Forum, he said, is the question of how to handle the mismatch between global Internet services and the cultural and regulatory expectations that nations and individuals bring with them as they travel in cyberspace. In Allan's analogy, the Internet is a collection of off-shore islands like Iceland's Surtsey, which has been left untouched to develop its own ecosystem.

Should there be international standards imposed on such sites so that all users know what to expect? Such a scheme would overcome the Balkanization problem that erupts when sites present a different face to each nation's users and the censorship problem of blocking sites considered inappropriate in a given country. But if that's the way it goes, will nations be content to aggregate the most open standards or insist on the most closed, lowest-common-denominator ones?

I'm not sure this is a choice that can be made in any single year - they were asking this same question at CFP in 1994 - but if this is truly the year in which it's made, then yes, 2012 is a critical year in the development of the Internet.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 2, 2012

Drive by wire

The day in 1978 when I first turned on my CB radio, I discovered that all that time the people in the cars around me had been having conversations I knew nothing about. Suddenly my car seemed like a pre-Annie Sullivan Helen Keller.

Judging by yesterday's seminar on self-driving cars, something similar is about to happen, but on a much larger scale. Automate driving and then make each vehicle part of the Internet of Things and suddenly the world of motoring is up-ended.

The clearest example came from Jeroen Ploeg, who is part of a Dutch national project on Cooperative Advanced Cruise Control. Like everyone here, Ploeg is grappling with issues that recur across all the world's densely populated zones: congestion, pollution, and safety. How can you increase capacity without building more roads (expensive) while decreasing pollution (expensive, unpleasant, and unhealthy) and increasing safety (deaths from road accidents have decreased in the UK for the last few years but are still nearly 2,000 a year)? Decreasing space between cars isn't safe for humans, who also lack the precision necessary to keep a tightly packed line of cars moving evenly. What Ploeg explains, and then demonstrates on a ride in a modified Prius through the Nottingham lunchtime streets, is that, given the ability to communicate, the cars can collaborate to keep a precise distance that solves all three problems. When he turns on the cooperative bit so that our car talks to its fellow in front of us, the advance warnings significantly smooth our acceleration and braking.

"It has a big potential to increase throughput," he says, noting that packing safely closer together can cut down trucks' fuel requirements by up to 10 percent from the reduction in headwinds.

But other than that, "There isn't a business case for it," he says sadly. No: because we don't buy cars collaboratively, we buy them individually according to personal values like top speed, acceleration, fuel efficiency, comfort, sporty redness, or fantasy.
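
By way of illustration only - this is a minimal sketch, not the Dutch project's actual controller - here is roughly how a cooperative cruise-control loop can hold a tight gap by combining the spacing it measures itself with the speed and braking the car ahead broadcasts. The gains, time gap, and scenario below are invented for the example.

```python
# Minimal constant-time-gap platooning sketch (illustrative only, not the real CACC system).
# The follower hears the lead car's speed and acceleration over V2V and adjusts its own
# acceleration to hold a gap of (standstill distance + time_gap * own speed).

DT = 0.1           # simulation step, seconds
TIME_GAP = 0.5     # desired time gap, seconds (hypothetical value)
STANDSTILL = 2.0   # metres kept when stopped
KP_GAP, KP_SPEED = 0.45, 0.9   # illustrative feedback gains

def follower_accel(gap, own_speed, lead_speed, lead_accel):
    """Acceleration command from spacing error, relative speed, and the lead
    vehicle's broadcast acceleration (the 'cooperative' feed-forward term)."""
    desired_gap = STANDSTILL + TIME_GAP * own_speed
    spacing_error = gap - desired_gap
    a = KP_GAP * spacing_error + KP_SPEED * (lead_speed - own_speed) + lead_accel
    return max(-5.0, min(a, 2.0))   # keep commands within plausible limits

def simulate(steps=600):
    lead_pos, lead_v = 50.0, 20.0   # lead car 50 m ahead, both doing 20 m/s
    foll_pos, foll_v = 0.0, 20.0
    for t in range(steps):
        lead_a = -3.0 if 100 < t < 130 else 0.0   # lead brakes briefly
        a = follower_accel(lead_pos - foll_pos, foll_v, lead_v, lead_a)
        lead_v = max(0.0, lead_v + lead_a * DT); lead_pos += lead_v * DT
        foll_v = max(0.0, foll_v + a * DT);      foll_pos += foll_v * DT
    return lead_pos - foll_pos

if __name__ == "__main__":
    print(f"final gap: {simulate():.1f} m")   # settles near STANDSTILL + TIME_GAP * speed
```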

To robot vehicle researchers, the question isn't if self-driving cars will take over - the various necessary bits of technology are too close to ready - but when and how people will accept the inevitable. There are some obvious problems. Human factors, for one. As cars become more skilled - already, they help humans park, keep in lanes, and keep a consistent speed - humans forget the techniques they've learned. Gradually, says Natasha Merat, co-director at the Institute for Transport Studies at the University of Leeds, they stop paying attention. In critical situations, her research shows, they react more slowly; in urban situations, more automation means they're more likely to watch DVDs until or unless they hear an alarm sound. (Curiously, her research shows that on motorways they continue to pay more attention; speed scares, apparently.) So partial automation may be more dangerous than full automation despite seeming like a good first step.

The more fascinating thing is what happens when vehicles start to communicate. Paul Newman, head of the Mobile Robotics Unit at Oxford, proposes that your vehicle should learn your routes; one day, he imagines, a little light comes on indicating that it's ready to handle the drive itself. Newman wants to reclaim his time ("It's ridiculous to think that we're condemned to a future of congestion, accidents, and time-wasting"), but since GPS is too limited to guide an automated car - it doesn't work well inside cities, and it's not fine-grained enough for parking lots - there's talk of guide boxes. Newman would rather take cues from the existing infrastructure the way humans do - and give vehicles the ability to communicate and share information: maps, pictures, and sensor data. "I don't need a funky French bubble car. I want today's car with cameras and a 3G connection."

It's later, over lunch, that I realize what he's really proposing. Say all of Britain's roads are traversed once an hour by some vehicle or other. If each picks up infrastructure, geographical, and map data and shares it...you have the vehicle equivalent of Wikipedia to compete with Google's Street View.

Two topics are largely skipped at this event, both critical: fuel and security. John Miles, from Arup, argued that it's a misconception that a large percentage of today's road traffic could be moved to rail. But is it safe to assume we'll find enough fuel to run all those extra vehicles? Traffic in the UK has increased by 85 percent since 1980; another 25 percent increase is expected in just the next 20 years.

But security is the crucial one because it must be built into V2V from the beginning. Otherwise, we're talking about the apocryphal old joke about cars crashing unpredictably, like Windows.

It's easy to resist this particular future even without wondering whether people will accept statistics showing robot cars are safer if a child is killed by one: I don't even like cars that bossily remind me to wear a seatbelt. But, as several people said yesterday, I am the wrong age. The "iPod generation" don't identify cars so closely with independence, and they don't like looking up from their phones. The 30-year-old of 2032 who knows how to back into a tight parking space may be as rare as a 30-year-old today who can multiply three-digit numbers in his head. Me, I'll wave from the train.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


December 30, 2011

Ignorance is no excuse

My father was not a patient man. He could summon up some compassion for those unfortunates who were stupider than himself. What he couldn't stand was ignorance, particularly willful ignorance. The kind of thing where someone boasts about how little they know.

That said, he also couldn't abide computers. "What can you do with a computer that you can't do with a paper and pencil?" he demanded to know when I told him I was buying a friend's TRS-80 Model III in 1981. He was not impressed when I suggested that it would enable me to make changes on page 3 of a 78-page manuscript without retyping the whole thing.

My father had a valid excuse for that particular bit of ignorance or lack of imagination. It was 1981, when most people had no clue about the future of the embryonic technology they were beginning to read about. And he was 75. But I bet if he'd made it past 1984 he'd have put some effort into understanding this technology that would soon begin changing the printing industry he worked in all his life.

While computers were new on the block, and their devotees were a relatively small cult of people who could be relatively easily spotted as "other", you could see the boast "I know nothing about computers" as a replay of high school. In American movies and TV shows that would be jocks and the in-crowd on one side, a small band of miserable, bullied nerds on the other. In the UK, where for reasons I've never understood it's considered more admirable to achieve excellence without ever being seen to work hard for it, the sociology plays out a little differently. I guess here the deterrent is less being "uncool" and more being seen as having done some work to understand these machines.

Here's the problem: the people who by and large populate the ranks of politicians and the civil service are the *other* people. Recent events such as the UK's Government Digital Service launch suggest that this is changing. Perhaps computers have gained respectability at the top level from the presence of MPs who can boast that they misspent their youth playing video games rather than, like the last generation's Ian Taylor, getting their knowledge the hard way, by sweating for it in the industry.

There are several consequences of all this. The most obvious and longstanding one is that too many politicians don't "get" the Net, which is how we get legislation like the DEA, SOPA, PIPA, and so on. The less obvious and bigger one is that we - the technology-minded, the early adopters, the educated users - write them off as too stupid to talk to. We call them "congresscritters" and deride their ignorance and venality in listening to lobbyists and special interest groups.

The problem, as Emily Badger writes for Miller-McCune as part of a review of Clay Johnson's latest book, is that if we don't talk to them how can we expect them to learn anything?

This sentiment is echoed in a lecture given recently at Rutgers by the distinguished computer scientist David Farber on the technical and political evolution of the Internet (MP3) (the slides are here (PDF)). Farber's done his time in Washington, DC, as chief technical advisor to the Federal Communications Commission and as a member of the Presidential Advisory Board on Information Technology. In that talk, Farber makes a number of interesting points about what comes next technically - it's unlikely, he says, that today's Internet Protocols will be able to cope with the terabyte networks on the horizon, and reengineering is going to be a very, very hard problem because of the way humans resist change - but the more relevant stuff for this column has to do with what he learned from his time in DC.

Very few people inside the Beltway understand technology, he says there, citing the Congressman who asked him seriously, "What is the Internet?" (Well, see, it's this series of tubes...) And so we get bad - that is, poorly grounded - decisions on technology issues.

Early in the Net's history, the libertarian fantasy was that we could get on just fine without their input, thank you very much. But as Farber says, politicians are not going to stop trying to govern the Internet. And, as he doesn't quite say, it's not like we can show them that we can run a perfect world without them. Look at the problems techies have invented: spam, the flaky software infrastructure on which critical services are based, and so on. "It's hard to be at the edge in DC," Farber concludes.

So, going back to Badger's review of Johnson: the point is it's up to us. Set aside your contempt and distrust. Whether we like politicians or not, they will always be with us. For 2012, adopt your MP, your Congressman, your Senator, your local councilor. Make it your job to help them understand the bills they're voting on. Show them that even if they don't understand the technology there are votes in those who do. It's time to stop thinking of their ignorance as solely *their* fault.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


December 23, 2011

Duck amuck

Back in about 1998, a couple of guys looking for funding for their start-up were asked this: How could anyone compete with Yahoo! or Altavista?

"Ten years ago, we thought we'd love Google forever," a friend said recently. Yes, we did, and now we don't.

It's a year and a bit since I began divorcing Google. Ducking the habit is harder than those "They have no lock-in" financial analysts thought when Google went public: as if habit and adaptation were small things. Easy to switch CTRL-K in Firefox to DuckDuckGo, significantly hard to unlearn ten years of Google's "voice".

When I tell this to Gabriel Weinberg, the guy behind DDG - his recent round of funding lets him add a few people to experiment with different user interfaces and redo DDG's mobile application - he seems to understand. He started DDG, he told The Rise to the Top last year, because of the increasing amount of spam in Google's results. Frustration made him think: for many queries, wouldn't searching just del.icio.us and Wikipedia produce better results? Since his first weekend mashing that up, DuckDuckGo has evolved to include over 50 sources.

"When you type in a query there's generally a vertical search engine or data source out there that would best serve your query," he says, "and the hard problem is matching them up based on the limited words you type in." When DDG can make a good guess at identifying such a source - such as, say, the National Institutes of Health - it puts that result at the top. This is a significant hint: now, in DDG searches, I put the site name first, where on Google I put it last. Immediate improvement.

This approach gives Weinberg a new problem, a higher-order version of the Web's broken links: as companies reorganize, change, or go out of business, the APIs he relies on vanish.

Identifying the right source is harder than it sounds, because the long tail of queries require DDG to make assumptions about what's wanted.

"The first 80 percent is easy to capture," Weinberg says. "But the long tail is pretty long."

As Ken Auletta tells it in Googled, the venture capitalist Ram Shriram advised Sergey Brin and Larry Page to sell their technology to Yahoo! or maybe Infoseek. But those companies were not interested: the thinking then was portals and keeping site visitors stuck as long as possible on the pages advertisers were paying for, while Brin and Page wanted to speed visitors away to their desired results. It was only when Shriram heard that, Auletta writes, that he realized that baby Google was disruptive technology. So I ask Weinberg: can he make a similar case for DDG?

"It's disruptive to take people more directly to the source that matters," he says. "We want to get rid of the traditional user interface for specific tasks, such as exploring topics. When you're just researching and wanting to find out about a topic there are some different approaches - kind of like clicking around Wikipedia."

Following one thing to another, without going back to a search engine...sounds like my first view of the Web in 1991. But it also sounds like some friends' notion of after-dinner entertainment, where they start with one word in the dictionary and let it lead them serendipitously from word to word and book to book. Can that strategy lead to new knowledge?

"In the last five to ten years," says Weinberg, "people have made these silos of really good information that didn't exist when the Web first started, so now there's an opportunity to take people through that information." If it's accessible, that is. "Getting access is a challenge," he admits.

There is also the frontier of unstructured data: Google searches the semi-structured Web by imposing a structure on it - its indexes. By contrast, Mike Lynch's Autonomy, which just sold to Hewlett-Packard for £10 billion, uses Bayesian logic to search unstructured data, which is what most companies have.

"We do both," says Weinberg. "We like to use structured data when possible, but a lot of stuff we process is unstructured."

Google is, of course, a moving target. For me, its algorithms and interface are moving in two distinct directions, both frustrating. The first is Wal-Mart: stuff most people want. The second is the personalized filter bubble. I neither want nor trust either. I am more like the scientists Linguamatics serves: its analytic software scans hundreds of journals to find hidden links suggesting new avenues of research.

Anyone entering a category as thoroughly dominated by a single company as search is now is constantly asked the same question: how can you possibly compete? Weinberg must be sick of being asked about competing with Google. And he'd be right to be, because it's the wrong question. The right question is: how can he build a sustainable business? He's had some sponsorship while his user numbers are relatively low (currently 7 million searches a month) and, eventually, he's talked about context-based advertising - yet he's also promising little spam and privacy - no tracking. Now, that really would be disruptive.

So here's my bet. I bet that DuckDuckGo outlasts Groupon as a going concern. Merry Christmas.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


December 16, 2011

Location, location, location

In the late 1970s, I used to drive across the United States several times a year (I was a full-time folksinger), and although these were long, long days at the wheel, there were certain perks. One was the feeling that the entire country was my backyard. The other was the sense that no one in the world knew exactly where I was. It was a few days off from the pressure of other people.

I've written before that privacy is not sleeping alone under a tree but being able to do ordinary things without fear. Being alone on an interstate crossing Oklahoma wasn't to hide some nefarious activity (like learning the words to "There Ain't No Instant Replay in the Football Game of Life"). Turn off the radio and, aside from an occasional billboard, the world was quiet.

Of course, that was also a world in which making a phone call was a damned difficult thing to do, which is why professional drivers all had CB radios. Now, everyone has mobile phones, and although your nearest and dearest may not know where you are, your phone company most certainly does, and to a very fine degree of "granularity".

I imagine normal human denial is broad enough to encompass pretending you're in an unknown location while still receiving text messages. Which is why this year's A Fine Balance focused on location privacy.

The travel privacy campaigner Edward Hasbrouck has often noted that travel data is particularly sensitive and revealing in a way few realize. Travel data indicates your religion (special meals), medical problems, and lifestyle habits affecting your health (choosing a smoking room in a hotel). Travel data also shows who your friends are, and how close: who do you travel with? Who do you share a hotel room with, and how often?

Location data is travel data on a steady drip of steroids. As Richard Hollis, who serves on the ISACA Government and Regulatory Advocacy Subcommittee, pointed out, location data is in fact travel data - except that instead of being detailed logging of exceptional events it's ubiquitous logging of everything you do. Soon, he said, we will not be able to opt out - and instead of travel data being a small, sequestered, unusually revealing part of our lives, all our lives will be travel data.

Location data can reveal the entire pattern of your life. Do you visit a church every Monday evening that has an AA meeting going on in the basement? Were you visiting the offices of your employer's main competitor when you were supposed to have a doctor's appointment?

Research supports this view. Some of the earliest work I'm aware of is that of Alberto Escudero-Pascual. A month-long experiment tracking the mobile phones in his department enabled him to diagram all the intra-departmental personal relationships. In a 2002 paper, he suggests how to anonymize location information (PDF). The problem: no business wants anonymization. As Hollis and others said, businesses want location data. Improved personalization depends on context, and location provides a lot of that.
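
One simple flavour of location anonymization - a hedged sketch, not the scheme in Escudero-Pascual's paper - is to coarsen each fix in space and time before it ever leaves the device. The grid size and time window below are arbitrary illustrations.

```python
# Spatial and temporal coarsening of a location fix: what leaves the device
# identifies an area and a period, not a point in time and space.
# Illustrative only; not the scheme proposed in the paper mentioned above.

def coarsen(lat: float, lon: float, timestamp: int,
            grid_deg: float = 0.01, window_s: int = 900):
    """Snap a fix to a coarse grid cell (~1 km at UK latitudes) and a 15-minute bucket."""
    cell = (round(lat / grid_deg) * grid_deg, round(lon / grid_deg) * grid_deg)
    bucket = timestamp - (timestamp % window_s)
    return cell, bucket

if __name__ == "__main__":
    # a fix near King's Cross, London, at some Unix timestamp
    print(coarsen(51.5312, -0.1235, 1324512345))
```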

Patrick Walshe, the director of privacy for the GSM Association, compared the way people care about privacy to the way they care about their health: they opt for comfort and convenience and hope for the best. They - we - don't make changes until things go wrong. This explains why privacy considerations so often fail and privacy advocates despair: guarding your privacy is like eating your vegetables, and who except a cranky person plans their meals that way?

The result is likely to be the world outlined by Dave Coplin, Microsoft UK's director of search, advertising, and online, who argued that privacy today is at the turning point that the Melissa virus represented for security 11 years ago, when it first hit.

Calling it "the new battleground," he said, "This is what happens when everything is connected." Similarly, Blaine Price, a senior lecturer in computing at the Open University, had this cheering thought: as humans become part of the Internet of Things, data leakage will become almost impossible to avoid.

Network externalities mean that the number of people using a network increases its value for all other users of that network. What about privacy externalities? I haven't heard the phrase before, although I see it's not new (PDF). But I mean something different than those papers do: the fact that we talk about privacy as an individual choice when instead it's a collaborative effort. A single person who says, "I don't care about my privacy" can override the pro-privacy decisions of dozens of their friends, family, and contacts. "I'm having dinner with @wendyg," someone blasts, and their open attitude to geolocation reveals mine.

In his research on tracking, Price has found that the more closely connected the trackers are the less control they have over such decisions. I may worry that turning on a privacy block will upset my closest friend; I don't obsess at night, "Will the phone company think I'm mad at it?"

So: you want to know where I am right now? Pay no attention to the geolocated Twitterer who last night claimed to be sitting in her living room with "wendyg". That wasn't me.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

December 2, 2011

Debating the robocalypse

"This House fears the rise of artificial intelligence."

This was the motion up for debate at Trinity College Dublin's Philosophical Society (Twitter: @phil327) last night (December 1, 2011). It was a difficult one, because I don't think any of the speakers - neither the four students, Ricky McCormack, Michael Coleman, Cat O'Shea, and Brian O'Beirne, nor the invited guests, Eamonn Healy, Fred Cummins, and Abraham Campbell - honestly fear AI all that much. Either we don't really believe a future populated by superhumanly intelligent killer robots is all that likely, or, like Ken Jennings, we welcome our new computer overlords.

But the point of this type of debate is not to believe what you are saying - I learned later that in the upper levels of the game you are assigned a topic and a position and given only 15 minutes to marshal your thoughts - but to argue your assigned side so passionately, persuasively, and coherently that you win the votes of the assembled listeners even if later that night, while raiding the icebox, they think, "Well, hang on..." This is where politicians and Dail/House of Commons debating style come from. As a participatory sport it was utterly new to me, and it explains a *lot* about the derailment of political common sense by the rise of public relations and lobbying.

Obviously I don't actually oppose research into AI. I'm all for better tools, although I vituperatively loathe tools that try to game me. As much fun as it is to speculate about whether superhuman intelligences will deserve human rights, I tend to believe that AI will always be a tool. It was notable that almost every speaker assumed that AI would be embodied in a more-or-less humanoid robot. Far more likely, it seems to me, that if AI emerges it will be first in some giant, boxy system (that humans can unplug) and even if Moore's Law shrinks that box it will be much longer before AI and robotics converge into a humanoid form factor.

Lacking conviction on the likelihood of all this, and hence of its dangers, I had to find an angle, which eventually boiled down to Walt Kelly and "We have met the enemy and he is us." In this, I discovered, I am not alone: a 2007 ThinkArtificial poll found that more than half of respondents feared what people would do with AI: the people who program it, own it, and deploy it.

If we look at the history of automation to date, a lot of it has been used to make (human) workers as interchangeable as possible. I am old enough to remember, for example, being able to walk down to the local phone company in my home town of Ithaca, NY, and talk in person to a customer service representative I had met multiple times before about my piddling residential account. Give everyone the same customer relationship database and workers become interchangeable parts. We gain some convenience - if Ms Jones is unavailable anyone else can help us - but we pay in lost relationships. The company loses customer loyalty, but gains (it hopes) consistent implementation of its rules and the economic leverage of no longer depending on any particular set of workers.

I might also have mentioned automated trading systems, which are making the markets swing much more wildly much more often. Later, Abraham Campbell, a computer scientist working in augmented reality at University College Dublin, said as much as 25 percent of trading is now done by bots. So, cool: Wall Street has become like one of those old IRC channels where you met a cute girl named Eliza...

Campbell had a second example: Siri, which will tell you where to hide a dead body but not where you might get an abortion. Google's removal of torrent sites from its autosuggestion/Instant feature didn't seem to me egregious censorship, partly because there are other search engines and partly (short-sightedly) because I hate Instant so much already. But as we become increasingly dependent on mediators to help us navigate our overcrowded world, the agenda and/or competence of the people programming them are vital to know. These will be transparent only as long as there are alternatives.

Simultaneously, back in England in work that would have made Jessica Mitford proud, Privacy International's Eric King and Emma Draper were publishing material that rather better proves the point. Big Brother Inc lays out the dozens of technology companies from democratic Western countries that sell surveillance technologies to repressive regimes. King and Draper did what Mitford did for the funeral business in the late 1960s (and other muckrakers have done since): investigate what these companies' marketing departments tell prospective customers.

I doubt businesses will ever, without coercion, behave like humans with consciences; it's why they should not be legally construed as people. During last night's debate, the prospective robots were compared to women and "other races", who were also denied the vote. Yes, and they didn't get it without a lot of struggle. In the "Robocalypse" (O'Beirne), they'd better be prepared to either a) fight to meltdown for their rights or b) protect their energy sources and wait patiently for the human race to exterminate itself.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 4, 2011

The identity layer

This week, the UK government announced a scheme - Midata - under which consumers will be able to reclaim their personal information. The same day, the Centre for the Study of Financial Innovation assembled a group of experts to ask what the business model for online identification should be. And: whatever that model is, what the government's role should be. (For background, here's the previous such discussion.)

My eventual thought was that the government's role should be to set standards; it might or might not also be an identity services provider. The government's inclination now is to push this job to the private sector. That leaves the question of how to serve those who are not commercially interesting; at the CSFI meeting the Post Office seemed the obvious contender for both pragmatic and historical reasons.

As Mike Bracken writes in the Government Digital Service blog posting linked above, the notion of private identity providers is not new. But what he seems to assume is that what's needed is federated identity - that is, in Wikipedia's definition, a means for linking a person's electronic identity and attributes across multiple distinct systems. What I meant is a system in which one may have many limited identities that are sufficiently interoperable that you can choose which to use at the point of entry to a given system. We already have something like this on many blogs, where commenters may be offered a choice of logging in via Google, OpenID, or simply posting a name and URL.

The government gateway circa Year 2000 offered a choice: getting an identity certificate required payment of £50 to, if I remember correctly, Experian or Equifax, or other companies whose interest in preserving personal privacy is hard to credit. The CSFI meeting also mentioned tScheme - an industry consortium to provide trust services. Outside of relatively small niches it's made little impact. Similarly, fifteen years ago, the government intended, as part of implementing key escrow for strong cryptography, to create a network of trusted third parties that it would license and, by implication, control. The intention was that the TTPs should be folks that everyone trusts - like banks. Hilarious, we said *then*. Moving on.

In between then and now, the government also mooted a completely centralized identity scheme - that is, the late, unlamented ID card. Meanwhile, we've seen the growth of a set of competing American/global businesses who all would like to be *the* consumer identity gateway and who managed to steal first-mover advantage from existing financial institutions. Facebook, Google, and Paypal are the three most obvious. Microsoft had hopes, perhaps too early, when in 1999 it created Passport (now Windows Live ID). More recently, it was the home for Kim Cameron's efforts to reshape online identity via the company's now-cancelled CardSpace, and Brendon Lynch's adoption of U-Prove, based on Stefan Brands' technology. U-Prove is now being piloted in various EU-wide projects. There are probably lots of other organizations that would like to get in on such a scheme, if only because of the data and linkages a federated system would grant them. Credit card companies, for example. Some combination of mobile phone manufacturers, mobile network operators, and telcos. Various medical outfits, perhaps.

An identity layer that gives fair and reasonable access to a variety of players who jointly provide competition and consumer choice seems like a reasonable goal. But it's not clear that this is what either the UK's distastefully spelled "Midata" or the US's NSTIC (which attracted similar concerns when first announced) has in mind. What "federated identity" sounds like is the convenience of "single sign-on", which is great if you're working in a company and need to use dozens of legacy systems. When you're talking about identity verification for every type of transaction you do in your entire life, however, a single gateway is a single point of failure and, as Stephan Engberg, founder of the Danish company Priway, has often said, a single point of control. It's the Facebook cross-all-the-streams approach, embedded everywhere. Engberg points to a discussion paper inspired by two workshops he facilitated for the Danish National IT and Telecom Agency (NITA) in late 2010 that covers many of these issues.

Engberg, who describes himself as a "purist" when it comes to individual sovereignty, says the only valid privacy-protecting approach is to ensure that each time you go online on each device you start a new session that is completely isolated from all previous sessions and then have the choice of sharing whatever information you want in the transaction at hand. The EU's LinkSmart project, which Engberg was part of, created middleware to do precisely that. As sensors and RFID chips spread along with IPv6, which can give each of them its own IP address, linkages across all parts of our lives will become easier and easier, he argues.

We've seen often enough that people will choose convenience over complexity. What we don't know is what kind of technology will emerge to help us in this case. The devil, as so often, will be in the details.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 21, 2011

Printers on fire

It used to be that if you thought things were spying on you, you were mentally disturbed. But you're not paranoid if they're really out to get you, and new research at Columbia University, with funding from DARPA's Crash program, exposes how vulnerable today's devices are. Routers, printers, scanners - anything with an embedded system and an IP address.

Usually what's dangerous is monoculture: Windows is a huge target. So, argue Columbia computer science professor Sal Stolfo and PhD student Ang Cui, device manufacturers rely on security by diversity: every device has its own specific firmware. Cui estimates, for example, that there are 300,000 different firmware images for Cisco routers, varying by feature set, model, operating system version, hardware, and so on. Sure, an attacker could crack one - but what's the payback? Especially compared to that nice, juicy Windows server over there?

"In every LAN there are enormous numbers of embedded systems in every machine that can be penetrated for various purposes," says Cui.

The payback is access to that nice, juicy server and, indeed, the whole network. Few update - or even check - firmware. So once inside, an attacker can lurk unnoticed until the device is replaced.

Cui started by asking: "Are embedded systems difficult to hack? Or are they just not low-hanging fruit?" There isn't, notes Stolfo, an industry providing protection for routers, printers, the smart electrical meters rolling out across the UK, or the control interfaces that manage conference rooms.

If there is, after seeing their demonstrations, I want it.

Their work is two-pronged: first demonstrate the need, then propose a solution.

Cui began by developing a rootkit for Cisco routers. Despite the diversity of firmware and each image's memory layout, routers are a monoculture in that they all perform the same functions. Cui used this insight to find the invariant elements and fingerprint them, making them identifiable in the memory space. From that, he can determine which image is in place and deduce its layout.

"It takes a millisecond."

Once in, Cui sets up a control channel over ping packets (ICMP) to load microcode, reroute traffic, and modify the router's behaviour. "And there's no host-based defense, so you can't tell it's been compromised." The amount of data sent over the control channel is too small to notice - perhaps a packet per second.

"You can stay stealthy if you want to."

You could even kill the router entirely by modifying the EEPROM on the motherboard. How much fun to be the army or a major ISP and physically connect to 10,000 dead routers to restore their firmware from backup?

They presented this at WOOT (Quicktime), and then felt they needed something more dramatic: printers.

"We turned off the motor and turned up the fuser to maximum." Result: browned paper and...smoke.

How? By embedding a firmware update in an apparently innocuous print job. This approach is familiar: embedding programs where they're not expected is a vector for viruses in Word and PDFs.

"We can actually modify the firmware of the printer as part of a legitimate document. It renders correctly, and at the end of the job there's a firmware update." It hasn't been done before now, Cui thinks, because there isn't a direct financial pay-off and it requires reverse-engineering proprietary firmware. But think of the possibilities.

"In a super-secure environment where there's a firewall and no access - the government, Wall Street - you could send a resume to print out." There's no password. The injected firmware connects to a listening outbound IP address, which responds by asking for the printer's IP address to punch a hole inside the firewall.

"Everyone always whitelists printers," Cui says - so the attacker can access any computer. From there, monitor the network, watch traffic, check for regular expressions like names, bank account numbers, and social security numbers, sending them back out as part of ping messages.

"The purpose is not to compromise the printer but to gain a foothold in the network, and it can stay for years - and then go after PCs and servers behind the firewall." Or propagate the first printer worm.

Stolfo and Cui call their answer a "symbiote", after biological symbiosis, in which two organisms attach to each other to their mutual benefit.

The goal is code that works on an arbitrarily chosen executable about which you have very little knowledge. Emulating a biological symbiote, which finds places to attach to the host and extract resources, Cui's symbiote first calculates a secure checksum across all the static regions of the code, then finds random places where its code can be injected.

"We choose a large number of these interception points - and each time we choose different ones, so it's not vulnerable to a signature attack and it's very diverse." At each device access, the symbiote steals a little bit of the CPU cycle (like an RFID chip being read) and automatically verifies the checksum.

"We're not exploiting a vulnerability in the code," says Cui, "but a logical fallacy in the way a printer works." Adds Stolfo, "Every application inherently has malware. You just have to know how to use it."

Never mind all that. I'm still back at that printer smoking. I'll give up my bank account number and SSN if you just won't burn my house down.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


September 30, 2011

Trust exercise

When do we need our identity to be authenticated? Who should provide the service? Whom do we trust? And, to make it sustainable, what is the business model?

These questions have been debated ever since the early 1990s, when the Internet and the technology needed to enable the widespread use of strong cryptography arrived more or less simultaneously. Answering them is a genuinely hard problem (or it wouldn't be taking so long).

A key principle that emerged from the crypto-dominated discussions of the mid-1990s is that authentication mechanisms should be role-based and limited by "need to know"; information would be selectively unlocked and in the user's control. The policeman stopping my car at night needs to check my blood alcohol level and the validity of my driver's license, car registration, and insurance - but does not need to know where I live unless I'm in violation of one of those rules. Cryptography, properly deployed, can be used to protect my information, authenticate the policeman, and then authenticate the violation result that unlocks more data.

Today's stored-value cards - London's Oyster travel card, or Starbucks' payment/wifi cards - when used anonymously do capture some of what the crypto folks had in mind. But the crypto folks also imagined that anonymous digital cash or identification systems could be supported by selling standalone products people installed. This turned out to be wholly wrong: many tried, all failed. Which leads to today, where banks, telcos, and technology companies are all trying to figure out who can win the pool by becoming the gatekeeper - our proxy. We want convenience, security, and privacy, probably in that order; they want security and market acceptance, also probably in that order.

The assumption is we'll need that proxy because large institutions - banks, governments, companies - are still hung up on identity. So although the question should be whom do we - consumers and citizens - trust, the question that ultimately matters is whom do *they* trust? We know they don't trust *us*. So will it be mobile phones, those handy devices in everyone's pockets that are online all the time? Banks? Technology companies? Google has launched Google Wallet, and Facebook has grand aspirations for its single sign-on.

This was exactly the question Barclaycard's Tom Gregory asked at this week's Centre for the Study of Financial Innovation round-table discussion (PDF). It was, of course, a trick, but he got the answer he wanted: out of banks, technology companies, and mobile network operators, most people picked banks. Immediate flashback.

The government representatives who attended Privacy International's 1997 Scrambling for Safety meeting assumed that people trusted banks and that therefore they should be the Trusted Third Parties providing key escrow. Brilliant! It was instantly clear that the people who attended those meetings didn't trust their banks as much as all that.

One key issue is that, as Simon Deane-Johns writes in his blog posting about the same event, "identity" is not a single, static thing; it is dynamic and shifts constantly as we add to the collection of behaviors and data representing it.

As long as we equate "identity" with "a person's name" we're in the same kind of trouble the travel security agencies are when they try to predict who will become a terrorist on a particular flight. Like the browser fingerprint, we are more uniquely identifiable by the collection of our behaviors than we are by our names, as detectives who search for missing persons know. The target changes his name, his jobs, his home, and his wife - but if his obsession is chasing after trout he's still got a fishing license. Even if a link between a Starbucks card and its holder's real-world name is never formed, the more data the card's use enters into the system the more clearly recognizable as an individual he will be. The exact tag really doesn't matter in terms of understanding his established identity.

What I like about Deane-Johns' idea -

"the solution has to involve the capability to generate a unique and momentary proof of identity by reference to a broad array of data generated by our own activity, on the fly, which is then useless and can be safely discarded"

is two things. First, it has potential as a way to make impersonation and identity fraud much harder. Second is that implicit in it is the possibility of two-way authentication, something we've clearly needed for years. Every large organization still behaves as though its identity is beyond question whereas we - consumers, citizens, employees - need to be thoroughly checked. Any identity infrastructure that is going to be robust in the future must be built on the understanding that with today's technology anyone and anything can be impersonated.
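
To make that concrete, here is a hedged sketch of what a "momentary proof" might look like: hash a bundle of recent behavioural data together with a fresh nonce into a one-time token that a verifier holding the same data can check and then discard. Every field and value below is invented for illustration.

```python
# Derive a one-time proof from recent activity plus a verifier-supplied nonce,
# compare it once, then throw everything away. Illustrative sketch only.

import hashlib
import hmac
import json
import os
import time

def momentary_proof(activity: dict, nonce: bytes) -> str:
    """Short-lived proof bound to recent activity and a fresh nonce."""
    blob = json.dumps(activity, sort_keys=True).encode()
    return hmac.new(nonce, blob, hashlib.sha256).hexdigest()

if __name__ == "__main__":
    recent_activity = {                          # hypothetical behavioural snapshot
        "last_oyster_taps": ["Angel", "Bank", "Holborn"],
        "last_card_merchants": ["grocer", "bookshop"],
        "window": int(time.time() // 300),       # 5-minute freshness window
    }
    nonce = os.urandom(16)                       # sent by the verifier per transaction
    proof = momentary_proof(recent_activity, nonce)
    print(proof)                                 # checked once against the same data, then discarded
```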

As an aside, it was remarkable how many people at this week's meeting were more concerned about having their Gmail accounts hacked than their bank accounts. My reasoning is that the stakes are higher: I'd rather lose my email reputation than my house. Their reasoning is that the banking industry is more responsive to customer problems than technology companies. That truly represents a shift from 1997, when technology companies were smaller and more responsive.

More to come on these discussions...


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 23, 2011

Your grandmother's phone

In my early 20s I had a friend who was an expert at driving cars with...let's call them quirks. If he had to turn the steering wheel 15 degrees to the right to keep the car going straight while peering between smears left by the windshield wipers and pressing just the exact right amount on the brake pedal, no problem. This is the beauty of humans: we are adaptable. That characteristic has made us the dominant species on the planet, since we can adapt to changes of habitat, food sources, climate (within reason), and cohorts. We also adapt to our tools, which is why technology designers get away with flaws like the iPhone's "death grip". We don't like it - but we can deal with it.

At least, we can deal with it when we know what's going on. At this week's Senior Market Mobile, the image that stuck in everyone's mind came early in the day, when Cambridge researchers Ian Hosking and Mike Bradley played a video clip of a 78-year-old woman trying to figure out how to get past an iPad's locked screen. Was it her fault that it seemed logical to her to hold it in one hand while jabbing at it in frustration? As Donald Norman wrote 20 years ago, for an interface to be intuitive it has to match the user's mental model of how it works.

That 78-year-old's difficulties, when compared with the glowing story of the 100-year-old who bonded instantly with her iPad, make another point: age is only one aspect of a person's existence - and one whose relevance they may reject. If you're having trouble reading small type, remembering the menu layout, pushing the buttons, or hearing a phone call, what matters isn't that you're old but that you have vision impairment, cognitive difficulties, less dextrous fingers, or hearing loss. You don't have to be old to have any of those things - and not all old people have them.

For those reasons, the design decisions intended to aid seniors - who, my God, are defined as anyone over 55! - aid many other people too. All of these points were made with clarity by Mark Beasley, whose company specializes in marketing to seniors - you know, people who, unlike predominantly 30-something designers and marketers, don't think they're old and who resent being lumped together with a load of others with very different needs on the basis of age. And who think it's not uncool to be over 50. (How ironic, considering that when the Baby Boomers were 18 they minted the slogan, "Never trust anyone over 30.")

Besides physical attributes and capabilities, cultural factors matter more in a target audience than age per se. We who learned to type on manual typewriters bash keyboards a lot harder than those who grew up with computers. Those who grew up with the phone grudgingly sited in the hallway, using it only for the briefest of conversations, are less likely to be geared toward settling in for a long, loud, intimate conversation on a public street.

Last year at this event, Mobile Industry Review editor Ewan McLeod lambasted the industry because even the iPhone did not effectively serve his parents' greatest need: an easy way to receive and enjoy pictures of their grandkids. This year, Stuart Arnott showed off a partial answer, Mindings, a free app for Android tablets that turns them into smart display frames. You can send them pictures or text messages or, in Arnott's example, a reminder to take medication that, when acknowledged by a touch, goes on to display the picture or message the owner really wants to see.

Another project in progress, Threedom, is an attempt to create an Android design with only three buttons that uses big icons and type to provide all the same functionality, but very simply.

The problem with all of this - which Arnott seems to have grasped with Mindings - is that so many of these discussions focus on the mobile phone as a device in isolation. But that's not really learning the lesson of the iPod/iPhone/iPad, which is that what matters is the ecology surrounding the device. It is true that a proportion of today's elderly do not use computers or understand why they suddenly need a mobile phone. But tomorrow's elderly will be radically different. Depending on class and profession, people who are 60 now are likely to have spent many years of their working lives using computers and mobile phones. When they reach 86, what will dictate their choice of phone will be only partly whatever impairments age may bring. A much bigger issue is going to be the legacy and other systems that the phone has to work with: implantable electronic medical devices, smart electrical meters, ancient software in use because it's familiar (and has too much data locked inside it), maybe even that smart house they keep telling us we're going to have one of these days. Those phones are going to have to do a lot more than just make it easy to call your son.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 22, 2011

Face to face

When, six weeks or so back, Facebook implemented facial recognition without asking anyone much in advance, Tim O'Reilly expressed the opinion that it is impossible to turn back the clock and pretend that facial recognition doesn't exist or can be stopped. We need, he said, to stop trying to control the existence of these technologies and instead concentrate on controlling the uses to which collected data might be put.

Unless we're prepared to ban face recognition technology outright, having it available in consumer-facing services is a good way to get society to face up to the way we live now. Then the real work begins, to ask what new social norms we need to establish for the world as it is, rather than as it used to be.

This reminds me of the argument that we should be teaching creationism in schools in order to teach kids critical thinking: it's not the only, or even best, way to achieve the object. If the goal is public debate about technology and privacy, Facebook isn't a good choice to conduct it.

The problem with facial recognition, unlike a lot of other technologies, is that it's retroactive, like a compromised private cryptography key. Once the key is known you haven't just unlocked the few messages you're interested in but everything ever encrypted with that key. Suddenly deployed accurate facial recognition means the passers-by in holiday photographs, CCTV images, and old TV footage of demonstrations are all much more easily matched to today's tagged, identified social media sources. It's a step change, and it's happening very quickly after a long period of doesn't-work-as-hyped. So what was a low-to-moderate privacy risk five years ago is suddenly much higher risk - and one that can't be withdrawn with any confidence by deleting your account.

There's a second analogy here between what's happening with personal data and what's happening to small businesses with respect to hacking and financial crime. "That's where the money is," the bank robber Willie Sutton explained when asked why he robbed banks. But banks are well defended by large security departments. Much simpler to target weaker links, the small businesses whose money is actually being stolen. These folks do not have security departments and have not yet assimilated Benjamin Woolley's 1990s observation that cyberspace is where your money is. The democratization of financial crime has a more direct personal impact because the targets are closer to home: municipalities, local shops, churches, all more geared to protecting cash registers and collection plates than to securing computers, routers, and point-of-sale systems.

The analogy to personal data is that until relatively recently most discussions of privacy invasion similarly focused on celebrities. Today, most people can be studied as easily as famous, well-documented people if something happens to make them interesting: the democratization of celebrity. And there are real consequences. Canada, for example, is doing much more digging at the border, banning entry based on long-ago misdemeanors. We can warn today's teens that raiding a nearby school may someday limit their freedom to travel; but today's 40-somethings can't make an informed choice retroactively.

Changing this would require the US to decide at a national level to delete such data; we would have to trust them to do it; and other nations would have to agree to do the same. But the motivation is not there. Judith Rauhofer, at the online behavioral advertising workshop she organised a couple of weeks ago, addressed exactly this point when she noted that increasingly the mantra of governments bent on surveillance is, "This data exists. It would be silly not to use it."

The corollary, and the reason O'Reilly is not entirely wrong, is that governments will also say, "This *technology* exists. It would be silly not to use it." We can ban social networks from deploying new technologies, but we will still be stuck with them when it comes to governments and law enforcement. In this, government and business interests align perfectly.

So what, then? Do we stop posting anything online on the basis of the old spy motto "Never volunteer information", thereby ending our social participation? Do we ban the technology (which does nothing to stop the collection of the data)? Do we ban collecting the data (which does nothing to stop the technology)? Do we ban both and hope that all the actors are honest brokers rather than shifty folks trading our data behind our backs? What happens if thieves figure out how to use online photographs to break into systems protected by facial recognition?

One common suggestion is that social norms should change in the direction of greater tolerance. That may happen in some aspects, although Anders Sandberg has an interesting argument that transparency may in fact make people more judgmental. But if the problem of making people perfect were so easily solved we wouldn't have spent thousands of years on it with very little progress.

I don't like the answer "It's here, deal with it." I'm sure we can do better than that. But these are genuinely tough questions. The start, I think, has to be building as much user control into technology design (and its defaults) as we can. That's going to require a lot of education, especially in Silicon Valley.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 8, 2011

The grey hour

There is a fundamental conundrum that goes like this. Users want free information services on the Web. Advertisers will support those services if users will pay in personal data rather than money. Are privacy advocates spoiling a happy agreement or expressing a widely held concern that just hasn't found expression yet? Is it paternalistic and patronizing to say that the man on the Clapham omnibus doesn't understand the value of what he's giving up? Is it an expression of faith in human nature to say that on the contrary, people on the street are smart, and should be trusted to make informed choices in an area where even the experts aren't sure what the choices mean? Or does allowing advertisers free rein mean the Internet will become a highly distorted, discriminatory, immersive space where the most valuable people get the best offers in everything from health to politics?

None of those questions are straw men. The middle two are the extreme end of the industry point of view as presented at the Online Behavioral Advertising Workshop sponsored by the University of Edinburgh this week. That extreme shouldn't be ignored; Kimon Zorbas from the Internet Advertising Bureau, who voiced those views, also genuinely believes that regulating behavioral advertising is a threat to European industry. Can you prove him wrong? If you're a politician intent on reelection, hear that pitch, and can't document harm, do you dare to risk it?

At the other extreme end are the views of Jeff Chester, from the Center for Digital Democracy, who laid out his view of the future both here and at CFP a few weeks ago. If you read the reports the advertising industry produces for its prospective customers, they're full of neuroscience and eyeball tracking. Eventually, these practices will lead, he argues, to a highly discriminatory society: the most "valuable" people will get the best offers - not just in free tickets to sporting events but the best access to financial and health services. Online advertising contributed to the subprime loan crisis and the obesity crisis, he said. You want harm?

It's hard to assess the reality of Chester's argument. I trust his reading of the documents in which advertising companies pitch their prospective customers. What isn't clear is whether the neuroscience these companies claim actually works. Certainly, one participant here says real neuroscientists heap scorn on the whole idea - and I am old enough to remember the mythology surrounding subliminal advertising.

Accordingly, the discussion here seems to me less of a single spectrum and more like a triangle, with the defenders of online behavioural advertising at one point, Chester and his neuroscience at another, and perhaps Judith Rauhofer, the workshop's organizer, at a third, with a lot of messy confusion in the middle. Upcoming laws, such as the revision of the EU ePrivacy Directive and various other regulatory efforts, will have to create some consensual order out of this triangular chaos.

The fourth episode of Joss Whedon's TV series Dollhouse, "The Gray Hour", had that week's characters enclosed inside a vault. They have an hour to accomplish their mission of theft - the window while the security system reboots. Is this online behavioral advertising's grey hour - its opportunity to get ahead before we realize what's going on?

A persistent issue is definitely technology design.

One of Rauhofer's main points is that the latest mantra is, "This data exists, it would be silly not to take advantage of it." This is her answer to one of those middle positions - that we should regulate not the collection of data but only its use. Her view makes sense to me: no one can abuse data that has not been collected. What does a privacy policy mean when the company that is actually collecting the data and compiling profiles is completely hidden?

One help would be teaching computer science students ethics and responsible data practices. The science fiction writer Charlie Stross noted the other day that the average age of entrepreneurs in the US is roughly ten years younger than in the EU. The reason: health insurance. Isn't it possible that starting up at a more mature age leads to a different approach to the social impact of what you're selling?

No one approach will solve this problem within the time we have to solve it. On the technology side, defaults matter. The "software choice architect", in researcher Chris Soghoian's phrase, is rarely the software developer; more usually it's the legal or marketing department. The three biggest browser manufacturers most dependent on advertising funding not-so-mysteriously have the least privacy-friendly default settings. Advertising is becoming an arms race: first cookies, then Flash cookies, now online behavioral advertising, browser fingerprinting, geolocation, comprehensive profiling.
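
To make the fingerprinting piece of that arms race concrete, here is a minimal sketch of the idea - not any tracker's actual code, and with made-up attribute values: collect a handful of things any site can read about your browser, serialize them consistently, and hash the result into an identifier that follows you around without a cookie in sight.

    # A minimal sketch of attribute-based browser fingerprinting (illustrative only).
    # The attributes and values below are hypothetical; real trackers read them via
    # JavaScript APIs and combine many more signals.
    import hashlib
    import json

    def fingerprint(attributes: dict) -> str:
        """Hash a stable serialization of the attributes into one identifier."""
        canonical = json.dumps(attributes, sort_keys=True)
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

    browser_a = {
        "user_agent": "Mozilla/5.0 (Windows NT 6.1; rv:5.0)",
        "screen": "1280x800x24",
        "timezone_offset_minutes": -60,
        "fonts": ["Arial", "Comic Sans MS", "Garamond"],
    }
    browser_b = dict(browser_a, fonts=["Arial", "Garamond"])  # one font fewer

    print(fingerprint(browser_a))  # the same inputs always give the same ID
    print(fingerprint(browser_b))  # a small difference yields a different ID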

The law also matters. Peter Hustinx, lecturing last night, believes the existing principles are right; they just need stronger enforcement and better application.

Consumer education would help - but for that to be effective we need far greater transparency from all these - largely American - companies.

What harm can you show has happened? Zorbas challenged. Rauhofer's reply: you do not have to prove harm when your house is bugged and constantly wiretapped. "That it's happening is the harm."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 17, 2011

If you build it...

Lawrence Lessig once famously wrote that "Code is law". Today, at the last day of this year's Computers, Freedom, and Privacy, Ross Anderson's talk about the risks of centralized databases suggested a corollary: Architecture is policy. (A great line and all mine, so I thought, until reminded that only last year CFP had an EFF-hosted panel called exactly that.)

You may *say* that you value patient (for example) privacy. And you may believe that your role-based access rules will be sufficient to protect a centralized database of personal health information (for example), but do the math. The NHS's central database, Anderson said, includes data on 50 million people that is accessible by 800,000 people - about the same number as had access to the diplomatic cables that wound up being published by Wikileaks. And we all saw how well that worked. (Perhaps the Wikileaks Unit could be pressed into service as a measure of security risk.)

So if you want privacy-protective systems, you want the person vendors build for - "the man with the checkbook" - to be someone who understands what policies will actually be implemented by your architecture, and who will be around the table at the top level of government, where policy is being drafted. When the man with the checkbook is a doctor, you get a very different, much more functional, much more privacy-protective system. When governments recruit and listen to a CIO, you do not get a giant centralized, administratively convenient Wikileaks Unit.

How big is the threat?

Assessing that depends a lot, said Bruce Schneier, on whether you accept the rhetoric of cyberwar (Americans, he noted, are only willing to use the word "war" when there are no actual bodies involved). If we are at war, we are a population to be subdued; if we are in peacetime we are citizens to protect. The more the rhetoric around cyberwar takes over the headlines, the harder it will be to get privacy protection accepted as an important value. So many other debates all unfold differently depending whether we are rhetorically at war or at peace: attribution and anonymity; the Internet kill switch; built-in and pervasive wiretapping. The decisions we make to defend ourselves in wartime are the same ones that make us more vulnerable in peacetime.

"Privacy is a luxury in wartime."

Instead, "This" - Stuxnet, attacks on Sony and Citibank, state-tolerated (if not state-sponsored) hacking - "is what cyberspace looks like in peacetime." He might have, but didn't, say, "This is the new normal." But if on the Internet in 1995 no one knew you were a dog; on the Internet in 2011 no one knows whether your cyberattack was launched by a government-sponsored military operation or a couple of guys in a Senegalese cybercafé.

Why Senegalese? Because earlier, Mouhamadou Lo, a legal advisor from the Computing Agency of Senegal, had explained that cybercrime affects everyone. "Every street has two or three cybercafés," he said. "People stay there morning to evening and send spam around the world." And every day in his own country there are one or two victims. "It shows that cybercrime is worldwide."

And not only crime. The picture of a young Senegalese woman, posted on Facebook, appeared in the press in connection with the Strauss-Kahn affair because it seemed to correspond to a description given of the woman in the case. She did nothing wrong; but there are still consequences back home.

Somehow I doubt the solution to any of this will be found in the trend the ACLU's Jay Stanley and others highlighted towards robot policing. Forget black helicopters and CCTV; what about infrared cameras that capture private moments in the dark, and helicopters the size of hummingbirds that "hover and stare"? The mayor of Ogden, Utah, wants blimps over his city, and, as Vernon M Keenan, director of the Georgia Bureau of Investigation, put it, "Law enforcement does not do a good job of looking at new technologies through the prism of civil liberties."

Imagine, said the ACLU's Jay Stanley: "The chilling prospect of 100 percent enforcement."

Final conference thoughts, in no particular order:

- This is the first year of CFP (and I've been going since 1994) where Europe and the UK are well ahead on considering a number of issues. One was geotracking (Europe has always been ahead in mobile phones); others were electronic health care records and how to manage liability for online content. "Learn from our mistakes!" pleaded one Dutch speaker (re health records).

- #followfriday: @sfmnemonic; @privacywonk; @ehasbrouck; @CenDemTech; @openrightsgroup; @privacyint; @epic; @cfp11.

- The market in secondary use of health care data is now $2 billion (PricewaterhouseCoopers via Latanya Sweeney).

- Index on Censorship has a more thorough write-up of Bruce Schneier's talk.

- Today was IBM's 100th birthday.

- This year's chairs, Lillie Coney (EPIC) and Jules Polonetsky, did an exceptional job of finding a truly diverse range of speakers. A rarity at technology-related conferences.

- Join the weekly Twitter #privchat, Tuesdays at noon Eastern US time, hosted by the Center for Democracy and Technology.

- Have a good year, everybody! See you at CFP 2012 (and here every Friday until then).

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 10, 2011

The creepiness factor

"Facebook is creepy," said the person next to me in the pub on Tuesday night.

The woman across from us nodded in agreement and launched into an account of her latest foray onto the service. She had, she said, uploaded a batch of 15 photographs of herself and a friend. The system immediately tagged all of the photographs of the friend correctly. It then grouped the images of her and demanded to know, "Who is this?"

What was interesting about this particular conversation was that these people were not privacy advocates or techies; they were ordinary people just discovering their discomfort level. The sad thing is that Facebook will likely continue to get away with this sort of thing: it will say it's sorry, modify some privacy settings, and people will gradually get used to the convenience of having the system save them the work of tagging photographs.

In launching its facial recognition system, Facebook has done what many would have thought impossible: it has rolled out technology that just a few weeks ago *Google* thought was too creepy for prime time.

Wired UK has a set of instructions for turning tagging off. But underneath, the system will, I imagine, still recognize you. What records are kept of this underlying data, and what mining the company may be able to do on it, are, of course, not things we're told about.
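
For the curious, the general technique behind that kind of photo grouping - and this is a sketch of the textbook approach, not Facebook's system - is to map each detected face to a numeric embedding and cluster the embeddings; each cluster becomes a "Who is this?" prompt. The synthetic vectors below stand in for whatever embedding model a real service uses.

    # Sketch of embedding-plus-clustering photo grouping (not Facebook's system).
    # Real systems derive one vector per detected face from a trained model; here
    # synthetic vectors stand in for those embeddings.
    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(0)
    person_a = rng.normal(loc=0.0, scale=0.02, size=(8, 128))   # 8 photos of one person
    person_b = rng.normal(loc=1.0, scale=0.02, size=(7, 128))   # 7 photos of another
    embeddings = np.vstack([person_a, person_b])

    labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(embeddings)

    groups = {}
    for photo_idx, label in enumerate(labels):
        groups.setdefault(label, []).append(photo_idx)

    # Each non-noise cluster becomes a "Who is this?" prompt for the user.
    print(groups)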

Facebook has had to rein in new elements of its service so many times now - the Beacon advertising platform, the many revamps to its privacy settings - that the company's behavior is beginning to seem like a marketing strategy rather than a series of bungling missteps. The company can't be entirely privacy-deaf; it numbers among its staff the open rights advocate and former MP Richard Allan. Is it listening to its own people?

If it's a strategy it's not without antecedents. Google, for example, built its entire business without TV or print ads. Instead, every so often it would launch something so cool everyone wanted to use it that would get it more free coverage than it could ever have afforded to pay for. Is Facebook inverting this strategy by releasing projects it knows will cause widely covered controversy and then reining them back in only as far as the boundary of user complaints? Because these are smart people, and normally smart people learn from their own mistakes. But Zuckerberg, whose comments on online privacy have approached arrogance, is apparently justified, in that no matter what mistakes the company has made, its user base continues to grow. As long as business success is your metric, until masses of people resign in protest, he's golden. Especially when the IPO moment arrives, expected to be before April 2012.

The creepiness factor has so far done nothing to hurt its IPO prospects - which, in the absence of an actual IPO, seem to be rubbing off on the other social media companies going public. Pandora (net loss last quarter: $6.8 million) has even increased the number of shares on offer.

One thing that seems to be getting lost in the rush to buy shares - LinkedIn popped to over $100 on its first day, and has now settled back to $72 and change (for a price/earnings ratio of 1,076) - is that buying first-day shares isn't what it used to be. Even during the millennial technology bubble, buying shares at the launch of an IPO was approximately like joining a queue at midnight to buy the new Apple whizmo on the first day, even though you know you'll be able to get it cheaper and debugged in a couple of months. Anyone could have gotten much better prices on Amazon shares for some months after that first-day bonanza, for example (and either way, in the long term, you'd have profited handsomely).

Since then, however, a new game has arrived in town: private exchanges, where people who meet a few basic criteria for being able to afford to take risks, trade pre-IPO shares. The upshot is that even more of the best deals have already gone by the time a company goes public.

In no case is this clearer than the Groupon IPO, about which hardly anyone has anything good to say. Investors buying in would be the greater fools; a co-founder's past raises questions, and its business model is not sustainable.

Years ago, Roger Clarke predicted that the then brand-new concept of social networks would inevitably become data abusers simply because they had no other viable business model. As powerful as the temptation to do this has been while these companies have been growing, it seems clear the temptation can only become greater when they have public markets and shareholders to answer to. New technologies are going to exacerbate this: performing accurate facial recognition on user-uploaded photographs wasn't possible when the first pictures were being uploaded. What capabilities will these networks be able to deploy in the future to mine and match our data? And how much will they need to do it to keep their profits coming?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


April 29, 2011

Searching for reality

They say that every architect has, stuck in his desk drawer, a plan for the world's tallest skyscraper; probably every computer company similarly has a plan for the world's fastest supercomputer. At one time, that particular contest was always won by Seymour Cray. Currently, the world's fastest computer is Tianhe-1A, in China. But one day soon, it's going to be Blue Waters, an IBM-built machine filling 9,000 square feet at the National Center for Supercomputing Applications at the University of Illinois at Champaign-Urbana.

It's easy to forget - partly because Champaign-Urbana is not a place you visit by accident - how mainstream-famous NCSA and its host, UIUC, used to be. NCSA is the place from which Mosaic emerged in 1993. UIUC was where Arthur C. Clarke's HAL was turned on, on January 12, 1997. Clarke's choice was not accidental: my host, researcher Robert McGrath tells me that Clarke visited here and saw the seminal work going on in networking and artificial intelligence. And somewhere he saw the first singing computer, an IBM 7094 haltingly rendering "Daisy Bell." (Good news for IBM: at that time they wouldn't have had to pay copyright clearance fees on a song that was, in 1961, 69 years old.)

So much was invented here: Telnet, for example.

"But what have they done for us lately?" a friend in London wondered.

NCSA's involvement with supercomputing began when Larry Smarr, having worked in Europe and admired the access non-military scientists had to high-performance computers, wrote a letter to the National Science Foundation proposing that the NSF should fund a supercomputing center for use by civilian scientists. They agreed, and the first version of NCSA was built in 1986. Typically, a supercomputer is commissioned for five years; after that it's replaced with the fastest next thing. Blue Waters will have more than 300,000 8-core processors and be capable of a sustained rate of 1 petaflop and a peak rate of 10 petaflops. The transformer room underneath can provide 24 megawatts of power - as energy-efficiently as possible. Right now, the space where Blue Waters will go is a large empty white space broken up by black plug towers. It looks like a set from a 1950s science fiction film.
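
Taking the figures above at face value (they are quoted figures, not my measurements), a quick back-of-envelope calculation shows what a sustained petaflop means per core and per watt; since 24 megawatts is the transformer capacity rather than the measured draw, read the last number as a lower bound.

    # Back-of-envelope arithmetic using the figures quoted above (my assumptions,
    # not NCSA's published specifications): >300,000 8-core processors,
    # 1 petaflop sustained, 10 petaflops peak, 24 MW of available power.
    cores = 300_000 * 8                 # 2.4 million cores
    sustained = 1e15                    # 1 petaflop/s sustained
    peak = 10e15                        # 10 petaflop/s peak
    power_watts = 24e6                  # transformer capacity, not measured draw

    print(f"sustained per core : {sustained / cores / 1e9:.2f} gigaflop/s")
    print(f"sustained/peak     : {sustained / peak:.0%}")
    print(f"flops per watt     : {sustained / power_watts / 1e6:.0f} megaflop/s per W (lower bound)")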

On the consumer end, we're at the point now where a five-year-old computer pretty much answers most normal needs. Unless you're a gamer or a home software developer, the pressure to upgrade is largely off. But this is nowhere near true at the high end of supercomputing.

"People are never satisfied for long," says Tricia Barker, who showed us around the facility. "Scientists and engineers are always thinking of new problems they want to solve, new details they want to see, and new variables they want to include." Planned applications for Blue Waters include studying storms to understand why some produce tornadoes and some don't. In the 1980s, she says, the data points were kilometers apart; Blue Waters will take the mesh down to 10 meters.

"It's why warnings systems are so hit and miss," she explains. Also on the list are more complete simulations to study climate change.

Every generation of supercomputers gets closer to simulating reality and increases the size of the systems we can simulate in a reasonable amount of time. How much further can it go?

They speculate, she said, about how, when, and whether exaflops can be reached: 2018? 2020? At all? Will the power requirements outstrip what can reasonably be supplied? How big would it have to be? And could anyone afford it?

In the end, of course, it's all about the data. The 500 petabytes of storage Blue Waters will have is only a small piece of the gigantic data sets that science is now producing. Across campus, also part of NCSA, senior research scientist Ray Plante is part of the Large Synoptic Survey Telescope project which, when it gets going, will capture a third of the sky every night on 3 gigapixel cameras with a wide field of view. The project will allow astronomers to see changes over a period of days, allowing them to look more closely at phenomena such as bursters and supernovae, and study dark energy.
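
Even a crude estimate shows why the storage question dominates. The exposure count and bytes-per-pixel below are my assumptions for illustration, not the project's published numbers, but the order of magnitude is the point.

    # Back-of-envelope nightly data volume for a 3-gigapixel survey camera.
    # The exposure count and bytes-per-pixel are assumptions for illustration,
    # not the project's published figures.
    pixels_per_exposure = 3e9        # 3-gigapixel camera
    bytes_per_pixel = 2              # assume 16-bit raw pixels
    exposures_per_night = 1_000      # assumed cadence for a full night of sweeping

    nightly_bytes = pixels_per_exposure * bytes_per_pixel * exposures_per_night
    print(f"~{nightly_bytes / 1e12:.0f} TB per night, "
          f"~{nightly_bytes * 365 / 1e15:.1f} PB per year (before processing)")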

Astronomers have led the way in understanding the importance of archiving and sharing data, partly because the telescopes are so expensive that scientists have no choice about sharing them. More than half the Hubble telescope papers, Plante says, are based on archival research, which means research conducted on the data after a short period in which research is restricted to those who proposed (and paid for) the project. In the case of LSST, he says, there will be no proprietary period: the data will be available to the whole community from Day One. There's a lesson here for data hogs if they care to listen.

Listening to Plante - and his nearby colleague Joe Futrelle - talk about storing, studying, and archiving these giant masses of data reveals some of the issues that lie ahead for all of us. Many of today's astronomical studies rely on statistics, which in turn require matching data sets that were built into catalogues without necessarily considering who might in future need to use them: opening the data is only the first step.

So in answer to my friend: lots. I saw only about 0.1 percent of it.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 15, 2011

The open zone

This week my four-year-old computer had a hissy fit and demanded, more or less simultaneously, a new graphics card, a new motherboard, and a new power supply. It was the power supply that was the culprit: when it blew it damaged the other two pieces. I blame an incident about six months ago when the power went out twice for a few seconds each time, a few minutes apart. The computer's always been a bit fussy since.

I took it to the tech guys around the corner to confirm the diagnosis, and we discussed which replacements to order and where to order them from. I am not a particularly technical person, and yet even I can repair this machine by plugging in replacement parts and updating some software. (It's fine now, thank you.)

Here's the thing: at no time did anyone say, "It's four years old. Just get a new one." Instead, the tech guys said, "It's a good computer with a good processor. Sure, replace those parts." A watershed moment: the first time a four-year-old computer is not dismissed as obsolete.

As if by magic, confirmation turned up yesterday, when the Guardian's Charles Arthur asked whether the PC market has permanently passed its peak. Arthur goes on to quote Jay Chou, a senior research analyst at IDC, suggesting that we are now in the age of "good-enough computing" and that computer manufacturers will now need to find ways to create a "compelling user experience". Apple is the clear leader in that arena, although it's likely that if I'd had a Mac instead of a PC it would have been neither so easy nor so quick and inexpensive to fix my machine and get back to work. Macs are wonders of industrial design, but as I noted in 2007 when I built this machine, building a PC is now a color-by-numbers affair: subsystem pieces that plug together in only one way. What it lacks in elegance compared to a Mac is more than made up for by being able to repair it myself.

But Chou is likely right that this is not the way the world is going.

In his 1998 book The Invisible Computer, usability pioneer Donald Norman projected a future of information appliances, arguing that computers would become invisible because they would be everywhere. (He did not, however, predict the ubiquitous 20-second delay that would accompany this development. You know, it used to be you could turn something on and it would work right away because it didn't have to load software into its memory?) For his model, Norman took electric motors: in the early days you bought one electric motor and used it to power all sorts of variegated attachments; later (now) you found yourself owning dozens of electric motors, all hidden inside appliances.

The trade-off is pretty much the same: the single electric motor with attachments was much more repairable by a knowledgeable end user than today's sealed black-box appliances are. Similarly, I can rebuild my PC, but I can only really replace the hard drive on my laptop, and the battery on my smart phone. iPhone users can't even do that. Norman, whose interest is usability, doesn't - or didn't, since he's written other books since - see this as necessarily a bad deal for consumers, who just want their technology to work intuitively so they can use it to get stuff done.

Jonathan Zittrain, though, has generally taken the opposite view, arguing in his book The Future of the Internet - and How to Stop It and in talks such as the one he gave at last year's Web science meeting that the general-purpose computer, which he dates to 1977, is dying. With it, to some extent, is going the open Internet. In that talk, to illustrate what he meant by curated content, he did a nice little morph from the ultra-controlled main menu of CompuServe circa 1992 to today's iPhone home screen.

"How curated do we want things to be?" he asked.

It's the key question. Zittrain's view, backed up by Tim Wu in The Master Switch, is that security and copyright may be the levers used to close down general-purpose computers and the Internet, leaving us with a corporately owned Internet that runs on black boxes to which individual consumers have little or no access. This is, ultimately, what the "Open" in Open Rights Group seems to me to be about: ensuring that the most democratic medium ever invented remains a democratic medium.

Clearly, there are limits. The earliest computer kits were open - but only to the relatively small group of people with - or willing to acquire - considerable technical skill. My computer would not be more open to me if I had to get out a soldering iron to fix my old motherboard and code my own operating system. Similarly, the skill required to deal with security threats like spam and malware attacks raises the technical bar of dealing with computers to the point where they might as well be the black boxes Zittrain fears. But somewhere between the soldering iron and the point-and-click of a TV remote control there has to be a sweet spot where the digital world is open to the most people. That's what I hope we can find.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 8, 2011

Brought to book

JK Rowling is seriously considering releasing the Harry Potter novels as ebooks, while Amanda Hocking, who's sold a million or so ebooks, has signed a $2 million contract with St. Martin's Press. In the same week. It's hard not to conclude that ebooks are finally coming of age.

And in many ways this is a good thing. The economy surrounding the Kindle, Barnes and Noble's Nook, and other such devices is allowing more than one writer to find an audience for works that mainstream publishers might have ignored. I do think hard work and talent will usually out, and it's hard to believe that Hocking would not have found herself a good career as a writer via the usual routine of looking for agents and publishers. She would very likely have many fewer books published at this point, and probably wouldn't be in possession of the $2 million it's estimated she's made from ebook sales.

On the other hand, assuming she had made at least a couple of book sales by now, she might be much more famous: her blog posting explaining her decision notes that a key factor is that she gets a steady stream of complaints from would-be readers that they can't buy her books in stores. She expects to lose money on the St. Martin's deal compared to what she'd make from self-publishing the same titles. To fans of disintermediation, of doing away with gatekeepers and middle men and allowing artists to control their own fates and interact directly with their audiences, Hocking is a self-made hero.

And yet...the future of ebooks may not be so simply rosy.

This might be the moment to stop and suggest reading a little background on book publishing from the smartest author I know on the topic, science fiction writer Charlie Stross. In a series of blog postings he's covered common misconceptions about publishing, why the Kindle's 2009 UK launch was bad news for writers, and misconceptions about ebooks. One of Stross's central points: epublishing platforms are not owned by publishers but by consumer electronics companies - Apple, Sony, Amazon.

If there's one thing we know about the Net and electronic media generally it's that when the audience for any particular new medium - Usenet, email, blogs, social networks - gets to be a certain size it attracts abuse. It's for this reason that every so often I argue that the Internet does not scale well.

In a fascinating posting on Patrick and Theresa Nielsen-Hayden's blog Making Light, Jim Macdonald notes the case of Canadian author S K S Perry, who has been blogging on LiveJournal about his travails with a thief. Perry, having had no luck finding a publisher for his novel Darkside, had posted it for free on his Web site, where a thief copied it and issued a Kindle edition. Macdonald links this sorry tale (which seems now to have reached a happy-enough ending) with postings from Laura Hazard Owen and Mike Essex that predict a near future in which we are awash in recycled ebook...spam. As all three of these writers point out, there is no system in place to do the kind of copyright/plagiarism checking that many schools have implemented. The costs are low; the potential for recycling content vast; and the ease of gaming the ratings system extraordinary. And either way, the ebook retailer makes money.
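
The checking itself is not exotic. Here is a minimal sketch of the shingle-and-overlap approach that plagiarism detectors broadly rely on - a simplification of the idea, not any vendor's algorithm: break each text into overlapping word n-grams and measure how much of one appears in the other.

    # Minimal near-duplicate check via word shingles and Jaccard overlap
    # (a simplification of what plagiarism checkers do, not any vendor's algorithm).
    def shingles(text: str, n: int = 3) -> set:
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    def jaccard(a: set, b: set) -> float:
        return len(a & b) / len(a | b) if (a | b) else 0.0

    original = "It was a dark and stormy night and the manuscript vanished without a trace"
    knockoff = "It was a dark and stormy night and the ebook vanished without a trace"

    score = jaccard(shingles(original), shingles(knockoff))
    print(f"overlap: {score:.0%}")   # a high overlap score flags the pair for human review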

Macdonald's posting primarily considers this future with respect to the challenge for authors to be successful*: how will good books find audiences if they're tiny islands adrift in a sea of similar-sounding knock-offs and crap? A situation like that could send us all scurrying back into the arms of people who publish on paper. That wouldn't bother Amazon-the-bookseller; Apple and others without a stake in paper publishing are likely to care more (and promising authors and readers due care and diligence might help them build a better, differentiated ebook business).

There is a mythology that those who - like the Electronic Frontier Foundation or the Open Rights Group - oppose the extension and tightening of copyright are against copyright. This is not the case: very few people want to do away with copyright altogether. What most campaigners in this area want is a fairer deal for all concerned.

This week the issue of term extension for sound recordings in the EU revived when Denmark changed tack and announced it would support the proposals. It's long been my contention that musicians would be better served by changes in the law that would eliminate some of the less fair terms of typical contracts, that would provide for the reversion of rights to musicians when their music goes out of commercial availability, and that would alter the balance of power, even if only slightly, in favor of the musicians.

This dystopian projected future for ebooks is a similar case. It is possible to be for paying artists and even publishers and still be against the imposition of DRM and the demonization of new technologies. This moment, where ebooks are starting to kick into high gear, is the time to find better ways to help authors.

*Successful: an author who makes enough money from writing books to continue writing books.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

February 18, 2011

What is hyperbole?

This seems to have been a week for over-excitement. IBM gets an onslaught of wonderful publicity because it built a very large computer that won at the archetypal American TV game, Jeopardy. And Eben Moglen proposes the Freedom box, a more-or-less pocket ("wall wart") computer you can plug in and that will come up, configure itself, and be your Web server/blog host/social network/whatever and will put you and your data beyond the reach of, well, everyone. "You get no spying for free!" he said in his talk outlining the idea for the New York Internet Society.

Now I don't mean to suggest that these are not both exciting ideas and that making them work is/would be an impressive and fine achievement. But seriously? Is "Jeopardy champion" what you thought artificial intelligence would look like? Is a small "wall wart" box what you thought freedom would look like?

To begin with Watson and its artificial buzzer thumb. The reactions display everything that makes us human. The New York Times seems to think AI is solved, although its editors focus on our ability to anthropomorphize an electronic screen with a smooth, synthesized voice and a swirling logo. (Like HAL, R2D2, and Eliza Doolittle, its status is defined by the reactions of the surrounding humans.)

The Atlantic and Forbes come across as defensive. The LA Times asks: how scared should we be? The San Francisco Chronicle congratulates IBM for suddenly becoming a cool place for the kids to work.

If, that is, they're not busy hacking up Freedom boxes. You could, if you wanted, see the past twenty years of net.wars as a recurring struggle between centralization and distribution. The Long Tail finds value in selling obscure products to meet the eccentric needs of previously ignored niche markets; eBay's value is in aggregating all those buyers and sellers so they can find each other. The Web's usefulness depends on the diversity of its sources and content; search engines aggregate it and us so we can be matched to the stuff we actually want. Web boards distributed us according to niche topics; social networks aggregated us. And so on. As Moglen correctly says, we pay for those aggregators - and for the convenience of closed, mobile gadgets - by allowing them to spy on us.

An early, largely forgotten net.skirmish came around 1991 over the asymmetric broadband design that today is everywhere: a paved highway going to people's homes and a dirt track coming back out. The objection that this design assumed that consumers would not also be creators and producers was largely overcome by the advent of Web hosting farms. But imagine instead that symmetric connections were the norm and everyone hosted their sites and email on their own machines with complete control over who saw what.

This is Moglen's proposal: to recreate the Internet as a decentralized peer-to-peer system. And I thought immediately how much it sounded like...Usenet.

For those who missed the 1990s: invented and implemented in 1979 by three students, Tom Truscott, Jim Ellis, and Steve Bellovin, the whole point of Usenet was that it was a low-cost, decentralized way of distributing news. Once the Internet was established, it became the medium of transmission, but in the beginning computers phoned each other and transferred news files. In the early 1990s, it was the biggest game in town: it was where Linus Torvalds and Tim Berners-Lee announced their inventions, Linux and the World Wide Web.

It always seemed to me that if "they" - whoever they were going to be - seized control of the Internet we could always start over by rebuilding Usenet as a town square. And this is to some extent what Moglen is proposing: to rebuild the Net as a decentralized network of equal peers. Not really Usenet; instead a decentralized Web like the one we gave up when we all (or almost all) put our Web sites on hosting farms whose owners could be DMCA'd into taking our sites down or subpoena'd into turning over their logs. Freedom boxes are Moglen's response to "free spying with everything".

I don't think there's much doubt that the box he has in mind can be built. The Pogoplug, which offers a personal cloud and a sort of hardware social network, is most of the way there already. And Moglen's argument has merit: that if you control your Web server and the nexus of your social network law enforcement can't just make a secret phone call, they'll need a search warrant to search your home if they want to inspect your data. (On the other hand, seizing your data is as simple as impounding or smashing your wall wart.)
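
The unglamorous core of the self-hosting idea can be sketched in a few lines - this is only the "serve your own pages from a box at home" half, using the Python standard library; the real Freedom box concept layers encryption, identity, and zero-configuration setup on top, none of which appears here.

    # Minimal sketch of the self-hosting idea: serve your own pages from a box at
    # home using only the standard library. The Freedom box concept adds
    # encryption, identity, and federation; none of that is shown here.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    class PersonalSite(SimpleHTTPRequestHandler):
        def __init__(self, *args, **kwargs):
            # Serve files from a local directory you control, e.g. ./public_html
            super().__init__(*args, directory="public_html", **kwargs)

    if __name__ == "__main__":
        # Listening on all interfaces so the box is reachable from outside the LAN
        # (in practice you would add TLS and some form of access control).
        HTTPServer(("0.0.0.0", 8080), PersonalSite).serve_forever()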

I can see Freedom boxes being a good solution for some situations, but like many things before it they won't scale well to the mass market because they will (like Usenet) attract abuse. In cleaning out old papers this week, I found a 1994 copy of Esther Dyson's Release 1.0 in which she demands a return to the "paradise" of the "accountable Net"; 'twill be ever thus. The problem Watson is up against is similar: it will function well, even engagingly, within the domain it was designed for. Getting it to scale will be a whole 'nother, much more complex problem.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


January 14, 2011

Face time

The history of the Net has featured many absurd moments, but this week was some sort of peak of the art. In the same week I read that a) based on a $450 million round of investment from Goldman Sachs, Facebook is now valued at $50 billion, higher than Boeing's market capitalization, and b) Facebook's founder, Mark Zuckerberg, is so tired of the stress of running the service that he plans to shut it down on March 15. As I seem to recall a CS Lewis character remarking irritably, "Why don't they teach logic in these schools?" If you have a company worth $50 billion and you don't much like running it any more, you sell the damn thing and retire. It's not like Zuckerberg even needs to wait to be Time's Man of the Year.

While it's safe to say that Facebook isn't going anywhere soon, it's less clear what its long-term future might be, and the users who panicked at the thought of the service's disappearance would do well to plan ahead. Because: if there's one thing we know about the history of the Net's social media it's that the party keeps moving. Facebook's half-a-billion-strong user base is, to be sure, bigger than anything else assembled in the history of the Net. But I think the future as seen by Douglas Rushkoff, writing for CNN last week, is more likely: Facebook, he argued, based on its arguably inflated valuation, is at the beginning of its end, as MySpace was when Rupert Murdoch bought it in 2005 for $580 million. (Though this says as much about Murdoch's Net track record as it does about MySpace: Murdoch bought the text-based Delphi at its peak moment, in late 1993.)

Back in 1999, at the height of the dot-com boom, the New Yorker published an article (abstract; full text requires subscription) comparing the then-spiking stock price of AOL with that of the Radio Corporation of America back in the 1920s, when radio was the hot, new democratic medium. RCA was selling radios that gave people unprecedented access to news and entertainment (including stock quotes); AOL was selling online accounts that gave people unprecedented access to news, entertainment, and their friends. The comparison, as the article noted, wasn't perfect, but the comparison chart the article was written around was, as the author put it, "jolly". It still looks jolly now, recreated some months later for this analysis of the comparison.

There is more to every company than just its stock price, and there is more to AOL than its subscriber numbers. But the interesting chart to study - if I had the ability to create such a chart - would be the successive waves of rising, peaking, and falling numbers of subscribers of the various forms of social media. In more or less chronological order: bulletin boards, Usenet, Prodigy, Genie, Delphi, CompuServe, AOL...and now MySpace, which this week announced extensive job cuts.

At its peak, AOL had 30 million of those subscribers; at the end of September 2010 it had 4.1 million in the US. As subscriber revenues continue to shrink, the company is changing its emphasis to producing content that will draw in readers from all over the Web - that is, it's increasingly dependent on advertising, like many companies. But the broader point is that at its peak a lot of people couldn't conceive that it would shrink to this extent, because of the basic principle of human congregation: people go where their friends are. When the friends gradually start to migrate to better interfaces, more convenient services, or simply sites their more annoying acquaintances haven't discovered yet, others follow. That doesn't necessarily mean death for the service they're leaving: AOL, like CIX, The WELL, and LiveJournal before it, may well find a stable size at which it remains sufficiently profitable to stay alive, perhaps even comfortably so. But it does mean it stops being the growth story of the day.

As several financial commentators have pointed out, the Goldman investment is good for Goldman no matter what happens to Facebook, and may not be ring-fenced enough to keep Facebook private. My guess is that even if Facebook has reached its peak it will be a long, slow ride down the mountain and between then and now at least the early investors will make a lot of money.

But long-term? Facebook is barely five years old. According to figures leaked by one of the private investors, its price-earnings ratio is 141. The good news is that if you're rich enough to buy shares in it you can probably afford to lose the money.

As far as I'm aware, little research has been done studying the Net's migration patterns. From my own experience, I can say that my friends lists on today's social media include many people I've known on other services (and not necessarily in real life) as the old groups reform in a new setting. Facebook may believe that because the profiles on its service are so complex, including everything from status updates and comments to photographs and games, users will stay locked in. Maybe. But my guess is that the next online party location will look very different. If email is for old people, it won't be long before Facebook is, too.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 19, 2010

Power to the people

We talk often about the fact that ten years of effort - lawsuits, legislation, technology - on the part of the copyright industries has made barely a dent in the amount of material available online as unauthorized copies. We talk less about the similar situation that applies to privacy despite years of best efforts by Privacy International, Electronic Privacy Information Center, Center for Democracy and Technology, Electronic Frontier Foundation, Open Rights Group, No2ID, and newcomer Big Brother Watch. The last ten years have built Google, and Facebook, and every organization now craves large data stores of personal information that can be mined. Meanwhile, governments are complaisant, possibly because they have subpoena power. It's been a long decade.

"Information is the oil of the 1980s," wrote Thomas McPhail and Brenda McPhail in 1987 in an article discussing the politics of the International Telecommunications Union, and everyone seems to take this encomium seriously.

William Heath spent his early career founding and running Kable, a consultancy specializing in government IT, where the question he kept returning to was how to create the ideal government for the digital era. For many months now he has been saying that there's a gathering wave of change. His idea is that the *new* new thing is technologies to give us back control and up-end the current situation in which everyone behaves as if they own all the information we give them. But it's their data only in exactly the same way that taxpayers' money belongs to the government. They call it customer relationship management; Heath calls the data we give them volunteered personal information and proposes instead vendor relationship management.

Always one to put his effort where his mouth is (Heath helped found the Open Rights Group, the Foundation for Information Policy Research, and the Dextrous Web as well as Kable), Heath has set up not one but two companies. The first, Ctrl-Shift, is a research and advisory business to help organizations adjust and adapt to the power shift. The second, Mydex, is a platform now being prototyped in partnership with the Department for Work and Pensions and several UK councils (PDF). Set up as a community interest company, Mydex is asset-locked, to ensure that the company can't suddenly reverse course and betray its customers and their data.

The key element of Mydex is the personal data store, which is kept under each individual's own control. When you want to do something - renew a parking permit, change your address with a government agency, rent a car - you interact with the remote council, agency, or company via your PDS. Independent third parties verify the data you present. To rent a car, for example, you might present a token from the vehicle licensing bureau that authenticates your age and right to drive and another from your bank or credit card company verifying that you can pay for the rental. The rental company only sees the data you choose to give it.
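
A minimal sketch of how such a token might work - my illustration, not Mydex's actual protocol - has an issuer sign a narrow claim that the relying party can verify without seeing anything else. A shared-key HMAC stands in here for whatever real signature scheme a deployment would use.

    # Sketch of a verified-attribute token (illustrative; not Mydex's protocol).
    # An HMAC with a key shared between issuer and verifier stands in for a real
    # public-key signature; the claim carries only the minimum needed.
    import hmac, hashlib, json, time

    ISSUER_KEY = b"licensing-bureau-demo-key"   # assumption: pre-shared for this demo

    def issue_token(claim: dict, key: bytes) -> dict:
        payload = json.dumps(claim, sort_keys=True).encode()
        return {"claim": claim,
                "sig": hmac.new(key, payload, hashlib.sha256).hexdigest()}

    def verify_token(token: dict, key: bytes) -> bool:
        payload = json.dumps(token["claim"], sort_keys=True).encode()
        expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, token["sig"])

    # The personal data store presents only this claim to the car rental company:
    token = issue_token({"licensed_to_drive": True, "over_21": True,
                         "issued": int(time.time())}, ISSUER_KEY)
    print(verify_token(token, ISSUER_KEY))   # True - and no address, no birth date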

It's Heath's argument that such a setup would preserve individual privacy and increase transparency while simultaneously saving companies and governments enormous sums of money.

"At the moment there is a huge cost of trying to clean up personal data," he says. "There are 60 to 200 organisations all trying to keep a file on you and spending money on getting it right. If you chose, you could help them." The biggest cost, however, he says, is the lack of trust on both sides. People vanish off the electoral rolls or refuse to fill out the census forms rather than hand over information to government; governments treat us all as if we were suspected criminals when all we're trying to do is claim benefits we're entitled to.

You can certainly see the potential. Ten years ago, when they were talking about "joined-up government", MPs dealing with constituent complaints favored the notion of making it possible to change your address (for example) once and have the new information propagate automatically throughout the relevant agencies. Their idea, however, was a huge, central data store; the problem for individuals (and privacy advocates) was that centralized data stores tend to be difficult to keep accurate.

"There is an oft-repeated fallacy that existing large organizations meant to serve some different purpose would also be the ideal guardians of people's personal data," Heath says. "I think a purpose-created vehicle is a better way." Give everyone a PDS, and they can have the dream of changing their address only once - but maintain control over where it propagates.

There are, as always, key questions that can't be answered at the prototype stage. First and foremost is the question of whether and how the system can be subverted. Heath's intention is that we should be able to set our own terms and conditions for their use of our data - up-ending the present situation again. We can hope - but it's not clear that companies will see it as good business to differentiate themselves on the basis of how much data they demand from us when they don't now. At the same time, governments who feel deprived of "their" data can simply pass a law and require us to submit it.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 1, 2010

Duty of care

"Anyone who realizes how important the Web is," Tim Berners-Lee said on Tuesday, "has a duty of care." He was wrapping up a two-day discussion meeting at the Royal Society. The subject: Web science.

What is Web science? Even after two days, it's difficult to grasp, in part because defining it is a work in progress. Here are some of the disciplines that contributed: mathematics, philosophy, sociology, network science, and law, plus a bunch of much more directly Webby things that don't fit easily into categories. Which of course is the point: Web science has to cover much more than just the physical underpinnings of computers and network wires. Computer science or network science can use the principles of mathematics and physics to develop better and faster machines and study architectures and connections. But the Web doesn't exist without the people putting content and applications on it, and so Web science must be as much about human behaviour as about physics.

"If we are to anticipate how the Web will develop, we will require insight into our own nature," Nigel Shadbolt, one of the event's convenors, said on Monday. Co-convenor Wendy Hall has said, similarly, "What creates the Web is us who put things on it, and that's not natural or engineered.". Neither natural (biological systems) or engineered (planned build-out like the telecommunications networks), but something new. If we can understand it better, we can not only protect it better, but guide it better toward the most productive outcomes, just as farmers don't haphazardly interbreed species of corn but use their understanding to select for desirable traits.

The simplest parts of the discussions to understand, therefore, were (ironically) the mathematicians' contributions. Particularly intriguing was former UK government chief scientist Robert May, whose approach to removing nodes from a network to make it non-functional applied equally to the Web, epidemiology, and banking risk.
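
A small experiment in the spirit of May's point - my sketch, not his model - builds a scale-free network, removes the best-connected nodes first, and watches the largest connected component collapse far faster than it would under random failure.

    # Targeted node removal on a scale-free network (in the spirit of May's point;
    # my sketch, not his model). Hubs go first; the giant component shrinks fast.
    import networkx as nx

    G = nx.barabasi_albert_graph(n=2000, m=2, seed=42)
    total = G.number_of_nodes()

    hubs = [node for node, _ in sorted(G.degree, key=lambda kv: kv[1], reverse=True)]
    for removed, node in enumerate(hubs[:200], start=1):
        G.remove_node(node)
        if removed % 50 == 0:
            giant = max(nx.connected_components(G), key=len)
            print(f"removed {removed:>3} hubs -> giant component "
                  f"{len(giant) / total:.0%} of original network")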

This is all happening despite the recent Wired cover claiming the "Web is dead". Dead? Facebook is a Web site; Skype, the app store, IM clients, Twitter, and the New York Times all reach users first via the Web, even if they use their iPhones for subsequent visits (and how exactly did they buy those iPhones, hey?). Saying it's dead is almost exactly the old joke about how no one goes to a particular restaurant any more because it's too crowded.

People who think the Web is dead have stopped seeing it. But the point of Web science is that for 20 years we've been turning what started as an academic playground into a critical infrastructure, and for government, finance, education, and social interaction to all depend on the Web it must have solid underpinnings. And it has to keep scaling - in a presentation on the state of deployment of IPv6 in China, Jianping Wu noted that Internet penetration in China is expected to jump from 30 percent to 70 percent in the next ten to 20 years. That means adding 400-900 million users. The Chinese will have to design, manage, and operate the largest infrastructure in the world - and finance it.

But that's the straightforward kind of scaling. IBMer Philip Tetlow, author of The Web's Awake (a kind of Web version of the Gaia hypothesis), pointed out that all the links in the world are a finite set; all the eyeballs in the world looking at them are a finite set...but all the contexts surrounding them...well, it's probably finite but it's not calculable (despite Pierre Levy's rather fanciful construct that seemed to suggest it might be possible to assign a URI to every human thought). At that level, Tetlow believes some of the neat mathematical tools, like Jennifer Chayes' graph theory, will break down.

"We're the equivalent of precision engineers," he said, when what's needed are the equivalent of town planners and urban developers. "And we can't build these things out of watches."

We may not be able to build them at all, at least not immediately. Helen Margetts outlined the constraints on the development of egovernment in times of austerity. "Web science needs to map, understand, and develop government just as for other social phenomena, and export back to mainstream," she said.

Other speakers highlighted gaps between popular mythology and reality. MIT's David Carter noted that, "The Web is often associated with the national and international but not the local - but the Web is really good at fostering local initiatives - that's something for Web science to ponder." Noshir Contractor, similarly, called out The Economist over the "death of distance": "More and more research shows we use the Web to have connections with proximate people."

Other topics will be far more familiar to net.wars readers: Jonathan Zittrain explored the ways the Web can be broken by copyright law, increasing corporate control (there was a lovely moment when he morphed the iPhone's screen into the old CompuServe main menu), the loss of uniformity so that the content a URL points to changes by geographic location. These and others are emerging points of failure.

We'll leave it to an unidentified audience question to sum up the state of Web science: "Nobody knows what it is. But we are doing it."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

August 20, 2010

Naming conventions

Eric Schmidt, the CEO of Google, is not a stupid person, although sometimes he plays one for media consumption. At least, that's how it seemed this week, when the Wall Street Journal reported that he had predicted, apparently in all seriousness, that the accumulation of data online may result in the general right for young people to change their names on reaching adulthood in order to escape the embarrassments of their earlier lives.

As Danah Boyd commented in response, it is to laugh.

For one thing, every trend in national and international law is going toward greater, permanent trackability. I know the UK is dumping the ID card and many US states are stalling on Real ID, but try opening a new bank account in the US or Europe, especially if you're a newly arrived foreigner. It's true that it's not so long ago - 20 years, perhaps - that people, especially in California, did change their names at the drop of an acid tablet. I'm fairly sure, for example, that the woman I once knew as Dancingtree Moonwater was not named that by her parents. But those days are gone with the anti-money laundering regulations, the anti-terrorist laws, and airport security.

For another thing, when is he imagining the adulthood moment to take place? When they're 17 and applying to college and need to cite their past records of good works, community involvement, and academic excellence? When they're 21 and graduating from college and applying for jobs and need to cite their past records of academic excellence, good works, and community involvement? I don't know about you, but I suspect that an admissions officer/prospective employer would be deeply suspicious of a kid coming of age today who had, apparently, no online history at all. Even if that child is a Mormon.

For another, changing your name doesn't change your identity (even if the change is because you got married). Investigators who track down people who've dropped out of their lives and fled to distant parts to start new ones often do so by, among other things, following their hobbies. You can leave your spouse, abandon your children, change jobs, and move to a distant location - but it isn't so easy to shake a passion for fly-fishing or 1957 Chevys. The right to reinvent yourself, as Action on Rights for Children's Terri Dowty pointed out during the campaign against the child-tracking database ContactPoint, is an important one. But that means letting minor infractions and youthful indiscretions fade into the mists of time rather than being recorded in a database that thinks it "knows" you, ready to be pulled out and laughed at, say, 30 years hence.

I think Schmidt knows all this perfectly well. And I think if such an infrastructure - turn 16, create a new identity - were ever to be implemented the first and most significant beneficiary would be...Google. I would expect most people's search engine use to provide as individual a fingerprint as, well, fingerprints. (This is probably less true for journalists, who research something different every week and therefore display the database equivalent of multiple personality disorder.)

Clearly if the solution to young people posting silly stuff online where posterity can bite them on the ass is a change of name the only way to do it is to assign kids online-only personas at birth that can be retired when they reach an age of reason. But in such a scenario, some kids would wind up wanting to adopt their online personas as their real ones because their online reputation has become too important in their lives. In the knowledge economy, as plenty of others have pointed out, reputation is everything.

This is, of course, not a new problem. As usual. When, in 1995, DejaNews (bought by Google some years back to form the basis of the Google Groups archive) was created, it turned what had been ephemeral Usenet postings into a permanent archive. If you think people post stupid stuff on Facebook now, when they know their friends and families are watching, you should have seen the dumb stuff they posted on Usenet when they thought they were in the online equivalent of Benidorm, where no one knew them and there were no consequences. Many of those Usenet posters were students. But I also recall the newly appointed CEO of a public company who went around the WELL deleting all his old messages. Didn't mean there weren't copies...or memories.

There is a genuine issue here, though, and one that a very smart friend with a 12-year-old daughter worries about regularly: how do you, as a parent, guide your child safely through the complexities of the online world and ensure that your child has the best possible options for her future while still allowing her to function socially with her peers? Keeping her offline is not an answer. Neither are facile statements from self-interested CEOs who, insulated by great wealth and technological leadership, prefer to pretend to themselves that these issues have already been decided in their favor.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 9, 2010

The big button caper

There's a moment early in the second season of the TV series Mad Men when one of the Sterling Cooper advertising executives looks out the window and notices, in a tone of amazement, that young people are everywhere. What he was seeing was, of course, the effect of the baby boom. The world really *was* full of young people.

"I never noticed it," I said to a friend the next day.

"Well, of course not," he said. "You were one of them."

Something like this will happen to today's children - they're going to wake up one day and think the world is awash in old people. This is a fairly obvious consequence of the demographic bulge of the Baby Boomers, which author Ken Dychtwald has compared to "a pig going through a python".

You would think that mobile phone manufacturers and network operators would be all over this: carrying a mobile phone is an obvious safety measure for an older, perhaps infirm or cognitively confused person. But apparently the concept is more difficult to grasp than you'd expect, and so Simon Rockman, the founder and former publisher of What Mobile and now working for the GSM Association, convened a senior mobile market conference on Tuesday.

Rockman's pitch is that the senior market is a business opportunity: unlike other market sectors it's not saturated, and older users are less likely to be expensive data users and are more loyal. The margins are better, he argues, even if average revenue per user is low.

The question is, how do you appeal to this market? To a large extent, seniors are pretty much like everyone else: they want gadgets that are attractive, even cool. They don't want the phone equivalent of support stockings. Still, many older people do have difficulties with today's ultra-tiny buttons, icons, and screens, iffy sound quality, and complex menu structures. Don't we all?

It took Ewan MacLeod, the editor of Mobile Industry Review, to point out the obvious. What is the killer app for most seniors in any device? Grandchildren, pictures of. MacLeod has a four-week-old son and a mother whose desire to see pictures apparently could only be fully satisfied by a 24-hour video feed. Industry inadequacy means that MacLeod is finding it necessary to write his own app to make sending and receiving pictures sufficiently simple and intuitive. This market, he pointed out, isn't even price-sensitive. Tell his mother she'll need to spend £60 on a device so she can see daily pictures of her grandkids, and she'll say, "OK." Tell her it will cost £500, and she'll say..."OK."

I bet you're thinking, "But the iPhone!" And to some extent you're right: the iPhone is sleek, sexy, modern, and appealing; it has a zoom function to enlarge its display fonts, and it is relatively easy to use. And so MacLeod got all the grandparents onto iPhones. But he's having to write his own app to easily organize and display the photos the phones receive: the available options are "Rubbish!"

But even the iPhone has problems (even if you're not left-handed). Ian Hosking, a senior research associate at the Cambridge Engineering Design Centre, overlaid his visual impairment simulation software on the display to make the problems easy to see. Lack of contrast means the iPhone's white-on-black type disappears unreadably with only a small amount of vision loss. Enlarging the font only changes the text in some fields. And that zoom feature, ah, yes, wonderful - except that enabling it requires you to double-tap and then navigate with three fingers. "So the visual has improved, but the dexterity is terrible."

Oops.

In all this you may have noticed something: good design is good design, and a phone designed to accommodate older people will most likely also be a more usable phone for everyone else. These are principles that have not changed since Donald Norman formulated them in his classic 1988 book The Design of Everyday Things. To be sure, there is some progress. Evelyne Pupeter-Fellner, co-founder of Emporia, for example, pointed out the elements of her company's designs that are quietly targeted at seniors: the emergency call system that automatically dials, in turn, a list of selected family members or friends until one answers; the ringing mechanism that lights up the button to press to answer. The radio you can insert the phone into that will turn itself down and answer the phone when it rings. The design that lets you attach it to a walker - or a bicycle. The single-function buttons. Doro's phones were similarly praised.

And yet it could all be so different - if we would only learn from Japan, where nearly 86 percent of seniors have - and use data on - mobile phones, according to Kei Shimada, founder of Infinita.

But in all the "beyond big buttons" discussion and David Doherty's proposition that health applications will be the second killer app, one omission niggled: the aging population is predominantly female, and the older the cohort the more that is true.

Who are least represented among technology designers and developers?

Older women.

I'd call that a pretty clear mismatch. Somewhere in the gap between those who design and those who consume lies your problem.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 18, 2010

Things I learned at this year's CFP

- There is a bill in front of Congress to outlaw the sale of anonymous prepaid SIMs. The goal seems to be some kind of fraud and crime prevention. But, as Ed Hasbrouck points out, the principal people likely to be affected are foreign tourists and the Web sites that sell prepaid SIMs to them.

- Robots are getting near enough in researchers' minds that they are spending significant amounts of time considering the legal and ethical consequences in real life - not in Asimov's fictional world, where you could program in three safety laws and your job was done. Ryan Calo points us at the work of Stanford student Victoria Groom on human-robot interaction. Her dissertation research, not yet on the site, found that humans allocate responsibility for success and failure proportionately according to how anthropomorphic the robot is.

- More than 24 percent of tweets - and rising sharply - are sent by automated accounts, according to Miranda Mowbray at HP labs. Her survey found all sorts of strange bots: things that constantly update the time, send stock quotes, tell jokes, the tea bot that retweets every mention of tea...

- Google's Kent Walker, the 1997 CFP chair, believes that censorship is as big a threat to democracy as terrorism, and says that open architectures and free expression are good for democracy - and coincidentally also good for Google's business.

- Microsoft's chief privacy strategist, Peter Cullen, says companies must lead in privacy to lead in cloud computing. Not coincidentally, others at the conference noted that US companies are losing business to Europeans in cloud computing because EU law prohibits the export of personal data to the US, where data protection is insufficient.

- It is in fact possible to provide wireless that works at a technical conference. And good food!

- The Facebook Effect is changing other companies' attitudes toward user privacy. Lauren Gelman, who helps new companies with privacy issues, noted that because start-ups all see Facebook's success and want to be the next 400 million-user environment, there has been a strong temptation to emulate Facebook's behavior. Now, with angry cries mounting from consumers, she has to spend less effort convincing them of the pushback they will get if they change their policies and defy users' expectations. Even so, it's important to ensure that start-ups include privacy in their budgets rather than leave it as an afterthought. In this respect, she makes me realize, privacy in 2010 is at the stage usability was at in the early 1990s.

- All new program launches come through the office of the director of Yahoo!'s business and human rights program, Ebele Okabi-Harris. "It's very easy for the press to focus on China and particular countries - for example, Australia last year, with national filtering," she said, "but for us as a company it's important to have a structure around this because it's not specific to any one region." It is, she added later, a "global problem".

- We should continue to be very worried about the database state because the ID cards repeal act continues the trend toward data sharing among government departments and agencies, according to Christina Zaba from No2ID.

- Information brokers and aggregators, operating behind the scenes, are amassing incredible amounts of detail about Americans, and it can take a great deal of work to remove one's information from their systems. The main customers of these systems are private investigators, debt collectors, media, law firms, and law enforcement. The Privacy Rights Clearinghouse sees many disturbing cases, as Beth Givens outlined, as does Pam Dixon's World Privacy Forum.

- I always knew - or thought I knew - that the word "robot" was not coined by Asimov but by Karel Capek for his play R.U.R. (for "Rossum's Universal Robots"; coincidentally, I also know that playing a robot in it was Michael Caine's first acting job). But Twitterers tell me that this isn't quite right. The word is derived from the Czech word "robota", "compulsory work for a feudal landlord". And it was actually coined by Capek's older brother, Josef.

- There will be new privacy threats emerging from automated vehicles, other robots, and voicemail transcription services, sooner rather than later.

- Studying the inner workings of an organization like the International Civil Aviation Organization is truly difficult because the time scales - ten years to get from technical proposal to mandated standard, which is when the public becomes aware of it - are a profound mismatch for the attention span of the media and of those who fund NGOs. Anyone who feels like funding an observer to represent civil society at ICAO should get in touch with Edward Hasbrouck.

- A lot of our cybersecurity problems could be solved by better technology.

- Lillie Coney has a great description of deceptive voting practices designed to disenfranchise the opposition: "It's game theory run amok!"

- We should not confuse insecure networks (as in vulnerable computers and flawed software) with unsecured networks (as in open wi-fi).

- Next year's conference chairs are EPIC's Lillie Coney and Jules Polonetsky. It will be in Washington, DC, probably the second or third week in June. Be there!

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 12, 2010

The cost of money

Everyone except James Allan scrabbled in the bag Joe DiVanna brought with him to the Digital Money Forum (my share: a well-rubbed 1908 copper penny). To be fair, Allan had already left by then. But even if he hadn't he'd have disdained the bag. I offered him my pocketful of medium-sized change and he looked as disgusted as if it were a handkerchief full of snot. That's what living without cash for two years will do to you.

Listen, buddy, like the great George Carlin said, your immune system needs practice.

People in developed countries talk a good game about doing away with cash in favor of credit cards, debit cards, and Oyster cards, but the reality, as Michael Salmony pointed out, is that 80 percent of payments in Europe are...cash. Cash seems free to consumers (where cards have clearer charges), but costs European banks €84 billion a year. Less visibly, banks also benefit (when the shadow economy hoards high-value notes, it's an interest-free loan), and governments profit from seigniorage (when people buy but do not spend coins).

"Any survey about payment methods," Salmony said Wednesday, "reveals that in all categories cash is the preferred payment method." You can buy a carrot or a car; it costs you nothing directly; it's anonymous, fast, and efficient. "If you talk directly to supermarkets, they all agree that cash is brilliant - they have sorting machines, counting machines...It's optimized so well, much better than cards."

The "unbanked", of course, such as the London migrants Kavita Datta studies, have no other options. Talk about the digital divide, this is the digital money divide: the cashless society excludes people who can't show passports, can't prove their address, or are too poor to have anything to bank with.

"You can get a job without a visa, but not without a bank account," one migrant worker told her. Electronic payments, ain't they grand?

But go to Africa, Asia, or South America, and everything turns upside down. There, too, cash is king - but there, unlike here with banks and ATMs on every corner and a fully functioning system of credit cards and other substitutes, cash is a terrible burden. Of the 2.6 billion people living on less than $2 a day, said Ignacio Mas, fewer than 10 percent have access to formal financial services. Poor people do save, he said, but their lack of good options means they save in bad ways.

They may not have banks, but most do have mobile phones, and therefore digital money means no long multi-bus rides to pay bills. It means being able to send money home at low cost. It means saving money that can't easily be stolen. In Ghana 80 percent of the population have no access to financial services - but 80 percent are covered by MTN, which is partnering with the banks to fill the gap. In Pakistan, Tameer Microfinance Bank partnered with Telenor to launch Easy-Peisa, which did 150,000 transactions in its first month and expects a million by December. One million people produce milk in Pakistan; Nestle pays them all, painfully, by check every month. The opportunity in these countries to leapfrog traditional banking and head straight into digital payments is staggering, and our banks won't even care. The average account balance for Kenya's M-Pesa customers is...$3.

When we're not destroying our financial system, we have more choices. If we're going to replace cash, what do we replace it with and what do we need? Really smart people to figure out how to do it right - like Isaac Newton, said Thomas Levenson. (Really. Who knew Isaac Newton had a whole other life chasing counterfeiters?) Law and partnership protocols and banks to become service providers for peer-to-peer finance, said Chris Cook. "An iTunes moment," said Andrew Curry. The democratization of money, suggested conference organizer David Birch.

"If money is electronic and cashless, what difference does it make what currency we use?" Why not...kilowatt hours? You're always going to need to heat your house. Global warming doesn't mean never having to say you're cold.

Personally, I always thought that if our society completely collapsed, it would be an excellent idea to have a stash of cigarettes, chocolate, booze, and toilet paper. But these guys seemed more interested in the notion of Facebook units. Well, why not? A currency can be anything. Second Life has Linden dollars, and people sell virtual game world gold for real money on eBay.

I'd say for the same reason that most people still walk around with notes in their wallet and coins in their pocket: we need to take our increasing abstraction step by step. Many have failed with digital cash, despite excellent technology, because they asked people to put "real" money into strange units with no social meaning and no stored trust. Birch is right: storing value in an Oyster card is no different than storing value in Beenz. But if you say that money is now so abstract that it's a collective hallucination, then the corroborative details that give artistic verisimilitude to an otherwise bald and unconvincing currency really matter.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series.

September 11, 2009

Public broadcasting

It's not so long ago - 2004, 2005 - that the BBC seemed set to be the shining champion of the Free World of Content, functioning in opposition to *AA (MPAA, RIAA) and general entertainment industry desire for total content lockdown. It proposed the Creative Archive; it set up BBC Backstage; and it released free recordings of the classics for download.

But the Creative Archive released some stuff and then ended the pilot in 2006, apparently because much of the BBC's content doesn't really belong to it. And then came the iPlayer. The embedded DRM, along with its initial Windows-only specification (though the latter has since changed), made the BBC look like less of a Free Culture hero.

Now, via the consultative offices of Ofcom, we learn that the BBC wants to pacify third-party content owners by configuring its high-definition digital terrestrial services - known to consumers as Freeview HD - to implement copy protection. This request is, of course, part of the digital switchover taking place across the country over the next four years.

The thing is, the conditions under which the BBC was granted the relevant broadcasting licenses require that content be broadcast free-to-air. That is, unencrypted, which of course means no copy protection. So the BBC's request is to be allowed instead to make the stream unusable to outsiders by compressing the service information data using in-house-developed lookup tables. Under the proposal, the BBC will make those tables available free of charge to manufacturers who agree to its terms. Or, pretty clearly, the third party rights holders' terms.
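
To make the hair-splitting concrete, here is a toy Python sketch - purely my own illustration, not the BBC's actual scheme, and with invented table contents - of why a privately held lookup table acts as a de facto lock even though nothing is encrypted: the service information is still broadcast in the clear, but only a receiver holding the licensed table can turn the compressed codes back into something usable.

    # Toy sketch (not the BBC's actual scheme; table contents invented) of
    # lookup-table "compression" as a de facto lock: nothing is encrypted,
    # but decoding requires the licensed table.
    LICENSED_TABLE = ["BBC One HD", "BBC HD", "ITV1 HD", "Channel 4 HD"]

    def compress_service_info(names, table):
        # Replace each service name with its index in the licensed table.
        return [table.index(name) for name in names]

    def decompress_service_info(codes, table):
        # Only receivers holding the table can recover the names.
        return [table[code] for code in codes]

    codes = compress_service_info(["BBC One HD", "ITV1 HD"], LICENSED_TABLE)
    print(codes)                                           # [0, 2]
    print(decompress_service_info(codes, LICENSED_TABLE))  # a licensed receiver
    # An unlicensed receiver sees only opaque indices and can show no programme guide.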

This is the kind of hair-splitting the American humorist Jean Kerr used to write about when she detailed conversations with her children. She didn't think, for example, to include in the long list of things they weren't supposed to do when they got up first on a Sunday morning, the instruction not to make flour paste and glue together all the pages of the Sunday New York Times. "Now, of course, I tell them."

When the BBC does it, it's not so funny. Nor is it encouraging in the light of the broader trend toward claiming intellectual property protection in metadata when the data itself is difficult to restrict. Take, for example, the MTA's Metro-North Railroad, which runs commuter trains (on which Meryl Streep and Robert de Niro so often met in the 1984 movie Falling in Love) from New York City up both sides of the Hudson River to Connecticut. The MTA has been issuing cease-and-desist orders to the owner of StationStops, a Web site and iPhone schedule app dedicated to the Metro-North trains, claiming that it owns the intellectual property rights in its scheduling data. If it were in the UK, the Guardian's Free Our Data campaign would be all over it.

In both cases - and many others - it's hard to understand the originating organisation's complaint. Metro-North is in the business of selling train tickets; the BBC is supposed to measure its success in 1) the number of people who consume its output and 2) the educational value of its output to the license fee-paying public. Promulgating schedule data can only help Metro-North, which is not a commercial company but a public benefit corporation owned by the State of New York. It's not going to make much from selling data licenses.

The BBC's stated intention is to prevent perfect, high-definition copies of broadcast material from escaping into the hands of (evil) file-sharers. The alternative, it says, would be to amend its multiplex license to allow it to encrypt the data streams. Which, they hasten to add, would require manufacturers to amend their equipment, which they certainly would not be able to do in time for the World Cup next June. Oh, the horror!

Fair enough, the consumer revolt if people couldn't watch the World Cup in HD because their equipment didn't support the new encryption standard would indeed be quite frightening to behold. But the BBC has a third alternative: tell rights holders that the BBC is a public service broadcaster, not a policeman for hire.

Manufacturers will still have to modify equipment under the more "modest" system information compression scheme: they will have to have a license. And it seems remarkably unlikely that licenses would be granted to the developers of open source drivers or home-brew devices such as Myth TV, and of course it couldn't be implemented retroactively in equipment that's already on the market. How many televisions and other devices will it break in your home?

Up until now, in contrast to the US situation, the UK's digital switchover has been pretty gentle and painless for a lot of people. If you get cable or satellite, at some point you got a new set-top box (mine keep self-destructing anyway); if you receive all your TV and radio over the air you attached a Freeview box. But this is the broadcast flag and the content management agenda all over again.

We know why rights holders want this. But why should the BBC adopt their agenda? The BBC is the best-placed broadcasting and content provider organisation in the world to create a parallel, alternative universe to the strictly controlled one the commercial entertainment industry wants. It is the broadcaster that commissioned a computer to educate the British public. It is the broadcaster that belongs to the people. Reclaim your heritage, guys.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

September 4, 2009

Nothing ventured, nothing lost

What does a venture capitalist do in a recession?

"Panic." Hermann Hauser says, then laughs. It is, in fact, hard to imagine him panicking if you've heard the stories he tells about his days as co-founder of Acorn Computers. He's quickly on to his real, more measured, view.

"It's just the bottom of the cycle, and people my age have been through this a number of times before. Though many people are panicking, I know that normally we come out the other end. If you just look at the deals I'm seeing at the moment, they're better than any deals I've seen in my entire life." The really positive thing, he says, is that, "The speed and quality of innovation are speeding up and not slowing down. If you believe that quality of innovation is the key to a successful business, as I do, then this is a good era. We have got to go after the high end of innovation - advanced manufacturing and the knowledge-based economy. I think we are quite well placed to do that." Fortunately, Amadeus had just raised a fund when the recession began, so it still has money to invest; life is, he admits, less fun for "the poor buggers who have to raise funds."

Among the companies he is excited about is Plastic Logic, which is due to release its first product next year, a competitor to the Kindle that will have a much larger screen, be much lighter, and will also be a computing platform with 3G, Bluetooth, and Wi-Fi all built in, all built on plastic transistors that will be green to produce, more responsive than silicon - and sealed against being dropped in the bath water. "We have the world beat," he says. "It's just the most fantastic thing."

Probably if you ask any British geek above the age of 39, an Acorn BBC Micro figured prominently in their earliest experiences with computing. Hauser was and is not primarily a technical guy - although his idea of exhilarating vacation reading is Thermal Physics, by Charles Kittel and Herbert Kroemer - but picking the right guys to keep supplied with tea and financing is a rare skill, too.

"As I go around the country, people still congratulate me on the BBC Micro and tell me how wonderful it was. Some are now professors in computer science and what they complain about is that as people switched over to PCs - on the BBC Micro everybody knew how to program. The main interface was a programming interface, and it was so easy to program in BASIC everybody did it. Kids have no clue what programming is about - they just surf the Net. Nobody really understands any more what a computer does from the transistor up. It's a dying breed of people who actually know that all this is built on CMOS gates and can build it up from there."

Hauser went on to found an early effort in pen computing - "the technology wasn't good enough" and "the basic premise that I believed in, that pen computing would be important because everybody knew how to wield a pen just wasn't true" - and then the venture capital fund Amadeus, through which he helped fund, among others, leading Bluetooth chip supplier CSR. Britain, he says, is a much more hospitable environment now than it was when he was trying to make his Cambridge bank manager understand Acorn's need for a £1 million overdraft. Although, he admits now, "I certainly wouldn't have invested in myself." And would have missed Acorn's success.

"I think I'm the only European who's done four billion-dollar companies," he says. "Of course I've failed a lot. I assume that more of my initiatives that I've founded finally failed than finally succeeded."

But times have changed since consultants studied Acorn's books and told them to stop trading immediately because they didn't understand how technology companies worked. "All the building blocks you need to have a successful technology cluster are now finally in place," he says. "We always had the technology, but we always lacked management, and we've grown our own entrepreneurs now in Britain." He calls Stan Boland, CEO of 3G USB stick manufacturer Icera and Acorn's last managing director, a "rock star" and "one of the best CEOs I have come across in Europe or the US." In addition, he says, "There is also a chance of attracting the top US talent, for the first time." However, "The only thing I fear and that we have to be careful about is that the relative decline doesn't turn into an absolute decline."

One element of Britain's changing climate with respect to technology investment that Hauser is particularly proud of is helping create tax credits and taper relief for capital gains through his work on Lord Mandelson's advisory panel on new industry and new jobs. "The reason I have done it is that I don't believe in the post-industrial society. We have to have all parts of industry in our country."

Hauser's latest excitement is stem cells; he's become the fourth person in the world to have his entire genome mapped. "It's the beginning of personal medicine."

The one thing that really bemuses him is being given lifetime achievement awards. "I have lived in the future all my life, and I still do. It's difficult to accept that I've already created a past. I haven't done yet the things I want to do!"


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

June 13, 2009

Futures

"What is the future of computers, freedom, and privacy?" a friend asked over lunch, apparently really wanting to know. This was ten days ago, and I hesitated before finding an out.

"I don't know," I said. "I haven't been to the conference yet.

Now I have been to the conference, at least this year's instance of it, and I still don't really know how to answer this question. As always, I've come away with some ideas to follow up, but mostly the sense of a work in progress. How do some people manage to be such confident futurologists?

I don't mean science fiction writers: while they're often confused with futurologists - and Arthur C. Clarke's track record in predicting communications satellites notwithstanding - they're not, really. They're storytellers who take our world, change a few variables, and speculate. I also don't mean trend-spotters, who see a few instances of something and generalize from there, or pundits, who are just very, very good at quotables.

Futurologists are good at the backgrounds science fiction writers use - but not good at coming up with stories. They're not, as I had it explained to me once, researchers, because they dream rather than build things. The smart ones have figured out that dramatic predictions get more headlines - and funding - than mundane ones and they have a huge advantage over urban planners and actuaries: they don't have to be right, just interesting. (Whereas, a "psychic seer" like Nostradamus doesn't even have to be interesting as long as his ramblings are vague enough to be reinterpretable every time some new major event comes along.)

It's perennially intriguing how much of the past the images of the future throw away: changing fashions in clothing, furniture, and lifestyles leave no trace. Take, for example, Popular Mechanics' 1950 predictions for 2000. Some of that article is prescient: converging televisions and telephones, for example. Some extrapolates from then-new technologies such as X-rays, plastics, and frozen foods. But far more of it is a reminder of how much better the future was in the past: family helicopters, solar power in real, widespread use, cheap housing. And yet even more of it reflects the constrained social roles of the 1950s: the assumption that all those synthetic plastic fabrics, furniture, and finishings would be hosed down by...the woman of the house.

I'll bet the guy who wrote that had a wife who was always complaining about having to do all the housework. And didn't keep his books at home. Or family heirlooms, personal memorabilia, or silly gewgaws picked up on that trip to Pittsburgh. I'm not entirely clear why anyone would find frozen milk and candy made from sawdust appealing, though I suppose home cooking is indeed going out of style.

But my friend's question was serious: I can't answer it by throwing extravagantly wild imaginings at it for their entertainment value. Plus, he's probably most interested in his lifetime and that of his children, and it's a simple equation that the farther out the future you're predicting the less plausible you have to be.

It's not hard to guess that computing power will continue to grow, even if it doesn't continue to keep pace with Moore's Law and is counterbalanced by the weight of Page's Law. What *is* hard to guess is how people will want to use it. To most of the generation writing the future in the 1950s, when World War II and the threat of Nazism were fresh, it was probably inconceivable that the citizens of democratic countries would be so willing to allow so many governments to track them in detail. As inconceivable, I suppose, as that the pill would come along a few years later and wipe away the social order they believed was nature's way. Orwell, of course, foresaw the possibilities of a surveillance society, but he imagined the central control of a giant government, not a society where governments rely on commercial companies to fill out their dossiers on citizens.

I find it hard to imagine dramatic futures in part because I do believe most people want to hold onto at least parts of their past, and therefore that any future we construct will be more like Terry Gilliam's movies than anything else, festooned with bizarre duct work and populated by junk that's either come back into fashion or that we simply forgot to throw away. And there are plenty of others around to predict the apocalypse (we run out of energy, three-quarters of the world's population dies, economic and environmental collapse, will you burn that computer or sit on it?) or its opposite (we find the Singularity, solve our energy problems, colonize space, and fix biology so we live forever). Neither seems to me the most likely.

I doubt my friend would have been satisfied with the answer: "More of the same, only different." But my guess is that the battle to preserve privacy will continue for a long time. Every increase in computing power makes greater surveillance possible, and 9/11 provided the seeming justification that overrode the fading memory of what was at stake in World War II. It won't be until an event with that kind of impact reminds people of the risk you take when you allow "If you have nothing to hide, you have nothing to fear" to become society's mantra that the mainstream will fight to take back their privacy.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk (but please turn off HTML).

May 29, 2009

Three blind governments

I spent my formative adult years as a musician. And even so, if I were forced to sacrifice one of my senses, as a practical matter I would keep sight over hearing: as awful and isolating as it would be to be deaf, it would be far, far worse to be blind.

Lack of access to information, and therefore to both employment and entertainment, is the key reason. How can anyone participate in the "knowledge economy" if they can't read?

Years ago, when I was writing a piece about disabled access to the Net, the Royal National Institute for the Blind put me in touch with Peter Brasher, a consultant who was particularly articulate on the subject of disabled access to computing.

People tend to make the assumption - as I did - that the existence of Braille editions and talking books meant that blind and partially sighted people were catered for reasonably well. In fact, he said, only 8 percent of the blind population can read Braille; its use is generally confined to those who are blind from childhood (although see here for a counterexample). But by far and away the majority of vision loss comes later in life. It's entirely possible that the percentage of Braille readers is now considerably less; today's kids are more likely to be taught to rely on technology - text-to-speech readers, audio books, and so on. From 50 percent in the 1950s, the percentage of blind American children learning Braille has dropped to 10 percent.

There's a lot of concern about this which can be summed up by this question: if text-to-speech technology and audio books are so great, why aren't sighted kids told to use them instead of bothering to learn to read?

But the bigger issue Brasher raised was one of independence. Typically, he said, the availability of books in Braille depends on someone with an agenda, often a church. The result for an inquisitive reader is a constant sense of limits. Then computers arrived, and it became possible to read anything you wanted of your own choice. And then graphical interfaces arrived and threatened to take it all away again; I wrote here about what it's like to surf the Web using the leading text-to-speech reader, JAWS. It's deeply unpleasant, difficult, tiring, and time-consuming.

When we talk about people with limited ability to access books - blind, partially sighted; in other cases fully sighted but physically disabled - we are talking about an already deeply marginalized and underserved population. Some of the links above cite studies that show that unemployment among the Braille-reading blind population is 44 percent - and 77 percent among blind non-Braille readers. Others make the point that inability to access printed information interferes with every aspect of education and employment.

And this is the group that this week's meeting of the Standing Committee on Copyright and Related Rights at the World Intellectual Property Organization has convened to consider. Should there be a blanket exception to allow the production of alternative formats of books for the visually impaired and disabled?

The proposal, introduced by Brazil, Paraguay, and Ecuador, seems simple enough, and the cause unarguable. The World Blind Union estimates that 95 percent of books never become available in alternative formats, and when they do it's after some delay. As Brasher said nearly 15 years ago, such arrangements depend on the agendas of charitable organizations.

The culprit, as in so many net.wars, is copyright law. The WBU published arguments for copyright reform (DOC) in 2004. Amazon's Kindle is a perfect example of the problem: bowing to the demands of publishers, text-to-speech can be - and is being - turned off in the Kindle. The Kindle - any ebook reader with speech capabilities - ought to have been a huge step forward for disabled access to books.

And now, according to Twitterers present at WIPO, the US, Canada, and the EU are arguing against the idea of this exemption. (They're not the only ones; elsewhere, the Authors Guild has argued that exemptions should be granted by special license and registration, something I'd certainly be unhappy about if I were blind.)

Governments, particularly democratic ones, are supposed to be about ensuring equal opportunities for all. They are supposed to be about ensuring fair play. What about the Americans with Disabilities Act, the EU's charter of fundamental human rights, and Canada's human rights act? Can any of these countries seriously argue that the rights of publishers and copyright holders trump the needs of a seriously disadvantaged group of people that every single one of us is at risk of joining?

While it's clear that text-to-speech and audio books don't solve every problem, and while the US is correct to argue that copyright is only one of a number of problems confronting the blind, when the WBU argues that copyright poses a significant barrier to access shouldn't everyone listen? Or are publishers confused by the stereotypical image of the pirate with the patch over one eye?

If governments and rightsholders want us to listen to them about other aspects of copyright law, they need to be on the right side of this issue. Maybe they should listen to their own marketing departments about the way it looks when rich folks kick people who are already disadvantaged - and then charge for the privilege.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or email netwars@skeptic.demon.co.uk (but please turn off HTML).

April 11, 2009

Statebook of the art

The bad thing about the Open Rights Group's new site, Statebook, is that it looks so perfectly simple to use that the government may decide it's actually a good idea to implement something very like it. And, unfortunately, that same simplicity may also create the illusion in the minds of the untutored who still populate the ranks of civil servants and politicians that the technology works and is perfectly accurate.

For those who shun social networks and all who sail in her: Statebook's interface is an almost identical copy of that of Facebook. True, on Facebook the applications you click on to add are much more clearly pointless wastes of time, like making lists of movies you've liked to share with your friends or playing Lexulous (the reinvention of the game formerly known as Scrabulous, until Hasbro got all huffy and had it shut down).

Politicians need to resist the temptation to believe it's as easy as it looks. The interfaces of both the fictional Statebook and the real Facebook look deceptively simple. In fact, although friends tell me how much they like the convenience of being able to share photos with their friends in a single location, and others tell me how much they prefer Facebook's private messaging to email, Facebook is unwieldy and clunky to use, requiring a lot of wait time for pages to load even over a fast broadband connection. Even if it weren't, though, one of the difficulties with systems attempting to put EZ-2-use front ends on large and complicated databases is that they deceive users into thinking the underlying tasks are also simple.

A good example would be airline reservations systems. The fact is that underneath the simple searching offered by Expedia or Travelocity lies some extremely complex software; it prices every itinerary rather precisely depending on a host of variables. These include not just the obvious things like the class of cabin, but the time of day, the day of the week, the time of year, the category of flyer, the routing, how far in advance the ticket is being purchased, and the number of available seats left. Only some of this is made explicit; frequent flyers trying to maximize their miles per dollar despair while trying to dig out arcane details like the class of fare.
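
By way of illustration only - these rules and numbers are invented, not any airline's actual pricing - a few lines of Python show how quickly a "simple" fare lookup comes to depend on interacting variables the interface never exposes:

    # Hypothetical fare pricing: every rule and number here is invented,
    # purely to show how many hidden variables feed one "simple" price.
    def quote_fare(base, cabin, days_in_advance, seats_left, weekend):
        multipliers = {"economy": 1.0, "premium": 1.6, "business": 3.5}
        fare = base * multipliers[cabin]
        if days_in_advance < 14:
            fare *= 1.4   # late-booking premium
        if seats_left < 10:
            fare *= 1.25  # scarcity premium
        if weekend:
            fare *= 1.1   # peak travel day
        return round(fare, 2)

    print(quote_fare(200, "economy", days_in_advance=7, seats_left=5, weekend=True))  # 385.0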

In his 1988 book The Design of Everyday Things, Donald Norman wrote about the need to avoid confusing the simplicity or complexity of an interface with the characteristics of the underlying tasks. He also writes about the mental models people create as they attempt to understand the controls that operate a given device. His example is a refrigerator with two compartments and two thermostatic controls. An uninformed user naturally assumes each thermostat controls one compartment, but in his example one control sets the thermostat and the other directs the proportion of cold air that's sent to each compartment. The user's mental model is wrong and, as a consequence, any attempts that user makes to set the temperature will also, most likely, be wrong.
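
A minimal sketch, with invented numbers, makes the mismatch visible: the user's mental model says "one knob per compartment", but in the actual device one knob drives a single thermostat and the other only splits the cold air, so turning either knob changes both temperatures.

    # Norman's refrigerator, sketched with invented numbers: one knob sets
    # total cooling, the other only divides the cold air between the
    # compartments - so no knob controls one compartment independently.
    def actual_fridge(thermostat, freezer_share):
        cooling = thermostat * 3.0                        # total cooling effort
        freezer_temp = 5 - cooling * freezer_share        # degrees C
        fridge_temp = 10 - cooling * (1 - freezer_share)
        return freezer_temp, fridge_temp

    print(actual_fridge(5, 0.7))  # (-5.5, 5.5)
    print(actual_fridge(8, 0.7))  # turning up "the cold" chills both compartments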

In focusing on the increasing quantity and breadth of data the government is collecting on all of us, we've neglected to think about how this data will be presented to its eventual users. We have warned about the errors that build up in very large databases that are compiled from multiple sources. We have expressed concern about surveillance and about its chilling impact on spontaneous behaviour. And we have pointed out that data is not knowledge; it is very easy to take even accurate data and build a completely false picture of a person's life. Perhaps instead we should be focusing on ensuring that the software used to query these giant databases-in-progress teaches users not to expect too much.

As an everyday example of what I mean, take the automatic line-calling system used in tennis since 2005, Hawkeye. Hawkeye is not perfectly accurate. Its judgements are based on reconstructions that put together the video images and timing data from four or more high-speed video cameras. The system uses the data to calculate the three-dimensional flight of the ball; it incorporates its knowledge of the laws of physics, its model of the tennis court, and its database of the rules of the game in order to judge whether the ball is in or out. Its official margin for error is 3.6mm.

A study by two researchers at Cardiff University disputed that number. But more relevant here, they pointed out that the animated graphics used to show the reconstructed flight of the ball and the circle indicating where it landed on the court surface are misleading because they look to viewers as though they are authoritative. The two researchers, Harry Collins and Robert Evans, proposed that in the interests of public education the graphic should be redesigned to display the margin for error and the level of confidence.

This would be a good approach for database matches, too, especially since the number of false matches and errors will grow with the size of the databases. A real-life Statebook that doesn't reflect the uncertainty factor of each search, each match, and each interpretation next to every hit would indeed be truly dangerous.
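
As a sketch of what that could look like - the fields, scores, and error rates below are invented - a query result could carry its own health warning instead of presenting itself as a bare fact:

    # Invented example of surfacing uncertainty alongside a database "hit"
    # rather than presenting the match as a bare fact.
    def present_match(name, score, false_match_rate):
        if score < 0.5:
            return f"No usable match for {name}."
        return (f"Possible match for {name}: confidence {score:.0%}, "
                f"estimated false-match rate {false_match_rate:.1%} - "
                f"treat as a lead, not an identification.")

    print(present_match("J. Smith", score=0.83, false_match_rate=0.027))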

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

April 3, 2009

Copyright encounters of the third dimension

Somewhere around 2002, it occurred to me that the copyright wars we're seeing over digitised intellectual property - music, movies, books, photographs - might, in the not-unimaginable future be repeated, this time with physical goods. Even if you don't believe that molecular manufacturing will ever happen, 3D printing and rapid prototyping machines offer the possibility of being able to make a large number of identical copies of physical goods that until now were difficult to replicate without investing in and opening a large manufacturing facility.

Lots of people see this as a good thing. Although: Chris Phoenix, co-founder of the Center for Responsible Nanotechnology, likes to ask, "Will we be retired or unemployed?"

In any case, I spent some years writing a book proposal that never went anywhere, and then let the idea hang around uselessly, like a human in a world where robots have all the jobs.

Last week, at the University of Edinburgh's conference on governance of new technologies (which I am very unhappy to have missed), RAF engineer turned law student Simon Bradshaw presented a paper on the intellectual property consequences of "low-cost rapid prototyping". If only I'd been a legal scholar...

It turns out that as a legal question rapid prototyping has barely been examined. Bradshaw found nary a reference in a literature search. Probably most lawyers think this stuff is all still just science fiction. But, as Bradshaw does, make some modest assumptions, and you find that perhaps three to five years from now we could well be having discussions about whether Obama was within the intellectual property laws to give the Queen a printed-out, personalized iPod case designed to look like Elvis, whose likeness and name are trademarked in the US. Today's copyright wars are going to seem so *simple*.

Bradshaw makes some fairly reasonable assumptions about this timeframe. Until recently, you could pay anywhere from $20,000 to $1.5 million for a fabricator/3D printer/rapid prototyping machine. But prices and sizes are dropping and functionality is going up. Bradshaw puts today's situation on a par with the state of personal computers in the late 1970s, the days of the Commodore PET, the Apple II, and home kits like the Sinclair MK14. Let's imagine, he says, the world of the second-generation fabricator: the size of a color laser printer, costing $1,000 or less, fed with readily available plastic, better than 0.1mm resolution (and in color), 20cm cube maximum size, and programmable by enthusiasts.

As the UK Intellectual Property Office will gladly tell you, there are four kinds of IP law: copyright, patent, trademark, and design. Of these, design is by far the least known; it's used to protect what the US likes to call "trade dress", that is, the physical look and feel of a particular item. Apple, for example, which rarely misses a trick when it comes to design, applied for a trademark on the iPhone's design in the US, and most likely registered it under the UK's design right as well. Why not? Registration is cheap (around £200), and the iPhone design was genuinely innovative.

As Bradshaw analyzes it, all four of these types of IP law could apply to objects created using 3D printing, rapid prototyping, fabricating...whatever you want to call it. And those types of law will interact in bizarre and unexpected ways - and, of course, differently in different countries.

For example: in the UK, a registered design can be copied if it's done privately and for non-commercial use. So you could, in the privacy of your home, print out copies of a test-tube stand (in Bradshaw's example) whose design is registered. You could not do it in a school to avoid purchasing them.

Parts of the design right are drafted so as to prevent manufacturers from using the right to block third-parties from making spare parts. So using your RepRap to make a case for your iPod is legal as long as you don't copy any copyrighted material that might be floating around on the surface of the original. Make the case without Elvis.

But when is an object just an object and when is it a "work of artistic merit"? Because if what you just copied is a sculpture, you're in violation of copyright law. And here, Bradshaw says, copyright law is unhelpfully unclear. Some help has come from the recent ruling in Lucasfilm v Ainsworth, the case about the stormtrooper helmets copied from the first Star Wars movie. Is a 3D replica of a 2D image a derivative work?

Unsurprisingly, it looks like US law is less forgiving. In the helmet case, US courts ruled in favor of Lucasfilm; UK courts drew a distinction between objects that had been created for artistic purposes in their own right and those that hadn't.

And that's all without even getting into the fact that if everyone has a fabricator there are whole classes of items that might no longer be worth selling. In that world, what's going to be worth paying for is the designs that drive the fabricators. Think knitted Dr Who puppets, only in 3D.

It's all going to be so much fun, dontcha think?

Update (1/26/2012): Simon Bradshaw's paper is now published here.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

December 5, 2008

Saving seeds

The 17 judges of the European Court of Human Rights ruled unanimously yesterday that the UK's DNA database, which contains more than 3 million DNA samples, violates Article 8 of the European Convention on Human Rights. The key factor: retaining, indefinitely, the DNA samples of people who have committed no crime.

It's not a complete win for objectors to the database, since the ruling doesn't say the database shouldn't exist, merely that DNA samples should be removed once their owners have been acquitted in court or the charges have been dropped. England, the court said, should copy Scotland, which operates such a policy.

The UK comes in for particular censure, in the form of the note that "any State claiming a pioneer role in the development of new technologies bears special responsibility for striking the right balance..." In other words, before you decide to be the first on your block to use a new technology and show the rest of the world how it's done, you should think about the consequences.

Because it's true: this is the kind of technology that makes surveillance and control-happy governments the envy of other governments. For example: lacking clues to lead them to a serial killer, the Los Angeles Police Department wants to copy Britain and use California's DNA database to search for genetic profiles similar enough to belong to a close relative. The French DNA database, FNAEG, was proposed in 1996, created in 1998 for sex offenders, implemented in 2001, and broadened to other criminal offenses after 9/11 and again in 2003: a perfect example of function creep. But the French DNA database is a fiftieth the size of the UK's, and Austria's, the next on the list, is even smaller.

There are some wonderful statistics about the UK database. DNA samples from more than 4 million people are included on it. Probably 850,000 of them are innocent of any crime. Some 40,000 are children between the ages of 10 and 17. The government (according to the Telegraph) has spent £182 million on it between April 1995 and March 2004. And there have been suggestions that it's too small. When privacy and human rights campaigners pointed out that people of color are disproportionately represented in the database, one of England's most experienced appeals court judges, Lord Justice Sedley, argued that every UK resident and visitor should be included on it. Yes, that's definitely the way to bring the tourists in: demand a DNA sample. Just look how they're flocking to the US to give fingerprints, and how many more flooded in when they upped the number to ten earlier this year. (And how little we're getting for it: in the first two years of the program, fingerprinting 44 million visitors netted 1,000 people with criminal or immigration violations.)

At last week's A Fine Balance conference on privacy-enhancing technologies, there was a lot of discussion of the key technique of data minimization. That is the principle that you should not collect or share more data than is actually needed to do the job. Someone checking whether you have the right to drive, for example, doesn't need to know who you are or where you live; someone checking you have the right to borrow books from the local library needs to know where you live and who you are but not your age or your health records; someone checking you're the right age to enter a bar doesn't need to care if your driver's license has expired.
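
A toy sketch of the principle (real systems of this kind, such as the anonymous-credential schemes discussed at the conference, do it cryptographically; this illustration, with invented data, only shows the shape of the idea): the verifier asks one yes/no question and gets one bit back, not the record behind it.

    # Data minimization in miniature: the verifier receives a single
    # yes/no answer, never the underlying record. All data invented.
    from datetime import date

    RECORD = {"name": "A. Example", "dob": date(1990, 6, 1)}  # stays with the holder

    def over_18(record, today):
        dob = record["dob"]
        age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
        return age >= 18  # the only bit disclosed

    print(over_18(RECORD, date(2008, 11, 28)))  # True - and nothing else revealed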

This is an idea that's been around a long time - I think I heard my first presentation on it in about 1994 - but whose progress towards a usable product has been agonizingly slow. IBM's PRIME project, which Jan Camenisch presented, and Microsoft's purchase of Credentica (which wasn't shown at the conference) suggest that the mainstream technology products may finally be getting there. If only we can convince politicians that these principles are a necessary adjunct to storing all the data they're collecting.

What makes the DNA database more than just a high-tech fingerprint database is that over time the DNA stored in it will become increasingly revealing of intimate secrets. As Ray Kurzweil kept saying at the Singularity Summit, Moore's Law is hitting DNA sequencing right now; the cost is accordingly plummeting by factors of ten. When the database was set up, it was fair to characterize DNA as a high-tech version of fingerprints or iris scans. Five - or 15, or 25, we can't be sure - years from now, we will have learned far more about interpreting genetic sequences. The coded, unreadable messages we're storing now will be cleartext one day, and anyone allowed to consult the database will be privy to far more intimate information about our bodies, ourselves than we think we're giving them now.

Unfortunately, the people in charge of these things typically think it's not going to affect them. If the "little people" have no privacy, well, so what? It's only when the powers they've granted are turned on them that they begin to get it. If a conservative is a liberal who's been mugged, and a liberal is a conservative whose daughter has needed an abortion, and a civil liberties advocate is a politician who's been arrested...maybe we need to arrest more of them.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 21, 2008

The art of the impossible

So the question of last weekend very quickly became: how do you tell plausible fantasy from wild possibility? It's a good conversation starter.

One friend had a simple assessment: "They are all nuts," he said, after glancing over the weekend's program. The problem is that 150 years ago anyone predicting today's airline economy class would also have sounded nuts.

Last weekend's (un)conference was called Convergence, but the description tried to convey the sense of danger of crossing the streams. The four elements that were supposed to converge: computing, biotech, cognitive technology, and nanotechnology. Or, as the four-colored conference buttons and T-shirts had it, biotech, infotech, cognotech, and nanotech.

Unconferences seem to be the current trend. I'm guessing, based on very little knowledge, that it was started by Tim O'Reilly's FOO camps or possibly the long-running invitation-only Hackers conference. The basic principle is: collect a bunch of smart, interesting, knowledgeable people and they'll construct their own program. After all, isn't the best part of all conferences the hallway chats and networking, rather than the talks? Having been to one now (yes, a very small sample), I think in most cases I'm going to prefer the organized variety: there's a lot to be said for a program committee that reviews the proposals.

The day before, the Center for Responsible Nanotechnology ran a much smaller seminar on Global Catastrophic Risks. It made a nice counterweight: the weekend was all about wild visions of the future; the seminar was all about the likelihood of our being wiped out by biological agents, astronomical catastrophe, or, most likely, our own stupidity. Favorite quote of the day, from Anders Sandberg: "Very smart people make very stupid mistakes, and they do it with surprising regularity." Sandberg learned this, he said, at Oxford, where he is a philosopher at the Future of Humanity Institute.

Ralph Merkle, co-inventor of public key cryptography, now working on diamond mechanosynthesis, said to start with physics textbooks, most notably the evergreen classic by Halliday and Resnick. You can see his point: if whatever-it-is violates the laws of physics it's not going to happen. That at least separates the kinds of ideas flying around at Convergence and the Singularity Summit from most paranormal claims: people promoting dowsing, astrology, ghosts, or ESP seem to be about as interested in the laws of physics as creationists are in the fossil record.

A sidelight: after years of The Skeptic, I'm tempted to dismiss as fantasy anything where the proponents tell you that it's just your fear that's preventing you from believing their claims. I've had this a lot - ghosts, alien spacecraft, alien abductions, apparently these things are happening all over the place and I'm just too phobic to admit it. Unfortunately, the behavior of adherents to a belief just isn't evidence that it's wrong.

Similarly, an idea isn't wrong just because its requirements are annoying. Do I want to believe that my continued good health depends on emulating Ray Kurzweil and taking 250 pills a day and a load of weekly injections? Certainly not. But I can't prove it's not helping him. I can, however, joke that it's like those caloric restriction diets - doing it makes your life *seem* longer.

Merkle's other criterion: "Is it internally consistent?" This one's harder to assess, particularly if you aren't a scientific expert yourself.

But there is the technique of playing the man instead of the ball. Merkle, for example, is a cryonicist and is currently working on diamond mechanosynthesis. Put more simply, he's busy designing the tools that will be needed to build things atom by atom when - if - molecular manufacturing becomes a reality. If that sounds nutty, well, Merkle has earned the right to steam ahead unworried because his ideas about cryptography, which have become part of the technology we use every day to protect ecommerce transactions, were widely dismissed at first.

Analyzing language is also open to the scientifically less well-educated: do the proponents of the theory use a lot of non-standard terms that sound impressive but on inspection don't seem to mean anything? It helps if they can spell, but that's not a reliable indicator - snake oil salesmen can be very professional, and some well-educated excellent scientists can't spell worth a damn.

The Risks seminar threw out a useful criterion for assessing scenarios: would it make a good movie? If your threat to civilization can be easily imagined as a line delivered by Bruce Willis, it's probably unlikely. It's not a scientifically defensible principle, of course, but it has a lot to recommend it. In human history, what's killed the most people while we're worrying about dramatic events like climate change and colliding asteroids? Wars and pandemics.

So, where does that leave us? Waiting for deliverables, of course. Even if a goal sounds ludicrous, working towards it may still produce useful results. Aubrey de Grey's ideas about "curing aging" by developing techniques for directly repairing damage (SENS, for Strategies for Engineered Negligible Senescence) seem a case in point. And life extension is the best hope for all of these crazy ideas. Because, let's face it: if it doesn't happen in our lifetime, it was impossible.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 7, 2008

Reality TV

The Xerox machine in the second season of Mad Men has its own Twitter account, as do many of the show's human characters. Other TV characters have MySpace pages and Facebook groups, and of course they're all, legally or illegally, on YouTube.

Here at the American Film Institute's Digifest in Hollywood - really Hollywood, with the stars on the sidewalks and movie theatres everywhere - the talk is all of "cross-platform". This event allows the AFI's Digital Content Lab to show off some of the projects it's fostered over the last year, and the audience is full of filmmakers, writers, executives, and owners of technology companies, all trying to figure out digital television.

One of the more timely projects is a remix of the venerable PBS Newshour with Jim Lehrer. A sort of combination of Snopes, Wikipedia, and any of a number of online comment sites, the goal of The Fact Project is to enable collaboration between the show's journalists and the public. Anyone can post a claim or a bit of rhetoric and bring in supporting or refuting evidence; the show's journalistic staff weigh in at the end with a Truthometer rating and the discussion is closed. Part of the point, said the project's head, Lee Banville, is to expose to the public the many small but nasty claims that are made in obscure but strategic places - flyers left on cars in supermarket parking lots, or radio spots that air maybe twice on a tiny local station.

The DCL's counterpart in Australia showed off some other examples. Areo, for example, takes TV sets and footage and turns them into game settings. More interesting is the First Australians project, which in the six-year process of filming a TV documentary series created more than 200 edited mini-documentaries telling each interviewee's story. Or the TV movie Scorched, which even before release created a prequel and sequel by giving a fictional character her own Web site and YouTube channel. The premise of the film itself was simple but arresting. It was based on one fact, that at one point Sydney had no more than 50 weeks of water left, and one what-if - what if there were bush fires? The project eventually included a number of other sites, including a fake government department.

"We go to islands that are already populated," said the director, "and pull them into our world."

HBO's Digital Lab group, on the other hand, has a simpler goal: to find an audience in the digital world it can experiment on. Last month, it launched a Web-only series called Hooking Up. Made for almost no money (and it looks it), the show is a comedy series about the relationship attempts of college kids. To help draw larger audiences, the show cast existing Web and YouTube celebrities such as LonelyGirl15, KevJumba, and sxePhil. The show has pulled in 46,000 subscribers on YouTube.

Finally, a group from ABC is experimenting with ways to draw people to the network's site via what it calls "viewing parties" so people can chat with each other while watching, "live" (so to speak), hit shows like Grey's Anatomy. The interface the ABC party group showed off was interesting. They wanted, they said, to come up with something "as slick as the iPhone and as easy to use as AIM". They eventually came up with a three-dimensional spatial concept in which messages appear in bubbles that age by shrinking in size. Net old-timers might ask churlishly what's so inadequate about the interface of IRC or other types of chat rooms where messages appear as scrolling text, but from ABC's point of view the show is the centrepiece.

At least it will give people watching shows online something to do during the ads. If you're coming from a US connection, the ABC site lets you watch full episodes of many current shows; the site incorporates limited advertising. Perhaps in recognition that people will simply vanish into another browser window, the ads end with a button to click to continue watching the show and the video remains on pause until you click it.

The point of all these initiatives is simple and the same: to return TV to something people must watch in real-time as it's broadcast. Or, if you like, to figure out how to lure today's 20- and 30-somethings into watching television; Newshour's TV audience is predominantly 50- and 60-somethings.

ABC's viewing party idea is an attempt - as the team openly said - to recreate what the network calls "appointment TV". I've argued here before that as people have more and more choices about when and where to watch their favourite scripted show, sports and breaking news will increasingly rule television because they are the only two things that people overwhelmingly want to see in real time. If you're supported by advertising, that matters, but success will depend on people's willingness to stick with their efforts once the novelty is gone. The question to answer isn't so much whether you can compete with free (cue picture of a bottle of water) but whether you can compete with freedom (cue picture of evil file-sharer watching with his friends whenever he wants).


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 31, 2008

Machine dreams

Just how smart are humans anyway? Last week's Singularity Summit spent a lot of time talking about the exact point at which computer processing power would match that of the human brain, but that's only the first step. There's the software to make the hardware do stuff, and then there's the whole question of consciousness. At that point, you've strayed from computer science into philosophy and you might as well be arguing about angels on the heads of pins. Of course everyone hopes they'll be alive to see these questions settled, but in the meantime all we have is speculation and the snide observation that it's typical that a roomful of smart people would think that all problems can be solved by more intelligence.

So I've been trying to come up with benchmarks for what constitutes artificial intelligence, and the first thing I think is that the Turing test is probably too limited. In it, a judge has to determine which of two typing correspondents is the machine and which the human. That's fine as far as it goes, but one of the consistent threads that run through all this is a noticeable disdain for human bodies.

While our brain power is largely centralized, it still seems to me likely that both the brain's grey matter and the rest of our bodies are an important part of the substrate. How we move through space, how our bodies react and feed our brains are part and parcel of how our minds work, however much we may wish to transcend biology. The fact that we can watch films of bonobos and chimpanzees and recognise our own behaviour in their interactions should show us that we're a lot closer to most animal species than we think - and a lot further from most machines.

For that sort of reason, the Turing test seems limited. A computer passes that test if, when paired against a human, the judge can't tell which is which. At the moment, it seems clear the winner is going to be spambots - some spam messages are already devised cleverly enough to fool even Net-savvy individuals into opening them sometimes. But they're hardly smart - they're just programmed that way. And a lot depends on the capability of the judge - some people even find Eliza convincing, though it's incredibly easy to send off-course into responses that are clearly those of a machine. Find a judge who wants to believe and you're into the sort of game that self-styled psychics like to play.

Nor can we judge a superhuman intelligence by the intractable problems it solves. One of the more evangelistic speakers last weekend talked about being able to instantly create tall buildings via nanotechnology. (I was, I'm afraid, irresistibly reminded of that Bugs Bunny cartoon where Marvin pours water on beans to produce instant Martians to get rid of Bugs.) This is clearly just silly: you're talking about building a gigantic building out of molecules. I don't care how many billions of nanobots you have, the sheer scale means it's going to take time. And, as Kevin Kelly has written, no matter how smart a machine is, figuring out how to cure cancer or roll back aging won't be immediate either because you can't really speed up the necessary experiments. Biology takes time.

Instead, one indicator might be variability of response; that is, that feeding several machines the same input - or giving the same machine the same input at different times - produces different, equally valid interpretations. If, for example, you give a 10th grade class Jane Austen's Pride and Prejudice to read and report on, different students might with equal legitimacy describe it as a historical account of the economic forces affecting 18th century women, a love story, the template for romantic comedy, or even the story of the plain sister in a large family whose talents were consistently overlooked until her sisters got married.

In The Singularity Is Near, Ray Kurzweil laments that each human must read a text separately and that knowledge can't be quickly transferred from one to another the way a speech recognition program can be loaded into a new machine in seconds - but that's the point. Our strength is that our intelligences are all different, and we aren't empty vessels into which information is poured but stews in which new information causes varying chemical reactions.

You might argue that search engines can already do this, in that you don't get the same list of hits if you type the same keywords into Google versus Yahoo! versus Ask.com, and if you come back tomorrow you may get a different response from any one of them. That's true. It isn't the kind of input I had in mind, but fair enough.

The other benchmark that's occurred to me so far is that machines will be getting really smart when they get bored.

ZDNet UK editor Rupert Goodwins has a variant on this from when he worked at Sinclair Research. "If it went out one evening, drank too much, said the next morning, 'never again' and repeated the exercise immediately. Truly human." But see? There again: a definition of human intelligence that requires a body.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 24, 2008

Living by numbers

"I call it tracking," said a young woman. She had healthy classic-length hair, a startling sheaf of varyingly painful medical problems, and an eager, frequent smile. She spends some minutes every day noting down as many as 40 different bits of information about herself: temperature, hormone levels, moods, the state of the various medical problems, the foods she eats, the amount and quality of sleep she gets. Every so often, she studies the data looking for unsuspected patterns that might help her defeat a problem. By this means, she says she's greatly reduced the frequency of two of them and was working on a third. Her doctors aren't terribly interested, but the data helps her decide which of their recommendations are worth following.

And she runs little experiments on herself. Change a bunch of variables, track for a month, review the results. If something's changed, go back and look at each variable individually to find the one that's making the difference. And so on.

Of course, everyone with the kind of medical problem that medicine can't really solve - diabetes, infertility, allergies, cramps, migraines, fatigue - has done something like this for generations. Diabetics in particular have long had to track and control their blood sugar levels. What's different is the intensity - and the computers. She currently tracks everything in an Excel spreadsheet, but what she's longing for is good tools to help her with data analysis.
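What might the tools she's longing for look like? A minimal sketch, in Python, assuming a hypothetical CSV export from a spreadsheet like hers; the file name and column names here are invented for illustration:

    # Assumes one row per day with invented columns: date, sleep_hours,
    # caffeine_mg, pain_score. Not her actual setup - just the shape of the idea.
    import pandas as pd

    log = pd.read_csv("tracking.csv", parse_dates=["date"])

    # Look for unsuspected patterns: pairwise correlations between tracked variables.
    print(log[["sleep_hours", "caffeine_mg", "pain_score"]].corr())

    # A crude single-variable experiment: compare symptom levels before and
    # after the day one variable was changed.
    change_date = pd.Timestamp("2008-09-01")
    before = log.loc[log["date"] < change_date, "pain_score"].mean()
    after = log.loc[log["date"] >= change_date, "pain_score"].mean()
    print(f"mean pain before: {before:.2f}, after: {after:.2f}")

Nothing here is beyond what a spreadsheet can do; the point is that once the data is out of Excel, this kind of before-and-after check can be rerun automatically for every variable she tracks.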

From what Gary Wolf, the organizer of this group, Quantified Self, says - about 30 people are here for its second meeting, after hours at Palo Alto's Institute for the Future to swap notes and techniques on personal tracking - getting out of the Excel spreadsheet is a key stage in every tracker's life. Each stage of improvement thereafter gets much harder.

Is this a trend? Co-founder Kevin Kelly thinks so, and so does the Washington Post, which covered this group's first meeting. You may not think you will ever reach the stage of obsession that would lead you to go to a meeting about it, but in fact, if the interviews I did with new-style health companies in the past year are any guide, we're going to be seeing a lot of this in the health side of things. Home blood pressure monitors, glucose tests, cholesterol tests, hormone tests - these days you can buy these things in Wal-Mart.

The key question is clearly going to be: who owns your health data? Most of the medical devices in development assume that your doctor or medical supplier will be the one doing the monitoring; the dozens of Web sites highlighted in that Washington Post article hope there's a business in helping people self-track everything from menstrual cycles to time management. But the group in Palo Alto are more interested in self-help: in finding and creating tools everyone can use, and in interoperability. One meeting member shows off a set of consumer-oriented prototypes - bathroom scale, pedometer, blood pressure monitor - that send their data to software on your computer to display and, prospectively, to a subscription Web site. But if you're going to look at those things together - charting the impact of how much you walk on your weight and blood pressure - wouldn't you also want to be able to put in the foods you eat? There could hardly be an area where open data formats will be more important.
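To make the interoperability point concrete, here is a sketch only - the record format and file name are invented - of the kind of minimal open log a scale, a pedometer, or a blood pressure monitor could all emit, so that anyone's analysis software could read all three:

    # One JSON object per reading, appended to a shared log file.
    # The schema is hypothetical; the point is that it is open and shared.
    import json
    import datetime

    def reading(source, measure, value, unit):
        return {
            "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
            "source": source,    # which device produced the reading
            "measure": measure,  # what was measured
            "value": value,
            "unit": unit,
        }

    with open("daily_log.jsonl", "a") as f:
        for r in (reading("bathroom_scale", "weight", 68.2, "kg"),
                  reading("pedometer", "steps", 9421, "count"),
                  reading("bp_monitor", "systolic", 118, "mmHg")):
            f.write(json.dumps(r) + "\n")

Any devices that agree on records like these - or on any shared format - let the walking, the weight, and the blood pressure end up in the same chart, which is exactly what closed, per-vendor formats prevent.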

All of that makes sense. I was less clear on the usefulness of an idea another meeting member has - he's doing a start-up to create it - a tiny, lightweight recording camera that can clip to the outside of a pocket. Of course, this kind of thing already has a grand old man in the form of Steve Mann, who has been recording his life with an increasingly small sheaf of devices for a couple of decades now. He was tired, this guy said, of cameras that are too difficult to use and too big and heavy; they get left at home and rarely used. This camera they're working on will have a wide-angle lens ("I don't know why no one's done this") and take two to five pictures a second. "That would be so great," breathes the guy sitting next to me.

Instantly, I flash on the memory of Steve Mann dogging me with flash photography at Computers, Freedom, and Privacy 2005. What happens when the police subpoena your camera? How long before insurance companies and marketing companies offer discounts as inducements to people to wear cameras and send them the footage unedited so they can study behavior they currently can't reach?

And then he said, "The 10,000 greatest minutes of your life that your grandchildren have to see," and all you can think is, those poor kids.

There is a certain inevitable logic to all this. If retailers, manufacturers, marketers, governments, and security services are all convinced they can learn from data mining us, why shouldn't we be able to gain insights by doing it ourselves?

At the moment, this all seems to be for personal use. But consider the benefits of merging it with Web 2.0 and social networks. At last you'll be able to answer the age-old question: why do we have sex less often than the Joneses?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

September 26, 2008

Wimsey's whimsy

One of the things about living in a foreign country is this: every so often the actual England I live in collides unexpectedly with the fictional England I grew up with. Fictional England had small, friendly villages with murders in them. It had lowering, thick fogs and grim, fantastical crimes solvable by observation and thought. It had mathematical puzzles before breakfast in a chess game. The England I live in has Sir Arthur Conan Doyle's vehement support for spiritualism, traffic jams, overcrowding, and four million people who read The Sun.

This week, at the GikIII Workshop, in a break between Internet futures, I wandered out onto a quadrangle of grass so brilliantly and perfectly green that it could have been an animated background in a virtual world. Overlooking it were beautiful, stolid, very old buildings. It had a sign: Balliol College. I was standing on the quad where, "One never failed to find Wimsey of Balliol planted in the center of the quad and laying down the law with exquisite insolence to somebody." I know now that many real people came out of Balliol (three kings, three British prime ministers, Aldous Huxley, Robertson Davies, Richard Dawkins, and Graham Greene) and that those old buildings date to 1263. Impressive. But much more startling to be standing in a place I first read about at 12 in a Dorothy Sayers novel. It's as if I spent my teenaged years fighting alongside Angel avatars and then met David Boreanaz.

Organised jointly by Ian Brown at the Oxford Internet Institute and the University of Edinburgh's Script-ed folks, GikIII (pronounced "geeky") is a small, quirky gathering that studies serious issues by approaching them with a screw loose. For example: could we control intelligent agents with the legal structure the Ancient Romans used for slaves (Andrew Katz)? How sentient is a robot sex toy? Should it be legal to marry one? And if my sexbot rapes someone, are we talking lawsuit, deactivation, or prison sentence (Fernando Barrio)? Are RoadRunner cartoons all patent applications for devices thought up by Wile E. Coyote (Caroline Wilson)? Why is The Hound of the Baskervilles a metaphor for cloud computing (Miranda Mowbray)?

It's one of the characteristics of modern life that although questions like these sound as practically irrelevant as "how many angels, infinitely large, can fit on the head of a pin, infinitely small?", which may (or may not) have been debated here seven and a half centuries ago, they matter. Understanding the issues they raise matters in trying to prepare for the net.wars of the future.

In fact, Sherlock Holmes's pursuit of the beast is metaphorical; Mowbray was pointing out the miasma of legal issues for cloud computing. So far, two very different legal directions seem likely as models: the increasingly restrictive EULAs common to the software industry, and the service-level agreements common to network outsourcing. What happens if the cloud computing company you buy from doesn't pay its subcontractors and your data gets locked up in a legal battle between them? The terms and conditions in effect for Salesforce.com warn that the service has 30 days to hand back your data if you terminate, a long time in business. Mowbray suggests that the most likely outcome is EULAs for the masses and SLAs at greater expense for those willing to pay for them.

On social networks, of course, there are only EULAs, and the question is whether interoperability is a good thing or not. If the data people put on social networks ("shouldn't there be a separate disability category for stupid people?" someone asked) can be easily transferred from service to service, won't that make malicious gossip even more global and permanent? A lot of the issues Judith Rauhofer raised in discussing the impact of global gossip are not new to Facebook: we have a generation of 35-year-olds coping with the globally searchable history of their youthful indiscretions on Usenet. (And WELL users saw the newly appointed CEO of a large tech company delete every posting he made in his younger, more drug-addled days in the 1980s.) The most likely solution to that particular problem is time. People arrested as protesters and marijuana smokers in the 1960s can be bank presidents now; in a few years the work force will be full of people with Facebook/MySpace/Bebo misdeeds, and no one will care except as something to laugh at drunkenly late at night in the pub.

But what Lilian Edwards wants to know is this: if we have or can gradually create the technology to make "every ad a wanted ad" - well, why not? Should we stop it? Online marketing is at £2.5 billion a year according to Ofcom, and a quarter of the UK's children spend 22 hours a week playing computer games, where there is no regulation of industry ads and where Web 2.0 is funded entirely by advertising. When TV and the Internet roll together, when in-game is in-TV and your social network merges with megamedia, and MTV is fully immersive, every detail can be personalized product placement. If I were growing up five years from now, my fictional Balliol might feature Angel driving across the quad in a Nissan Prairie past a billboard advertising airline tickets.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

September 5, 2008

Return of the browser wars

It was quiet, too quiet. For so long it's just been Firefox/Mozilla/Netscape, Internet Explorer, and sometimes Opera that it seemed like that was how it was always going to be. In fact, things were so quiet that it seemed vaguely surprising that Firefox had released a major update and even long-stagnant Internet Explorer had version 8 out in beta. So along comes Chrome to shake things up.

The last time there were as many as four browsers to choose among, road-testing a Web browser didn't require much technical knowledge. You loaded the thing up, pointed it at some pages, and if you liked the interface and nothing seemed hideously broken, that was it.

This time round, things are rather different. To really review Chrome you need to know your AJAX from your JavaScript. You need to be able to test for security holes, and then discover more security vulnerabilities. And the consequences when these things are wrong are so much greater now.

For various reasons, Chrome probably isn't for me, quite aside from its copy-and-paste EULA oops. Yes, it's blazingly fast and I appreciate that because it separates each tab or window into its own process it crashes more gracefully than its competitors. But the switching cost lies less in those characteristics than in the amount of mental retraining it takes to adapt your way of working to new quirks. And, admittedly based on very short acquaintance, Chrome isn't worth it now that I've reformatted Firefox 3's address bar into a semblance of the one in Firefox 2. Perhaps when Chrome is a little older and has replaced a few more of Firefox's most useful add-ons (or when I eventually discover that Chrome's design means it doesn't need them).

Chrome does not do for browsers what Google did for search engines. In 1998, Google's ultra-clean, quick-loading front page and search results quickly saw off competing, ultra-cluttered, wait-for-it portals like Altavista because it was such a vast improvement. (Ironically, Google now has all those features and more, but it's smart enough to keep them off the front page.)

Chrome does some cool things, of course, as anything coming out of Google always has. But its biggest innovation seems to be more completely merging local and global search, a direction in which Firefox 3 is also moving, although with fewer unfortunate consequences. And, as against that, despite the "incognito" mode (similar to IE8) there is the issue of what data goes back to Google for its coffers.

It would be nice to think that Chrome might herald a new round of browser innovation and that we might start seeing browsers that answer different needs than are currently catered for. For example, as a researcher I'd like a browser to pay better attention to archiving issues: a button to push to store pages with meaningful metadata as well as date and time, the URL the material was retrieved from, whether it's been updated since and if so how, and so on. There are a few offline browsers that sort of do this kind of thing, but patchily.
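As a rough sketch of the kind of "save with metadata" button I mean (the sidecar format here is invented, and real archiving tools do far more), even something this simple would be an improvement:

    # Fetch a page, save the raw HTML, and write a small metadata sidecar:
    # the URL, when it was retrieved, and a content hash so a later fetch
    # can show whether the page has changed since.
    import datetime
    import hashlib
    import json
    import urllib.request

    url = "https://example.com/"  # placeholder
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
        content_type = resp.headers.get("Content-Type")

    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    with open(f"page-{stamp}.html", "wb") as f:
        f.write(body)

    metadata = {
        "url": url,
        "retrieved_at": stamp,
        "content_type": content_type,
        "sha256": hashlib.sha256(body).hexdigest(),
    }
    with open(f"page-{stamp}.json", "w") as f:
        json.dump(metadata, f, indent=2)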

The other big question hovering over Chrome is standards: Chrome is possible because the World Wide Web Consortium has done its work well. Standards and the existence of several competing browsers with significant market share have prevented any one company from seizing control and turning the Web into the kind of proprietary system Tim Berners-Lee resisted from the beginning. Chrome will be judged on how well it renders third-party Web pages, but Google can certainly tailor its many free services to work best with Chrome - not so different a proposition from the way Microsoft has controlled the desktop.

Because: the big thing Chrome does is bring Google out of the shadows as a competitor to Microsoft. In 1995, Business Week ran a cover story predicting that Java (write once, run on anything) and the Web (a unified interface) could "rewrite the rules of the software industry". Most of the predictions in that article have not really come true - yet - in the 13 years since it was published; or if they have it's only in modest ways. Windows is still the dominant operating system, and Larry Ellison's thin clients never made a dent in the market. The other big half of the challenge to Microsoft, GNU/Linux and the open-source movement, was still too small and unfinished.

Google is now in a position to deliver on those ideas. Not only are the enabling technologies in place but it's now a big enough company with reliable enough servers to make software as a Net service dependable. You can collaboratively process your words using Google Docs, coordinate your schedules with Google Calendar, and phone across the Net with Google Talk. I don't for one minute think this is the death of Microsoft or that desktop computing is going to vanish from the Earth. For one thing, despite the best-laid cables and best-deployed radios of telcos and men, we are still a long way from continuous online connectivity. But the battle between the two different paradigms of computing - desktop and cloud - is now very clearly ready for prime time.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

June 6, 2008

The Digital Revolution turns 15

"CIX will change your life," someone said to me in 1991 when I got a commission to review a bunch of online systems and got my first modem. At the time, I was spending most or all of every day sitting alone in my house putting words in a row for money.

The Net, Louis Rossetto predicted in 1993, when he founded Wired, would change everybody's lives. He compared it to a Bengali typhoon. And that was modest compared to others of the day, who compared it favorably to the discovery of fire.

Today, I spend most or all of every day sitting alone in my house putting words in a row for money.

But yes: my profession is under threat, on the one hand from shrinkage of the revenues necessary to support newspapers and magazines - which is indeed partly fuelled by competition from the Internet - and on the other hand from megacorporate publishers who routinely demand ownership of the copyrights freelances used to resell for additional income - a practice that the Internet was likely to largely kill off anyway. Few have ever gotten rich from journalism, but freelance rates haven't budged in years; staff journalists get very modest raises, and in return are required to work more hours a week and produce more words.

That embarrassingly solipsistic view aside, more broadly, we're seeing the Internet begin to reshape the entertainment, telecommunications, retail, and software industries. We're seeing it provide new ways for people to organize politically and challenge the control of information. And we're seeing it and natural laziness kill off our history: writers and students alike rely on online resources at the expense of offline archives.

Wired was, of course, founded to chronicle the grandly capitalized Digital Revolution, and this month, 15 years on, Rossetto looked back to assess the magazine's successes and failures.

Rossetto listed three failures and three successes. The three failures: history has not ended; Old Media are not dead (yet); and governments and politics still thrive. The three successful predictions: the long boom; the One Machine, a man/machine planetary consciousness; that technology would change the way we relate to each other and cause us to reinvent social institutions.

I had expected to see the long boom in the list of failures, and not just because it was so widely laughed at when it was published. It's fair for Rossetto to say that the original 1997 feature was not invalidated by the 2000 stock market bust. It wasn't about that (although one couldn't resist snickering about it as the NASDAQ tanked). Instead, what the piece predicted was a global economic boom covering the period 1980 to 2020.

Wrote Peter Schwartz and Peter Leyden, "We are riding the early waves of a 25-year run of a greatly expanding economy that will do much to solve seemingly intractable problems like poverty and to ease tensions throughout the world. And we'll do it without blowing the lid off the environment."

Rossetto, assessing it now, says, "There's a lot of noise in the media about how the world is going to hell. Remember, the truth is out there, and it's not necessarily what the politicians, priests, or pundits are telling you."

I think: 1) the time to assess the accuracy of an article outlining the future to 2020 is probably around 2050; 2) the writers themselves called it a scenario that might guide people through traumatic upheavals to a genuinely better world rather than a prediction; 3) that nonetheless, it's clear that the US economy, which they saw as leading the way, has suffered badly in the 2000s with the spiralling deficit and rising consumer debt; 4) that media alarm about the environment, consumer debt, government deficits, and poverty is hardly a conspiracy to tell us lies; and 5) that they signally underestimated the extent to which existing institutions would adapt to cyberspace (the underlying flaw in Rossetto's assumption that governments would be disbanding by now).

For example, while timing technologies is about as futile as timing the stock market, it's worth noting that they expected electronic cash to gain acceptance in 1998 and to be the key technology to enable electronic commerce, which they guessed would hit $10 billion by 2000. Last year it was close to $200 billion. Writing around the same time, I predicted (here) that ecommerce would plateau at about 10 percent of retail; I assumed this was wrong, but it seems that it hasn't even reached 4 percent yet, though it's obvious that, particularly in the copyright industries, the influence of online commerce is punching well above its statistical weight.

No one ever writes modestly about the future. What sells - and gets people talking - are extravagant predictions, whether optimistic or pessimistic. Fifteen years is a tiny portion even of human history, itself a blip on the planet. Tom Standage, writing in his 1998 book The Victorian Internet, noted that the telegraph was a far more radically profound change for the society of its day than the Internet is for ours. A century from now, the Internet may be just as obsolete. Rossetto, like the rest of us, will have to wait until he's dead to find out if his ideas have lasting value.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

January 11, 2008

Beyond biology

"Will we have enough food?"

Last Saturday (for an article in progress for the Guardian), I attended the monthly board meeting at Alcor, probably the largest of the several cryonics organizations. Cryonics: preserving a newly deceased person's body in the hope that medical technology will improve to the point where that person can be warmed up, revived, and cured.

I was the last to arrive at what I understand was an unusually crowded meeting: fifteen, including board members, staffers, and visitors. Hence the chair's anxious question.

The conference room has a window at one end that looks into a mostly empty concrete space at a line of giant cylinders, some gleaming steel, some dull aluminum. These "dewars" are essentially giant Thermos bottles, and they are the vessels in which cryopreserved patients are held. Each dewar can hold up to nine patients – four whole bodies, head down, and five neuro patients in a column down the middle.

There is a good reason to call these cryopreserved Alcor members "patients". If the cryonics dream ever comes to fruition, they will turn out never to have been dead at all. And in any case, calling them patients has the same function as naming your sourdough starter: it reminds you that here is something that cannot survive without your responsible care.

To Alcor's board and staff, these are often personal friends. A number have their framed pictures on the board room wall, with the dates of their birth and cryopreservation. It was therefore a little eerie to realize that those visible dewars were, mostly, occupied.

I think the first time I ever heard of anything like cryonics was Woody Allen's movie Sleeper. Reading about it as a serious proposition came nearly 20 years later, in Ed Regis's 1992 book Great Mambo Chicken and the Transhuman Condition. Regis's book, which I reviewed for New Scientist, was a vivid ramble through the outer fringes of science, which he dubbed "fin-de-siècle hubris".

My view hasn't changed: since cremation and burial both carry a chance of revival of zero, cryonics has to do hardly anything to offer better odds, no matter how slight. But it remains a contentious idea. Isaac Asimov, for example, was against it, at least for himself. The science fiction I read as a teenager was filled with overpopulated earths covered in giant blocks of one-room apartments and people who lived on synthetic food because there was no longer the space or ability to grow enough of the real stuff. And we're going to add long-dead people as well?

That kind of issue comes up when you mention cryonics. Isn't it selfish? Or expensive? Or an imposition on future generations? What would the revived person live on, given their outdated skills? Supposing you wake up a slave?

Many of these issues have been considered, if not by cryonicists themselves for purely practical reasons then by sf writers. Robert A. Heinlein's 1957 book The Door Into Summer had its protagonist involuntarily frozen and deposited into the future with no assets and no employment prospects, given that his engineering background was 30 years out of date. Larry Niven's 1971 short story "Rammer" had its hero revived into the blanked body of a criminal and sent out as a spaceship pilot by a society that would have calmly vaped his personality and replaced it with the next one if he were found unsuitable (Niven was also, by the way, the writer who coined the descriptor "corpsicle" for the cryopreserved). Even Woody Allen's Miles Monroe woke up in danger.

The thing is, those aren't reasons for cryonicists not to try to make their dream a reality. They are arguments for careful thought on the part of the cryonics organizations who are offering cryopreservation and possible revival as services. And they do think about it, in part because the people running those organizations expect to be cryopreserved themselves. The scientist and Alcor board member Ralph Merkle, in an interview last year, pointed out that the current board chooses its successors with great care: "Because our lives will depend on selecting a good group to continue the core values."

Many of them are also bad arguments. Most people, given their health, want their lives to continue; if they didn't, we'd be awash in suicides. If overpopulation is the problem, having children is just as selfish a way of securing immortality as wanting longer life for oneself. If burdening future generations is the problem, doing so by being there is hardly worse than using up all the planet's resources in our lifetime, leaving our descendants to suffer the consequences unaided. Nor is being uncertain of the consequences a reason: human history is filled with technologies we've developed on the basis that we'd deal with the consequences as they arose. Some consequences were good, some bad; most technologies have a mix of the two.

After the board meeting ended, several of those present and I went on talking about just these issues over lunch.

"We won't be harder to deal with than a baby," one of them said. True, but there is a much bigger biological urge to reproduce than there is to revive someone who was pronounced dead a century or two ago.

"We are kind of going around biology," he admitted.

Only up to a point: there was enough food.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).