
July 31, 2020

Driving while invisible

The point is not whether it's ludicrous but whether it breaks the law.

Until Hannah Smethurst began speaking at this week's gikii event - the year's chance to mix law, digital rights, and popular culture - I had not realized just how many invisible vehicles there are in our books and films. A brief trawl turns up: Wonder Woman's invisible jet, Harry Potter's invisibility cloak and other invisibility devices, and James Bond's invisible Aston Martin. Do not trouble me with your petty complaints about physics. This is about the law.

Every gikii (see here for links to writeups of previous years) ranges from deeply serious-with-a-twist to silly-with-an-insightful-undercurrent. This year's papers included the need for a fundamental rethink of how we regulate power (Michael Veale), the English* "bubble" law that effectively granted flatmates permanent veto power over each other's choice of sex partner (gikii founder Lilian Edwards), and the mistaken-identity frustrations of having early on used your very common name as your Gmail address (Jat Singh).

In this context, Smethurst's paper was business as usual. As she explained, there is nothing in highway legislation that requires your car to be visible. The same is not true of number plates, which the law says must be visible at all times. But can you enforce it? If you can't see the car, how do you know you can't see the number plate? More uncertain is the Highway Code's requirement to indicate braking and turns when people don't know you're there; Smethurst suggested that a good lawyer could argue successfully that turning on the lights unexpectedly would dazzle someone. No, she said, the main difficulty is the dangerous driving laws. Well, that and the difficulty of getting insurance to cover the many accidents when people - pedestrians, cyclists, other cars - collide with it.

This raised the possibility of "invisibility lanes", an idea that seems like it should be the premise for a sequel to Death Race 2000. My overall conclusion: invisibility is like online anonymity. People want it for themselves, but not for other people - at least, not for other people they don't trust to behave well. If you want an invisible car so you can drive 100 miles an hour with impunity, I suggest a) you probably aren't safe to have one, and b) try driving across Kansas.

We then segued into the really important question: if you're riding an invisible bike, are *you* visible? (General consensus: yes, because you're not enclosed.)

On a more serious note, people have a tendency to laugh nervously when you mention that numerous jurisdictions are beginning to analyze sewage for traces of coronavirus. Actually, wastewater epidemiology, as this particular public health measure is known, is not a new surveillance idea born of this pandemic, though it does not go all the way back to John Snow and the Broadwick Street pump. Instead, Snow plotted known cases on a map, and spotted the pump as the source of contagion when they clustered around it. Still, epidemiology did start with sewage.

In the decades since wastewater epidemiology was developed, some of its uses have definitely had an adversarial edge, such as establishing the level of abuse of various drugs and doping agents, or the prevalence of particular diseases, in a given area. The goal, however, is not supposed to be trapping individuals; instead it's to provide population-wide data. Because samples are processed at the treatment plant along with everyone else's, there's a reasonable case to be made that the system is privacy-preserving; even though you could analyze samples for an individual's DNA and exact microbiome, matching any particular sample to its owner seems unlikely.

However, Reuben Binns argued, that doesn't mean there are no privacy implications. Like anything segmented by postcode, the catchment areas defined for such systems are likely to vary substantially in the number of households and individuals they contain, and a lot may depend on where you put the collection points. This isn't so much an issue for the present purpose, which is providing an early-warning system for coronavirus outbreaks, but will be later, when the system is in place and people want to use it for other things. A small neighborhood with a noticeable concentration of illegal drugs - or a small section of an Olympic athletes village with traces of doping agents above a particular threshold - could easily find itself a frequent target of more invasive searches and investigations. Also, unless you have your own septic field, there is no opt-out.

Binns added this unpleasant prospect: even if this system is well-intentioned and mostly harmless, it becomes part of a larger "surveillant assemblage" whose purpose is fundamentally discriminatory: "to create distinctions and hierarchies in populations to treat them differently," as he put it. The direction we're going, eventually every part of our infrastructure will be a data source, for our own good.

This was also the point of Veale's paper: we need to stop focusing primarily on protecting privacy by regulating the use and collection of data, and start paying attention to the infrastructure. A large platform can throw away the data and still have the models and insights that data created - and the exceptional computational power to make use of it. All that infrastructure - there's your invisible car.

Illustrations: James Bond's invisible car (from Die Another Day).

*Correction: I had incorrectly identified this law as Scottish.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 13, 2019

Purposeful dystopianism

A university comparative literature class on utopian fiction taught me this: all utopias are dystopias underneath. I was reminded of this at this week's Gikii, when someone noted the converse, that all dystopias contain within themselves the flaw that leads to their destruction. Of course, I also immediately thought of the bare patch on Smaug's chest in The Hobbit because at Gikii your law and technology come entangled with pop culture. (Write-ups of past years: 2018; 2016; 2014; 2013; 2008.)

Granted, as was pointed out to me, fictional utopias would have no dramatic conflict without dystopian underpinnings, just as dystopias would have none without their misfits plotting to overthrow them. But the context for this subdiscussion was the talk by Andres Guadamuz, which he began by locating "peak Cyber-utopianism" at 2006 to 2010, when Time magazine celebrated the power the Internet had brought each of us, Wikileaks was doing journalism, bitcoin was new, and social media appeared to have created the Arab Spring. "It looked like we could do anything." (Ah, youth.)

Since then, serially, every item on his list has disappointed. One startling statistic Guadamuz cited: streaming now creates more carbon emissions than airplanes. Streaming online video generates as much carbon dioxide per year as Belgium; bitcoin uses as much energy as Austria. By 2030, the Internet is projected to account for 20% of all energy consumption. Cue another memory, from 1995, when MIT Media Lab founder Nicholas Negroponte was feted for predicting in Being Digital that wired and wireless would switch places: broadcasting would move to the Internet's series of tubes, and historically wired connections such as the telephone network would become mobile and wireless. Meanwhile, all physical forms of information would become bits. No one then queried the sense of doing this. This week, the lab Negroponte was running then is in trouble, too. This has deep repercussions beyond any one institution.

Twenty-five years ago, in Tainted Truth, journalist Cynthia Crossen documented the extent to which funders get the research results they want. Successive generations of research have backed this up. What the Media Lab story tells us is that they also get the research they want - not just, as in the cases of Big Oil and Big Tobacco, the *specific* conclusions they want promoted but the research ecosystem. We have often told the story of how the Internet's origins as a cooperative have been coopted into a highly centralized system with central points of failure, a process Guadamuz this week called "cybercolonialism". Yet in focusing on the drivers of the commercial world we have paid insufficient attention to those driving the academic underpinnings that have defined today's technological world.

To be fair, fretting over centralization was the most mundane topic this week: presentations skittered through cultural appropriation via intellectual property law (Michael Dunford, on Disney's use of Māui), a case study of moderation in a Facebook group that crosses RuPaul and Twin Peaks fandom (Carolina Are), and a taxonomy of lying and deception intended to help decode deepfakes of all types (Andrea Matwyshyn and Miranda Mowbray).

It is especially hard for a non-lawyer to do justice to the discussions of how and whether data protection rights persist after death, led by Edina Harbinja, Lilian Edwards, Michael Veale, and Jef Ausloos. You can't libel the dead, they explained, because under common law, personal actions die with the person: your obligation not to lie about someone dies when they do. This conflicts with information rights that persist as your digital ghost: privacy versus property, a reinvention of "body" and "soul". The Internet is *so many* dystopias.

Centralization captured so much of my attention because it is ongoing and threatening. One example is the impending rollout of DNS-over-HTTPS. We need better security for the Internet's infrastructure, but DoH further concentrates centralized control. In his presentation Derek MacAuley noted that individuals who need the kind of protection DoH is claimed to provide would do better to just use Tor. It, too, is not perfect, but it's here and it works. This adds one more to the many historical examples where improving the working technology we already had would have spared us the level of control now exercised by the largest technology companies.

Centralization completely undermines the Internet's original purpose: to withstand a bomb outage. Mozilla and Google surely know this. The third DoH partner, Cloudflare, the content delivery network in the middle, certainly does: when it goes down, as it did for 15 minutes in July, millions of websites become unreachable. The only sensible response is to increase resilience with multiple pathways. Instead, we have Facebook proposing to further entrench its central role in many people's lives with its nascent Libra cryptocurrency. "Well, *I*'m not going to use it" isn't an adequate response when in some countries Facebook effectively *is* the Internet.

So where are the flaws in our present Internet dystopias? We've suggested before that advertising saturation may be one; the fakery that runs all the way through the advertising stack is probably another. Government takeovers and pervasive surveillance provide motivation to rebuild alternative pathways. The built-in lack of security is, as ever, a growing threat. But the biggest flaw built into the centralized Internet may be this: boredom.


Illustrations: The Truman Show.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 31, 2019

Moral machines

What are AI ethics boards for?

I've been wondering about this for some months now, particularly in April, when Google announced the composition of its new Advanced Technology External Advisory Council (ATEAC) - and a week later announced its dissolution. The council was dropped after a media storm that began with a letter from 50 of Google's own employees objecting to the inclusion of Kay Coles James, president of the Heritage Foundation.

At The Verge, James Vincent suggests the boards are for "ethics washing" rather than instituting change. The aborted Google board, for example, was intended, as member Joanna Bryson writes, to "stress test" policies Google had already formulated.

However, corporations are not the only active players. The new Ada Lovelace Institute's research program is intended to shape public policy in this area. The AI Now Institute is studying social implications. Data & Society is studying AI use and governance. Altogether, Brent Mittelstadt counts 63 public-private initiatives, and says the principles they're releasing "closely resemble the four classic principles of medical ethics" - an analogy he finds uncertain.

Last year, when Steven Croft, the Bishop of Oxford, proposed ten commandments for artificial intelligence, I also tended to be dismissive: who's going to listen? What company is going to choose a path against its own financial interests? A machine learning expert friend has a different complaint: corporations are not the problem, it's governments. No matter what companies decide, governments always demand carve-outs for intelligence and security services, and once they have them, game over.

I did appreciate Croft's contention that all commandments are aspirational. An agreed set of principles would at least provide a standard against which to measure technology and decisions. Principles might be particularly valuable for guiding academic researchers, some of whom currently regard social media as a convenient public laboratory.

Still, human rights law already supplies that sort of template. What can ethics boards do that the law doesn't already? If discrimination is already wrong, why do we need an ethics board to add that it's wrong when an algorithm does it?

At a panel kicking off this year's Privacy Law Scholars, Ryan Calo suggested an answer: "We need better moral imagination." In his view, a lot of the discussion of AI ethics centers on form rather than content: how should it be applied? Should there be a certification regime? Or perhaps compliance requirements? Instead, he proposed that we should be looking at how AI changes the affordances available to us. His analogy: retrieving the sailors left behind in the water after you destroyed their ship was an ethical obligation until the arrival of new technology - submarines - made it infeasible.

For Calo, too many conversations about AI avoid considering the content. As a frustrating example of how the debate tends to be framed: "The primary problem around the ethics of driverless cars is not how they will reshape cities or affect people with disabilities and ownership structures, but whether they should run over the nuns or the schoolchildren."

As anyone who's ever designed a survey knows, defining the questions is crucial. In her posting, Bryson expresses regret that the intended board will not now be called into action to consider and perhaps influence Google's policy. But the fact that Google, not the board, was to devise policies and set the questions about them makes me wonder how effective it could have been. So much depends on who imagines the prospective future.

The current Kubrick exhibition at London's Design Museum pays considerable homage to Kubrick's vision and imagination in creating the mysterious and wonderful universe of 2001: A Space Odyssey. Both the technology and the furniture still look "futuristic" despite having been designed more than 50 years ago. What *has* dated is the women: they are still wearing 1960s stewardess uniforms and hats, and the one woman with more than a few lines spends them discussing her husband and his whereabouts; the secrecy surrounding the appearance of a monolith in a crater on the moon is left to the men to discuss. Calo was finding the same thing in rereading Isaac Asimov's Foundation series: "Not one woman leader for four books," he said. "And people still smoke!" Yet they are surrounded by interstellar travel and mind-reading devices.

So while what these boards are doing now is not inspiring - as Helen Nissenbaum said in the same panel, "There are so many institutes announcing principles as if that's the end of the story" - maybe what they *could* do might be. What if, as Calo suggested, there are human and civil rights commitments AI allows us to make that were impossible before?

"We should be imagining how we can not just preserve extant ethical values but generate new ones based on affordances that we now have available to us," he said, suggesting as one example "mobility as a right". I'm not really convinced that our streets are going to be awash in autonomous vehicles any time soon, but you can see his point. If we have the technology to give independent mobility to people who are unable to drive themselves...well, shouldn't we? You may disagree on that specific idea, but you have to admit: it's a much better class of conversation.tw


Illustrations: Space Station receptionist from 2001: A Space Odyssey.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 12, 2019

The Algernon problem

Last week we noted that it may be a sign of a maturing robotics industry that it's possible to have companies specializing in something as small as fingertips for a robot hand. This week, the workshop day kicking off this year's We Robot conference provides a different reason to think the same thing: more and more disciplines are finding their way to this cross-the-streams event. This year, joining engineers, computer scientists, lawyers, and the odd philosopher are sociologists, economists, and activists.

The result is oddly like a meeting of the Research Institute for the Science of Cyber Security, where a large part of the point from the beginning has been that human factors and economics are as important to good security as technical knowledge. This was particularly true in the face-off between the economist Rob Seamans and the sociologist Beth Bechky, which pitted quantitative "things we can count" against qualitative "study the social structures" thinking. The range of disciplines needed to think about what used to be "computer" security keeps growing as the ways we use computers become more complex; robots are computer systems whose mechanical manifestations interact with humans, so this broadening has to happen.

One sign is a change in language. Madeline Elish, currently in the news for her newly published 2016 We Robot paper, Moral Crumple Zones, said she's trying to replace the term "deploying" with "integrating" for arriving technologies. "They are integrated into systems," she explained, "and when you say "integrate" it implies into what, with whom, and where." By contrast, "deployment" is military-speak, devoid of context. I like this idea, since by 2015, it was clear from a machine learning conference at the Royal Society that many had begun seeing robots as partners rather than replacements.

Later, three Japanese academics - the independent researcher Hideyuki Matsumi, Takayuki Kato, and Fumio Shimpo - tried to explain why Japanese people like robots so much - more, it seems, than "we" do (whoever "we" are). They suggested three theories: the influence of TV and manga; the influence of the mainstream Shinto religion, which sees a spirit in everything; and the Japanese government strategy to make the country a robotics powerhouse. The latter has produced a 356-page guideline for research and development.

"Japanese people don't like to draw distinctions and place clear lines," Shinto said. "We think of AI as a friend, not an enemy, and we want to blur the lines." Shimpo had just said that even though he has two actual dogs he wants an Aibo. Kato dissented: "I personally don't like robots."

The MIT researcher Kate Darling, who studies human responses to robots, found positive reinforcement in studies showing that autistic kids respond well to robots. "One theory is that they're social, but not too social." An experiment that placed these robots in homes for 30 days last summer had "stellar results". But: when the robots were removed at the end of the experiment, follow-up studies found that the kids were losing the skills the robots had brought them. The story evokes the 1958 Daniel Keyes story Flowers for Algernon, but then you have to ask: what were the skills? Did they matter to the children or just to the researchers, and how is "success" defined?

The opportunities anthropomorphization opens for manipulation are an issue everywhere. Woody Hartzog called the tendency to believe what the machine says "automation bias", but that understates the range of motivations: you may believe the machine because you like it, because it's manipulated you, or because you're working in a government benefits agency where you can't be sure you won't get fired if you defy the machine's decision. Would that everyone could see Bill Smart and Cindy Grimm follow up their presentation from last year to show: AI is just software; it doesn't "know" things; and it's the complexity that gets you. Smart hates the term "autonomous" for robots "because in robots it means deterministic software running on a computer. It's teleoperation via computer code."

This is the "fancy hammer" school of thinking about robots, and it can be quite valuable. Kevin Bankston soon demonstrated this: "Science fiction has trained us to worry about Skynet instead of housing discrimination, and expect individual saviors rather than communities working together to deal with community problems." AI is not taking our jobs; capitalists are using AI to take our jobs - a very different problem. As long as we see robots and AI as autonomous, we miss that instead they ares agents carrying out others' plans. This is a larger example of a pervasive problem with smartphones, social media sites, and platforms generally: they are designed to push us to forget the data-collecting, self-interested, manipulative behemoth behind them.

Returning to Elish's comment, we are one of the things robots integrate with. At the moment, this is taking the form of making random people research subjects: the pedestrian killed in Arizona by a supposedly self-driving car, the hapless prisoners whose parole is decided by it's-just-software, the people caught by the Metropolitan Police's staggeringly flawed facial recognition, the homeless people who feel threatened by security robots, the Caltrain passengers sharing a platform with an officious delivery robot. Did any of us ask to be experimented on?


Illustrations: Cliff Robertson in Charly, the movie version of "Flowers for Algernon".

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 31, 2012

Remembering the moon

"I knew my life was going to be lived in space," a 50-something said to me in 2009 on the anniversary of the moon landings, trying to describe the impact they had on him as a 12-year-old. I understood what he meant: on July 20, 1969, a late summer Sunday evening in my time zone, I was 15 and allowed to stay up late to watch; awed at both the achievement and the fact that we could see it live, we took Polaroid pictures (!) of the TV image showing Armstrong stepping onto the Moon's surface.

The science writer Tom Wilkie remarked once that the real impact of those early days of the space program was the image of the Earth from space, that it kicked off a new understanding of the planet as a whole, fragile ecosystem. The first Earth Day was just nine months later. At the time, it didn't seem like that. "We landed on the moon" became a sort of yardstick; how could we put a man on the moon yet be unable to fix a bicycle? That sort of thing.

To those who've grown up always knowing we landed on the moon in ancient times (that is, before they were born), it's hard to convey what a staggering moment of hope and astonishment that was. For one thing, it seemed so improbable and it happened so fast. In 1962, President Kennedy promised to put a man on the moon by the end of the decade - and it happened, even though he was assassinated. For another, it was the science fiction we all read as teens come to life. Surely the next steps would be other planets, greater access for the rest of us. Wouldn't I, in my lifetime, eventually be able also to look out the window of a vehicle in motion and see the Earth getting smaller?

Probably not. Many years later, I was on the receiving end of a rant from an English friend about the wasteful expense of sending people into space when unmanned spacecraft could do so much more for so much less money. He was, of course, right, and it's not much of a surprise that the death of the first human to set foot on the Moon, Neil Armstrong, so nearly coincided with the success of the Mars rover Curiosity. What Curiosity also reminds us, or should, is that although we admire Armstrong as a hero, the fact is that landing on the Moon wasn't so much his achievement as that of the probably thousands of engineers, programmers, and scientists who developed and built the technology necessary to get him there. As a result, the thing that makes me saddest about Armstrong's death on August 25 is the loss of his human memory of the experience of seeing and touching that off-Earth orbiting body.

The science fiction writer Charlie Stross has a lecture transcript I particularly like about the way the future changes under your feet. The space program - and, in the UK and France, Concorde - seemed like a beginning at the time, but has so far turned out to be an end. Sometime between 1950 and 1970, Stross argues, progress was redefined from being all about the speed of transport to being all about the speed of computers or, more precisely, Moore's Law. In the 1930s, when the moon-walkers were born, the speed of transport was doubling in less than a decade; but it only doubled in the 40 years from the late 1960s to 2007, when he wrote this talk. The acceleration had slowed dramatically.

Applying this precedent to Moore's Law, Intel co-founder Gordon Moore's observation that the number of transistors that could fit on an integrated circuit doubled about every 24 months, increasing computing speed and power proportionately, Stross was happy to argue that despite what we all think today and the obsessive belief among Singularitarians that computers will surpass the computational power of humans oh, any day now, but certainly by 2030, "Computers and microprocessors aren't the future. They're yesterday's future, and tomorrow will be about something else." His suggestion: bandwidth, bringing things like lifelogging and ubiquitous computing so that no one ever gets lost; if we'd had that in 1969, the astronauts would have been sending back first-person total-immersion visual and tactile experiences that would now be in NASA's library for us all to experience as if at first hand instead of just the external image we all know.
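(A back-of-the-envelope way to see the contrast Stross is drawing - a minimal sketch in Python, using only the doubling times quoted above, every 24 months for transistors versus roughly once in 40 years for transport speed, and not anything taken from his talk:)

# A rough illustration of Stross's contrast, using only the doubling
# times cited above: transistor counts doubling about every 2 years,
# transport speed doubling about once in 40 years.
def growth_factor(years, doubling_time_years):
    """How much something multiplies over `years` if it doubles every `doubling_time_years`."""
    return 2 ** (years / doubling_time_years)

span = 40  # roughly the late 1960s to 2007, the period Stross cites
print(f"Transport speed over {span} years: ~{growth_factor(span, 40):.0f}x")   # one doubling: ~2x
print(f"Transistor count over {span} years: ~{growth_factor(span, 2):,.0f}x")  # 20 doublings: ~1,048,576x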

The science fiction I grew up with assumed that computers would remain rare (if huge) expensive items operated by the elite and knowledgeable (except, perhaps, for personal robots). Space flight, and personal transport, on the other hand, would be democratized. Partly, let's face it, that's because space travel and robots make compelling images and stories, particularly for movies, while sitting and typing...not so much. I didn't grow up imagining my life being mediated and expanded by computer use; I, like countless generations before me, grew up imagining the places I might go and the things I might see. Armstrong and the other astronauts were my proxies. One day in the not-too-distant future, we will have no humans left who remember what it was actually like to look up and see the Earth in the sky while standing on a distant rock. There only ever have been, Wikipedia tells me, 12, all born in the 1930s.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


May 11, 2012

Self-drive

When I first saw that Google had obtained a license for its self-driving car in the state of Nevada I assumed that the license it had been issued was a driver's license. It's disappointing to find out that what they meant was that the car had been issued with license plates so it can operate on public roads. Bah: all operational cars have license plates, but none have driver's licenses. Yet.

The Guardian has been running a poll, asking readers if they'd ride in the car or not. So far, 84 percent say yes. I would, too, I think. With a manual override and a human prepared to step in for oh, the first ten years or so.

I'm sure that Google, being a large company in a highly litigious society, has put the self-driving car through far more rigorous tests than any a human learner undergoes. Nonetheless, I think it ought to be required to get a driver's license, not just license plates. It should have to pass the driving test like everyone else. And then buy insurance, which is where we'll find out what the experts think. Will the rates for a self-driving car be more or less than for a newly licensed male aged 18 to 25?

To be fair, I've actually been to Nevada, and I know how empty most of those roads are. Even without that, I'd certainly rather ride in Google's car than on a roller coaster. I'd rather share the road with Google's car than with a drunk driver. I'd rather ride in Google's car than trust the next Presidential election to electronic voting machines.

That last may seem illogical. After all, riding in a poorly driven car can kill you. A gamed electronic voting machine can only steal your votes. The same problems with debugging software and checking its integrity apply to both. Yet many of us have taken quite long flights on fly-by-wire planes and ridden on driverless trains without giving it much thought.

But a car is *personal*. So much so that we tolerate 1.2 million deaths annually worldwide from road traffic; in 2011 alone, more than ten times as many people died on American roads as were killed in the 9/11 World Trade Center attack. Yet everyone thinks they're an above-average driver and feels safest when they're controlling their own car. Will a self-driving car be that delusional?

The timing was interesting because this week I have also been reading a 2009 book I missed, The Case for Working With Your Hands or Why Office Work is Bad for Us and Fixing Things Feels Good. The author, Matthew Crawford, argues that manual labour, which so many middle class people have been brought up to despise, is more satisfying - and has better protection against outsourcing - than anything today's white collar workers learn in college. I've been saying for years that if I had teenagers I'd be telling them to learn a trade like auto mechanics, plumbing, electrical work, nursing, or even playing live music - anything requiring skill and knowledge and that can't easily be outsourced to another country in the global economy. I'd say teaching, but see last week's.

Dumb down plumbing all you want with screw-together PVC pipes and joints, but someone still has to come to your house to work on it. Even today's modern cars, with their sealed subsystems and electronic read-outs, need hands-on care once in a while. I suppose Google's car arrives back at home base and sends in a list of fix-me demands for its human minders to take care of.

When Crawford talks about the satisfaction of achieving something in the physical world, he's right, up to a point. In an interview for the Guardian in 1995 (TXT), John Perry Barlow commented to me that, "The more time I spend in cyberspace, the more I love the physical world, and any kind of direct, hard-linked interaction with it. I never appreciated the physical world anything like this much before." Now, Barlow, more than most people, knows a lot about fixing things: he spent 17 years running a debt-laden Wyoming ranch and, as he says in that piece, he spent most of it fixing things that couldn't be fixed. But I'm going to argue that it's the contrast and the choice that makes physical work seem so attractive.

Yes, it feels enormously different to know that I have personally driven across the US many times, the most notable of which was a three-and-a-half-day sprint from Connecticut to Los Angeles in the fall of 1981 (pre-GPS, I might add, without needing to look at a map). I imagine being driven across would be more like taking the train even though you can stop anywhere you like: you see the same scenery, more or less, but the feeling of personal connection would be lost. Very much like the difference between knowing the map and using GPS. Nonetheless, how do I travel across the US these days? Air. How does Barlow make his living? Being a "cognitive dissident". And Crawford writes books. At some point, we all seem to want to expand our reach beyond the purely local, physical world. Finding that balance - and employment for 9 billion people - will be one of this century's challenges.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


December 2, 2011

Debating the robocalypse

"This House fears the rise of artificial intelligence."

This was the motion up for debate at Trinity College Dublin's Philosophical Society (Twitter: @phil327) last night (December 1, 2011). It was a difficult one, because I don't think any of the speakers - neither the four students, Ricky McCormack, Michael Coleman, Cat O'Shea, and Brian O'Beirne, nor the invited guests, Eamonn Healy, Fred Cummins, and Abraham Campbell - honestly fear AI all that much. Either we don't really believe a future populated by superhumanly intelligent killer robots is all that likely, or, like Ken Jennings, we welcome our new computer overlords.

But the point of this type of debate is not to believe what you are saying - I learned later that in the upper levels of the game you are assigned a topic and a position and given only 15 minutes to marshal your thoughts - but to argue your assigned side so passionately, persuasively, and coherently that you win the votes of the assembled listeners even if later that night, while raiding the icebox, they think, "Well, hang on..." This is where politicians and Dáil/House of Commons debating style come from. As a participatory sport it was utterly new to me, and it explains a *lot* about the derailment of political common sense by the rise of public relations and lobbying.

Obviously I don't actually oppose research into AI. I'm all for better tools, although I vituperatively loathe tools that try to game me. As much fun as it is to speculate about whether superhuman intelligences will deserve human rights, I tend to believe that AI will always be a tool. It was notable that almost every speaker assumed that AI would be embodied in a more-or-less humanoid robot. Far more likely, it seems to me, is that if AI emerges it will be first in some giant, boxy system (that humans can unplug) and even if Moore's Law shrinks that box it will be much longer before AI and robotics converge into a humanoid form factor.

Lacking conviction on the likelihood of all this, and hence of its dangers, I had to find an angle, which eventually boiled down to Walt Kelly and We have met the enemy and he is us. In this, I discovered, I am not alone: a 2007 ThinkArtificial poll found that more than half of respondents feared what people would do with AI: the people who program it, own it, and deploy it.

If we look at the history of automation to date, a lot of it has been used to make (human) workers as interchangeable as possible. I am old enough to remember, for example, being able to walk down to the local phone company in my home town of Ithaca, NY, and talk in person to a customer service representative I had met multiple times before about my piddling residential account. Give everyone the same customer relationship database and workers become interchangeable parts. We gain some convenience - if Ms Jones is unavailable anyone else can help us - but we pay in lost relationships. The company loses customer loyalty, but gains (it hopes) consistent implementation of its rules and the economic leverage of no longer depending on any particular set of workers.

I might also have mentioned automated trading systems, which are making the markets swing much more wildly much more often. Later, Abraham Campbell, a computer scientist working in augmented reality at University College Dublin, said as much as 25 percent of trading is now done by bots. So, cool: Wall Street has become like one of those old IRC channels where you met a cute girl named Eliza...

Campbell had a second example: Siri, which will tell you where to hide a dead body but not where you might get an abortion. Google's removal of torrent sites from its autosuggestion/Instant feature didn't seem to me egregious censorship, partly because there are other search engines and partly (short-sightedly) because I hate Instant so much already. But as we become increasingly dependent on mediators to help us navigate our overcrowded world, the agenda and/or competence of the people programming them are vital to know. These will be transparent only as long as there are alternatives.

Simultaneously, back in England in work that would have made Jessica Mitford proud, Privacy International's Eric King and Emma Draper were publishing material that rather better proves the point. Big Brother Inc lays out the dozens of technology companies from democratic Western countries that sell surveillance technologies to repressive regimes. King and Draper did what Mitford did for the funeral business in the late 1960s (and other muckrakers have done since): investigate what these companies' marketing departments tell prospective customers.

I doubt businesses will ever, without coercion, behave like humans with consciences; it's why they should not be legally construed as people. During last night's debate, the prospective robots were compared to women and "other races", who were also denied the vote. Yes, and they didn't get it without a lot of struggle. In the "Robocalypse" (O'Beirne), they'd better be prepared to either a) fight to meltdown for their rights or b) protect their energy sources and wait patiently for the human race to exterminate itself.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 3, 2009

Copyright encounters of the third dimension

Somewhere around 2002, it occurred to me that the copyright wars we're seeing over digitised intellectual property - music, movies, books, photographs - might, in the not-unimaginable future, be repeated, this time with physical goods. Even if you don't believe that molecular manufacturing will ever happen, 3D printing and rapid prototyping machines offer the possibility of being able to make a large number of identical copies of physical goods that until now were difficult to replicate without investing in and opening a large manufacturing facility.

Lots of people see this as a good thing. Although: Chris Phoenix, co-founder of the Center for Responsible Nanotechnology, likes to ask, "Will we be retired or unemployed?"

In any case, I spent some years writing a book proposal that never went anywhere, and then let the idea hang around uselessly, like a human in a world where robots have all the jobs.

Last week, at the University of Edinburgh's conference on governance of new technologies (which I am very unhappy to have missed), RAF engineer turned law student Simon Bradshaw presented a paper on the intellectual property consequences of "low-cost rapid prototyping". If only I'd been a legal scholar...

It turns out that as a legal question rapid prototyping has barely been examined. Bradshaw found nary a reference in a literature search. Probably most lawyers think this stuff is all still just science fiction. But, as Bradshaw does, make some modest assumptions, and you find that perhaps three to five years from now we could well be having discussions about whether Obama was within the intellectual property laws to give the Queen a printed-out, personalized iPod case designed to look like Elvis, whose likeness and name are trademarked in the US. Today's copyright wars are going to seem so *simple*.

Bradshaw makes some fairly reasonable assumptions about this timeframe. Until recently, you could pay anywhere from $20,000 to $1.5 million for a fabricator/3D printer/rapid prototyping machine. But prices and sizes are dropping and functionality is going up. Bradshaw puts today's situation on a par with the state of personal computers in the late 1970s, the days of the Commodore PET and the Apple II and home kits like the Sinclair MK14. Let's imagine, he says, the world of the second-generation fabricator: the size of a color laser printer, cost $1,000 or less, fed with readily available plastic, better than 0.1mm resolution (and in color), 20cm cube maximum size, and programmable by enthusiasts.

As the UK Intellectual Property Office will gladly tell you, there are four kinds of IP law: copyright, patent, trademark, and design. Of these, design is by far the least known; it's used to protect what the US likes to call "trade dress", that is, the physical look and feel of a particular item. Apple, for example, which rarely misses a trick when it comes to design, applied for a trademark on the iPhone's design in the US, and most likely registered it under the UK's design right as well. Why not? Registration is cheap (around £200), and the iPhone design was genuinely innovative.

As Bradshaw analyzes it, all four of these types of IP law could apply to objects created using 3D printing, rapid prototyping, fabricating...whatever you want to call it. And those types of law will interact in bizarre and unexpected ways - and, of course, differently in different countries.

For example: in the UK, a registered design can be copied if it's done privately and for non-commercial use. So you could, in the privacy of your home, print out copies of a test-tube stand (in Bradshaw's example) whose design is registered. You could not do it in a school to avoid purchasing them.

Parts of the design right are drafted so as to prevent manufacturers from using the right to block third parties from making spare parts. So using your RepRap to make a case for your iPod is legal as long as you don't copy any copyrighted material that might be floating around on the surface of the original. Make the case without Elvis.

But when is an object just an object and when is it a "work of artistic merit"? Because if what you just copied is a sculpture, you're in violation of copyright law. And here, Bradshaw says, copyright law is unhelpfully unclear. Some help has come from the recent ruling in Lucasfilm v Ainsworth, the case about the stormtrooper helmets copied from the first Star Wars movie. Is a 3D replica of a 2D image a derivative work?

Unsurprisingly, it looks like US law is less forgiving. In the helmet case, US courts ruled in favor of Lucasfilm; UK courts drew a distinction between objects that had been created for artistic purposes in their own right and those that hadn't.

And that's all without even getting into the fact that if everyone has a fabricator there are whole classes of items that might no longer be worth selling. In that world, what's going to be worth paying for is the designs that drive the fabricators. Think knitted Dr Who puppets, only in 3D.

It's all going to be so much fun, dontcha think?

Update (1/26/2012): Simon Bradshaw's paper is now published here.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 21, 2008

The art of the impossible

So the question of last weekend very quickly became: how do you tell plausible fantasy from wild possibility? It's a good conversation starter.

One friend had a simple assessment: "They are all nuts," he said, after glancing over the weekend's program. The problem is that 150 years ago anyone predicting today's airline economy class would also have sounded nuts.

Last weekend's (un)conference was called Convergence, but the description tried to convey the sense of danger of crossing the streams. The four elements that were supposed to converge: computing, biotech, cognitive technology, and nanotechnology. Or, as the four-colored conference buttons and T-shirts had it, biotech, infotech, cognotech, and nanotech.

Unconferences seem to be the current trend. I'm guessing, based on very little knowledge, that the format was started by Tim O'Reilly's FOO camps or possibly the long-running invitation-only Hackers conference. The basic principle is: collect a bunch of smart, interesting, knowledgeable people and they'll construct their own program. After all, isn't the best part of all conferences the hallway chats and networking, rather than the talks? Having been to one now (yes, a very small sample), I think in most cases I'm going to prefer the organized variety: there's a lot to be said for a program committee that reviews the proposals.

The day before, the Center for Responsible Nanotechnology ran a much smaller seminar on Global Catastrophic Risks. It made a nice counterweight: the weekend was all about wild visions of the future; the seminar was all about the likelihood of our being wiped out by biological agents, astronomical catastrophe, or, most likely, our own stupidity. Favorite quote of the day, from Anders Sandberg: "Very smart people make very stupid mistakes, and they do it with surprising regularity." Sandberg learned this, he said, at Oxford, where he is a philosopher in the Future of Humanity Institute.

Ralph Merkle, co-inventor of public key cryptography, now working on diamond mechanosynthesis, said to start with physics textbooks, most notably the evergreen classic by Halliday and Resnick. You can see his point: if whatever-it-is violates the laws of physics it's not going to happen. That at least separates the kinds of ideas flying around at Convergence and the Singularity Summit from most paranormal claims: people promoting dowsing, astrology, ghosts, or ESP seem to be about as interested in the laws of physics as creationists are in the fossil record.

A sidelight: after years of The Skeptic, I'm tempted to dismiss as fantasy anything where the proponents tell you that it's just your fear that's preventing you from believing their claims. I've had this a lot - ghosts, alien spacecraft, alien abductions, apparently these things are happening all over the place and I'm just too phobic to admit it. Unfortunately, the behavior of adherents to a belief just isn't evidence that it's wrong.

Similarly, an idea isn't wrong just because its requirements are annoying. Do I want to believe that my continued good health depends on emulating Ray Kurzweil and taking 250 pills a day and a load of injections weekly? Certainly not. But I can't prove it's not helping him. I can, however, joke that it's like those caloric restriction diets - doing it makes your life *seem* longer.

Merkle's other criterion: "Is it internally consistent?" This one's harder to assess, particularly if you aren't a scientific expert yourself.

But there is the technique of playing the man instead of the ball. Merkle, for example, is a cryonicist and is currently working on diamond mechanosynthesis. Put more simply, he's busy designing the tools that will be needed to build things atom by atom when - if - molecular manufacturing becomes a reality. If that sounds nutty, well, Merkle has earned the right to steam ahead unworried because his ideas about cryptography, which have become part of the technology we use every day to protect ecommerce transactions, were widely dismissed at first.

Analyzing language is also open to the scientifically less well-educated: do the proponents of the theory use a lot of non-standard terms that sound impressive but on inspection don't seem to mean anything? It helps if they can spell, but that's not a reliable indicator - snake oil salesmen can be very professional, and some well-educated excellent scientists can't spell worth a damn.

The Risks seminar threw out a useful criterion for assessing scenarios: would it make a good movie? If your threat to civilization can be easily imagined as a line delivered by Bruce Willis, it's probably unlikely. It's not a scientifically defensible principle, of course, but it has a lot to recommend it. In human history, what's killed the most people while we're worrying about dramatic events like climate change and colliding asteroids? Wars and pandemics.

So, where does that leave us? Waiting for deliverables, of course. Even if a goal sounds ludicrous, working towards it may still produce useful results. A project like Aubrey de Grey's ideas about "curing aging" by developing techniques for directly repairing damage (or SENS, for Strategies for Engineered Negligible Senescence) seems a case in point. And life extension is the best hope for all of these crazy ideas. Because, let's face it: if it doesn't happen in our lifetime, it was impossible.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 31, 2008

Machine dreams

Just how smart are humans anyway? Last week's Singularity Summit spent a lot of time talking about the exact point at which computer processing power would match that of the human brain, but that's only the first step. There's the software to make the hardware do stuff, and then there's the whole question of consciousness. At that point, you've strayed from computer science into philosophy and you might as well be arguing about angels on the heads of pins. Of course everyone hopes they'll be alive to see these questions settled, but in the meantime all we have is speculation and the snide observation that it's typical that a roomful of smart people would think that all problems can be solved by more intelligence.

So I've been trying to come up with benchmarks for what constitutes artificial intelligence, and the first thing I think is that the Turing test is probably too limited. In it, a judge has to determine which of two typing correspondents is the machine and which the human. That's fine as far as it goes, but one of the consistent threads that run through all this is a noticeable disdain for human bodies.

While our brain power is largely centralized, it still seems to me likely that both the brain's grey matter and the rest of our bodies are an important part of the substrate. How we move through space, how our bodies react and feed our brains is part and parcel of how our minds work, however much we may wish to transcend biology. The fact that we can watch films of bonobos and chimpanzees and recognise our own behaviour in their interactions should show us that we're a lot closer to most animal species than we think - and a lot further from most machines.

For that sort of reason, the Turing test seems limited. A computer passes that test if, when paired against a human, the judge can't tell which is which. At the moment, it seems clear the winner is going to be spambots - some spam messages are already devised cleverly enough to fool even Net-savvy individuals into opening them sometimes. But they're hardly smart - they're just programmed that way. And a lot depends on the capability of the judge - some people even find Eliza convincing, though it's incredibly easy to send it off-course into responses that are clearly those of a machine. Find a judge who wants to believe and you're into the sort of game that self-styled psychics like to play.
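(To make "just programmed that way" concrete, here is a minimal, hypothetical Eliza-style responder in Python - a few pattern-matching rules and a stock fallback, nothing like Weizenbaum's actual script - which shows how quickly an input outside the rules produces an obviously mechanical reply:)

# A toy Eliza-style responder: a handful of regex rules plus canned
# reflections. There is no understanding here to "send off-course" -
# only pattern matching, with a stock reply for anything unmatched.
import re

RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]
FALLBACK = "Please go on."

def respond(line):
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I need a holiday"))                        # Why do you need a holiday?
print(respond("Colourless green ideas sleep furiously"))  # Please go on.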

Nor can we judge a superhuman intelligence by the intractable problems it solves. One of the more evangelist speakers last weekend talked about being able to instantly create tall buildings via nanotechnology. (I was, I'm afraid, irresistibly reminded of that Bugs Bunny cartoon where Marvin pours water on beans to produce instant Martians to get rid of Bugs.) This is clearly just silly: you're talking about building a gigantic building out of molecules. I don't care how many billions of nanobots you have, the sheer scale means it's going to take time. And, as Kevin Kelly has written, no matter how smart a machine is, figuring out how to cure cancer or roll back aging won't be immediate either because you can't really speed up the necessary experiments. Biology takes time.

Instead, one indicator might be variability of response; that is, that feeding several machines the same input - or giving the same machine the same input at different times - produces different, equally valid interpretations. If, for example, you give a 10th grade class Jane Austen's Pride and Prejudice to read and report on, different students might with equal legitimacy describe it as a historical account of the economic forces affecting 18th century women, a love story, the template for romantic comedy, or even the story of the plain sister in a large family whose talents were consistently overlooked until her sisters got married.

In The Singularity Is Near, Ray Kurzweil laments that each human must read a text separately and that knowledge can't be quickly transferred from one to another the way a speech recognition program can be loaded into a new machine in seconds - but that's the point. Our strength is that our intelligences are all different, and we aren't empty vessels into which information is poured but stews in which new information causes varying chemical reactions.

You might argue that search engines can already do this, in that you don't get the same list of hits if you type the same keywords into Google versus Yahoo! versus Ask.com, and if you come back tomorrow you may get a different response from any one of them. That's true. It isn't the kind of input I had in mind, but fair enough.

The other benchmark that's occurred to me so far is that machines will be getting really smart when they get bored.

ZDNet UK editor Rupert Goodwins has a variant on this from when he worked at Sinclair Research. "If it went out one evening, drank too much, said the next morning, 'never again' and repeated the exercise immediately. Truly human." But see? There again: a definition of human intelligence that requires a body.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

September 26, 2008

Wimsey's whimsy

One of the things about living in a foreign country is this: every so often the actual England I live in collides unexpectedly with the fictional England I grew up with. Fictional England had small, friendly villages with murders in them. It had lowering, thick fogs and grim, fantastical crimes solvable by observation and thought. It had mathematical puzzles before breakfast in a chess game. The England I live in has Sir Arthur Conan Doyle's vehement support for spiritualism, traffic jams, overcrowding, and four million people who read The Sun.

This week, at the GikIII Workshop, in a break between Internet futures, I wandered out onto a quadrangle of grass so brilliantly and perfectly green that it could have been an animated background in a virtual world. Overlooking it were beautiful, stolid, very old buildings. It had a sign: Balliol College. I was standing on the quad where, "One never failed to find Wimsey of Balliol planted in the center of the quad and laying down the law with exquisite insolence to somebody." I know now that many real people came out of Balliol (three kings, three British prime ministers, Aldous Huxley, Robertson Davies, Richard Dawkins, and Graham Greene) and that those old buildings date to 1263. Impressive. But much more startling to be standing in a place I first read about at 12 in a Dorothy Sayers novel. It's as if I spent my teenaged years fighting alongside Angel avatars and then met David Boreanaz.

Organised jointly by Ian Brown at the Oxford Internet Institute and the University of Edinburgh's Script-ed folks, GikIII (pronounced "geeky") is a small, quirky gathering that studies serious issues by approaching them with a screw loose. For example: could we control intelligent agents with the legal structure the Ancient Romans used for slaves (Andrew Katz)? How sentient is a robot sex toy? Should it be legal to marry one? And if my sexbot rapes someone, are we talking lawsuit, deactivation, or prison sentence (Fernando Barrio)? Are RoadRunner cartoons all patent applications for devices thought up by Wile E. Coyote (Caroline Wilson)? Why is The Hound of the Baskervilles a metaphor for cloud computing (Miranda Mowbray)?

It's one of the characteristics of modern life that although questions like these sound as practically irrelevant as "how many angels, infinitely large, can fit on the head of a pin, infinitely small?", which may (or may not) have been debated here seven and a half centuries ago, they matter. Understanding the issues they raise is part of preparing for the net.wars of the future.

In fact, Sherlock Holmes's pursuit of the beast is metaphorical; Mowbray was pointing out the miasma of legal issues for cloud computing. So far, two very different legal directions seem likely as models: the increasingly restrictive EULAs common to the software industry, and the service-level agreements common to network outsourcing. What happens if the cloud computing company you buy from doesn't pay its subcontractors and your data gets locked up in a legal battle between them? The terms and conditions in effect for Salesforce.com warn that the service has 30 days to hand back your data if you terminate, a long time in business. Mowbray suggests that the most likely outcome is EULAs for the masses and SLAs at greater expense for those willing to pay for them.

On social networks, of course, there are only EULAs, and the question is whether interoperability is a good thing or not. If the data people put on social networks ("shouldn't there be a separate disability category for stupid people?" someone asked) can be easily transferred from service to service, won't that make malicious gossip even more global and permanent? A lot of the issues Judith Rauhofer raised in discussing the impact of global gossip are not new to Facebook: we have a generation of 35-year-olds coping with the globally searchable history of their youthful indiscretions on Usenet. (And WELL users saw the newly appointed CEO of a large tech company delete every posting he had made in his younger, more drug-addled 1980s.) The most likely solution to that particular problem is time. People arrested as protesters and marijuana smokers in the 1960s can be bank presidents now; in a few years the work force will be full of people with Facebook/MySpace/Bebo misdeeds and no one will care except as something to laugh at drunkenly, late at night, in the pub.

But what Lilian Edwards wants to know is this: if we have or can gradually create the technology to make "every ad a wanted ad" - well, why not? Should we stop it? Online marketing is worth £2.5 billion a year according to Ofcom, and a quarter of the UK's children spend 22 hours a week playing computer games, where there is no regulation of industry ads and where Web 2.0 is funded entirely by advertising. When TV and the Internet roll together, when in-game is in-TV and your social network merges with megamedia, and MTV is fully immersive, every detail can be personalized product placement. If I were growing up five years from now, my fictional Balliol might feature Angel driving across the quad in a Nissan Prairie past a billboard advertising airline tickets.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

September 21, 2007

The summer of lost hats

I seem to have spent the summer dodging in and out of science fiction novels featuring four general topics: energy, security, virtual worlds, and what someone at the last conference called "GRAIN" technologies (genetic engineering, robotics, AI, and nanotechnology). So the summer started with doom and gloom and got progressively more optimistic. Along the way, I have mysteriously lost a lot of hats. The phenomena may not be related.

I lost the first hat in June, a Toyota Motor Racing hat (someone else's joke; don't ask), while I was reading the first of many very gloomy books about the end of the world as we know it. Of course, TEOTWAWKI has been oft-predicted, and there is, as Damian Thompson, the Telegraph's former religious correspondent, commented when I was writing about Y2K, a "wonderful and gleeful attention to detail" in these grand warnings. Y2K was a perfect example: a timetable posted to comp.software.year-2000 had the financial system collapsing around April 1999 and the cities starting to burn in October…

Energy books can be logically divided into three categories. One, apocalyptics: fossil fuels are going to run out (and sooner than you think), the world will continue to heat up, billions will die, and the few of us who survive will return to hunting, gathering, and dying young. Two, deniers: fossil fuels aren't going to run out, don't be silly, and we can tackle global warming by cleaning them up a bit. Here. Have some clean coal. Three, optimists: fossil fuels are running out, but technology will help us solve both that and global warming. Have some clean coal and a side order of photovoltaic panels.

I tend, when not wracked with guilt for having read 15 books and written 30,000 words on the energy/climate crisis and then spent the rest of the summer flying approximately 33,000 miles, toward optimism. People can change – and faster than you think. Ten years ago, you'd have been laughed off the British Isles for suggesting that in 2007 everyone would be drinking bottled water. Given the will, ten years from now everyone could have a solar collector on their roof.

The difficulty is that at least two of those takes on the future of energy encourage greater consumption. If we're all going to die anyway and the planet is going inevitably to revert to the Stone Age, why not enjoy it while we still can? All kinds of travel will become hideously expensive and difficult; go now! If, on the other hand, you believe that there isn't a problem, well, why change anything? The one group who might be inclined toward caution and saving energy is the optimists – technology may be able to save us, but we need time to create and deploy it. The more careful we are now, the longer we'll have to do that.

Unfortunately, that's cautious optimism. While technology companies, who have to foot the huge bills for their energy consumption, are frantically trying to go green for the soundest of business reasons, individual technologists don't seem to me to have the same outlook. At Black Hat and Defcon, for example (lost hats number two and three: a red Canada hat and a black Black Hat hat), among all the many security risks that were presented, no one talked about energy as a problem. I mean, yes, we have all those off-site backups. But you can take out a border control system as easily with an electrical power outage as you can by swiping an infected RFID passport across a reader to corrupt the database. What happens if all the lights go out, we can't get them back on again, and everything was online?

Reading all those energy books changes the lens through which you view technical developments somewhat. Singapore's virtual worlds are a case in point (lost hat: a navy-and-tan Las Vegas job): everyone is talking about what kinds of laws should apply to selling magic swords or buying virtual property, and all the time in the back of your mind is the blog posting that calculated that the average Second Life avatar consumes as much energy as the average Brazilian. And emits as much carbon as driving an SUV for 2,000 miles. Bear in mind that most SL avatars aren't fired up that often, and the suggestion that we could curb energy consumption by having virtual conferences instead of physical ones seems less realistic. (Though we could, at least, avoid airport security.) In this, as in so much else, the science fiction writer Vernor Vinge seems to have gotten there first: his book Marooned in Realtime looks at the plight of a bunch of post-Singularity augmented humans knowing their technology is going to run out.

It was left to the most science fictional of the conferences, last week's Center for Responsible Nanotechnology conference (my overview is here), to talk about energy. In wildly optimistic terms: technology will not only save us but make us all rich as well.

This was the one time all summer I didn't lose any hats (a red Swiss one everyone thought was Red Cross, and a turquoise Arizona one I bought just in case). If you can keep your hat while all around you everyone is losing theirs…

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

December 29, 2006

Resolutions for 2007

A person can dream, right?

- Scrap the UK ID card. Last week's near-buried Strategic Action Plan for the National Identity Scheme (PDF) included two big surprises. First, that the idea of a new, clean, all-in-one National Identity Register is being scrapped in favor of using systems already in use in government departments; second, that foreign residents in the UK will be tapped for their biometrics as early as 2008. The other thing that's new: the bald, uncompromising statement that it is government policy to make the cards compulsory.

No2ID has pointed out the problems with the proposal to repurpose existing systems, chiefly that they were not built to provide the security the legislation promised. The notion is still that everyone will be re-enrolled with a clean, new database record (at one of 69 offices around the country), but we still have no details of what information will be required from each person or how the background checks will be carried out. And yet, this is really the key to the whole plan: the project to conduct background checks on all 60 million people in the UK and record the results. I still prefer my idea from 2005: have the ID card if you want, but lose the database.

The Strategic Action Plan includes the list of purposes of the card; we're told it will prevent illegal immigration and identity fraud, become a key "defence against crime and terrorism", "enhance checks as part of safeguarding the vulnerable", and "improve customer service".

Recall that none of these things was the stated purpose of bringing in an identity card when all this started, back in 2002. Back then, first it was to combat terrorism, then it was an "entitlement card" and the claim was that it would cut benefit fraud. I know only a tiny mind criticizes when plans are adapted to changing circumstances, but don't you usually expect the purpose of the plans to be at least somewhat consistent? (Though this changing intent is characteristic of the history of ID card proposals going back to the World Wars. People in government want identity cards, and try to sell them with the hot-button issue of the day, whatever it is.)

As far as customer service goes, William Heath has published some wonderful notes on the problem of trust in egovernment that are pertinent here. In brief: trust is in people, not databases, and users trust only systems they help create. But when did we become customers of government, anyway? Customers have a choice of supplier; we do not.

- Get some real usability into computing. In the last two days, I've had distressed communications from several people whose computers are, despite their reasonable and best efforts, virus-infected or simply non-functional. My favourite recent story, though, was the US Airways telesales guy who claimed that it was impossible to email me a ticket confirmation because according to the information in front of him it had already been sent automatically and bounced back, and they didn't keep a copy. I have to assume their software comes with a sign that says, "Do not press this button again."

Jakob Nielsen published a fun piece this week, a list of top ten movie usability bloopers. Throughout movies, computers only crash when they're supposed to, there is no spam, on-screen messages are always easily readable by the camera, and time travellers have no trouble puzzling out long-dead computer systems. But of course the real reason computers are usable in movies isn't some marketing plot by the computer industry but the same reason William Goldman gave for the weird phenomenon that movie characters can always find parking spaces in front of their destination: it moves the plot along. Though if you want to see the ultimate in hilarious consumer struggles with technology, go back to the 1948 version of Unfaithfully Yours (out on DVD!) starring Rex Harrison as a conductor convinced his wife is having an affair. In one of the funniest scenes in cinema, ever, he tries to follow printed user instructions to record a message on an early gramophone.

- Lose the DRM. As Charlie Demerjian writes, the high-def wars are over: piracy wins. The more hostile the entertainment industries make their products to ordinary use, the greater the motivation to crack the protective locks and mass-distribute the results. It's been reasonably argued that Prohibition in the US paved the way for organized crime to take root because people saw bootleggers as performing a useful public service. Is that the future anyone wants for the Internet?

Losing the DRM might also help with the second item on this list, usability. If Peter Gutmann is to be believed, Vista's usability will take a nosedive because of embedded copy protection requirements.

- Converge my phones. Please. Preferably so people all use just the one phone number, but all routing is least-cost to both them and me.

- One battery format to rule them all. Wouldn't life be so much easier if there were just one battery size and specification, and to make a bigger battery you'd just snap a bunch of them together?

Happy New Year!

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

June 30, 2006

Technical enough for government work

Wednesday night was a rare moment of irrelevant glamor in my life, when I played on the Guardian team in a quiz challenge grudge match.

In March, Richard Sarson (intriguingly absent, by the way) accused MPs of not knowing which end was up, technically speaking, and BT funded a test. All good fun.

But Sarson had a serious point: MPs are spending billions and trillions of public funds without the technical knowledge to scrutinize them. His particular focus was the ID card, which net.wars has written about so often. Who benefits from these very large IT contracts besides, of course, the suppliers and contractors? It must come down to Yes, Minister again: commissioning a huge, new IT system gives the Civil Service a lot of new budget and bureaucracy to play with, especially if the ministers don't understand the new system. Expanded budgets are expanded power, we know this, and if the system doesn't work right the first time you need an even bigger budget to fix it with.

And at that point, the issue collided in my mind with this week's other effort, a discussion of Vernor Vinge's ideas of where our computer-ridden world might be going. Because the strangest thing about the world Vernor Vinge proposes in his new book, Rainbows End, is that all the technology pretty much works as long as no one interferes with it. For example: this is a world filled with localizer sensors and wearable computing; it's almost impossible to get out of view of a network node. People decide to go somewhere and snap! a car rolls up and pops open its doors.

I'm wondering if Vinge has ever tried to catch a cab when it was raining in Manhattan.

There are two keys to this world. First: it is awash in so many computer chips that IPv6 might not have enough addresses (yeah, yeah, I know, no electron left behind and all that). Second: each of these chips has a little blocked-off area called the Secure Hardware Environment (SHE), which is reserved for government regulation. SHE enables all sorts of things: detailed surveillance, audit trails, the blocking of undesirable behavior. One of my favorites among Vinge's ideas about this is that the whole system inverts Lawrence Lessig's dictum that code is law into "law is code". When you make new law, instead of having to wait five or ten years until all the computers have been replaced so they conform to the new law, you can just install the new laws as a flash regulatory update. Kind of like Microsoft does now with Windows Genuine Advantage. Or like what I call "idiot stamps" – today's denominationless stamps, intended for people who can never remember what postage costs.

There are a lot of reasons why we don't want this future, despite the convenience of all those magically arriving cars, and despite the fact that Vinge himself says he thinks frictional costs will mean that SHE doesn't work very well. "But it will be attempted, both by the state and by civil special interest petitioners." For example, he said, take the reaction of a representative he met from a British writers' group who thought it was a nightmare scenario – but loved the bit where microroyalties were automatically and immediately transmitted up the chain. "If we could get that, but not the monstrous rest of it…"

For another, "You really need a significant number of people who are willing to be Amish to the extent that they don't allow embedded microprocessors in their lifestyle." Because, "You're getting into a situation where that becomes a single failure point. If all the microprocessors in London went out, it's hard to imagine anything short of a nuclear attack that would be a deadlier disaster."

Still, one of the things that makes this future so plausible is that you don't have to posit the vast, centralized expenditure of these huge public IT projects. It relies instead on a series of developments coming together. There are examples all around us. Manufacturers and retailers are leaping gleefully onto RFID in everything. More and more desktop and laptop computers are beginning to include the Trusted Platform Module, which is intended to provide better security by blocking all unsigned programs from running but as a by-product could also allow the wide-scale, hardware-level deployment of DRM. The business of keeping software updated has become so complex that most people are greatly relieved to be able to make it automatic. People and municipalities all over the place are installing wireless Internet for their own use and sharing it. To make Vinge's world, you wait until people have voluntarily bought or installed much of the necessary infrastructure and then do a Project Lite to hook it up to the functions you want.

What governments would love about the automatic regulatory upgrade is the same thing that the Post Office loves about idiot stamps: you can change the laws (or prices) without anyone's really being aware of what you're doing. And there, maybe, finally, is some real value for those huge, failed IT projects: no one in power can pretend they aren't there. Just, you know, God help us if they ever start being successful.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).