
December 6, 2018

Richard's universal robots

The robot in the video is actually a giant hoist attached to the ceiling. It has big grab bars down at the level of the person sitting on the edge of the bed, waiting. When the bars approach, she grabs them, and lets the robot slowly help her up into a standing position, and then begins to move forward.

This is not how any of us imagines a care robot, but I am persuaded this is more like our future than the Synths in 2015's Humans, which are incredibly humanoid (helpfully for casting) but so, so far from anything ready for deployment. This thing, which Praminda Caleb-Solly showed at work in a demonstration video at Tuesday's The Shape of Things conference, is a work in progress. There are still problems, most notably that your average modern-build English home has neither high enough ceilings nor enough lateral space to accommodate it. My bedroom is about the size of the stateroom in the Marx Brothers movie A Night at the Opera; you'd have to put it in the hall and hope the grab bar assembly could reach through the doorway. But still.

As the news keeps reminding us, the Baby Boomer bulge will soon reach frailty. In industrialized nations, where mobility, social change, and changed expectations have broken up extended families, need will explode. In the next 12 years, Caleb-Solly said, a fifth of people over 80 - 4.8 million people in the UK - will require regular care. Today, the National Health Service is short almost 250,000 staff (a problem Brexit exacerbates wholesale). Somehow, we'll have to find 110,000 people to work in social care in England alone. Technology is one way to help fill that gap. Today, though, 30% of users abandon their assistive technologies; they're difficult to adapt to changing needs, difficult to personalize, and difficult to interact with.

Personally, I am not enthusiastic about having a robot live in my house and report on what I do to social care workers. But I take Caleb-Solly's point when she says, "We need smart solutions that can deal with supporting a healthy lifestyle of quality". That ceiling-hoist robot is part of a modular system that can add functions and facilities as people's needs and capacity change over time.

In movies and TV shows, robot assistants are humanoids, but that future is too far away to help the onrushing 4.8 million. Today's care-oriented robots have biological, but not human, inspirations: the PARO seal, or Pepper, which Caleb-Solly's lab likes because it's flexible and certified for experiments in people's homes. You may wonder what intelligence, artificial or otherwise, a walker needs, but given sensors and computational power the walker can detect how its user is holding it, how much weight it's bearing, whether the person's balance is changing, and help them navigate. I begin to relax: this sounds reasonable. And then she says, "Information can be conveyed to the carer team to assess whether something changed and they need more help," and I close down with suspicion again. That robot wants to rat me out.

There's a simple fix for that: assume the person being cared for has priorities and agency of their own, and have the robot alert them to the changes and let them decide what they want to do about it. That approach won't work in all situations; there are real issues surrounding cognitive decline, fear, misplaced pride, and increasing multiple frailties that make self-care a heavy burden. But user-centered design can't merely mean testing the device with real people with actual functional needs; the concept must extend to ownership of data and decision-making. Still, the robot walker in Caleb-Solly's lab taught her how to waltz. That has to count for something.

The project - CHIRON, for Care at Home using Intelligent Robotic Omni-functional Nodes - is a joint effort between Three Sisters Care, Caleb-Solly's lab, and Shadow Robot, funded with £2 million over two years by Innovate UK.

Shadow Robot was the magnet that brought me here. One of the strangest and most eccentric stories in an already strange and eccentric field, Shadow began circa 1986, when the photographer Richard Greenhill was becalmed on a ship with nothing to do for several weeks but read the manual for the Sinclair ZX 81. His immediate thought: you could control a robot with one of those! His second thought: I will build one.

By 1997, Greenhill's operation was a band of volunteers meeting every week in a north London house filled with bits of old wire and electronics scrounged from junkyards. By then, Greenhill had most of a humanoid with deceptively powerful braided-cloth "air muscles". By my next visit, in 2009, former volunteer Rich Walker had turned Shadow into a company selling a widely respected robot hand, whose customers include NASA, MIT, and Carnegie Mellon. Improbably, the project begun by the man with no degrees, no funding, and no university affiliation has outlasted numerous more famous efforts filled with degree-bearing researchers who used up their funding, published, and disbanded. And now it's contributing robotics research expertise to CHIRON.

Seen on Tuesday, Greenhill was eagerly outlining a future in which we can all build what we need and everyone can live for free. Well, why not?


Illustrations: Praminda Caleb-Solly presenting on Tuesday (Kois Miah); Pepper; Richard Greenhill demonstrating his personally improved scooter.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 9, 2018

Escape from model land

"Models are best for understanding, but they are inherently wrong," Helen Dacre said, evoking robotics engineer Bill Smart on sensors. Dacre was presenting a tool that combines weather forecasts, air quality measurements, and other data to help airlines and other stakeholders quickly assess the risk of flying after a volcanic eruption. In April 2010, when Iceland's Eyjafjallajökull blew its top, European airspace shut down for six days at an estimated overall cost of £1.1 billion. Since then, engine manufacturers have studied the effect of atmospheric volcanic ash on aircraft engines, and are finding that a brief excursion through peak levels of concentration is less damaging than prolonged exposure at lower levels. So, do you fly?

This was one of the projects presented at this week's conference of the two-year-old network Challenging Radical Uncertainty in Science, Society and the Environment (CRUISSE). To understand "radical uncertainty", start with Frank Knight, who in 1921 differentiated between "risk", where the outcomes are unknown but the probabilities are known, and uncertainty, where even the probabilities are unknown. Timo Ehrig summed this up as "I know what I don't know" versus "I don't know what I don't know", evoking Donald Rumsfeld's "unknown unknowns". In radical uncertainty decisions, existing knowledge is not relevant because the problems are new: the discovery of metal fatigue in airline jets; the 2008 financial crisis; social media; climate change. The prior art, if any, is of questionable relevance. And you're playing with live ammunition - real people's lives. By the million, maybe.

How should you change the planning system to increase the stock of affordable housing? How do you prepare for unforeseen cybersecurity threats? What should we do to alleviate the impact of climate change? These are some of the questions that interested CRUISSE founders Leonard Smith and David Tuckett. Such decisions are high-impact, high-visibility, with complex interactions whose consequences are hard to foresee.

It's the process of making them that most interests CRUISSE. Smith likes to divide uncertainty problems into weather and climate. With "weather" problems, you make many similar decisions based on changing input; with "climate" problems your decisions are either a one-off or the next one is massively different. Either way, with climate problems you can't learn from your mistakes: radical uncertainty. You can't reuse the decisions; but you *could* reuse the process by which you made the decision. They are trying to understand - and improve - those processes.

This is where models come in. This field has been somewhat overrun by a specific type of thinking they call OCF, for "optimum choice framework". The idea there is that you build a model, stick in some variables, and tweak them to find the sweet spot. For risks, where the probabilities are known, that can provide useful results - think cost-benefit analysis. In radical uncertainty...see above. But decision makers are tempted to build a model anyway. Smith said, "You pretend the simulation reflects reality in some way, and you walk away from decision making as if you have solved the problem." In his hand-drawn graphic, this is falling off the "cliff of subjectivity" into the "sea of self-delusion".
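To make the OCF mindset concrete, here is a minimal sketch, in Python and with entirely invented numbers, of the kind of calculation it implies: write down a model with a known failure probability, sweep a decision variable, and read off the sweet spot. The point of the critique is that under radical uncertainty the line where the probability gets written down is exactly the step you cannot honestly perform.

```python
# Illustrative only: a toy "optimum choice framework" calculation with
# invented numbers. It finds a sweet spot only because the failure
# probability is written down as if it were known - the very assumption
# radical uncertainty takes away.
import math

def expected_net_benefit(spend: float) -> float:
    """Toy cost-benefit model: spending reduces an assumed-known failure risk."""
    baseline_loss = 10_000.0                    # loss if the bad event happens
    p_failure = 0.3 * math.exp(-spend / 200.0)  # the heroic assumption
    return -spend - p_failure * baseline_loss   # mitigation cost plus expected loss

candidates = range(0, 1001, 25)                   # tweak the variable...
best = max(candidates, key=expected_net_benefit)  # ...and pick the sweet spot
print(f"'Optimal' spend: {best} (expected net benefit {expected_net_benefit(best):.0f})")
```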

Uncertainty can come from anywhere. Kris de Meyer is studying what happens if the UK's entire national electrical grid crashes. Fun fact: it would take seven days to come back up. *That* is not uncertain. Nor are the consequences: nothing functioning, dark streets, no heat, no water after a few hours for anyone dependent on pumping. Soon, no phones unless you still have copper wire. You'll need a battery or solar-powered radio to hear the national emergency broadcast.

The uncertainty is this: how would 65 million modern people react in an unprecedented situation where all the essentials of life are disrupted? And, the key question for the policy makers funding the project, what should government say? *Don't* fill your bathtub with water so no one else has any? *Don't* go to the hospital, which has its own generators, to charge your phone?

"It's a difficult question because of the intention-behavior gap," de Meyer said. De Meyer is studying this via "playable theater", an effort that starts with a story premise that groups can discuss - in this case, stories of people who lived through the blackout. He is conducting trials for this and other similar projects around the country.

In another project, Catherine Tilley is investigating the claim that machines will take all our jobs. Tilley finds two dominant narratives. In one, jobs will change, not disappear, and automation will bring more of them, along with enhanced productivity and new wealth. In the other, we will be retired...or unemployed. The numbers in these predictions are very large, but conflicting, so they can't all be right. What do we plan for education and industrial policy? What investments do we make? Should we prepare for mass unemployment, and if so, how?

Tilley identified two common assumptions: tasks that can be automated will be; automation will be used to replace human labor. But interviews with ten senior managers who had made decisions about automation found otherwise. Tl;dr: sectoral, national, and local contexts matter, and the global estimates are highly uncertain. Everyone agrees education is a partial solution - "but for others, not for themselves".

Here's the thing: machines are models. They live in model land. Our future depends on escaping.


Illustrations: David Tuckett and Lenny Smith.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 25, 2018

The Rochdale hypothesis

First, open a shop. Thus the pioneers of Rochdale, Lancashire, began the process of building their town. Faced with the loss of jobs and income brought by the Industrial Revolution, a group of 28 people, about half of them weavers, designed the set of Rochdale principles, and set about finding £1 each to create a cooperative that sold a few basics. Ten years later, Wikipedia tells us, Britain was home to thousands of imitators: cooperatives became a movement.

Could Rochdale form the template for building a public service internet?

This was the endpoint of a day-long discussion held as part of MozFest and led by a rogue band from the BBC. Not bad, considering that it took us half the day to arrive at three key questions: What is public? What is service? What is internet?

Pause.

To some extent, the question's phrasing derives from the BBC's remit as a public service broadcaster. "Public service" is the BBC's actual mandate; broadcasting, the activity it's usually identified with, is only the means by which it fulfills that mission. There might be - are - other choices. To educate, to inform, to entertain: that is its mandate, and none of it says radio or TV.

Probably most of the BBC's many global admirers don't realize how broadly the BBC has interpreted that. In the 1980s, it commissioned a computer - the BBC Micro, built by Acorn, which spawned ARM, whose chips today power smartphones - and a series of TV programs to teach the nation about computing. In the early 1990s, it created a dial-up Internet Service Provider to help people get online. Some ten or 15 years ago I contributed to an online guide to the web for an audience with little computer literacy. This kind of thing goes way beyond what most people - for example, Americans - mean by "public broadcasting".

But, as Bill Thompson explained in kicking things off, although 98% of the public has some exposure to the BBC every week, the way people watch TV is changing. Two days later, the Guardian reported that the broadcasting regulator, Ofcom, believes the BBC is facing an "existential crisis" because the younger generation watches significantly less television. An eighth of young people "consume no BBC content" in any given week. When everyone can access the best of TV's back catalogue on a growing array of streaming services, and technology giants like Netflix and Amazon are spending billions to achieve worldwide dominance, the BBC must change to find new relevance.

So: the public service Internet might be a solution. Not, as Thompson went on to say, the Internet to make broadcasting better, but the Internet to make *society* better. Few other organizations in the world could adopt such a mission, but it would fit the BBC's particular history.

Few of us are happy with the Internet as it is today. Mozilla's 2018 Internet Health Report catalogues problems: walled gardens, constant surveillance to exploit us by analyzing our data, widespread insecurity, and increasing censorship.

So, again: what does a public service Internet look like? What do people need? How do you avoid the same outcome?

"Code is law," said Thompson, citing Lawrence Lessig's first book. Most people learned from that book that software architecture could determine human behaviour. He took a different lesson: "We built the network, and we can change it. It's just a piece of engineering."

Language, someone said, has its limits when you're moving from rhetoric to tangible service. Canada, they said, renamed the Internet "basic service" - but it changed nothing. "It's still concentrated and expensive."

Also: how far down the stack do we go? Do we rewrite TCP/IP? Throw out the web? Or start from outside and try to blow up capitalism? Who decides?

At this point an important question surfaced: who isn't in the room? (Everyone except the 30 or so of us actually in it, but don't get snippy.) Last week, the Guardian reported that the growth of Internet access is slowing - a lot. UN data, to be published next month by the Web Foundation, shows growth dropped from 19% in 2007 to less than 6% in 2017. The report estimates that it will be 2019, two years later than expected, before half the world is online, and large numbers may never get affordable access. Most of the 3.8 billion unconnected are rural poor, largely women, and they are increasingly marginalized.

The Guardian notes that many see no point in access. There's your possible starting point. What would make the Internet valuable to them? What can we help them build that will benefit them and their communities?

Last week, the New York Times suggested that conflicting regulations and norms are dividing the Internet into three: Chinese, European, and American. They're thinking small. Reversing the Internet's increasing concentration and centralization can't be done by blowing up the center, because the center will fight back. But decentralizing by building cooperatively at the edges...that is a perfectly possible future consonant with the Internet's past, even if we can't really force clumps of hipsters to build infrastructure in former industrial towns by luring them there with cheap housing. Cue Thompson again: he thought of this before, and he can prove it: here's his 2000 manifesto on e-mutualism.

Building public networks in the many parts of Britain where access is a struggle...that sounds like a public service remit to me.

Illustrations: The Unity sculpture, commemorating the 150th anniversary of the Rochdale Pioneers (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 18, 2018

Not the new oil

"Does data age like fish or like wine?" the economist Diane Coyle asked last week. It was one of a long list of questions she suggested researchers need to answer, in a presentation at the newly-created Ada Lovelace Institute, which is being set up to answer exactly this sort of question. More important, the meeting generally asked, how can data best be used to serve the common good?

This is a relatively new way of looking at things that has been building up over the last year or two - active rather than passive, social rather than economic, and requiring a different approach from traditional discussions of individual privacy. That might mean stewardship - management as a public good - rather than governance according to legal or quasi-legal rules; and a new paradigm for privacy, which for the last decades has been cast as an individual right rather than a social compact. As we have argued here before, it is long since time to change that last bit, a point made by Ivana Bartoletti, head of the data privacy and data protection practice for GemServ.

One of the key questions for Coyle, as an economist, is how to value data - hence the question about how it ages. In one effort, she tried to get price and volume statistics from cloud providers, and found no agreement on how they thought about their business or how they made the decision to build a new data center. Bytes are the easiest to measure - but that's not how they do it. Some thought about the number of data records, or computations per second, but these measures are insufficient without knowing the content.

"Forget 'the new oil'," she said; the characteristics are too different. Well, that's good news in a sense; if data is not the new oil then we don't have to be dinosaur bones or plankton. But given how many businesses have spent the last 20 years building their plans on the presumption that data *is* the new oil, getting them to change that view will be an uphill slog. Coyle appears willing to try: data, she said, is a public good, non-rivalrous in use, and, like many digital goods, with high fixed but low marginal costs. She went on to say, however, that personal data is not valuable, citing the small price you get if you divide Facebook's profits across its many users.

This is, of course, not really true, any more than you can decide between wine and fish: data's value depends on the beholder, the beholder's purpose, the context, and a host of other variables. The same piece of data may be valueless at times and highly valuable at others. A photograph of Brett Kavanaugh and Christine Blasey Ford on that bed in 1982, for example, would have been relatively valueless at the time, and yet be worth a fortune now, whether to suppress or to publish. The economic value might increase as long as it was kept secret - but diminish rapidly once it was made public, while the social value is zero while it's secret but huge if made public. As commodities go, data is weird. Coyle invoked Erwin Schrödinger: you don't know what you've got until you look at it. And even then, you have to keep looking as circumstances change.

That was the opening gambit, but a split rapidly surfaced in the panel, which also included Emma Prest, the executive director of DataKind. Prest and Bartoletti raised issues of consent and ethics, and data turned from a public good into a matter of human rights.

If you're a government or a large company focused on economic growth, then viewing data as a social good means wringing as much profit as you can out of it. That to date has been the direction, leading to amassing giant piles of the stuff and enabling both open and secret trades in surveillance and tracking. One often-proposed response is to apply intellectual property rights; the EU tried something like this in 1996 when it passed the Database Directive, generally unloved today, but this gives organizations rights in databases they compile. It doesn't give individuals property rights over "my" data. As tempting as IP rights might be, one problem is that a lot of data is collaboratively created. "My" medical record is a composite of information I have given doctors and their experience and knowledge-based interpretation. Shouldn't they get an ownership share?

Of course someone - probably a security someone - will be along shortly to point out that ethics, rights, and public goods are not things criminals respect. But this isn't about bad guys. Oil or not, data has always also been a source of power. In that sense, it's heartening to see that so many of these conversations - at the nascent Ada Lovelace Institute, at the St Paul's Institute (PDF), at the LSE, and at Data & Society, to name just a few - are taking place. If AI is about data, robotics is at least partly about AI in a mobile substrate. Eventually, these discussions of the shape of the future public sphere will be seen for what they are: debates over the future distribution of power. Don't tell Whitehall.


Illustrations: Ada Lovelace.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 11, 2018

Lost in transition

"Why do I have to scan my boarding card?" I demanded loudly of the machine that was making this demand. "I'm buying a thing of milk!"

The location was Heathrow Terminal 5. The "thing of milk" was a pint of milk being purchased with a view to a late arrival in a continental European city where tea is frequently offered with "Kaffeesahne", a thick, off-white substance that belongs with tea about as much as library paste does.

A human materialized out of nowhere, and typed in some codes. The transaction went through. I did not know you could do that.

The incident sounds minor - yes, I thanked her - but has a real point. For years, UK airport retailers secured discounts for themselves by demanding to scan boarding cards at the point of purchase while claiming the reason was to exempt the customers from VAT when they are taking purchases out of the country. Just a couple of years ago the news came out: the companies were failing to pass the resulting discounts on to customers and simply pocketing the VAT. Legally, you are not required to comply with the request.

They still ask, of course.

If you're dealing with a human retail clerk, refusing is easy: you say "No" and they move on to completing the transaction. The automated checkout (which I normally avoid), however, is not familiar with No. It is not designed for No. No is not part of its vocabulary unless a human comes along with an override code.

My legal right not to scan my boarding card therefore relies on the presence of an expert human. Take the human out of that loop - or overwhelm them with too many stations to monitor - and the right disappears, engineered out by automation and enforced by the time pressure of having to catch a flight and/or the limited resource of your patience.

This is the same issue that has long been machinified by DRM - digital rights management - and the locks it applies to commercially distributed content. The text of Alice in Wonderland is in the public domain, but wrap it in DRM and your legal rights to copy, lend, redistribute, and modify all vanish, automated out with no human to summon and negotiate with.

Another example: the discount railcard I pay for once a year is renewable online. But if you go that route, you are required to upload your passport, photo driver's license, or national ID card. None of these should really be necessary. If you renew at a railway station, you pay your money and get your card, no identification requested. In this example the automation requires you to submit more data and take greater risk than the offline equivalent. And, of course, when you use a website there's no human to waive the requirement and restore the status quo.

Each of these services is designed individually. There is no collusion, and yet the direction is uniform.

Most of the discussion around this kind of thing - rightly - focuses on clearly unjust systems with major impact on people's lives. The COMPAS recidivism algorithm, for example, is used to risk-assess the likelihood that a criminal defendant will reoffend. A ProPublica study found that the algorithm tended to produce biased results of two kinds: first, black defendants were more likely than white defendants to be incorrectly rated as high risk; second, white reoffenders were incorrectly classified as low-risk more often than black ones. Other such systems show similar biases, all for the same basic reason: decades of prejudice are baked into the training data these systems are fed. Virginia Eubanks, for example, has found similar issues in systems such as those that attempt to identify children at risk and that appear to see poverty itself as a risk factor.
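For concreteness, the disparity described above is the kind of thing a few lines of code can surface: compute false positive and false negative rates separately for each group. This is a minimal sketch with invented records, not ProPublica's actual analysis, but it shows how a system can look balanced in aggregate while its errors fall unevenly.

```python
# Minimal sketch with invented data: group-wise error rates for a risk score.
# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True),  ("A", False, True), ("A", True, False),
    ("B", False, False), ("B", True, True), ("B", False, True), ("B", False, True),
]

def error_rates(group):
    rows = [r for r in records if r[0] == group]
    negatives = [r for r in rows if not r[2]]        # did not reoffend
    positives = [r for r in rows if r[2]]            # did reoffend
    fpr = sum(r[1] for r in negatives) / len(negatives)      # wrongly rated high risk
    fnr = sum(not r[1] for r in positives) / len(positives)  # wrongly rated low risk
    return fpr, fnr

for g in ("A", "B"):
    fpr, fnr = error_rates(g)
    print(f"group {g}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
```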

By contrast, the instances I'm pointing out seem smaller, maybe even insignificant. But the potential is that over time wide swathes of choices and rights will disappear, essentially automated out of our landscape. Any process can be gamed this way.

At a Royal Society meeting last year, law professor Mireille Hildebrandt outlined the risks of allowing governance through text-driven law - the kind negotiated today in the courts - to atrophy. The danger, she warned, is that through machine deployment and "judgemental atrophy" it will be replaced with administration, overseen by inflexible machines that enforce rules with no room for contestability, which Hildebrandt called "the heart of the rule of law".

What's happening here is, as she said, administration - but it's administration in which our legitimate rights dissipate in a wave of "because we can" automated demands. There are many ways we willingly give up these rights already - plenty of people are prepared to give up anonymity in financial transactions by using all manner of non-cash payment systems, for example. But at least those are conscious choices from which we derive a known benefit. It's hard to see any benefit accruing from the loss of the right to object to unreasonable bureaucracy imposed upon us by machines designed to serve only their owners' interests.


Illustrations: "Kill all the DRM in the world within a decade" (via Wikimedia.).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 27, 2018

We know where you should live

In the memorable panel "We Know Where You Will Live" at the 1996 Computers, Freedom, and Privacy conference, the science fiction writer Pat Cadigan startled everyone, including fellow panelists Vernor Vinge, Tom Maddox, and Bruce Sterling, by suggesting that some time in the future insurance companies would levy premiums for "risk purchases" - beer, junk foods - in supermarkets in real time.

Cadigan may have been proved right sooner than she expected. Last week, John Hancock, a 156-year-old US insurance company, announced it would discontinue underwriting traditional life insurance policies. Instead, in future all its policies will be "interactive"; that is, they will come with the "Vitality" program, under which customers supply data collected by their wearable fitness trackers or smartphones. John Hancock promotes the program, which it says is already used by 8 million customers in 18 countries, as providing discounts and a sort of second reward for "living healthy". In the company's depiction, everyone wins - you get lower premiums and a healthier life, and John Hancock gets your data, enabling it to make more accurate risk assessments and increase its efficiency.

Even then, Cadigan was not the only one with the idea that insurance companies would exploit the Internet and the greater availability of data. A couple of years later, a smart and prescient friend suggested that we might soon be seeing insurance companies offer discounts for mounting a camera on the hood of your car so they could mine the footage to determine blame when accidents occurred. This was long before smartphones and GoPros, but the idea of small, portable cameras logging everything goes back at least to 1945, when Vannevar Bush wrote As We May Think, an essay that imagined something a lot like the web, if you make allowances for storing the whole thing on microfilm.

This "interactive" initiative is clearly a close relative of all these ideas, and is very much the kind of thing University of Maryland professor Frank Pasquale had in mind when writing his book The Black Box Society. John Hancock may argue that customers know what data they're providing, so it's not all that black a box, but the reality is that you only know what you upload. Just like when you download your data from Facebook, you do not know what other data the company matches it with, what else is (wrongly or rightly) in your profile, or how long the company will keep penalizing you for the month you went bonkers and ate four pounds of candy corn. Surely it's only a short step to scanning your shopping cart or your restaurant meal with your smartphone to get back an assessment of how your planned consumption will be reflected in your insurance premium. And from there, to automated warnings, and...look, if I wanted my mother lecturing me in my ear I wouldn't have left home at 17.

There has been some confusion about how much choice John Hancock's customers have about providing their data. The company's announcement is vague about this. However, it does make some specific claims: Vitality policy holders so far have been found to live 13-21 years longer than the rest of the insured population; generate 30% lower hospitalization costs; take nearly twice as many steps as the average American; and "engage with" the program 576 times a year.

John Hancock doesn't mention it, but there are some obvious caveats about these figures. First of all, the program began in 2015. How does the company have data showing its users live so much longer? Doesn't that suggest that these users were living longer *before* they adopted the program? Which leads to the second point: the segment of the population that has wearable fitness trackers and smartphones tends to be more affluent (which tends to favor better health already) and more focused on their health to begin with (ditto). I can see why an insurance company would like me to "engage with" its program twice a day, but I can't see why I would want to. Insurance companies are not my *friends*.

At the 2017 Computers, Privacy, and Data Protection conference, one of the better panels discussed the future for the insurance industry in the big data era. For the insurance industry to make sense, it requires an element of uncertainty: insurance is about pooling risk. For individuals, it's a way of managing the financial cost of catastrophes. Continuously feeding our data into insurance companies so they can more precisely quantify the risk we pose to their bottom line will eventually mean a simple equation: being able to get insurance at a reasonable rate is a pretty good indicator you're unlikely to need it. The result, taken far enough, will be to undermine the whole idea of insurance: if everything is known, there is no risk, so what's the point? Betting on a sure thing is cheating in insurance just as surely as it is in gambling. In the panel, both Katja De Vries and Mireille Hildebrandt noted the sinister side of insurance companies acting as "nudgers" to improve our behavior for their benefit.

So, less "We know where you will live" and more "We know where and how you *should* live."


Illustrations: Pat Cadigan (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 27, 2018

Think horses, not zebras

These two articles made a good pairing: Oscar Schwartz's critique of AI hype in the Guardian, and Jennings Brown's takedown of IBM's Watson in real-world contexts. Brown's tl;dr: "This product is a piece of shit," a Florida doctor reportedly told IBM in the leaked memos on which Gizmodo's story is based. "We can't use it for most cases."

Watson has had a rough ride lately: in August 2017 Brown catalogued mounting criticisms of the company and its technology; that June, MIT Technology Review did, too. All three agree: IBM's marketing has outstripped Watson's technical capability.

That's what Schwartz is complaining about: even when scientists make modest claims, media and marketing hype them to the hilt. As a result, instead of focusing on design and control issues such as how to encode social fairness into algorithms, we're reading Nick Bostrom's suggestion that an uncontrolled superintelligent AI would kill humanity in the interests of making paper clips or the EU's deliberation about whether robots should have rights. These are not urgent issues, and focusing on them benefits only vendors who hope we don't look too closely at what they're actually doing.

Schwartz's own first example is the Facebook chat bots that were intended to simulate negotiation-like conversations. Just a couple of days ago someone referred to this as bots making up their own language and cited it as an example of how close AI is to the Singularity. In fact, because they lacked the right constraints, they just made strange sentences out of normal English words. The same pattern is visible with respect to self-driving cars.

You can see why: wild speculation drives clicks - excuse me, monetized eyeballs - but understanding what's wrong with how most of us think about accuracy in machine learning is *mathy*. Yet understanding the technology's very real limits is crucial to making good decisions about it.

With medicine, we're all particularly vulnerable to wishful thinking, since sooner or later we all rely on it for our own survival (something machines will never understand). The UK in particular is hoping AI will supply significant improvements because of the vast amount of patient - that is, training - data the NHS has to throw at these systems. To date, however, medicine has struggled to use information technology effectively.

Attendees at We Robot have often discussed what happens when the accuracy of AI diagnostics outstrips that of human doctors. At what point does defying the AI's decision become malpractice? At this year's conference, Michael Froomkin presented a paper studying the unwanted safety consequences of this approach (PDF).

The presumption is that the AI system's ability to call on the world's medical literature on top of generations of patient data will make it more accurate. But there's an underlying problem that's rarely mentioned: the reliability of the medical literature these systems are built on. The true extent of this issue began to emerge in 2005, when John Ioannidis published a series of papers estimating that 90% of medical research is flawed. In 2016, Ioannidis told Retraction Watch that systematic reviews and meta-analyses are also being gamed because of the rewards and incentives involved.

The upshot is that it's more likely to be unclear, when doctors and AI disagree, where to point the skepticism. Is the AI genuinely seeing patterns and spotting things the doctor can't? (In some cases, such as radiology, apparently yes. But clinical trials and peer review are needed.) Does common humanity mean the doctor finds clues in the patient's behavior and presentation that an AI can't? (Almost certainly.) Is the AI neutral in ways that biased doctors may not be? Stories of doctors not listening to patients, particularly women, are legion. Yet the most likely scenario is that the doctor will be the person entering data - which means the machine will rely on the doctor's interpretation of what the patient says. In all these conflicts, what balance do we tell the AI to set?

Long before Watson cures cancer, we will have to grapple with which AIs have access to which research. In 2015, the team responsible for drafting Liberia's ebola recovery plan in 2014 wrote a justifiably angry op-ed in the New York Times. They had discovered that thousands of Liberians could have been spared ebola had a 1982 paper in Annals of Virology been affordable for them to read; it warned that Liberia needed to be included in the ebola virus endemic zone. Discussions of medical AI to date appear to handwave this sort of issue, yet cost structures, business models, and use of medical research are crucial. Is the future open access, licensing and royalties, all-you-can-eat subscriptions?

The best selling point for AI is that its internal corpus of medical research can be updated a lot faster than doctors' brains can be. As David Epstein wrote at ProPublica in 2017, many procedures and practices become entrenched, and doctors are difficult to dissuade from prescribing them even when they've been found useless. In the US, he added, the 21st Century Cures Act, passed in December 2016, threatens to make all this worse by lowering standards of evidence.

All of these are pressing problems no medical AI can solve. The problem, as usual, is us.

Illustrations: Watson wins at Jeopardy (via Wikimedia)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 8, 2018

Block that metaphor

My favourite new term from this year's Privacy Law Scholars conference is "dishonest anthropomorphism". The term appeared in a draft paper written by Brenda Leung and Evan Selinger as part of a proposal for its opposite, "honest anthropomorphism". The authors' goal was to suggest a taxonomy that could be incorporated into privacy by design theory and practice, so that as household robots are developed and deployed they are less likely to do us harm. Not necessarily individual "harm" as in Isaac Asimov's Laws of Robotics, which tended to see robots as autonomous rather than as projections of their manufacturers into our personal space, thereby glossing over this more intentional and diffuse kind of deception. Pause to imagine that Facebook goes into making robots and you can see what we're talking about here.

"Dishonest anthropomorphism" derives from an earlier paper, Averting Robot Eyes by Margo Kaminski, Matthew Rueben, Bill Smart, and Cindy Grimm, which proposes "honest anthropomorphism" as a desirable principle in trying to protect people from the privacy problems inherent in admitting a robot, even something as limited as a Roomba, into your home. (At least three of these authors are regular attendees at We Robot since its inception in 2012.) That paper categorizes three types of privacy issues that robots bring: data privacy, boundary management, and social/relational.

The data privacy issues are substantial. A mobile phone or smart speaker may listen to or film you, but it has to stay where you put it (as Smart has memorably put it, "My iPad can't stab me in my bed"). Add movement and processing, and you have a roving spy that can collect myriad kinds of data to assemble an intimate picture of your home and its occupants. "Boundary management" refers to capabilities humans may not realize their robots have and therefore don't know to protect themselves against - thermal sensors that can see through walls, for example, or eyes that observe us even when the robot is apparently looking elsewhere (hence the title).

"Social/relational" refers to the our social and cultural expectations of the beings around us. In the authors' examples, unscrupulous designers can take advantage of our inclination to apply our expectations of other humans to entice us into disclosing more than we would if we truly understood the situation. A robot that mimics human expressions that we understand through our own muscle memory may be highly deceptive, inadvertently or intentionally. Robots may also be given the capability of identifying micro-reactions we can't control but that we're used to assuming go unnoticed.

A different session - discussing research by Marijn Sax, Natalie Helberger, and Nadine Bol - provided a worked example, albeit one without the full robot component. Specifically, they've been studying mobile health apps. Most of these are obviously aimed at encouraging behavioral change - walk 10,000 steps, lose weight, do yoga. What the authors argue is that they are more aimed at effecting economic change than at encouraging health, an aspect often obscured from users. Quite apart from the wrongness of using an app marketed to improve your health as a vector for potentially unrelated commercial interests, the health framing itself may be questionable. For example, the famed 10,000 steps some apps push you to take daily has no evidence basis in medicine: the number was likely picked for a Japanese pedometer marketing campaign in the 1960s. These apps may also be quite rigid; in one case that came up during the discussion, an injured nurse found she couldn't adapt the app to help her follow her doctor's orders to stay off her feet. In other words, they optimize one thing, which may or may not have anything to do with health or even health's vaguer cousin, "wellness".

Returning to dishonest anthropomorphism, one suggestion was to focus on abuse rather than dishonesty; there are already laws that bar unfair practices and deception. After all, the entire discipline of user design is aimed at nudging users into certain behaviors and discouraging others. With more complex systems, even if the aim is to make the user feel good it's not simple: the same user will react differently to the same choice at different times. Deciding which points to single out in order to calculate benefit is as difficult as trying to decide where to begin and end a movie story, which the screenwriter William Goldman has likened to deciding where to cut a piece of string. The use of metaphor was harmless when we were talking desktops and filing cabinets; much less so when we're talking about a robot cat that closely emulates a biological cat and leads us into the false sense that we can understand it in the same way.

Deception is becoming the theme of the year, perhaps partly inspired by Facebook and Cambridge Analytica. It should be a good thing. It's already clear that neither the European data protection approach nor the US consumer protection approach will be sufficient in itself to protect privacy against the incoming waves of the Internet of Things, big data, smart infrastructure, robots, and AI. As the threats to privacy expand, the field itself must grow in new directions. What made these discussions interesting is that they're trying to figure out which ones.

Illustrations: Recreation of oldest known robot design (from the Ancient Greek Technology exhibition)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 1, 2018

The three IPs

Against last Friday's date history will record two major European events. The first, as previously noted, is the arrival into force of the General Data Protection Regulation, which is currently inspiring a number of US news sites to block Europeans. The second is the amazing Irish landslide vote to repeal the 8th amendment to the country's constitution, which barred legislators from legalizing abortion. The vote led the MEP Luke Ming Flanagan to comment, "I always knew voters were not conservative - they're just a bit complicated."

"A bit complicated" sums up nicely most people's views on privacy; it captures perfectly the cognitive dissonance of someone posting on Facebook that they're worried about their privacy. As Merlin Erroll commented, terrorist incidents help governments claim that giving them enough information will protect you. Countries whose short-term memories include human rights abuses set their balance point differently.

The occasion for these reflections was the 20th birthday of the Foundation for Information Policy Research. FIPR head Ross Anderson noted on Tuesday that FIPR isn't a campaigning organization, "But we provide the ammunition for those who are."

Led by the late Caspar Bowden, FIPR was most visibly activist in the late 1990s lead-up to the passage of the now-replaced Regulation of Investigatory Powers Act (2000). FIPR in general and Bowden in particular were instrumental in making the final legislation less dangerous than it could have been. Since then, FIPR helped spawn the 15-year-old European Digital Rights and UK health data privacy advocate medConfidential.

Many speakers noted how little the debates have changed, particularly regarding encryption and surveillance. In the case of encryption, this is partly because mathematical proofs are eternal, and partly because, as Yes, Minister co-writer Antony Jay said in 2015, large organizations such as governments always seek to impose control. "They don't see it as anything other than good government, but actually it's control government, which is what they want." The only change, as Anderson pointed out, is that because today's end-to-end connections are encrypted, the push for access has moved to people's phones.

Other perennials include secondary uses of medical data, which Anderson debated in 1996 with the British Medical Association. Among significant new challenges, Anderson, like many others, noted the problems of safety and sustainability. The need to patch devices that can kill you changes our ideas about the consequences of hacking. How do you patch a car over 20 years? he asked. One might add: how do you stop a botnet of pancreatic implants without killing the patients?

We've noted here before that built infrastructure tends to attract more of the same. Today, said Duncan Campbell, 25% of global internet traffic transits the UK; Bude, Cornwall remains the critical node for US-EU data links, as in the days of the telegraph. As Campbell said, the UK's traditional position makes it perfectly placed to conduct global surveillance.

One of the most notable changes in 20 years: there were no fewer than two speakers whose open presence would once have been unthinkable: Ian Levy, the technical director of the National Cyber Security Centre, the defensive arm of GCHQ, and Anthony Finkelstein, the government's chief scientific advisor for national security. You wouldn't have seen them even ten years ago, when GCHQ was deploying its Mastering the Internet plan, known to us courtesy of Edward Snowden. Levy made a plea to get away from the angels versus demons school of debate.

"The three horsemen, all with the initials 'IP' - intellectual property, Internet Protocol, and investigatory powers - bind us in a crystal lattice," said Bill Thompson. The essential difficulty he was getting at is that it's not that organizations like Google DeepMind and others have done bad things, but that we can't be sure they haven't. Being trustworthy, said medConfidential's Sam Smith, doesn't mean you never have to check the infrastructure but that people *can* check it if they want to.

What happens next is the hard question. Onora O'Neill suggested that our shiny, new GDPR won't work, because it's premised on the no-longer-valid idea that personal and non-personal data are distinguishable. Within a decade, she said, new approaches will be needed. Today, consent is already largely a façade; true consent requires understanding and agreement.

She is absolutely right. Even today's "smart" speakers pose a challenge: where should my Alexa-enabled host post the privacy policy? Is crossing their threshold consent? What does consent even mean in a world where sensors are everywhere and it may be murky how the data will be used, and by whom? Many of the laws built up over the last 20 years will have to be rethought, particularly as connected medical devices pose new challenges.

One of the other significant changes will be the influx of new and numerous stakeholders whose ideas about what the internet is are very different from those of the parties who have shaped it to date. The mobile world, for example, vastly outnumbers us; the Internet of Things is being developed by Asian manufacturers from a very different culture.

It will get much harder from here, I concluded. In response, O'Neill was not content to leave it there. It's not enough, she said, to point out problems. We must propose at least the bare bones of solutions.


Illustrations: 1891 map of telegraph lines (via Wikimedia)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


May 25, 2018

Who gets the kidney?

At first glance, Who should get the kidney? seemed more reasonable and realistic than MIT's Moral Machine.

To recap: about a year ago, MIT ran an experiment, a variation of the old trolley problem, in which it asked visitors in charge of a vehicle about to crash to decide which nearby beings (adults, children, pets) to sacrifice and which to save. Crash!

As we said at the time, people don't think like that. In charge of a car, you react instinctively to save yourself, whoever's in the car with you, and then try to cause the least damage to everything else. Plus, much of the information the Moral Machine imagined - this stick figure is a Nobel prize-winning physicist; this one is a sex offender - just is not available to a car driver in a few seconds and even if it were, it's cognitive overload.

So, the kidney: at this year's We Robot, researchers offered us a series of 20 pairs of kidney recipients and a small selection of factors to consider: age, medical condition, number of dependents, criminal convictions, drinking habits. And you pick. Who gets the kidney?

Part of the idea as presented is that these people have a kidney available to them but it's not a medical match, and therefore some swapping needs to happen to optimize the distribution of kidneys. This part, which made the exercise sound like a problem AI could actually solve, is not really incorporated into the tradeoffs you're asked to make. Shorn of this ornamentation, Who Gets the Kidney? is a simple and straightforward question of whom to save. Or, more precisely, who in future will prove to have deserved to have been given this second chance at life? You are both weighing the value of a human being as expressed through a modest set of known characteristics and trying to predict the future. In this, it is no different from some real-world systems, such as the benefits and criminal justice systems Virginia Eubanks studies in her recent book, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor.
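The swapping half of the problem, at least, genuinely is algorithm-friendly: it's the kidney paired-exchange problem, in which incompatible donor-recipient pairs trade donors. Here is a minimal sketch, with made-up pairs and blood-type compatibility only; real exchange programs also use tissue typing, longer cycles, and altruistic donor chains, and none of it touches the who-deserves-it question.

```python
# Minimal sketch of kidney paired exchange: find two-way swaps among
# donor-recipient pairs who are incompatible with each other (by blood
# type or, in reality, a positive crossmatch). Invented data throughout.
COMPATIBLE = {  # donor blood type -> recipient blood types it can serve
    "O": {"O", "A", "B", "AB"},
    "A": {"A", "AB"},
    "B": {"B", "AB"},
    "AB": {"AB"},
}

# (pair name, donor blood type, recipient blood type)
pairs = [
    ("P1", "A", "B"),
    ("P2", "B", "A"),
    ("P3", "A", "O"),
    ("P4", "O", "A"),  # blood-type compatible; assume a crossmatch rules it out
]

def can_give(donor, recipient):
    return recipient in COMPATIBLE[donor]

matched = set()
for i, (name_i, donor_i, recip_i) in enumerate(pairs):
    for name_j, donor_j, recip_j in pairs[i + 1:]:
        if name_i in matched or name_j in matched:
            continue
        if can_give(donor_i, recip_j) and can_give(donor_j, recip_i):
            matched.update({name_i, name_j})
            print(f"swap: {name_i}'s donor -> {name_j}'s recipient, "
                  f"{name_j}'s donor -> {name_i}'s recipient")
```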

I found, as did the others in our group, that decision fatigue sets in very quickly. In this case, the goal - to use the choices to form like-minded discussion clusters of We Robot attendees - was not life-changing, and many of us took the third option, flipping a coin.

At my table, one woman felt strongly that the whole exercise was wrong; she embraced the principle that all lives are of equal value. Our society often does not treat them that way, and one reason is obvious: most people, put in charge of a kidney allocation system, want things arranged so that if they themselves need one, they will get it.

Instinct isn't always a good guide, either. Many people, used to thinking in terms of protecting children, and of old people as having "had their chance at life", automatically opt to give the kidney to the younger person. Granted, I'm 64, and see above paragraph, but even so: as distressing as it is to the parents, a baby can be replaced very quickly with modest effort. It is *very* expensive and time-consuming to replace an 85-year-old. It may even be existentially dangerous, if that 85-year-old is the one holding your society's institutional memory. A friend advises that this is a known principle in population biology.

The more interesting point, to me, was discovering that this exercise really wasn't any more lifelike than the Moral Machine. It seemed more reasonable because unlike the driver in the crashing car, kidney patients have years of documentation of their illness and there is time for them, their families, and their friends to fill in further background. The people deciding the kidney's destination are much better informed, and they are operating in the all-too-familiar scenario of allocating scarce resources. And yet: it's the same conundrum, and in the end how many of us want the machine, rather than a human, to decide whether we live or die?

Someone eventually asked: what if we become able to make an oversupply of kidneys? This only solves the top layer of the problem. Each operation has costs in surgeons' time, medical equipment, nursing care, and hospital infrastructure. Absent a disruptive change in medical technology, it's hard to imagine it will ever be easy to give a kidney to everyone who needs one. Say it in food: we actually do grow enough food to supply everyone, but it's not evenly distributed, so in some areas we have massive waste and in others horrible famine (and in some places, both).

Moving to current practice, in a Guardian article Eubanks documents the similar conundrums confronting those struggling to allocate low-income housing, welfare, and other basic needs to poor people in the US in a time of government "austerity". The social workers, policy makers, and data scientists on these jobs have to make decisions that, like the kidney and driving examples, have life-or-death consequences. In this case, as Eubanks puts it, they decide who gets helped among "the most exploited and marginalized people in the United States". The automated systems Eubanks encounters do not lower barriers to programs as promised and, she writes, obscure the political choices that created these social problems in the first place. Automating the response doesn't change those.


Illustrations: Project screenshot.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 20, 2018

Deception

"Why are robots different?" 2018 co-chair Mark Lemley asked repeatedly at this year's We Robot. We used to ask this in the late 1990s when trying to decide whether a new internet development was worth covering. "Would this be a story if it were about telephones?" Tom Standage and Ben Rooney frequently asked at the Daily Telegraph.

The obvious answer is physical risk and our perception of danger. The idea that autonomously moving objects may be dangerous is deeply biologically hard-wired. A plant can't kill you if you don't go near it. Or, as Bill Smart put it at the first We Robot in 2012, "My iPad can't stab me in my bed." Autonomous movement fools us into thinking things are smarter than they are.

It is probably not much consolation to the driver of the crashed autopiloting Tesla or his bereaved family that his predicament was predicted two years ago at We Robot 2016. In a paper, Madeleine Elish called the humans in these partnerships "Moral Crumple Zones": in a human-machine partnership, she argued, the human absorbs all the pressure when something fails, like the crumple zone in a car.

Today, Tesla is fulfilling her prophecy by blaming the driver for not getting his hands onto the steering wheel fast enough when commanded. (Other prior art on this: Dexter Palmer's brilliant 2016 book Version Control.)

As Ian Kerr pointed out, the user's instructions are self-contradictory. The marketing brochure uses the metaphors "autopilot" and "autosteer" to seduce buyers into envisioning a ride of relaxed luxury while the car does all the work. But the legal documents and user manual supplied with the car tell you that you can't rely on the car to change lanes, and you must keep your hands on the wheel at all times. A computer ingesting this would start smoking.

Granted, no marketer wants to say, "This car will drive itself in a limited fashion, as long as you watch the road and keep your hands on the steering wheel." The average consumer reading that says, "Um...you mean I have to drive it?"

The human as moral crumple zone also appears in analyses of the Arizona Uber crash. Even-handedly, Brad Templeton points plenty of blame at Uber and its decisions: the car's LIDAR should have spotted the pedestrian crossing the road in time to stop safely. He then writes, "Clearly there is a problem with the safety driver. She is not doing her job. She may face legal problems. She will certainly be fired." And yet humans are notoriously bad at the job required of her: monitoring a machine. Safety drivers are typically deployed in pairs to split the work - but also to keep each other attentive.

The larger We Robot discussion was partly about public perception of risk, based on a paper (PDF) by Aaron Mannes that discussed how easy it is to derail public trust in a company or new technology when statistically less-significant incidents spark emotional public outrage. Self-driving cars may in fact be safer overall than human drivers despite the fatal crash in Arizona; Mannes's other examples were Three Mile Island, which made the public much more wary of nuclear power, and the Ford Pinto, which spent the 1970s occasionally catching fire.

Mannes suggested that if you have that trust relationship you may be able to survive your crisis. Without it, you're trying to win the public over on "Frankenfoods".

So much was funnier and more light-hearted seven years ago, as a long-time attendee pointed out; the discussions have darkened steadily year by year as theory has become practice and we can no longer think the problems are as far away as the Singularity.

In San Francisco, delivery robots cause sidewalk congestion and make some homeless people feel surveilled; in Chicago and Durham we risk embedding automated unfairness into criminal justice; the egregious extent of internet surveillance has become clear; and the world has seen its first self-driving car road deaths. The last several years have been full of fear about the loss of jobs; now the more imminent dragons are becoming clearer. Do you feel comfortable in public spaces when there's a mobile unit pointing some of its nine cameras at you?

Karen Levy finds that truckers are less upset about losing their jobs than about automation invading their cabs, ostensibly for their safety. Sensors, cameras, and wearables that monitor them for wakefulness, heart health, and other parameters are painful and enraging to this group, who chose their job for its autonomy.

Today's drivers have the skills to step in; tomorrow's won't. Today's doctors are used to doing their own diagnostics; tomorrow's may not be. A paper by Michael Froomkin, Ian Kerr, and Joëlle Pineau (PDF) argues that automation may mean not only deskilling humans (doctors) but also a frozen knowledge base. Many hope that mining historical patient data will expose patterns that enable more accurate diagnostics and treatments. If the machines take over, where will the new approaches come from?

Worse, behind all that is sophisticated data manipulation for which today's internet is providing the prototype. When, as Woody Hartzog suggested, Rocco, your Alexa-equipped Roomba, rolls up to you, fakes a bum wheel, and says, "Daddy, buy me an upgrade or I'll die", will you have the heartlessness to say no?

Illustrations: Pepper and handler at We Robot 2016.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


April 14, 2018

Late, noisy, and wrong

"All sensors are terrible," Bill Smart and Cindy Grimm explained as part of a pre-conference workshop at this year's We Robot. Smart, an engineer at Oregon State with prior history here, loves to explain why robots and AI aren't as smart as people think. "Just a fancy hammer," he said the first year.

Thursday's target was broad: the reality of sensors, algorithms, and machine learning.

One of his slides read:


  • It's all just math and physics.

  • There is no intelligence.

  • It's just a computer program.

  • Sensors turn physics into numbers.

That last one is the crucial bit, and it struck me as surprising only because, in all the years I've read about and glibly mentioned sensors and how many of them are in our phones, they've never really been explained to me. I'm not an electrical engineering student, so like most of us, I wave the words around. Of course I know that digital means numbers, and that computers do calculations with numbers rather than fuzzy things like light and sound, and that therefore the camera in my phone (which is a sensor) stores values describing light levels rather than photographing light the way analogue film did. But I didn't - until Thursday - really know what sensors actually measure. For most purposes, it's OK that my understanding is...let's call it abstract. But it does make it easy to overestimate what the technology can do now and how soon it will be able to fulfil the fantasies of mad scientists.

Smart's point is that when you start talking about what AI can do - whether or not you're using my aspirational intelligence recasting of the term - you'd better have some grasp of what it really is. It means the difference between a blob on the horizon that can be safely ignored and a woman pushing a bicycle across a roadway in front of an oncoming LIDAR-equipped Uber self-driving car.

So he begins with this: "All sensors are terrible." We don't use better ones because either such a thing does not exist or because they're too expensive. They are all "noisy, late, and wrong" and "you can never measure what you want to."

What we want to measure are things like pressure, light, and movement, and because we imagine machines as analogues of ourselves, we want them to feel the pressure, see the light, and understand the movement. However, what sensors can measure is electrical current. So we are always "measuring indirectly through assumptions and physics". This is the point AI Weirdness makes too, more visually, by showing what happens when you apply a touch of surrealism to the pictures you feed through machine learning.

He described what a sensor does this way: "They send a ping of energy into the world. It interacts, and comes back." In the case of LIDAR - he used a group of humans to enact this - a laser pulse is sent out, and the time it takes to return is counted in oscillations of a crystal. This has some obvious implications: you can't measure anything shorter than one oscillation.
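For the arithmetic-minded, here is a minimal sketch in Python of what that quantization implies, assuming a hypothetical 100 MHz timing crystal (the figure is mine, not Smart's). At that clock rate one oscillation is 10 nanoseconds of round trip, which corresponds to roughly 1.5 metres of range, so any difference smaller than that simply vanishes:

    # Minimal sketch of the time-of-flight arithmetic; the crystal
    # frequency and the test distances are illustrative assumptions.
    SPEED_OF_LIGHT = 299_792_458.0   # metres per second
    CRYSTAL_HZ = 100e6               # hypothetical 100 MHz timing crystal
    TICK = 1.0 / CRYSTAL_HZ          # one oscillation = 10 nanoseconds

    def measured_distance(true_distance_m):
        """Return the distance the sensor reports, quantized to whole oscillations."""
        round_trip_time = 2 * true_distance_m / SPEED_OF_LIGHT
        ticks = int(round_trip_time / TICK)   # the counter only sees whole oscillations
        return ticks * TICK * SPEED_OF_LIGHT / 2

    for d in (1.0, 2.0, 2.9, 3.1):
        print(f"true {d:.2f} m -> reported {measured_distance(d):.2f} m")

Real units presumably run faster clocks and combine many pulses, but the underlying point stands: the sensor reports tick counts, not the world.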

Grimm explained that a "time of flight" sensor like that is what cameras - going back to old Kodaks - use to auto-focus. Smartphones are pretty good at detecting a cluster of pixels that looks like a face and using that to focus on. But now let's imagine it's being used in a knee-high robot on a sidewalk to detect legs. In an art installation Smart and Grimm did, they found that it doesn't work in Portland...because of all those hipsters wearing black jeans.

So there are all sorts of these artefacts, and we will keep tripping over them because most of us don't really know what we're talking about. With image recognition, the important thing to remember is that the sensor is detecting pixel values, not things - and a consequence of that is that we don't necessarily know *what* the system has actually decided is important and we can't guarantee what it might be recognizing. So turn machine learning loose on a batch of photos of Audis, and if they all happen to be photographed at the same angle the system won't recognize an Audi photographed at a different one. Teach a self-driving car all the roads in San Francisco and it still won't know anything about driving in Portland.

That circumscription is important. Teach a machine learning system on a set of photos of Abraham Lincoln and a zebra fish, and you get a system that can't imagine it might be a cat. The computer - which, remember, is working with an array of numbers - looks at the numbers in the array and based on what it has identified as significant in previous runs makes the call based on what's closest. It's numbers in, numbers out, and we can't guarantee what it's "recognizing".
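As a toy illustration of that "numbers in, numbers out" point, here is a sketch in which random arrays stand in for photos and a nearest-centroid rule stands in for whatever a real system learns; nothing about it resembles production computer vision, but it shows why an unfamiliar input still gets one of the known labels:

    # Toy "numbers in, numbers out" classifier: flattened pixel arrays,
    # nearest centroid wins. The "photos" are random arrays, purely for
    # illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend training data: 8x8 grayscale "photos" of only two classes.
    lincoln_photos = rng.normal(0.3, 0.05, size=(20, 64))
    zebrafish_photos = rng.normal(0.7, 0.05, size=(20, 64))

    centroids = {
        "Abraham Lincoln": lincoln_photos.mean(axis=0),
        "zebra fish": zebrafish_photos.mean(axis=0),
    }

    def classify(pixels):
        # No concept of "cat" exists here: the function can only report which
        # of the arrays it has already seen this new array is closest to.
        return min(centroids, key=lambda label: np.linalg.norm(pixels - centroids[label]))

    cat_photo = rng.normal(0.5, 0.05, size=64)   # an input from outside the training set
    print(classify(cat_photo))                   # prints one of the two known labels

Whatever arrives, the answer is always the nearest of the labels the system already knows.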

A linguistic change would help make all this salient. LIDAR does not "see" the roadway in front of the car that's carrying it. Google's software does not "translate" language. Software does not "recognize" images. The machine does not think, and it has no gender.

So when Mark Zuckerberg tells Congress that AI will fix everything, consider those arrays of numbers that may interpret a clutch of pixels as Abraham Lincoln when what's there is a zebra fish...and conclude he's talking out of his ass.


Illustrations:

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


March 23, 2018

Aspirational intelligence

"All commandments are ideals," he said. He - Steven Croft, the Bishop of Oxford - had just finished reading out to the attendees of Westminster Forum's seminar (PDF) his proposed ten commandments for artificial intelligence. He has been thinking about this on our behalf: Croft would, for one, ask malware writers not to adopt AI enhancements. Hence the reply.

The first problem is: what counts as AI? Anders Sandberg has quipped that it's only called AI until it starts working, and then it's called automation. Right now, though, to many people "AI" seems to mean "any technology I don't understand".

Croft's commandment number nine seems particularly ironic: this week saw the first pedestrian killed by a self-driving car. Early guesses are that the likely weakest links were the underemployed human backup driver and the vehicle's faulty LIDAR interpretation of a person walking a bicycle. Whatever the jaywalking laws are in Arizona, most of us instinctively believe that in a cage match between a two-ton automobile and an unprotected pedestrian the car is always the one at fault.

Thinking locally, self-driving cars ought to be the most ethics-dominated use of AI, if only because people don't like being killed by machines. Globally, however, you could argue that AI might be better turned to finding the best ways to phase out cars entirely.

We may have better luck at persuading criminal justice systems to either require transparency, fairness, and accountability in machine learning systems that predict recidivism and who can be helped or drop them entirely.

The less-tractable issues with AI are on display in the still-developing Facebook and Cambridge Analytica scandals. You may argue that Facebook is not AI, but the platform certainly uses AI in fraud detection, to determine what we see, and to decide which parts of our data to use on behalf of advertisers. All on its own, Facebook is a perfect exemplar of all the problems Australian privacy advocate Roger Clarke foresaw in 2004 after examining the first social networks. In 2012, Clarke wrote, "From its beginnings and onward throughout its life, Facebook and its founder have demonstrated privacy-insensitivity and downright privacy-hostility." The same could be said of other actors throughout the tech industry.

Yonatan Zunger is undoubtedly right when he argues in the Boston Globe that computer science has an ethics crisis. However, just fixing computer scientists isn't enough if we don't fix the business and regulatory environment built on "ask forgiveness, not permission". Matt Stoller writes in the Atlantic about the decline since the 1970s of American political interest in supporting small, independent players and limiting monopoly power. The tech giants have widely exported this approach; now, the only other government big enough to counter it is the EU.

The meetings I've attended of academic researchers considering ethics issues with respect to big data have demonstrated all the careful thoughtfulness you could wish for. The November 2017 meeting of the Research Institute in Science of Cyber Security provided numerous worked examples in talks from Kat Hadjimatheou at the University of Warwick, C Marc Taylor from the UK Research Integrity Office, and Paul Iganski of the Centre for Research and Evidence on Security Threats (CREST). Their explanations of the decisions they've had to make about the practical applications and cases that have come their way are particularly valuable.

On the industry side, the problem is not just that Facebook has piles of data on all of us but that the feedback loop from us to the company is indirect. Since the Cambridge Analytica scandal broke, some commenters have indicated that being able to do without Facebook is a luxury many can't afford and that in some countries Facebook *is* the internet. That in itself is a global problem.

Croft's is one of at least a dozen efforts to come up with an ethics code for AI. The Open Data Institute has its Data Ethics Canvas framework to help people working with open data identify ethical issues. The IEEE has published some proposed standards (PDF) that focus on various aspects of inclusion - language, cultures, non-Western principles. Before all that, in 2011, Danah Boyd and Kate Crawford penned Six Provocations for Big Data, which included a discussion of the need for transparency, accountability, and consent. The World Economic Forum published its top ten ethical issues in AI in 2016. Also in 2016, a Stanford University Group published a report trying to fend off regulation by saying it was impossible.

If the industry proves to be right and regulation really is impossible, it won't be because of the technology itself but because of the ecosystem that nourishes amoral owners. "Ethics of AI", as badly as we need it, will be meaningless if the necessary large piles of data to train it are all owned by just a few very large organizations and well-financed criminals; it's equivalent to talking about "ethics of agriculture" when all the seeds and land are owned by a child's handful of global players. The pre-emptive antitrust movement of 2018 would find a way to separate ownership of data from ownership of the AI, algorithms, and machine learning systems that work on them.


Illustrations: HAL.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 2, 2018

In sync

Until Wednesday, I was not familiar with the use of "sync" to stand for a music synchronization license - that is, a license to use a piece of music in a visual setting such as a movie, video game, or commercial. The negotiations involved can be Byzantine and very, very slow, in part because the music's metadata is so often wrong or missing. In one such case, described at Music 4.5's seminar on developing new deals and business models for sync (Flash), it took ten years to get the wrong answer from a label to the apparently simple question: who owns the rights to this track on this compilation album?

The surprise: this portion of the music business is just as frustrated as activists with the state of online copyright enforcement. They don't love the Digital Millennium Copyright Act (1998) any more than we do. We worry about unfair takedowns of non-infringing material and bans on circumvention tools; they hate that the Act's Safe Harbor grants YouTube and Facebook protection from liability as long as they remove content when told it's infringing. Google's automated infringement detection software, ContentID, I heard Wednesday, enables the "value gap", which the music industry has been fretting about for several years now because the sites have no motivation to create licensing systems. There is some logic there.

However, where activists want to loosen copyright, enable fair use, and restore the public domain, they want to dump Safe Harbor, whether by developing a technological bypass, by changing the law, or by getting FaceTube to devise a fairer, more transparent revenue split. "Instagram," said one, "has never paid the music industry but is infringing copyright every day."

To most of us, "online music" means subscription-based streaming services like Spotify or download services like Amazon and iTunes. For many younger people, though, especially Americans, YouTube is their jukebox. Pex estimates that 84% of YouTube videos contain at least ten seconds of music. Google says ContentID matches 99.5% of those, and then they are either removed or monetized. But, Pex argues, 65% of those videos remain unclaimed and therefore provide no revenue. Worse, as streaming grows, downloads are crashing. There's a detectable attitude that if they can fix licensing on YouTube they will have cracked it for all sites hosting "creator-generated content".

It's a fair complaint that ContentID was built to protect YouTube from liability, not to enable revenues to flow to rights holders. We can also all agree that the present system means millions of small-time creators are locked out of using most commercial music. The dancing baby case took eight years to decide that the background existence of a Prince song in a 29-second home video of a toddler dancing was fair use. But sync, too, was designed for businesses negotiating with businesses. Most creators might indeed be willing to pay to legally use commercial music if licensing were quick, simple, and cheap.

There is also a question of whether today's ad revenues are sustainable; a graphic I can't find showed that the payout per view is shrinking. Bloomberg finds that, increasingly, the winning YouTubers take all, with little left for the very long tail.

The twist in the tale is this. MP3 players unbundled albums into songs as separate marketable items. Many artists were frustrated by the loss of control inherent in enabling mix tapes at scale. Wednesday's discussion heralded the next step: unbundling the music itself, breaking it apart into individual beats, phrases and bars, each licensable.

One speaker suggested scenarios. The "content" you want to enjoy is 42 minutes long but your commute is only 38 minutes. You might trim some "unnecessary dialogue" and rearrange the rest so now it fits! My reaction: try saying "unnecessary dialogue" to Aaron Sorkin and let's see how that goes.

I have other doubts. I bet "rearranging" will take longer than watching the four minutes you'd save. Speeding up the player slightly achieves the same result, and you can do that *now* for free. More useful was the suggestion that hearing-impaired people could benefit from being able to tweak the mix to fade the background noise and music in a pub scene to make the actors easier to understand. But there, too, we actually already have closed captions. It's clear, however, that the scenarios may be wrong, but the unbundling probably isn't.

In this world, we won't be talking about music, but "music objects". Many will be very low-value...but the value of the total catalogue might rise. The BBC has an experiment up already: The Mermaid's Tears, an "object-based radio drama" in which you can choose to follow any one of the three characters to experience the story.

Smash these things together, and you see a very odd world coming at us. It's hard to see how fair use survives a system that aims to license "music objects" rather than "music". In 1990, Pamela Samuelson warned about copyright maximalism. That agenda does not appear to have gone away.


Illustrations: King David dancing before the Ark of the Covenant, 'Maciejowski Bible', Paris ca. 1240 (via Discarding Images).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 16, 2018

Data envy

While we're all fretting about Facebook, Google, and the ecosystem of advertisers that track our every online move, many other methods for tracking each of us are on the rise, sprawling out across the cyber-physical continuum. You can see the world's retailers, transport authorities, and governments muttering, "Why should *they* have all the data?" CCTV was the first step, and it's a terrible role model. Consent is never requested; instead, where CCTV's presence is acknowledged it comes with "for your safety" propaganda.

People like the Center for Digital Democracy's Jeff Chester or security and privacy researcher Chris Soghoian have often exposed the many hidden companies studying us in detail online. At a workshop in 2011, they predicted much of 2016's political interference and manipulation. They didn't predict that Russians would seek to interfere with Western democracies; but they did correctly foresee the possibility of individual political manipulation via data brokers and profiling. Was this, that workshop asked, one of the last moments at which privacy incursions could be reined in?

A listener then would have been introduced to companies like Acxiom and Xaxis, behind-the-scenes swappers of our data trails. As with Equifax, we do not have direct relationships with these companies, and as people said on Twitter during the Equifax breach, "We are their victims, not their customers".

At Freedom to Tinker, in September Steven Englehardt exposed the extent to which email has become a tracking device. Because most people use just one email address, it provides an easy link. HTML email is filled with third-party trackers that send requests to myriad third parties, which can then match the email address against other information they hold. Many mailing lists add to this by routing clicks on links through their servers to collect information about what you view, just like social media sites. There are ways around these things - ban your email client from loading remote content, view email as plain text, and copy the links rather than clicking on them. Google is about to make all this much worse by enabling programs to run within email messages. It is, as they say at TechCrunch, a terrible idea for everyone except Google: it means more ads, more trackers, and more security risks.

In December, also at Freedom to Tinker, Gunes Acar explained that a long-known vulnerability in browsers' built-in password managers helps third parties track us. The browser memorizes your login details the first time you land on a website and enter them. Then, as you browse on the site to a non-login page, the third party plants a script with an invisible login form that your browser helpfully autofills. The script reads and hashes the email address, and sends it off to the mother ship, where it can be swapped and matched to other profiles with the same email address hash. Again, since people use the same one for everything and rarely change it, email addresses are exceptionally good connectors between browsing profiles, mobile apps, and devices. Ad blockers help protect against this; browser vendors and publishers could also help.
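To see why a hashed email address makes such a good linking key, here is a small sketch; the tracker scenario and the normalization step are my assumptions, not details from Acar's write-up:

    # Two unrelated trackers that never exchange the raw address still
    # derive the same stable key from it. Names and addresses are invented.
    import hashlib

    def profile_key(email):
        normalized = email.strip().lower()
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    # The autofilled form on one site and the tracking pixel in an HTML
    # email both see the same underlying address...
    key_from_login_script = profile_key("Jane.Doe@example.com ")
    key_from_email_pixel = profile_key("jane.doe@example.com")

    # ...so their back ends can join browsing history, app usage, and
    # purchases on a single key without ever sharing the address itself.
    print(key_from_login_script == key_from_email_pixel)   # True

Because almost everyone keeps the same address for years, that one key survives cleared cookies, new devices, and new apps.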

But these are merely extensions of the tracking we already have. Amazon Go's new retail stores rely on tracking customers throughout, noting not only what they buy but how long they stand in front of a shelf and what they pick up and put back. This should be no surprise: Recode predicted as much in 2015. Other retailers will copy this: why should online retailers have all the data?

Meanwhile, police in Wales have boasted about using facial recognition to arrest people, matching images of people of interest against both their database of 500,000 custody images and live CCTV feeds, while the New York Times warns that the technology's error rate spikes when the subjects being matched are not white and male. In the US, EFF reports that according to researchers at Georgetown Law School an estimated 117 million Americans are already in law enforcement facial recognition systems with little oversight.

We already knew that phones are tracked by their attempts to connect to passing wifi SSIDs; at last month's CPDP, the panel on physical tracking introduced targeted tracking using MAC addresses extracted via wifi connections. In many airports, said Future of Privacy Forum's Jules Polonetsky, sensors courtesy of Blip Systems help with logistical issues such as traffic flow and queue management. In Cincinnati, says the company's website, these sensors help the Transportation Security Agency better allocate resources and provide smoother "passenger processing" (should you care to emerge flat and orange like American cheese).

Visitors to office buildings used to sign in with name, company, and destination; now, tablets demand far more detailed information with no apparent justification. Every system, as Informatica's Monica McDonnell explained at CPDP, is made up of dozens of subsystems, some of which may date to the 1960s, all running slightly different technologies that may or may not be able to link together the many pockets of information generated for each person.

These systems are growing much faster than most of us realize, and this is even before autonomous vehicles and the linkage of systems into smart cities. If the present state of physical tracking is approximately where the web was in 2000...the time to set the limits is now.


Illustrations: George Orwell's house at 22 Portobello Road, London.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 26, 2018

Bodies in the clouds

This year's Computers, Privacy, and Data Protection conference had the theme "The Internet of Bodies". I chaired the "Bodies in the Clouds" panel, which was convened by Lucie Krahulcova of Access Now, and this is something like what I may have said to introduce it.

The notion of "cyberspace" as a separate space derives from the early days of the internet, when most people outside of universities or large science research departments had to dial up and wait while modems mated to get there. Even those who had those permanent connections were often offline in other parts of their lives. Crucially, the people you met in that virtual land were strangers, and it was easy to think there were no consequences in real life.

In 2013, New America Foundation co-founder Michael Lind called cyberspace an idea that makes you dumber the moment you learn of it and begged us to stop believing the internet is a mythical place that governments and corporations are wrongfully invading. While I disagreed, I can see that those with no memory of those early days might see it that way. Today's 30-year-olds were 19 when the iPhone arrived, 18 when Facebook became a thing, 16 when Google went public, and eight when Netscape IPO'd. They have grown up alongside iTunes, digital maps, and GPS, surrounded online by everyone they know. "Cyberspace" isn't somewhere they go; online is just an extension of their phones or laptops.

And yet, many of the laws that now govern the internet were devised with the separate space idea in mind. "Cyberspace", unsurprisingly, turned out not to be exempt from the laws governing consumer fraud, copyright, defamation, libel, drug trafficking, or finance. Many new laws passed in this period are intended to contain what appeared to legislators with little online experience to be a dangerous new threat. These laws are about to come back to bite us.

At the moment there is still *some* boundary: we are aware that map lookups, video sites, and even Siri requests require online access to answer, just as we know when we buy a device like a "smart coffee maker" or a scale that tweets our weight that it's externally connected, even if we don't fully understand the consequences. We are not puzzled by the absence of online connections as we would be if the sun disappeared and we didn't know what an eclipse was.

Security experts had long warned that traditional manufacturers were not grasping the dangers of adding wireless internet connections to their products, and in 2016 they were proved right, when the Mirai botnet harnessed video recorders, routers, baby monitors, and CCTV cameras to deliver monster attacks on internet sites and service providers.

For the last few years, I've called this the invasion of the physical world by cyberspace. The cyber-physical construct of the Internet of Things will pose many more challenges to security, privacy, and data protection law. The systems we are beginning to build will be vastly more complex than the systems of the past, involving many more devices, many more types of devices, and many more service providers. An automated city parking system might have meters, license plate readers, a payment system, middleware gateways to link all these, and a wireless ISP. Understanding who's responsible when such systems go wrong or how to exercise our privacy rights will be difficult. The boundary we can still see is vanishing, as is our control over it.

For example, how do we opt out of physical tracking when there are sensors everywhere? It's clear that the Cookie Directive approach to consent won't work in the physical world (though it would give a new meaning to "no-go areas").

Today's devices are already creating new opportunities to probe previously inaccessible parts of our lives. Police have asked for data from Amazon Echos in an Arkansas murder case. In Germany, investigators used the suspect's Apple Health app while re-enacting the steps they believed he took and compared the results to the data the app collected at the time of the crime to prove his guilt.

A friend who buys and turns on an Amazon Echo is deemed to have accepted its privacy policy. Does visiting their home mean I've accepted it too? What happens to data about me that the Echo has collected if I am not a suspect? And if it controls their whole house, how do I get it to work after they've gone to bed?

At Privacy Law Scholars in 2016, Andrea Matwyshyn introduced a new idea: the Internet of Bodies, the theme of this year's CPDP. As she spotted then, the Internet of Bodies makes us dependent for our bodily integrity and ability to function on this hybrid ecosystem. At that first discussion of what I'm sure will be an important topic for many years to come, someone commented, "A pancreas has never reported to the cloud before."

A few weeks ago, a small American ISP sent a letter to warn a copyright-infringing subscriber that continuing to attract complaints would cause the ISP to throttle their bandwidth, potentially interfering with devices requiring continuous connections, such as CCTV monitoring and thermostats. The kind of conflict this suggests - copyright laws designed for "cyberspace" touching our physical ability to stay warm and alive in a cold snap - is what awaits us now.

Illustrations: Andrea Matwyshyn.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


December 1, 2017

Unstacking the deck

A couple of weeks ago, I was asked to talk to a workshop studying issues in decision-making in standards development organizations about why the consumer voice is important. This is what I think I may have said.

About a year ago, my home router got hacked thanks to a port deliberately left open by the manufacturer and documented (I now know) in somewhat vague terms on page 210 of a 320-page manual. The really important lesson I took from the experience was that security is a market failure: you can do everything right and still lose. The router was made by an eminently respectable manufacturer, sold by a knowledgeable expert, configured correctly, patched up to date, and yet still failed a basic security test. The underlying problem was that the manufacturer imagined that the port it left open would only ever be used by ISPs wishing to push updates to their customers and that ordinary customers would not be technically capable of opening the port when needed. The latter assumption is probably true, but the former is nonsense. No attacker says, "Oh, look, a hole! I wonder if we're allowed to use it." Consumers are defenseless against manufacturers who fail to understand this.

But they are also, as we have seen this year, defenseless against companies' changing business plans and models. In April, Google's Nest subsidiary decided to turn off devices made by Revolv, a company it bought in 2014 that made a smart home hub. Again, this is not a question of ending support for a device that continues to function, as would have happened any time in the past. The fact that the hub is controlled by an app means both the hardware and the software can be turned off when the company loses interest in the product. These are, as Arlo Gilbert wrote at Medium, devices people bought and paid for. Where does Google get the right, in Gilbert's phrasing, to "reach into your home and pull the plug"?

In August, sound system manufacturer Sonos offered its customers two choices: accept its new privacy policy, which requires customers to agree to broader and more detailed data collection, or watch your equipment decline in functionality as updates are no longer applied and possibly cease to function. Here, the issue appears to be that Sonos wants its speakers to integrate with voice assistants, and the company therefore must conform to privacy policies issued by upstream companies such as Amazon. If you do not accept, eventually you have an ex-sound system. Why can't you accept the privacy policy if and only if you want to add the voice assistant?

Finally, in November, Logitech announced it would end service and support for its Harmony Link devices in March 2018. This might have been a "yawn" moment except that "end of life" means "stop working". The company eventually promised to replace all these devices with newer Harmony Hubs, which can control a somewhat larger range of devices, but the really interesting thing is why it made the change. According to Ars Technica, Logitech did not want to renew an encryption certificate whose expiration will leave Harmony Link devices vulnerable to attacks. It was, as the linked blog posting makes plain, a business decision. For consumers and the ecologically conscientious, a wasteful one.

So, three cases where consumers, having paid money for devices in good faith, are either forced to replace them or accept being extorted for their data. In a world where even the most mundane devices are reconfigurable via software and receive updates over the internet, consumers need to be protected in new ways. Standards development organizations have a role to play in that, even if it's not traditionally been their job. We have accepted "Pay-with-data" as a tradeoff for "free" online; now this is "pay-with-data" as part of devices we've paid to buy.

The irony is that the internet was supposed to empower consumers by redressing the pricing information imbalance between buyers and sellers. While that has certainly happened, the incoming hybrid cyber-physical world will up-end that. We will continue to know a lot more about pricing than we used to, but connected software allows the companies that make the objects that clutter our homes to retain control of those items throughout their useful lives. In such a situation the power balance that applies is "Possession is nine-tenths of the law." And possession will no longer be measurable by the physical location of the object but by who has access to change what it does. Increasingly, that's not us. Consumers have no ability to test their cars for regulatory failures (VW) or know whether Uber is screwing the regulators or Uber drivers are screwing riders. This is a new imbalance of power we cannot fix by ourselves.

Worse, much of this will be invisible to us. All the situations discussed here became visible. But I only found out about the hack on my router because I am eccentric enough to run my own mail server, and the spam my router sent got my outgoing email bounced when it caused an anti-spam service to blacklist my mail server. In the billion-object Internet of Things, such communications and many of their effects will primarily be machine-to-machine and hidden from human users, and the world will fail in odd, unpredictable ways.

Illustrations: John Tenniel's Alice, under attack by a pack of cards.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 23, 2017

Twister

"We were kids working on the new stuff," said Kevin Werbach. "Now it's 20 years later and it still feels like that."

Werbach was opening last weekend's "radically interdisciplinary" (Geoffrey Garrett) After the Digital Tornado, at which a roomful of internet policy veterans tried to figure out how to fix the internet. As Jaron Lanier showed last week, there's a lot of this where-did-we-all-go-wrong happening.

The Digital Tornado in question was a working paper Werbach wrote in 1997, when he was at the Federal Communications Commission. In it, Werbach sought to pose questions for the future, such as what the role of regulation would be around...well, around now.

Some of the paper is prescient: "The internet is dynamic precisely because it is not dominated by monopolies or governments." Parts are quaint now. Then, the US had 7,000 dial-up ISPs and AOL was the dangerous giant. It seemed reasonable to think that regulation was unnecessary because public internet access had been solved. Now, with minor exceptions, the US's four ISPs have carved up the country among themselves to such an extent that most people have only one ISP to "choose" from.

To that, Gigi Sohn, the co-founder of Public Knowledge, named the early mistake from which she'd learned: "Competition is not a given." Now, 20% of the US population still have no broadband access. Notably, this discussion was taking place days before current FCC chair Ajit Pai announced he would end the network neutrality rules adopted in 2015 under the Obama administration.

Everyone had a pet mistake.

Tim Wu, regarding decisions that made sense for small companies but are damaging now they're huge: "Maybe some of these laws should have sunsetted after ten years."

A computer science professor bemoaned the difficulty of auditing protocols for fairness now that commercial terms and conditions apply.

Another wondered if our mental image of how competition works is wrong. "Why do we think that small companies will take over and stay small?"

Yochai Benkler argued that the old way of reining in market concentration, by watching behavior, no longer works; we understood scale effects but missed network effects.

Right now, market concentration looks like Google-Apple-Microsoft-Amazon-Facebook. Rapid change has meant that the past Big Tech we feared would break the internet has typically been overrun. Yet we can't count on that. In 1997, market concentration meant AOL and, especially, desktop giant Microsoft. Brett Frischmann paused to reminisce that in 1997 AOL's then-CEO Steve Case argued that Americans didn't want broadband. By 2007 the incoming giant was Google. Yet, "Farmville was once an enormous policy concern," Christopher Yoo reminded; so was Second Life. By 2007, Microsoft looked overrun by Google, Apple, and open source; today it remains the third largest tech company. The garage kids can only shove incumbents aside if the landscape lets them in.

"Be Facebook or be eaten by Facebook", said Julia Powles, reflecting today's venture capital reality.

Wu again: "A lot of mergers have been allowed that shouldn't have been." On his list, rather than AOL and Time-Warner, cause of much 1999 panic, was Facebook and Instagram, which the Office of Fair Trading approved because Facebook didn't have cameras and Instagram didn't have advertising. Unrecognized: they were competitors in the Wu-dubbed attention economy.

Both Bruce Schneier, who considered a future in which everything is a computer, and Werbach, who found familiar early-internet rhetoric hyping the blockchain, saw more oncoming gloom. Werbach noted two vectors: remediable catastrophic failures, and creeping recentralization. His examples of the DAO hack and the Parity wallet bug led him to suggest the concept of governance by design. "This time," Werbach said, adding his own entry onto the what-went-wrong list, "don't ignore the potential contributions of the state."

Karen Levy's "overlooked threat" of AI and automation is a far more intimate and intrusive version of Shoshana Zuboff's "surveillance capitalism"; it is already changing the nature of work in trucking. This resonated with Helen Nissenbaum's "standing reserves": an ecologist sees a forest; a logging company sees lumber-in-waiting. Zero hours contracts are an obvious human example of this, but look how much time we spend waiting for computers to load so we can do something.

Levy reminded us that surveillance has a different meaning for vulnerable groups, linking back to Deirdre Mulligan's comparison of algorithmic decision-making in healthcare and the judiciary. The first is operated cautiously with careful review by trained professionals who have closely studied its limits; the second is off-the-shelf software applied willy-nilly by untrained people who change its use and lack understanding of its design or problems. "We need to figure out how to ensure that these systems are adopted in ways that address the fact that...there are policy choices all the way down," Mulligan said. Levy, later: "One reason we accept algorithms [in the judiciary] is that we're not the ones they're doing it to."

Yet despite all this gloom - cognitive dissonance alert - everyone still believes that the internet has been and will be positively transformative. Julia Powles noted, "The tornado is where we are. The dandelion is what we're fighting for - frail, beautiful...but the deck stacked against it." In closing, Lauren Scholz favored a return to basic ethical principles following a century of "fallen gods" including really big companies, the wisdom of crowds, and visionaries.

Sohn, too, remains optimistic. "I'm still very bullish on the internet," she said. "It enables everything important in our lives. That's why I've been fighting for 30 years to get people access to communications networks."


Illustrations: After the Digital Tornado's closing panel (left to right): Kevin Werbach, Karen Levy, Julia Powles, Lauren Scholz; tornado (Justin1569 at Wikipedia)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 10, 2017

Regulatory disruption

The financial revolution due to hit Britain in mid-January has had surprisingly little publicity and has little to do with the money-related things making news headlines over the last few years. In other words, it's not a new technology, not even a cryptocurrency. Instead, this revolution is regulatory: banks will be required to open up access to their accounts to third parties.

The immediate cause of this change is two difficult-to-distinguish pieces of legislation, one UK-specific and one EU-wide. The EU piece is Payment Services Directive 2, which is intended to foster standards and interoperability in payments across Europe. In the UK, Open Banking requires the nine biggest retail banks to create APIs that, given customer consent, will give third parties certified by the Financial Conduct Authority direct access to customer accounts. Account holders have begun getting letters announcing new terms and conditions, although recipients report that the parts that refer to open banking and consent are masterfully vague.
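To make the mechanism concrete, here is a hypothetical sketch in Python of the shape of such an exchange: a certified third party presents a consent token and reads account data over a bank-published API. The URL, endpoint paths, token, and field names are invented for illustration; they are not the actual Open Banking or PSD2 specification.

    # Hypothetical sketch only: endpoint, token, and JSON fields are invented.
    import requests

    BANK_API = "https://api.examplebank.co.uk/open-banking/v1"
    CONSENT_TOKEN = "token-issued-after-customer-approval"   # placeholder

    def fetch_transactions(account_id):
        # The third party never sees the customer's banking credentials;
        # access rides on the consent token the customer granted.
        response = requests.get(
            f"{BANK_API}/accounts/{account_id}/transactions",
            headers={"Authorization": f"Bearer {CONSENT_TOKEN}"},
            timeout=10,
        )
        response.raise_for_status()
        return response.json().get("transactions", [])

    # A budgeting dashboard might aggregate several such feeds into one view.
    for tx in fetch_transactions("12345678"):
        print(tx)

The point of the sketch is the division of labor: the bank publishes the interface and verifies consent; the certified third party builds whatever service sits on top.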

As anyone attending the annual Tomorrow's Transactions Forum knows, open banking has been creeping up on us for the last few years. Consult Hyperion's Tim Richards has a good explanation of the story so far. At this year's event, Dave Birch, who has a blog posting outlining PSD2's background and context, noted that in China, where the majority of non-cash payments are executed via mobile, Alipay and Tencent are already executing billions of transactions a year, bypassing banks entirely. While the banks aren't thrilled about losing the transactions and their associated (dropping) revenue, the bigger issue is that they are losing the data and insight into their customers that traditionally has been exclusively theirs.

We could pick an analogy from myriad internet-disrupted sectors, but arguably the best fit is telecoms deregulation, which saw AT&T (in the US) and BT (in the UK) forced to open up their networks to competitors. Long distance revenues plummeted and all sorts of newcomers began leaching away their customers.

For banks, this story began the day Elon Musk's x.com merged with Peter Thiel's money transfer business to create the first iteration of Paypal so that anyone with an email address could send and receive money. Even then, the different approach of cryptocurrencies was the subject of experiments, but for most people the rhetoric of escaping government was less a selling point than being able to trade small sums with strangers who didn't take credit cards. Today's mobile payment users similarly don't care whether a bank is involved or not as long as they get their money.

Part of the point is to open up competition. In the UK, consumer-bank relationships tend to be lifelong, partly because so much of banking here has been automated for decades. For most people, moving their account involves not only changing arrangements for inbound payments like salary, but also all the outbound payments that make up a financial life. The upshot is to give the banks impressive customer lock-in, which the Competition and Markets Authority began trying to break with better account portability.

The larger point of Open Banking, however, is to drive innovation in financial services. Why, the reasoning goes, shouldn't it be easier to aggregate data from many sources - bank and other financial accounts, local transport, government benefits - and provide a dashboard to streamline management or automatically switch to the cheapest supplier of unavoidable services? At Wired, Rowland Manthorpe has a thorough outline of the situation and its many uncertainties. Among these are the impact on the banks themselves - will they become, as the project's leader and the telecoms analogy suggest, plumbing for the financial sector or will they become innovators themselves? Or, despite the talk of fintech startups, will the big winners be Google and Facebook?

The obvious concerns in all this are security and privacy. Few outside the technology sector understand what an API is; how do we explain it to the broad range of the population so they understand how to protect themselves? Assuming that start-ups emerge, what mechanisms will we have to test how well our data is secured or trace how it's being used? What about the potential for spoof apps that steal people's data and money?

It's also easy to imagine that "consent" may be more than ordinarily mangled, a problem a friend calls the "tendency to mandatory". The companies to whom we apply for insurance, a loan, or a job may demand an opened gateway to account data as part of the approvals process, which is extortion rather than consent.

This is also another situation where almost all of "my" data inevitably involves exposing third parties, the other halves of our transactions who have never given consent for that to happen. Given access to a large enough percentage of the population's banking data, triangulation should make it possible to fill in a fair bit of the rest. Amazon already has plenty of this kind of data from its own customers; for Facebook and Google this must be an exciting new vista.

Understanding what this will all mean will take time. But it represents a profound change, not only in the landscape of financial services but in the area of technical innovation. This time, those fusty old government regulators are the ones driving disruption.


Illustrations: Northern Rock in 2007 (Dominic Alves); Dave Birch.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 3, 2017

Life forms

Would you rather be killed by a human or a machine?

At this week's Royal Society meeting on AI and Society, Chris Reed recounted asking this question of an audience in Singapore. They all picked the human, even though they knew it was irrational, because they thought at least they'd know *why*.

A friend to whom I related this had another theory: maybe they thought there was a chance they could talk the human killer out of it, whereas the machine would be implacable. It's possible.

My own theory pins this distaste for machine killing on a different, crucial underlying factor: a sense of shared understanding. The human standing over you with the axe or driving the oncoming bus may be a professional paid to dispatch you, a serial killer, an angry ex, or mentally ill, but they all have a personal understanding of what a human life means because they all have one they know they, too, will one day lose. The meaning of removing someone else's life is thoroughly embedded in all of us. Not having that is more or less the definition of a machine, or was until Philip K. Dick and his replicants. But there is no reason to assume that every respondent had the same reason.

Similarly, a commenter in the audience found similar responses to an Accenture poll he encountered on Twitter that inquired whether he would be in favor of AI making health decisions. When he checked the voting results, 69% had said no. Here again, the death of a patient by medical mistake keeps a human doctor awake at night (if television is to be believed), while to a machine it's a statistic, no matter how heavily weighted in its inner backpropagating neural networks.

These two anecdotes resonated because earlier, Marion Oswald had opened her talk by asking whether, like Peter Godfrey-Smith's observation of cephalopods, interacting with AI was the closest we can come to interacting with an intelligent alien. Arguably, unless the aliens are immortal, on issues of life and death we can actually expect to have more shared understanding with them, as per above, than with machines.

The primary focus of Oswald's talk was actually to discuss her work studying HART, an algorithmic model used by Durham Constabulary to decide whether offenders qualified for deferred prosecution and help with their problems. The study raises all sorts of questions we're going to have to consider over the coming years about the role of police in society.

These issues were somewhat taken up later by Mireille Hildebrandt, who warned of the risks of transforming text-driven law - the messy stuff centuries of court cases have contested and interpreted - into data-driven law. Allowing that to happen, she argued, transforms law into administration. "Contestability is the heart of the rule of law," she said. "There is more to the law than predictability and expedience." A crucial part of that is being able to test the system, and here Hildebrandt was particularly gloomy, in that although legal systems that comb the legal corpus are currently being marketed as aids for lawyers, she views it as inevitable that at some point they will become replacements. Some time after that, the skills necessary to test the inner workings of these systems will have vanished from the systems' human owners' firms.

At the annual We Robot conference, a recurring theme is the hard edges of computer systems, an aspect Ellen Ullman examined closely in her 1997 book, Close to the Machine. In Bill Smart's example, the difference between 59.99 miles an hour and 60.01 miles an hour is indistinguishable to a human, but to a computer fitted with the right sensors the difference is a speeding ticket. An aspect of this that is insufficiently discussed is that all biological beings have some level of unpredictability. Robots and AI with far greater sensing precision than is available to humans will respond to changes we can't detect, making them appear less predictable, and therefore more intelligent, than they actually are. This is a deception we will have to learn to decode.

Already, machines that are billed as tools to aid human judgement are often much more trusted than they should be. Danielle Citron's 2008 paper Technological Due Process studied this in connection with benefits scoring systems in Texas and California, and found two problems. First, humans tended to trust the machine's decisions rather than apply their own judgement, a problem Hildebrandt referred to as "judgemental atrophy". Second, computer programmers are not trained lawyers, and are therefore not good at accurately translating legal text into decision-making systems. How do you express a fuzzy but widely understood and often-used standard like the UK's "reasonable person" in computer code? You'd have to precisely define the exact point at which "reasonable" abruptly flicks to "unreasonable".

Ultimately, Oswald came down against the "intelligent alien" idea: "These are people-made, and it's up to us to find the benefits and tackle the risks," she said. "Ignorance of mathematics is no excuse."

That determination rests on the notion that the people building AI systems and the people using them have shared values. We already know that's not true, but even so: I vote AI less alien than a cephalopod on everything except the fear of death.

Illustrations: Cephalopod (via Obsidian Soul); Marion Oswald.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 29, 2017

Ubersicht

London_Skyline.jpgIf it keeps growing, every company eventually reaches a moment where this message arrives: it's time to grow up. For Microsoft, IBM, and Intel it was antitrust suits. Google's had the EU's €2.4 billion fine. For Facebook and Twitter, it may be abuse and fake news.

This week, it was Uber's turn, when Transport for London declined to renew Uber's license to operate. Uber's response was to apologize and promise to "do more" while urging customers to sign its change.org petition. At this writing, 824,000 have complied.

Travis_Kalanick_at_DLD_Munich_2015_(cropped).jpgI can't see the company as a victim here. The "sharing economy" rhetoric of evil protectionist taxi regulators has taken knocks from the messy reality of the company's behavior and the Grade A jerkishness of its (now former) founding CEO, the controversial Travis Kalanick. The tone-deaf "Rides of Glory" blog post. The safety-related incidents that TfL complains the company failed to report because: PR. Finally, the clashes with myriad city regulators the company would prefer to bypass: currently, it's threatening to pull out of Quebec. Previously, both Uber and Lyft quit Austin, Texas for a year rather than comply with a law requiring driver fingerprinting. In a second London case, Uber is arguing that its drivers are not employees; SumOfUs begs to differ.

People who use Uber love Uber, and many speak highly of drivers they use regularly. In one part of their brains, Uber-loving friends advocate for social justice, privacy, and fair wages and working conditions; in the other, Uber is so cool, cheap, convenient, and clean, and the app tracks the cab in real time...and city transport is old, grubby, and slow. But we're not at the beginning of this internet thing any more, and we know a lot about what happens when a cute, cuddly company people love grows into a winner-takes-all behemoth the size of a nation-state.

A consideration beyond TfL's pay grade is that transport doesn't really scale, as Hubert Horan explains in his detailed analysis of the company's business model. Uber can't achieve new levels of cost savings and efficiency (as Amazon and eBay did) because neither the fixed costs of providing the service nor network externalities create them. More simply, predatory competition - that is, venture capitalists providing the large sums that allow Uber to undercut and drive out of business existing cab firms (and potentially public transport) - pays off only once all other options have been killed and Uber can raise its prices.

Black_London_Cab.jpgEarlier this year, at a conference on autonomous vehicles, TfL's representative explained the problems it faces. London will grow from 8.6 million to 10 million people by 2025. On the tube, central zone trains are already running at near the safe frequency limit and space prohibits both wider and longer trains. Congestion will increase: trucks, cars, cabs, buses, bicycles, and pedestrians. All these interests - plus the thousands of necessary staff - need to be balanced, something self-interested companies by definition do not do. In Silicon Valley, where public transport is relatively weak, it may not be clearly understood how deeply a city like London depends on it.

At Wired UK, Matt Burgess says Uber will be back. When Uber and Lyft exited Austin, Texas rather than submit to a new law requiring them to fingerprint drivers, state legislators intervened within a year. But that was several scandals ago, which is why I think that, this once, SorryWatch has it wrong: Uber's apology may be adequately drafted (as they suggest, minus the first paragraph), but the company's behaviour has been egregious enough to require clear evidence of active change. Uber needs a plan, not a PR campaign - and urging its customers to lobby for it does not suggest it's understood that.

At London Reconnections, John Bull explains the ins and outs of London's taxi regulation in fascinating detail. Bull argues that in TfL Uber has met a tech-savvy and forward-thinking regulator that is its own boss and too big to bully. Given that almost the only cost the company can squeeze is its drivers' compensation, what protections need to be in place? How does increasing hail-by-app taxi use fit into overall traffic congestion?

Uber is one of the very first of the new hybrid breed of cyber-physical companies. Bypassing regulators - asking forgiveness rather than permission - may have flown when the consequences were purely economic, but it can't be tolerated in the new era of convergence, in which the risks are physical. My iPhone can't stab me in my bed (as Bill Smart has memorably observed), but that's not true of these hybrids.

TfL will presumably focus on rectifying the four areas in its announcement. Beyond that, though, I'd like to see Uber pressed for some additional concessions. In particular, I think the company - and others like it - should be required to share their aggregate ride pattern data (not individual user accounts) with TfL to help the authority make better decisions for the benefit of all Londoners. As Tom Slee, the author of What's Yours Is Mine: Against the Sharing Economy, has put it, "Uber is not 'the future', it's 'a future'".


Illustrations: London skyline (by Mewiki); London black cab (Jimmy Barrett); Travis Kalanick (Dan Taylor).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 16, 2017

The ghost in the machine

rotated-patrickball-2017.jpgHumans are a problem in decision-making. We have prejudices based on limited experience, received wisdom, weird personal irrationality, and cognitive biases psychologists have documented. Unrecognized emotional mechanisms shield us from seeing our mistakes.

Cue machine learning as the solution du jour. Many have claimed that crunching enough data will deliver unbiased judgements. These days, this notion is being debunked: the data the machines train on and analyze arrives pre-infected, as we created it in the first place, a problem Cathy O'Neil does a fine job of explaining in Weapons of Math Destruction. See also Data & Society and Fairness, Accountability, and Transparency in Machine Learning.

Patrick Ball, founding director of the Human Rights Data Analysis Group (HRDAG), argues, however, that there are worse problems underlying these. HRDAG "applies rigorous science to the analysis of human rights violations around the world". It uses machine learning - currently, to locate mass graves in Mexico - but a key element of its work is "multiple systems estimation" to identify overlaps and gaps between datasets.

"Every kind of classification system - human or machine - has several kinds of errors it might make," he says. "To frame that in a machine learning context, what kind of error do we want the machine to make?" HRDAG's work on predictive policing shows that "predictive policing" finds patterns in police records, not patterns in occurrence of crime.

Media reports love to rate machine learning's "accuracy", typically implying the percentage of decisions where the machine's "yes" represents a true positive and its "no" means a true negative. Ball argues this is meaningless. In his example, a search engine that scans billions of web pages for "Wendy Grossman" can be accurate to .99999 because the vast supply of pages that don't mention me (true negatives) will swamp the results. The same is true of any machine system trying to find something rare in a giant pile of data - and it gets worse as the pile of data gets bigger, a problem net.wars, in discussing data retention, has often described as searching for a needle in a haystack by building bigger haystacks.

For any automated decision system, you can draw a 2x2 confusion matrix, like this:
ConfusionMatrix.png
"There are lots of ways to understand that confusion matrix, but the least meaningful of those ways is to look at true positives plus true negatives divided by the total number of cases and say that's accuracy," Ball says, "because in most classification problems there's an asymmetry of yes/no answers" - as above. A "94% accurate" model "isn't accurate at all, and you haven't found any true positives because these classifications are so asymmetric." This fact does make life easy for marketers, though: you can improve your "accuracy" just by throwing more irrelevant data at the model. "To lay people, accuracy sounds good, but it actually isn't the measure we need to know."

Unfortunately, there isn't a single measure: "We need to know at least two, and probably four. What we have to ask is, what kind of mistakes are we willing to tolerate?"

In web searches, we can tolerate spending a few seconds scanning 100 results and ignoring the false positives. False negatives - pages we wanted to see but that are missing - are less acceptable. Machine learning uses "precision" for the fraction of the returned results that are true positives, and "recall" for the fraction of all the real positives in the searched set that the classifier actually finds. The various settings the classifier can be given can be drawn as a curve showing the tradeoff between the two. Human beings understand a single number better than tradeoffs; reporting accuracy then means picking a spot on the curve as the point to set the classifier. "But it's always going to be ridiculously optimistic because it will include an ocean of true negatives." This is true whether you're looking for 2,000 fraudulent financial transactions in a sea of billions daily, or finding a handful of terrorists in the general population. Recent attackers, from 9/11 to London Bridge 2017, had already been objects of suspicion, but forces rarely have the capacity to examine every such person, and before an attack there may be nothing to find. Retaining all that irrelevant data may, however, help forensic investigation.
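To make the asymmetry concrete, here is a toy calculation with counts I have invented: a classifier hunting for 100 genuine cases in a population of a million can post a dazzling "accuracy" while finding almost nothing.

    # Hypothetical counts, invented for illustration: 100 real positives
    # hidden in a population of 1,000,000.
    true_positives = 10        # real cases the classifier found
    false_negatives = 90       # real cases it missed
    false_positives = 50       # innocents it wrongly flagged
    true_negatives = 999_850   # everything else, correctly ignored

    total = true_positives + false_negatives + false_positives + true_negatives

    accuracy = (true_positives + true_negatives) / total
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)

    print(f"accuracy  = {accuracy:.5f}")   # 0.99986 - swamped by true negatives
    print(f"precision = {precision:.2f}")  # 0.17 - most of what was flagged is wrong
    print(f"recall    = {recall:.2f}")     # 0.10 - 90% of the real cases were missed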

Where there are genuine distinguishing variables, the model will find the matches even given extreme asymmetry in the data. "If we're going to report in any serious way, we will come up with lay language around, 'we were trying to identify 100 people in a population of 20,000 and we found 90 of them.'" Even then, care is needed to be sure you're finding what you think. The classic example here is the US Army's trial using neural networks to find camouflaged tanks. The classifier fell victim to the coincidence that all the pictures with tanks in them had been taken on sunny days and all the pictures of empty forest on cloudy days. "That's the way bias works," Ball says.

Cathy_O'Neil_at_Google_Cambridge.jpgThe crucial problem is that we can't see the bias. In her book, O'Neil favors creating feedback loops to expose these problems. But these can be expensive and often can't be created - that's why the model was needed.

"A feedback loop may help, but biased predictions are not always wrong - but they're wrong any time you wander into the space of the bias," Ball says. In his example: say you're predicting people's weight given their height. You use one half of a data set to train a model, then plot heights and weights, draw a line, and use its slope and intercept to predict the other half. It works. "And Wired would write the story." Investigating when the model makes errors on new data shows the training data all came from Hong Kong schoolchildren who opted in, a bias we don't spot because getting better data is expensive, and the right answer is unknown.

"So it's dangerous when the system is trained on biased data. It's really, really hard to know when you're wrong." The upshot, Ball says, is that "You can create fair algorithms that nonetheless reproduce unfair social systems because the algorithm is fair only with respect to the training data. It's not fair with respect to the world."


Illustrations: Patrick Ball; confusion matrix (Jackverr); Cathy O'Neil (GRuban).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 30, 2012

Robot wars

Who'd want to be a robot right now, branded a killer before you've even really been born? This week, Huw Price, a philosophy professor, Martin Rees, an emeritus professor of cosmology and astrophysics, and Jaan Tallinn, co-founder of Skype and a serial speaker at the Singularity Summit, announced the founding of the Cambridge Project for Existential Risk. I'm glad they're thinking about this stuff.

Their intention is to build a Centre for the Study of Existential Risk. There are many threats listed in the short introductory paragraph explaining the project - biotechnology, artificial life, nanotechnology, climate change - but the one everyone seems to be focusing on is: yep, you got it, KILLER ROBOTS - that is, artificial general intelligences so much smarter than we are that they may not only put us out of work but reshape the world for their own purposes, not caring what happens to us. Asimov would weep: his whole purpose in creating his Three Laws of Robotics was to provide a device that would allow him to tell some interesting speculative, what-if stories and get away from the then standard fictional assumption that robots were eeeevil.

The list of advisors to the Cambridge project has some interesting names: Hermann Hauser, now in charge of a venture capital fund, whose long history in the computer industry includes founding Acorn and an attempt to create the first mobile-connected tablet (it was the size of a 1990s phone book, and you had to write each letter in an individual box to get it to recognize handwriting - just way too far ahead of its time); and Nick Bostrom of the Future of Humanity Institute at Oxford. The other names are less familiar to me, but it looks like a really good mix of talents, everything from genetics to the public understanding of risk.

The killer robots thing goes quite a way back. A friend of mine grew up in the time before television when kids would pay a nickel for the Saturday show at a movie theatre, which would, besides the feature, include a cartoon or two and the next chapter of a serial. We indulge his nostalgia by buying him DVDs of old serials such as The Phantom Creeps, which features an eight-foot, menacing robot that scares the heck out of people by doing little more than wave his arms at them.

Actually, the really eeeevil guy in that movie is the mad scientist, Dr Zorka, who not only creates the robot but also a machine that makes him invisible and another that induces mass suspended animation. The robot is really just drawn that way. But, like CSER, what grabs your attention is the robot.

I have a theory about this, developed over the last couple of months while working on a paper on complex systems, automation, and other computing trends: it's all to do with biology. We - and other animals - are pretty fundamentally wired to see anything that moves autonomously as more intelligent than anything that doesn't. In survival terms, that makes sense: the most poisonous plant can't attack you if you're standing out of reach of its branches. Something that can move autonomously can kill you - yet it also seems more cuddly. Consider the Roomba versus a modern dishwasher. Counterintuitively, the Roomba is not the smarter of the two.

And so it was that on Wednesday, when Voice of Russia assembled a bunch of us for a half-hour radio discussion, the focus was on KILLER ROBOTS, not synthetic biology (which I think is a much more immediately dangerous field) or climate change (in which the scariest new development is the very sober, grown-up, businesslike this-is-getting-expensive report from the insurer Munich Re). The conversation was genuinely interesting, roaming from the mysteries of consciousness to the problems of automated trading and the 2010 flash crash. Pretty much everyone agreed that there really isn't sufficient evidence to predict a date at which machines might be intelligent enough to pose an existential risk to humans. You might be worried about self-driving cars, but they're likely to be safer than drunk humans.

There is a real threat from killer machines; it's just that it's not super-human intelligence or consciousness that's the threat here. Last week, Human Rights Watch and the International Human Rights Clinic published Losing Humanity: the Case Against Killer Robots, arguing that governments should act pre-emptively to ban the development of fully autonomous weapons. There is no way, that paper argues, for autonomous weapons (which the military wants so fewer of *our* guys have to risk getting killed) to distinguish reliably between combatants and civilians.

There were some good papers on this at this year's We Robot conference from Ian Kerr and Kate Szilagyi (PDF) and Markus Wegner.

From various discussions, it's clear that you don't need to wait for *fully* autonomous weapons to reach the danger point. In today's partially automated systems, the operator may be under pressure to make a decision in seconds, and "automation bias" means the human will most likely accept whatever the machine suggests - the military equivalent of clicking OK. The human in the loop isn't as much of a protection as we might hope against the humans designing these things. Dr Zorka, indeed.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series

October 19, 2012

Finding the gorilla

"A really smart machine will think like an animal," predicted Temple Grandin at last weekend's Singularity Summit. To an animal, she argued, a human on a horse often looks like a very different category of object than a human walking. That seems true; and yet animals also live in a sensory-driven world entirely unlike that of machines.

A day later, Melanie Mitchell, a professor of computer science at Portland State University, argued that analogies are key to human intelligence, producing landmark insights like comparing a brain to a computer (von Neumann) or evolutionary competition to economic competition (Darwin). This is true, although the initial analogy is often insufficient and may even be entirely wrong. A really significant change in our understanding of the human brain came with research by psychologists like Elizabeth Loftus showing that where computers retain data exactly as it was (barring mechanical corruption), humans improve, embellish, forget, modify, and partially lose stored memories; our memories are malleable and unreliable in the extreme. (For a worked example, see The Good Wife, season 1, episode 6.)

Yet Mitchell is obviously right when she says that much of our humor is based on analogies. It's a staple of modern comedy, for example, for a character to respond on a subject *as if* it were another subject (chocolate as if it were sex, a pencil dropping on Earth as if it were sex, and so on). The more incongruous the analogy, the better: when Watson asks - in the video clip she showed - for the category "Chicks dig me", it's funny because we know that, as a machine, a) Watson doesn't really understand what it's saying, and b) Watson is pretty much the polar opposite of the kind of thing that "chicks" are generally imagined to "dig".

"You are going to need my kind of mind on some of these Singularity projects," said Grandin, meaning visual thinkers, rather than the mathematical and verbal thinkers who "have taken over". She went on to contend that visual thinkers are better able to see details and relate them to each other. Her example: the emergency generators at Fukushima located below the level of a plaque 30 feet up on the seawall warning that flood water could rise that high. When she talks - passionately - about installing mechanical overrides in the artificial general intelligences Singularitarians hope will be built one day soonish, she seems to be channelling Peter G. Neumann, who talks often about the computer industry's penchant for repeating the security mistakes of decades past.

An interesting sideline about the date of the Singularity: Oxford's Stuart Armstrong has studied these date predictions and concluded pretty much that, in the famed words of William Goldman, no one knows anything. Based on his study of 257 predictions collected by the Singularity Institute and published on its Web site, he concluded that most theories about these predictions are wrong. The dates chosen typically do not correlate with the age or expertise of the predicter or the date of the prediction. I find this fascinating: there's something like an 80 percent consensus that the Singularity will happen in five to 100 years.

Grandin's discussion of visual thinkers made me wonder whether they would be better or worse at spotting the famed invisible gorilla than most people. Spoiler alert: if you're not familiar with this psychology experiment, go now and watch the clip before proceeding. You want to say better - after all, spotting visual detail is what visual thinkers excel at - but what if the demands of counting passes are more all-consuming for them than for other types of thinkers? The psychologist Daniel Kahneman, participating by video link, talked about other kinds of bias but not this one. Would visual thinkers be more or less likely to engage in the common human pastime of believing we know something based on too little data and then ignoring new data?

This is, of course, the opposite of today's Bayesian systems, which make a guess and then refine it as more data arrives - almost the exact inverse of the humans Kahneman describes. So many of the developments we're seeing now rely on crunching masses of data (often characterized as "big" but often not *really* all that big) to find subtle patterns that humans never spot. Linda Avey, founder of the personal genome profiling service 23andMe, and John Wilbanks are both trying to provide services that will allow individuals to take control of and understand their personal medical data. Avey in particular seems poised to link in somehow to the data generated by seekers in the several-year-old quantified-self movement.

This approach is so far yielding some impressive results. Peter Norvig, the director of research at Google, recounted both the company's work on recognizing cats and its work on building Google Translate. The latter's patchy quality seems more understandable when you learn that it was built by matching documents issued in multiple languages against each other and building up statistical probabilities. The former seems more like magic, although Slate points out that the computers did not necessarily pick out the same patterns humans would.

Well, why should they? Do I pick out the patterns they're interested in? The story continues...

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 5, 2012

The doors of probability

Mike Lynch has long been the most interesting UK technology entrepreneur. In 2000, he became Britain's first software billionaire. In 2011 he sold his company, Autonomy, to Hewlett-Packard for $10 billion. A few months ago, Hewlett-Packard let him escape back into the wild of Cambridge. We've been waiting ever since for hints of what he'll do next; on Monday, he showed up at NESTA to talk about his adventures with Wired UK editor David Rowan.

Lynch made his name and his company by understanding that the rule formulated in 1750 by the English vicar and mathematician Thomas Bayes could be applied to getting machines to understand unstructured data. These days, Bayes is an accepted part of the field of statistics, but for a couple of centuries anyone who embraced his ideas would have been unwise to admit it. That began to change in the 1980s, when people began to realize the value of his ideas.

"The work [Bayes] did offered a bridge between two worlds," Lynch said on Monday: the post-Renaissance world of science, and the subjective reality of our daily lives. "It leads to some very strange ideas about the world and what meaning is."

As Sharon Bertsch McGrayne explains in The Theory That Would Not Die, Bayes was offering a solution to the inverse probability problem. You have a pile of encrypted code, or a crashed airplane, or a search query: all of these are effects; your problem is to find the most likely cause. (Yes, I know: to us the search query is the cause and the page of search results is the effect; but consider it from the computer's point of view.) Bayes' idea was to start with a 50/50 random guess and refine it as more data changes the probabilities in one direction or another. When you type "turkey" into a search engine it can't distinguish between the country and the bird; when you add "recipe" you increase the probability that the right answer is instructions on how to cook one.
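For readers who like to see the arithmetic, here is a minimal sketch of that update; the likelihoods are numbers I have invented purely for illustration, not anything a real search engine uses:

    # Hypothetical likelihoods, invented for illustration: how often the word
    # "recipe" accompanies each sense of "turkey" in past queries.
    p_recipe_given_bird = 0.30
    p_recipe_given_country = 0.01

    # Bayes' starting point: an even prior over the two meanings.
    prior_bird = 0.5
    prior_country = 0.5

    # Bayes' rule: the posterior is proportional to likelihood times prior.
    unnorm_bird = p_recipe_given_bird * prior_bird
    unnorm_country = p_recipe_given_country * prior_country
    evidence = unnorm_bird + unnorm_country

    posterior_bird = unnorm_bird / evidence
    print(f"P(bird | 'turkey recipe') = {posterior_bird:.2f}")  # about 0.97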

Note, however, that search engines work on structured data: tags, text content, keywords, and metadata all go into building an index the engine can run over to find the hits. What Lynch is talking about is the stuff that humans can understand - raw emails, instant messages, video, audio - that until now has stymied the smartest computers.

Most of us don't really like to think in probabilities. We assume every night that the sun will rise in the morning; we call a mug a mug and not "a round display of light and shadow with a hole in it" in case it's really a doughnut. We also don't go into much detail in making most decisions, no matter how much we justify them afterwards with reasoned explanations. Even decisions that are in fact probabilistic - such as those of the electronic line-calling device Hawk-Eye used in tennis and cricket - we prefer to display as though they were infallible. We could, as Cardiff professor Harry Collins argued, take the opportunity to educate people about probability: the on-screen virtual reality animation could include an estimate of the margin for error, or the probability that the system is right (much the way IBM did in displaying Watson's winning Jeopardy answers). But apparently it's more entertaining - and sparks fewer arguments from the players - to pretend there is no fuzz in the answer.

Lynch believes we are just at the beginning of the next phase of computing, in which extracting meaning from all this unstructured data will bring about profound change.

"We're into understanding analog," he said. "Fitting computers to use instead of us to them." In addition, like a lot of the papers and books on algorithms I've been reading recently, he believes we're moving away from the scientific tradition of understanding a process to get an outcome and into taking huge amounts of data about outcomes and from it extracting valid answers. In medicine, for example, that would mean changing from the doctor who examines a patient, asks questions, and tries to understand the cause of what's wrong with them in the interests of suggesting a cure. Instead, why not a black box that says, "Do these things" if the outcome means a cured patient? "Many people think it's heresy, but if the treatment makes the patient better..."

At the beginning, Lynch said, the Autonomy founders thought the company could be worth £2 to £3 million. "That was our idea of massive back then."

Now, with his old Autonomy team, he is looking to invest in new technology companies. The goal, he said, is to find new companies built on fundamental technology whose founders are hungry and strongly believe that they are right - but are still able to listen and learn. The business must scale, requiring little or no human effort to service increased sales. With that recipe he hopes to find the germs of truly large companies - not the put-in-£10-million, sell-out-at-£80-million strategy he sees as most common, but multi-billion pound companies. The key is finding that fundamental technology, something where it's possible to pick a winner.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


August 31, 2012

Remembering the moon

"I knew my life was going to be lived in space," a 50-something said to me in 2009 on the anniversary of the moon landings, trying to describe the impact they had on him as a 12-year-old. I understood what he meant: on July 20, 1969, a late summer Sunday evening in my time zone, I was 15 and allowed to stay up late to watch; awed at both the achievement and the fact that we could see it live, we took Polaroid pictures (!) of the TV image showing Armstrong stepping onto the Moon's surface.

The science writer Tom Wilkie remarked once that the real impact of those early days of the space program was the image of the Earth from space, that it kicked off a new understanding of the planet as a whole, fragile ecosystem. The first Earth Day was just nine months later. At the time, it didn't seem like that. "We landed on the moon" became a sort of yardstick; how could we put a man on the moon yet be unable to fix a bicycle? That sort of thing.

To those who've grown up always knowing we landed on the moon in ancient times (that is, before they were born), it's hard to convey what a staggering moment of hope and astonishment that was. For one thing, it seemed so improbable and it happened so fast. In 1962, President Kennedy promised to put a man on the moon by the end of the decade - and it happened, even though he was assassinated. For another, it was the science fiction we all read as teens come to life. Surely the next steps would be other planets, greater access for the rest of us. Wouldn't I, in my lifetime, eventually be able also to look out the window of a vehicle in motion and see the Earth getting smaller?

Probably not. Many years later, I was on the receiving end of a rant from an English friend about the wasteful expense of sending people into space when unmanned spacecraft could do so much more for so much less money. He was, of course, right, and it's not much of a surprise that the death of the first human to set foot on the Moon, Neil Armstrong, so nearly coincided with the success of the Mars rover, Curiosity. What Curiosity also reminds us, or should, is that although we admire Armstrong as a hero, the fact is that landing on the Moon wasn't so much his achievement as that of the probably thousands of engineers, programmers, and scientists who developed and built the technology necessary to get him there. As a result, the thing that makes me saddest about Armstrong's death on August 25 is the loss of his human memory of the experience of seeing and touching that off-Earth orbiting body.

The science fiction writer Charlie Stross has a lecture transcript I particularly like about the way the future changes under your feet. The space program - and, in the UK and France, Concorde - seemed like a beginning at the time, but has so far turned out to be an end. Sometime between 1950 and 1970, Stross argues, progress was redefined from being all about the speed of transport to being all about the speed of computers or, more precisely, Moore's Law. In the 1930s, when the moon-walkers were born, the speed of transport was doubling in less than a decade; but it only doubled in the 40 years from the late 1960s to 2007, when he wrote this talk. The speed of acceleration had slowed dramatically.

Applying this precedent to Moore's Law - Intel founder Gordon Moore's observation that the number of transistors that could fit on an integrated circuit doubled about every 24 months, increasing computing speed and power proportionately - Stross was happy to argue that, despite what we all think today and the obsessive belief among Singularitarians that computers will surpass the computational power of humans any day now, and certainly by 2030, "Computers and microprocessors aren't the future. They're yesterday's future, and tomorrow will be about something else." His suggestion: bandwidth, bringing things like lifelogging and ubiquitous computing so that no one ever gets lost; if we'd had that in 1969, the astronauts would have been sending back first-person total-immersion visual and tactile experiences that would now be in NASA's library for us all to experience as if at first hand, instead of just the external image we all know.

The science fiction I grew up with assumed that computers would remain rare (if huge), expensive items operated by the elite and knowledgeable (except, perhaps, for personal robots). Space flight, and personal transport, on the other hand, would be democratized. Partly, let's face it, that's because space travel and robots make compelling images and stories, particularly for movies, while sitting and typing...not so much. I didn't grow up imagining my life being mediated and expanded by computer use; I, like countless generations before me, grew up imagining the places I might go and the things I might see. Armstrong and the other astronauts were my proxies. One day in the not-too-distant future, we will have no humans left who remember what it was actually like to look up and see the Earth in the sky while standing on a distant rock. There only ever have been, Wikipedia tells me, 12, all born in the 1930s.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


August 17, 2012

Bottom dwellers

This week Google announced it would downgrade in its search results sites with an exceptionally high number of valid copyright notices filed against them. As the EFF points out, the details of exactly how this will work are scarce and there is likely to be a big, big problem with false positives - that is, sites that are downgraded unfairly. You have only to look at the recent authorial pile-on that took down the legitimate ebook lending site LendInk to see what can happen when someone gets hold of the wrong end of the copyright stick.

Unless we know how the inclusion of Google's copyright notice stats will work, how do we know what will be affected, how, and for how long? There is no transparency to let a site know what's happening to it, and no appeals process. Given the many abuses of the Digital Millennium Copyright Act, under which such copyright notices are issued, it's hard to know how fair such a system will be. Though, granted: the company could have simply done it and not told us. How would we know?

The timing of this move is interesting because it comes only a few months after Google began advocating for the notion that search engine results are, like newspaper editorial matter, a form of free speech under the First Amendment. The company went as far as to commission the legal scholar Eugene Volokh to write a white paper outlining the legal arguments. These basically revolve around the idea that a search algorithm is merely a new form of editorial judgment; Google returns search results in the order in which, in its opinion, they will be most helpful to users.

In response, Tim Wu, author of The Master Switch, argued in the New York Times that conceding the right of free speech to computerized decisions brings serious problems with it in the long run. Supposing, for example, that antitrust authorities want to regulate Google to ensure that it doesn't use its dominance in search to unfairly advantage its other online properties - YouTube, Google Books, Google Maps, and so on. If search results are free speech, that type of regulation becomes unconstitutional. On BoingBoing, Cory Doctorow responded that one should regulate the bad speech without denying it is speech. Earlier, in the Guardian Doctorow argued that Google's best gambit was making the argument about editorial integrity; publications make esthetic judgments, but Google famously loves to live by numbers.

This part of the argument is one that we're going to be seeing a lot of over the next few decades, because it boils down to this bit of Philip K. Dick territory: should machines programmed by humans have free speech rights? And if so, under what circumstances? If Google search results are free speech, is the same true of the output of credit-scoring algorithms or speed cameras? A magazine editor can, if asked, explain the reasoning process by which material was commissioned for, placed in, or rejected by her magazine; Google is notoriously secretive about the workings of its algorithms. We do not even know the criteria Google uses to judge the quality of its search results.

These are all questions we're going to have to answer as a society; and they are questions that may be answered very differently in countries without a First Amendment. My own first inclination is to require some kind of transparency in return: for every generation of separation between human and result, there must be an additional layer of explanation detailing how the system is supposed to work. The more people the results affect, the bigger the requirement for transparency. Something like that.

The more immediate question, of course, is whether Google's move will have an impact on curbing unauthorized file-sharing. My guess is not that much; few file-sharers of my acquaintance use Google for the purpose of finding files to download.

Yet in an otherwise sensible piece in the Guardian about the sentencing of Surfthechannel.com owner Anton Vickerman to four years in prison, Dan Sabbagh winds up praising Google's decision while making a bunch of errors. First of all, he blames the music industry's problems on mistakes "such as failing to introduce copy protection". As the rest of us know, the music industry only finally dropped copy protection in 2009 - because consumers hate it. Arguably, copy protection delayed the adoption of legal, paid services by years. He also calls the decision to sell all-you-can-eat subscriptions to music back catalogues a mistake; on what grounds is not made clear.

Finally, he argues, "Had Google [relegated pirate sites' results] a decade ago, it might not have been worthwhile for Vickerman to set up his site at all."

Ten years ago? In 2002, Napster had been gone for less than a year. Gnutella and BitTorrent were measuring their age in months. iTunes was a year old. The Pirate Bay wouldn't exist for some months more. Google was two years away from going public. The mistake then wasn't downgrading sites oft accused of copyright infringement. The mistake then was not building legal, paid downloading services and getting them up and running as fast as possible.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


May 25, 2012

Camera obscura

There was a smoke machine running in the corner when I arrived at today's Digital Shoreditch, an afternoon considering digital identity, part of a much larger, multi-week festival. Briefly, I wondered if the organizers were making a point about privacy. Apparently not; they shut it off when the talks started.

The range of speakers served as a useful reminder that the debates we have in what I think of as the Computers, Freedom, and Privacy sector are rather narrowly framed around what we can practically build into software and services to protect privacy (and why so few people seem to care). We wrangle over what people post on Facebook (and what they shouldn't), or how much Google (or the NHS) knows about us and shares with other organizations.

But we don't get into matters of what kinds of lies we tell to protect our public image. Lindsey Clay, the managing director of Thinkbox, the marketing body for UK commercial TV, who kicked off an array of people talking about brands and marketing (though some of them in good causes), did a good, if unconscious, job of showing what privacy activists are up against: the entire mainstream of business is going the other way.

Sounding like Dr Gregory House, she explained that people lie in focus groups, showing a slide comparing actual TV viewer data from Sky to what those people said about what they watched. They claim to fast-forward; really, they watch ads and think about them. They claim to time-shift almost everything; really, they watch live. They claim to watch very little TV; really, they need to sign up for the SPOGO program Richard Pearey explained a little while later. (A tsk-tsk to Pearey: Tim Berners-Lee is a fine and eminent scientist, but he did not invent the Internet. He invented the *Web*.) For me, Clay is confusing "identity" with "image". My image claims to read widely instead of watching TV shows; my identity buys DVDs from Amazon.

Of course I find Clay's view of the Net dismaying - "TV provides the content for us to broadcast on our public identity channels," she said. This is very much the view of the world the Open Rights Group campaigns to up-end: consumers are creators, too, and surely we (consumers) have a lot more to talk about than just what was on TV last night.

Tony Fish, author of My Digital Footprint, following up shortly afterwards, presented a much more cogent view and some sound practical advice. Instead of trying to unravel the enduring conundrum of trust, identity, and privacy - which he claims dates back to before Aristotle - start by working out your own personal attitude to how you'd like your data treated.

I had a plan to talk about something similar, but Fish summed up the problem of digital identity rather nicely. No one model of privacy fits all people or all cases. The models and expectations we have take various forms - which he displayed as a nice set of Venn diagrams. Underlying that is the real model, in which we have no rights. Today, privacy is a setting and trust is the challenger. The gap between our expectations and reality is the creepiness factor.

Combine that with reading a book of William Gibson's non-fiction, and you get the reflection that the future we're living in is not at all like the one we - for some value of "we" that begins with those guys who did the actual building instead of just writing commentary about it - thought we might be building 20 years ago. At the time, we imagined that the future of digital identity would look something like mathematics, where the widespread use of crypto meant that authentication would proceed by a series of discrete transactions tailored to each role we wanted to play. A library subscriber would disclose different data from a driver stopped by a policeman, who would show a different set to the border guard checking passports. We - or more precisely, Phil Zimmermann and Carl Ellison - imagined a Web of trust, a peer-to-peer world in which we could all authenticate the people we know to each other.

Instead, partly because all the privacy stuff is so hard to use, even though it didn't have to be, we have a world where at any one time there are a handful of gatekeepers who are fighting for control of consumers and their computers in whatever the current paradigm is. In 1992, it was the desktop: Microsoft, Lotus, and Borland. In 1997, it was portals: AOL, Yahoo!, and Microsoft. In 2002, it was search: Google, Microsoft, and, well, probably still Yahoo!. Today, it's social media and the cloud: Google, Apple, and Facebook. In 2017, it will be - I don't know, something in the mobile world, presumably.

Around the time I began to sound like an anti-Facebook obsessive, an audience questioner made the smartest comment of the day: "In ten years Facebook may not exist." That's true. But most likely someone will have the data, probably the third-party brokers behind the scenes. In the fantasy future of 1992, we were our own brokers. If William Heath succeeds with personal data stores, maybe we still can be.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


May 11, 2012

Self-drive

When I first saw that Google had obtained a license for its self-driving car in the state of Nevada I assumed that the license it had been issued was a driver's license. It's disappointing to find out that what they meant was that the car had been issued with license plates so it can operate on public roads. Bah: all operational cars have license plates, but none have driver's licenses. Yet.

The Guardian has been running a poll, asking readers if they'd ride in the car or not. So far, 84 percent say yes. I would, too, I think. With a manual override and a human prepared to step in for oh, the first ten years or so.

I'm sure that Google, being a large company in a highly litigious society, has put the self-driving car through far more rigorous tests than any a human learner undergoes. Nonetheless, I think it ought to be required to get a driver's license, not just license plates. It should have to pass the driving test like everyone else. And then buy insurance, which is where we'll find out what the experts think. Will the rates for a self-driving car be more or less than for a newly licensed male aged 18 to 25?

To be fair, I've actually been to Nevada, and I know how empty most of those roads are. Even without that, I'd certainly rather ride in Google's car than on a roller coaster. I'd rather share the road with Google's car than with a drunk driver. I'd rather ride in Google's car than trust the next Presidential election to electronic voting machines.

That last may seem illogical. After all, riding in a poorly driven car can kill you. A gamed electronic voting machine can only steal your votes. The same problems with debugging software and checking its integrity apply to both. Yet many of us have taken quite long flights on fly-by-wire planes and ridden on driverless trains without giving it much thought.

But a car is *personal*. So much so that we tolerate 1.2 million deaths annually worldwide from road traffic; in 2011 alone, more than ten times as many people died on American roads as were killed in the 9/11 World Trade Center attack. Yet everyone thinks they're an above-average driver and feels safest when they're controlling their own car. Will a self-driving car be that delusional?

The timing was interesting because this week I have also been reading a 2009 book I missed, The Case for Working With Your Hands: or Why Office Work is Bad for Us and Fixing Things Feels Good. The author, Matthew Crawford, argues that manual labour, which so many middle class people have been brought up to despise, is more satisfying - and better protected against outsourcing - than anything today's white collar workers learn in college. I've been saying for years that if I had teenagers I'd be telling them to learn a trade like auto mechanics, plumbing, electrical work, nursing, or even playing live music - anything requiring skill and knowledge that can't easily be outsourced to another country in the global economy. I'd say teaching, but see last week's.

Dumb down plumbing all you want with screw-together PVC pipes and joints, but someone still has to come to your house to work on it. Even today's modern cars, with their sealed subsystems and electronic read-outs, need hands-on care once in a while. I suppose Google's car arrives back at home base and sends in a list of fix-me demands for its human minders to take care of.

When Crawford talks about the satisfaction of achieving something in the physical world, he's right, up to a point. In an interview for the Guardian in 1995 (TXT), John Perry Barlow commented to me that, "The more time I spend in cyberspace, the more I love the physical world, and any kind of direct, hard-linked interaction with it. I never appreciated the physical world anything like this much before." Now, Barlow, more than most people, knows a lot about fixing things: he spent 17 years running a debt-laden Wyoming ranch and, as he says in that piece, he spent most of it fixing things that couldn't be fixed. But I'm going to argue that it's the contrast and the choice that makes physical work seem so attractive.

Yes, it feels enormously different to know that I have personally driven across the US many times, the most notable of which was a three-and-a-half-day sprint from Connecticut to Los Angeles in the fall of 1981 (pre-GPS, I might add, without needing to look at a map). I imagine being driven across would be more like taking the train even though you can stop anywhere you like: you see the same scenery, more or less, but the feeling of personal connection would be lost. Very much like the difference between knowing the map and using GPS. Nonetheless, how do I travel across the US these days? Air. How does Barlow make his living? Being a "cognitive dissident". And Crawford writes books. At some point, we all seem to want to expand our reach beyond the purely local, physical world. Finding that balance - and employment for 9 billion people - will be one of this century's challenges.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


April 24, 2012

A really fancy hammer with a gun

Is a robot more like a hammer, a monkey, or the Harley-Davidson on which he rode into town? Or try this one: what if the police program your really cute, funny robot butler (Tony Danza? Scarlett Johansson?) to ask you a question whose answer will incriminate you (and which it then relays)? Is that a violation of the Fourth Amendment (protection against search and seizure) or the Fifth Amendment (you cannot be required to incriminate yourself)? Is it more like flipping a drug dealer or tampering with property? Forget science fiction, philosophy, and your inner biological supremacist; this is the sort of legal question that will be defined in the coming decade.

Making a start on this was the goal of last weekend's We Robot conference at the University of Miami Law School, organized by respected cyberlaw thinker Michael Froomkin. Robots are set to be a transformative technology, he argued to open proceedings, and cyberlaw began too late. Perhaps robotlaw is still a green enough field that we can get it right from the beginning. Engineers! Lawyers! Cross the streams!

What's the difference between a robot and a disembodied artificial intelligence? William Smart (Washington University, St Louis) summed it up nicely: "My iPad can't stab me in my bed." No: and as intimate as you may become with your iPad you're unlikely to feel the same anthropomorphic betrayal you likely would if the knife is being brandished by that robot butler above, which runs your life while behaving impeccably like it's your best friend. Smart sounds unsusceptible. "They're always going to be tools," he said. "Even if they are sophisticated and autonomous, they are always going to be toasters. I'm wary of thinking in any terms other than a really, really fancy hammer."

Traditionally, we think of machines as predictable because they respond the same way to the same input, time after time. But Smart, working with Neil Richards (Washington University, St Louis), points out that sensors are sensitive to distinctions analog humans can't make. A half-degree difference in temperature or a tiny change in lighting are different conditions to a robot. To us, their behaviour will just look capricious, helping to foster that anthropomorphic response and wrongly attributing to them the moral agency necessary for guilt under the law: the "Android Fallacy".

Smart and I may be outliers. The recent Big Bang Theory episode in which the can't-talk-to-women Rajesh, entranced with Siri, dates his iPhone is hilarious because in Raj's confusion we recognize our own ability to have "relationships" with almost anything by projecting human capacities such as cognition, intent, and emotions. You could call it a design flaw (if humans had a designer), and a powerful one: people send real wedding presents to TV characters, name Liquid Robotics' Wave Gliders, and characterize sending a six-legged land mine-defusing robot that's lost a leg or two back to continue work as "cruel" (Kate Darling, MIT Media Lab).

What if our rampant affection for these really fancy hammers leads us to want to give them rights? Darling asked. Or, asked Sinziana Gutiu (University of Ottawa), will sex robots like Roxxxy teach us wrong expectations of humans? (When the discussion briefly compared sex robots to pets, a Twitterer quipped, "If robots are pets is sex with them bestiality?")

Few are likely to fall in love with the avatars in the automated immigration kiosks proposed at the University of Arizona (Kristen Thomasen, University of Ottawa), with two screens, one with a robointerrogator and the other flashing images and measuring responses. Automated law enforcement, already with us in nascent form, raises a different set of issues (Lisa Shay). Historically, enforcement has never been perfect; laws only have to be "good enough" to achieve their objective, whether that's slowing traffic or preventing murder. These systems pose the same problem as electronic voting: how do we audit their decisions? In military applications, disclosure may tip off the enemy, as Woodrow Hartzog (Samford University) noted. Yet here - and especially in medicine, where liability will be a huge issue - our traditional legal structures decide whom to punish by retracing the reasoning that led to the eventual decision. But even today's systems are already too complex.

When Hartzog asks if anyone really knows how Google or a smartphone tracks us, it reminds me of a recent conversation with Ross Anderson, the Cambridge University security engineer. In 50 years, he said, we have gone from a world whose machines could all be understood by a bright ten-year-old with access to a good library to a world with far greater access to information but full of machines whose inner workings are beyond a single person's understanding. And so: what does due process look like when only seven people understand algorithms that have consequences for the fates of millions of people? Bad enough to have the equivalent of a portable airport scanner looking for guns in New York City; what about house arrest because your butler caught you admiring Timothy Olyphant's gun on Justified?

"We got privacy wrong the last 15 years." Froomkin exclaimed, putting that together. "Without a strong 'home as a fortress right' we risk a privacy future with an interrogator-avatar-kiosk from hell in every home."

The problem with robots isn't robots. The problem is us. As usual, Pogo had it right.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


April 6, 2012

I spy

"Men seldom make passes | At girls who wear glasses," Dorothy Parker incorrectly observed in 1937. (How would she know? She didn't wear any). You have to wonder what she could have made of Google Goggles which, despite the marketing-friendly alliterative name, are neither a product (yet) nor a new idea.

I first experienced the world according to a heads-up display in 1997 during a three-day conference (TXT) on wearable computing at MIT ($). The eyes-on demonstration was a game of pool with the headset augmenting my visual field with overlays showing cuing angles. (Could be the next level of Olympic testing: checking athletes for contraband contact lenses and earpieces in sports where coaching is not allowed.)

At that conference, a lot of ideas were discussed and demonstrated: temperature-controlling T-shirts, garments that could send back details of a fallen soldier's condition, and so on. Much in evidence were folks like Thad Starner, who scanned my business card and handed it back to me and whose friends commented on the way he'd shift his eyes to his email mid-conversation, and Steve Mann, who turned himself into a cyborg experiment as long ago as the 1980s. Checking their respective Web pages, I see that Mann hasn't updated the evolution of wearables graphic since the late 1990s, by which time the headset looked like an ordinary pair of sunglasses; in 2002, when airport security forced him to divest his gear, he had trouble adjusting to life without it. Starner is on leave to work at...Project Glass, the home of Google Goggles.

The problem when a technological dream spans decades is that between conception and prototype things change. In 1997, that conference seemed to think wearable computing - keyboards embroidered in conductive thread, garments made of cloth woven from copper-covered strands, souped-up eyeglasses, communications-enabled watches, and shoes providing power from the energy generated in walking - was surely a decade or less away.

The assumptions were not particularly contentious. People wear wrist watches and jewelry, right? So they'll wear things with the same fashion consciousness, but functional. Like, it measures and displays your heart rhythms (a woman danced wearing a light-flashing pendant that sped up with her heart rate), or your moods (high-tech mood rings), or acts as the controller for your personal area network.

Today, a lot of people don't *wear* wrist watches any more.

For the wearables guys, it's good progress. The functionality that required 12 pounds of machinery draped about your person - I see from my pieces linked above and my contemporaneous notes that the rig I tried felt like wearing a very heavy, inflexible sandwich board - is now an iPhone or Android phone. Even my old Palm Centro comes close. As Jack Schofield writes in the Guardian, the headset is really all that's left that we don't have. And Google has a lot of competition.

What interests me is this: let's say these things do take off in a big way. What then? Where will the information come from to display on those headsets? Who will be the gatekeepers? If we - some of us - want to see every building decorated with outsized female nudes, will we have to opt in for porn?

My speculation here is surely not going to be futuristic enough, because like most people I'm locked into current trends. But let's say that glasses bolt onto the mobile/Internet ecologies we have in place. It is easy to imagine that, if augmented reality glasses do take off, they will be an important gateway to the next generation of information services. Because if all the glasses are is a different way of viewing your mobile phone, then they're essentially today's ear pieces - surely not sufficient motivation for people with good vision to wear glasses. So, will Apple glasses require an iTunes account and an iOS device to gain access to a choice of overlays, delivered in real time from the iTunes store, that you can turn on and off? Similarly, Google/Android/Android marketplace. And Microsoft/Windows Mobile/Bing or something. And whoever.

So my questions are things like: will the hardware and software be interoperable? Will the dedicated augmented reality consumer need to have several pairs? Will it be like, "Today I'm going mountain climbing. I've subscribed to the Ordnance Survey premium service and they have their own proprietary glasses, so I'll need those. And then I need the Google set with the GPS enhancement to get me there in the car and find a decent restaurant afterwards." And then your kids are like, "No, the restaurants are crap on Google. Take the Facebook pair, so we can ask our friends." (Well, not Facebook, because the kids will be saying, "Facebook is for *old* people." Some cool, new replacement that adds gaming.)

What's that you say? These things are going to collapse in price so everyone can afford 12 pairs? Not sure. Prescription glasses just go on getting more expensive. I blame the involvement of fashion designers branding frames, but the fact is that people are fussy about what they wear on their faces.

In short, will augmented reality - overlays on the real world - be a new commons or a series of proprietary, necessarily limited, world views?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


March 30, 2012

The ghost of cash

"It's not enough to speak well of digital money," Geronimo Emili said on Wednesday. "You must also speak negatively of cash." Emili has a pretty legitimate gripe. In his home country, Italy, 30 percent of the economy is black and the gap between the amount of tax the government collects and the amount it's actually owed is €180 billion. Ouch.

This sets off a bit of inverted nationalist competition between him and the Greek lawyer Maria Giannakaki, there to explain a draft Greek law mandating direct payment of VAT from merchants' tills to eliminate fraud: which country is worse? Emili is sure it's Italy.

"We invented banks," he said. "But we love cash." Italy's cash habit costs the country €10 billion a year - and 40 percent of Europe's bank robberies.

This exchange took place at this year's Digital Money Forum, an annual event that pulls together people interested in everything from the latest mobile technology to the history of Anglo-Saxon coinage. Their common interest: what makes money work? If you, like most of this group, want to see physical cash eliminated, this is the key question.

Why Anglo-Saxon coinage? Rory Naismith explains that the 8th century began the shift from valuing coins merely for their metal content to assigning them a premium for their official status. It was the beginning of the abstraction of money: coins, paper, the elimination of the gold standard, numbers in cyberspace. Now, people like Emili and this event's convenor, David Birch, argue it's time to accept money's fully abstract nature and admit the truth: it's a collective hallucination, a "promise of a promise".

These are not just the ravings of hungry technology vendors: Birch, Emili, and others argue that the costs of cash fall disproportionately on the world's poor, and that cash is the key vector for crime and tax evasion. Our impressions of the costs are distorted because the costs of electronic payments, credit cards, and mobile wallets are transparent, while cash is free at the point of use.

When I say to Birch that eliminating cash also means eliminating the ability to transact anonymously, he says, "That's a different conversation." But it isn't, if eliminating crime and tax evasion are your drivers. In the two days, only Bitcoin was offered as an anonymous option, and it's doomed to its niche market, for whatever reason. (I think it's too complicated; Dutch financial historian Simon Lelieveldt says it will fail because it has no central bank.)

I pause to be annoyed by the claim that cash is filthy and spreads disease. This is Microsoft-level FUD, and not worthy of smart people claiming to want to benefit the poor and eliminate crime. In fact, I got riled enough to offer to lick any currency (or coins; I'm not proud) presented. I performed as promised on a fiver and a Danish note. And you know, they *kept* that money?

In 1680, says Birch, "Pre-industrial money was failing to serve an industrial revolution." Now, he is convinced, "We are in the early part of the post-industrial revolution, and we're shoehorning industrial money in to fit it. It can't last." This is pretty much what John Perry Barlow said about copyright in 1993, and he was certainly right.

But is Birch right? What kind of medium is cash? Is it a medium of exchange, like newspapers, trading stored value instead of information, or is it a format, like video tape? If it's the former, why shouldn't cash survive, even if only as a niche market? Media rarely die altogether - but formats come and go with such speed that even the more extreme predictions at this event - such as Sandra Alzetta, who said that her company expects half its transactions to be mobile by 2020 - seem quite modest. Her company is Visa International, by the way.

I'd say cash is a medium of exchange, and today's coins and notes are its format. Past formats have included shells, feathers, gold coins, and goats; what about a format for tomorrow that is printed or minted on demand, at ATMs? I ask the owner of the grocery shop around the corner if his life would be better if cash were eliminated, and he shrugs no. "I'd still have to go out and get the stuff."

What's needed is low-cost alternatives that fit in cultural contexts. Lydia Howland, whose organization IDEO works to create human-centered solutions to poverty, finds the same needs in parts of Britain that exist in countries like Kenya, where M-Pesa is succeeding in bringing access to banking and remote payments to people who have never had access to financial services before.

"Poor people are concerned about privacy," she said on Wednesday. "But they have so much anonymity in their lives that they pay a premium for every financial service." Also, because they do so much offline, there is little understanding of how they work or live. "We need to create a society where a much bigger base has a voice."

During a break, I try to sketch the characteristics of a perfect payment mechanism: convenient; transparent to the user; universally accepted; universally accessible and usable; resistant to tracking, theft, counterfeiting, and malware; and hard to steal on a large scale. We aren't there yet.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 23, 2012

The year of the future

If there's one thing everyone seemed to agree on yesterday at Nominet's annual Internet policy conference, it's that this year, 2012, is a crucial one in the development of the Internet.

The discussion had two purposes. One is to feed into Nominet's policy-making as the body in charge of .uk, in which capacity it's currently grappling with questions such as how to respond to law enforcement demands to disappear domains. The other, which is the kind of exercise net.wars particularly enjoys and that was pioneered at the Computers, Freedom, and Privacy conference (next one spring 2013, in Washington, DC), is to peer into the future and try to prepare for it.

Vint Cerf, now Google's Chief Internet Evangelist, outlined some of that future, saying that this year, 2012, will see more dramatic changes to the Internet than anything since 1983. He had a list:

- The deployment of better authentication in the form of DNSSec;

- New certification regimes to limit damage in the event of more cases like 2011's Diginotar hack;

- Internationalized domain names;

- The expansion of new generic top-level domains;

- The switch to IPv6 Internet addressing, which happens on June 6;

- Smart grids;

- The Internet of things: cars, light bulbs, surfboards (!), and anything else that can be turned into a sensor by implanting an RFID chip.

Cerf paused to throw in an update on his long-running project, the interplanetary Internet, which he's been thinking about since 1998 (TXT).

"It's like living in a science fiction novel," he said yesterday as he explained about overcoming intense network lag by using high-density laser pulses. The really cool bit: repurposing space craft whose scientific missions have been completed to become part of the interplanetary backbone. Not space junk: network nodes-in-waiting.

The contrast to Ed Vaizey, the minister for culture, communications and the creative industries at the Department of Culture, Media, and Sport, couldn't have been more marked. He summed up the Internet's governance problem as the "three Ps": pornography, privacy, and piracy. It's nice rhetorical alliteration, but desperately narrow. Vaizey's characterization of 2012 as a critical year rests on the need to consider the UK's platform for the upcoming Internet Governance Forum leading to 2014's World Information Technology Forum. When Vaizey talks about regulating with a "light touch", does he mean the same things we do?

I usually place the beginning of the who-governs-the-Internet argument at 1997, the first time the engineers met rebellion when they made a technical decision (revamping the domain name system). Until then, if the pioneers had an enemy it was governments, memorably warned off by John Perry Barlow's 1996 Declaration of the Independence of Cyberspace. After 1997, it was no longer possible to ignore the new classes of stakeholders: commercial interests and consumers.

I'm old enough as a Netizen - I've been online for more than 20 years - to find it hard to believe that the Internet Governance Forum and its offshoots do much to change the course of the Internet's development: while they're talking, Google's self-drive cars rack up 200,000 miles on San Francisco's busy streets with just one accident (the car was rear-ended; not their fault) and Facebook sucks in 800 million users (if it were a country, it would be the world's third most populous nation).

But someone has to take on the job. It would be morally wrong for governments, banks, and retailers to push us all to transact with them online if they cannot promise some level of service and security for at least those parts of the Internet that they control. And let's face it: most people expect their governments to step in if they're defrauded and criminal activity is taking place, offline or on - which is why I thought Barlow's declaration absurd at the time.

Richard Allan, director of public policy for Facebook EMEA - or should we call him Lord Facebook? - had a third reason why 2012 is a critical year: at the heart of the Internet Governance Forum, he said, is the question of how to handle the mismatch between global Internet services and the cultural and regulatory expectations that nations and individuals bring with them as they travel in cyberspace. In Allan's analogy, the Internet is a collection of off-shore islands like Iceland's Surtsey, which has been left untouched to develop its own ecosystem.

Should there be international standards imposed on such sites so that all users know what to expect? Such a scheme would overcome the Balkanization problem that erupts when sites present a different face to each nation's users and the censorship problem of blocking sites considered inappropriate in a given country. But if that's the way it goes, will nations be content to aggregate the most open standards or insist on the most closed, lowest-common-denominator ones?

I'm not sure this is a choice that can be made in any single year - they were asking this same question at CFP in 1994 - but if this is truly the year in which it's made, then yes, 2012 is a critical year in the development of the Internet.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 2, 2012

Drive by wire

The day in 1978 when I first turned on my CB radio, I discovered that all that time the people in the cars around me had been having conversations I knew nothing about. Suddenly my car seemed like a pre-Annie Sullivan Helen Keller.

Judging by yesterday's seminar on self-driving cars, something similar is about to happen, but on a much larger scale. Automate driving and then make each vehicle part of the Internet of Things and suddenly the world of motoring is up-ended.

The clearest example came from Jeroen Ploeg, who is part of a Dutch national project on Cooperative Adaptive Cruise Control. Like everyone here, Ploeg is grappling with issues that recur across all the world's densely populated zones: congestion, pollution, and safety. How can you increase capacity without building more roads (expensive) while decreasing pollution (expensive, unpleasant, and unhealthy) and increasing safety (deaths from road accidents have decreased in the UK for the last few years but are still nearly 2,000 a year)? Decreasing space between cars isn't safe for humans, who also lack the precision necessary to keep a tightly packed line of cars moving evenly. What Ploeg explains, and then demonstrates on a ride in a modified Prius through the Nottingham lunchtime streets, is that, given the ability to communicate, the cars can collaborate to keep a precise distance that solves all three problems. When he turns on the cooperative bit so that our car talks to its fellow in front of us, the advance warnings significantly smooth our acceleration and braking.

"It has a big potential to increase throughput," he says, noting that packing safely closer together can cut down trucks' fuel requirements by up to 10 percent from the reduction in headwinds.

But other than that, "There isn't a business case for it," he says sadly. No: because we don't buy cars collaboratively, we buy them individually according to personal values like top speed, acceleration, fuel efficiency, comfort, sporty redness, or fantasy.

To robot vehicle researchers, the question isn't if self-driving cars will take over - the various necessary bits of technology are too close to ready - but when and how people will accept the inevitable. There are some obvious problems. Human factors, for one. As cars become more skilled - already, they help humans park, keep in lanes, and keep a consistent speed - humans forget the techniques they've learned. Gradually, says Natasha Merat, co-director at the Institute for Transport Studies at the University of Leeds, they stop paying attention. In critical situations, her research shows, they react more slowly; in more automated urban situations they're more likely to watch DVDs unless and until they hear an alarm sound. (Curiously, her research shows that on motorways they continue to pay more attention; speed scares, apparently.) So partial automation may be more dangerous than full automation, despite seeming like a good first step.

The more fascinating thing is what happens when vehicles start to communicate. Paul Newman, head of the Mobile Robotics Unit at Oxford, proposes that your vehicle should learn your routes; one day, he imagines, a little light comes on indicating that it's ready to handle the drive itself. Newman wants to reclaim his time ("It's ridiculous to think that we're condemned to a future of congestion, accidents, and time-wasting"), but since GPS is too limited to guide an automated car - it doesn't work well inside cities, it's not fine-grained enough for parking lots - there's talk of guide boxes. Newman would rather take cues from the existing infrastructure the way humans do. But give vehicles the ability to communicate, and they can share information - maps, pictures, and sensor data. "I don't need a funky French bubble car. I want today's car with cameras and a 3G connection."

It's later, over lunch, that I realize what he's really proposing. Say all of Britain's roads are traversed once an hour by some vehicle or other. If each picks up infrastructure, geographical, and map data and shares it...you have the vehicle equivalent of Wikipedia to compete with Google's Street View.

Two topics are largely skipped at this event, both critical: fuel and security. John Miles, from Arup, argued that it's a misconception that a large percentage of today's road traffic could be moved to rail. But is it safe to assume we'll find enough fuel to run all those extra vehicles either? Traffic in the UK has increased by 85 percent since 1980; another 25 percent increase is expected in just the next 20 years.

But security is the crucial one because it must be built into V2V from the beginning. Otherwise, we're talking the apocryphal old joke about cars crashing unpredictably, like Windows.

It's easy to resist this particular future even without wondering whether people will accept statistics showing robot cars are safer if a child is killed by one: I don't even like cars that bossily remind me to wear a seatbelt. But, as several people said yesterday, I am the wrong age. The "iPod generation" don't identify cars so closely with independence, and they don't like looking up from their phones. The 30-year-old of 2032 who knows how to back into a tight parking space may be as rare as a 30-year-old today who can multiply three-digit numbers in his head. Me, I'll wave from the train.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


December 30, 2011

Ignorance is no excuse

My father was not a patient man. He could summon up some compassion for those unfortunates who were stupider than himself. What he couldn't stand was ignorance, particularly willful ignorance. The kind of thing where someone boasts about how little they know.

That said, he also couldn't abide computers. "What can you do with a computer that you can't do with a paper and pencil?" he demanded to know when I told him I was buying a friend's TRS-80 Model III in 1981. He was not impressed when I suggested that it would enable me to make changes on page 3 of a 78-page manuscript without retyping the whole thing.

My father had a valid excuse for that particular bit of ignorance or lack of imagination. It was 1981, when most people had no clue about the future of the embryonic technology they were beginning to read about. And he was 75. But I bet if he'd made it past 1984 he'd have put some effort into understanding this technology that would soon begin changing the printing industry he worked in all his life.

While computers were new on the block, and their devotees were a relatively small cult of people who could be relatively easily spotted as "other", you could see the boast "I know nothing about computers" as a replay of high school. In American movies and TV shows that would be jocks and the in-crowd on one side, a small band of miserable, bullied nerds on the other. In the UK, where for reasons I've never understood it's considered more admirable to achieve excellence without ever being seen to work hard for it, the sociology plays out a little differently. I guess here the deterrent is less being "uncool" and more being seen as having done some work to understand these machines.

Here's the problem: the people who by and large populate the ranks of politicians and the civil service are the *other* people. Recent events such as the UK's Government Digital Service launch suggest that this is changing. Perhaps computers have gained respectability at the top level from the presence of MPs who can boast that they misspent their youth playing video games rather than, like the last generation's Ian Taylor, getting their knowledge the hard way, by sweating for it in the industry.

There are several consequences of all this. The most obvious and longstanding one is that too many politicians don't "get" the Net, which is how we get legislation like the DEA, SOPA, PIPA, and so on. The less obvious and bigger one is that we - the technology-minded, the early adopters, the educated users - write them off as too stupid to talk to. We call them "congresscritters" and deride their ignorance and venality in listening to lobbyists and special interest groups.

The problem, as Emily Badger writes for Miller-McCune as part of a review of Clay Johnson's latest book, is that if we don't talk to them how can we expect them to learn anything?

This sentiment is echoed in a lecture given recently at Rutgers by the distinguished computer scientist David Farber on the technical and political evolution of the Internet (MP3) (the slides are here (PDF)). Farber's done his time in Washington, DC, as chief technical advisor to the Federal Communications Commission and as a member of the Presidential Advisory Board on Information Technology. In that talk, Farber makes a number of interesting points about what comes next technically - it's unlikely, he says, that today's Internet Protocols will be able to cope with the terabyte networks on the horizon, and reengineering is going to be a very, very hard problem because of the way humans resist change - but the more relevant stuff for this column has to do with what he learned from his time in DC.

Very few people inside the Beltway understand technology, he says there, citing the Congressman who asked him seriously, "What is the Internet?" (Well, see, it's this series of tubes...) And so we get bad - that is, poorly grounded - decisions on technology issues.

Early in the Net's history, the libertarian fantasy was that we could get on just fine without their input, thank you very much. But as Farber says, politicians are not going to stop trying to govern the Internet. And, as he doesn't quite say, it's not like we can show them that we can run a perfect world without them. Look at the problems techies have invented: spam, the flaky software infrastructure on which critical services are based, and so on. "It's hard to be at the edge in DC," Farber concludes.

So, going back to Badger's review of Johnson: the point is it's up to us. Set aside your contempt and distrust. Whether we like politicians or not, they will always be with us. For 2012, adopt your MP, your Congressman, your Senator, your local councilor. Make it your job to help them understand the bills they're voting on. Show them that even if they don't understand the technology there are votes in those who do. It's time to stop thinking of their ignorance as solely *their* fault.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


December 23, 2011

Duck amuck

Back in about 1998, a couple of guys looking for funding for their start-up were asked this: How could anyone compete with Yahoo! or Altavista?

"Ten years ago, we thought we'd love Google forever," a friend said recently. Yes, we did, and now we don't.

It's a year and a bit since I began divorcing Google. Ducking the habit is harder than those "They have no lock-in" financial analysts thought when Google went public: as if habit and adaptation were small things. Easy to switch CTRL-K in Firefox to DuckDuckGo, significantly hard to unlearn ten years of Google's "voice".

When I tell this to Gabriel Weinberg, the guy behind DDG - his recent round of funding lets him add a few people to experiment with different user interfaces and redo DDG's mobile application - he seems to understand. He started DDG, he told The Rise to the Top last year, because of the increasing amount of spam in Google's results. Frustration made him think: for many queries, wouldn't searching just Delicious and Wikipedia produce better results? Since his first weekend mashing that up, DuckDuckGo has evolved to include over 50 sources.

"When you type in a query there's generally a vertical search engine or data source out there that would best serve your query," he says, "and the hard problem is matching them up based on the limited words you type in." When DDG can make a good guess at identifying such a source - such as, say, the National Institutes of Health - it puts that result at the top. This is a significant hint: now, in DDG searches, I put the site name first, where on Google I put it last. Immediate improvement.

This approach gives Weinberg a new problem, a higher-order version of the Web's broken links: as companies reorganize, change, or go out of business, the APIs he relies on vanish.

Identifying the right source is harder than it sounds, because the long tail of queries requires DDG to make assumptions about what's wanted.

"The first 80 percent is easy to capture," Weinberg says. "But the long tail is pretty long."

As Ken Auletta tells it in Googled, the venture capitalist Ram Shriram advised Sergey Brin and Larry Page to sell their technology to Yahoo! or maybe Infoseek. But those companies were not interested: the thinking then was portals and keeping site visitors stuck as long as possible on the pages advertisers were paying for, while Brin and Page wanted to speed visitors away to their desired results. It was only when Shriram heard that, Auletta writes, that he realized that baby Google was disruptive technology. So I ask Weinberg: can he make a similar case for DDG?

"It's disruptive to take people more directly to the source that matters," he says. "We want to get rid of the traditional user interface for specific tasks, such as exploring topics. When you're just researching and wanting to find out about a topic there are some different approaches - kind of like clicking around Wikipedia."

Following one thing to another, without going back to a search engine...sounds like my first view of the Web in 1991. But it also sounds like some friends' notion of after-dinner entertainment, where they start with one word in the dictionary and let it lead them serendipitously from word to word and book to book. Can that strategy lead to new knowledge?

"In the last five to ten years," says Weinberg, "people have made these silos of really good information that didn't exist when the Web first started, so now there's an opportunity to take people through that information." If it's accessible, that is. "Getting access is a challenge," he admits.

There is also the frontier of unstructured data: Google searches the semi-structured Web by imposing a structure on it - its indexes. By contrast, Mike Lynch's Autonomy, which just sold to Hewlett-Packard for £10 billion, uses Bayesian logic to search unstructured data, which is what most companies have.

"We do both," says Weinberg. "We like to use structured data when possible, but a lot of stuff we process is unstructured."

Google is, of course, a moving target. For me, its algorithms and interface are moving in two distinct directions, both frustrating. The first is Wal-Mart: stuff most people want. The second is the personalized filter bubble. I neither want nor trust either. I am more like the scientists Linguamatics serves: its analytic software scans hundreds of journals to find hidden links suggesting new avenues of research.

Anyone entering a category that's as thoroughly dominated by a single company as search is now is constantly asked: how can you possibly compete with the incumbent? Weinberg must be sick of being asked about competing with Google. And he'd be right, because it's the wrong question. The right question is, how can he build a sustainable business? He's had some sponsorship while his user numbers are relatively low (currently 7 million searches a month) and, eventually, he's talked about context-based advertising - yet he's also promising little spam and privacy - no tracking. Now, that really would be disruptive.

So here's my bet. I bet that DuckDuckGo outlasts Groupon as a going concern. Merry Christmas.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


December 16, 2011

Location, location, location

In the late 1970s, I used to drive across the United States several times a year (I was a full-time folksinger), and although these were long, long days at the wheel, there were certain perks. One was the feeling that the entire country was my backyard. The other was the sense that no one in the world knew exactly where I was. It was a few days off from the pressure of other people.

I've written before that privacy is not sleeping alone under a tree but being able to do ordinary things without fear. Being alone on an interstate crossing Oklahoma wasn't to hide some nefarious activity (like learning the words to "There Ain't No Instant Replay in the Football Game of Life"). Turn off the radio and, aside from an occasional billboard, the world was quiet.

Of course, that was also a world in which making a phone call was a damned difficult thing to do, which is why professional drivers all had CB radios. Now, everyone has mobile phones, and although your nearest and dearest may not know where you are, your phone company most certainly does, and to a very fine degree of "granularity".

I imagine normal human denial is broad enough to encompass pretending you're in an unknown location while still receiving text messages. Which is why this year's A Fine Balance focused on location privacy.

The travel privacy campaigner Edward Hasbrouck has often noted that travel data is particularly sensitive and revealing in a way few realize. Travel data indicate your religion (special meals), medical problems, and life style habits affecting your health (choosing a smoking room in a hotel). Travel data also shows who your friends are, and how close: who do you travel with? Who do you share a hotel room with, and how often?

Location data is travel data on a steady drip of steroids. As Richard Hollis, who serves on the ISACA Government and Regulatory Advocacy Subcommittee, pointed out, location data is in fact travel data - except that instead of being detailed logging of exceptional events it's ubiquitous logging of everything you do. Soon, he said, we will not be able to opt out - and instead of travel data being a small, sequestered, unusually revealing part of our lives, all our lives will be travel data.

Location data can reveal the entire pattern of your life. Do you visit a church every Monday evening that has an AA meeting going on in the basement? Were you visiting the offices of your employer's main competitor when you were supposed to have a doctor's appointment?

Research supports this view. Some of the earliest work I'm aware of is that of Alberto Escudero-Pascual. A month-long experiment tracking the mobile phones in his department enabled him to diagram all the intra-departmental personal relations. In a 2002 paper, he suggests how to anonymize location information (PDF). The problem: no business wants anonymization. As Hollis and others said, businesses want location data. Improved personalization depends on context, and location provides a lot of that.
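
Escudero-Pascual's paper is about network design, but the underlying trade of precision for privacy is easy to illustrate. The coarsening below is my own minimal sketch, not his scheme: round coordinates to roughly kilometre-sized cells and timestamps to the hour, so individual movement patterns blur while aggregate statistics survive.

    # Minimal illustration of location coarsening (my sketch, not
    # Escudero-Pascual's actual scheme).

    from datetime import datetime

    def coarsen(lat: float, lon: float, when: datetime, cell_deg=0.01):
        coarse_lat = round(lat / cell_deg) * cell_deg
        coarse_lon = round(lon / cell_deg) * cell_deg
        coarse_time = when.replace(minute=0, second=0, microsecond=0)
        return coarse_lat, coarse_lon, coarse_time

    print(coarsen(51.52374, -0.08110, datetime(2011, 12, 16, 14, 37)))
    # -> roughly (51.52, -0.08, 2011-12-16 14:00)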

Patrick Walshe, the director of privacy for the GSM Association, compared the way people care about privacy to the way they care about their health: they opt for comfort and convenience and hope for the best. They - we - don't make changes until things go wrong. This explains why privacy considerations so often fail and privacy advocates despair: guarding your privacy is like eating your vegetables, and who except a cranky person plans their meals that way?

The result is likely to be the world that Dave Coplin, Microsoft UK's director of search, advertising, and online, outlined, arguing that privacy today is at the turning point that the Melissa virus represented for security 11 years ago when it first hit.

Calling it "the new battleground," he said, "This is what happens when everything is connected." Similarly, Blaine Price, a senior lecturer in computing at the Open University, had this cheering thought: as humans become part of the Internet of Things, data leakage will become almost impossible to avoid.

Network externalities mean that the number of people using a network increase its value for all other users of that network. What about privacy externalities? I haven't heard the phrase before, although I see it's not new (PDF). But I mean something different than those papers do: the fact that we talk about privacy as an individual choice when instead it's a collaborative effort. A single person who says, "I don't care about my privacy" can override the pro-privacy decisions of dozens of their friends, family, and contacts. "I'm having dinner with @wendyg," someone blasts, and their open attitude to geolocation reveals mine.

In his research on tracking, Price has found that the more closely connected the trackers are the less control they have over such decisions. I may worry that turning on a privacy block will upset my closest friend; I don't obsess at night, "Will the phone company think I'm mad at it?"

So: you want to know where I am right now? Pay no attention to the geolocated Twitterer who last night claimed to be sitting in her living room with "wendyg". That wasn't me.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

December 2, 2011

Debating the robocalypse

"This House fears the rise of artificial intelligence."

This was the motion up for debate at Trinity College Dublin's Philosophical Society (Twitter: @phil327) last night (December 1, 2011). It was a difficult one, because I don't think any of the speakers - neither the four students, Ricky McCormack, Michael Coleman, Cat O'Shea, and Brian O'Beirne, nor the invited guests, Eamonn Healy, Fred Cummins, and Abraham Campbell - honestly fear AI all that much. Either we don't really believe a future populated by superhumanly intelligent killer robots is all that likely, or, like Ken Jennings, we welcome our new computer overlords.

But the point of this type of debate is not to believe what you are saying - I learned later that in the upper levels of the game you are assigned a topic and a position and given only 15 minutes to marshal your thoughts - but to argue your assigned side so passionately, persuasively, and coherently that you win the votes of the assembled listeners, even if later that night, while raiding the icebox, they think, "Well, hang on..." This is where politicians and the Dail/House of Commons debating style come from. As a participatory sport it was utterly new to me, and it explains a *lot* about the derailment of political common sense by the rise of public relations and lobbying.

Obviously I don't actually oppose research into AI. I'm all for better tools, although I vituperatively loathe tools that try to game me. As much fun as it is to speculate about whether superhuman intelligences will deserve human rights, I tend to believe that AI will always be a tool. It was notable that almost every speaker assumed that AI would be embodied in a more-or-less humanoid robot. Far more likely, it seems to me, that if AI emerges it will be first in some giant, boxy system (that humans can unplug) and even if Moore's Law shrinks that box it will be much longer before AI and robotics converge into a humanoid form factor.

Lacking conviction on the likelihood of all this, and hence of its dangers, I had to find an angle, which eventually boiled down to Walt Kelly and We have met the enemy and he is us. In this, I discovered, I am not alone: a 2007 ThinkArtificial poll found that more than half of respondents feared what people would do with AI: the people who program it, own it, and deploy it.

If we look at the history of automation to date, a lot of it has been used to make (human) workers as interchangeable as possible. I am old enough to remember, for example, being able to walk down to the local phone company in my home town of Ithaca, NY, and talk in person to a customer service representative I had met multiple times before about my piddling residential account. Give everyone the same customer relationship database and workers become interchangeable parts. We gain some convenience - if Ms Jones is unavailable anyone else can help us - but we pay in lost relationships. The company loses customer loyalty, but gains (it hopes) consistent implementation of its rules and the economic leverage of no longer depending on any particular set of workers.

I might also have mentioned automated trading systems, which are making the markets swing much more wildly much more often. Later, Abraham Campbell, a computer scientist working in augmented reality at University College Dublin, said as much as 25 percent of trading is now done by bots. So, cool: Wall Street has become like one of those old IRC channels where you met a cute girl named Eliza...

Campbell had a second example: Siri, which will tell you where to hide a dead body but not where you might get an abortion. Google's removal of torrent sites from its autosuggestion/Instant feature didn't seem to me egregious censorship, partly because there are other search engines and partly (short-sightedly) because I hate Instant so much already. But as we become increasingly dependent on mediators to help us navigate our overcrowded world, the agenda and/or competence of the people programming them are vital to know. These will be transparent only as long as there are alternatives.

Simultaneously, back in England in work that would have made Jessica Mitford proud, Privacy International's Eric King and Emma Draper were publishing material that rather better proves the point. Big Brother Inc lays out the dozens of technology companies from democratic Western countries that sell surveillance technologies to repressive regimes. King and Draper did what Mitford did for the funeral business in the late 1960s (and other muckrakers have done since): investigate what these companies' marketing departments tell prospective customers.

I doubt businesses will ever, without coercion, behave like humans with consciences; it's why they should not be legally construed as people. During last night's debate, the prospective robots were compared to women and "other races", who were also denied the vote. Yes, and they didn't get it without a lot of struggle. In the "Robocalypse" (O'Beirne), they'd better be prepared to either a) fight to meltdown for their rights or b) protect their energy sources and wait patiently for the human race to exterminate itself.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 4, 2011

The identity layer

This week, the UK government announced a scheme - Midata - under which consumers will be able to reclaim their personal information. The same day, the Centre for the Study of Financial Innovation assembled a group of experts to ask what the business model for online identification should be. And: whatever that model is, what the government's role should be. (For background, here's the previous such discussion.)

My eventual thought was that the government's role should be to set standards; it might or might not also be an identity services provider. The government's inclination now is to push this job to the private sector. That leaves the question of how to serve those who are not commercially interesting; at the CSFI meeting the Post Office seemed the obvious contender for both pragmatic and historical reasons.

As Mike Bracken writes in the Government Digital Service blog posting linked above, the notion of private identity providers is not new. But what he seems to assume is that what's needed is federated identity - that is, in Wikipedia's definition, a means for linking a person's electronic identity and attributes across multiple distinct systems. What I meant is a system in which one may have many limited identities that are sufficiently interoperable that you can make a choice which to use at the point of entry to a given system. We already have something like this on many blogs, where commenters may be offered a choice of logging in via Google, OpenID, or simply posting a name and URL.

The government gateway circa Year 2000 offered a choice: getting an identity certificate required payment of £50 to, if I remember correctly, Experian or Equifax, or other companies whose interest in preserving personal privacy is hard to credit. The CSFI meeting also mentioned tScheme - an industry consortium to provide trust services. Outside of relatively small niches it's made little impact. Similarly, fifteen years ago, the government intended, as part of implementing key escrow for strong cryptography, to create a network of trusted third parties that it would license and, by implication, control. The intention was that the TTPs should be folks that everyone trusts - like banks. Hilarious, we said *then*. Moving on.

In between then and now, the government also mooted a completely centralized identity scheme - that is, the late, unlamented ID card. Meanwhile, we've seen the growth of a set of competing American/global businesses who all would like to be *the* consumer identity gateway and who managed to steal first-mover advantage from existing financial institutions. Facebook, Google, and Paypal are the three most obvious. Microsoft had hopes, perhaps too early, when in 1999 it created Passport (now Windows Live ID). More recently, it was the home for Kim Cameron's efforts to reshape online identity via the company's now-cancelled CardSpace, and Brendon Lynch's adoption of U-Prove, based on Stefan Brands' technology. U-Prove is now being piloted in various EU-wide projects. There are probably lots of other organizations that would like to get in on such a scheme, if only because of the data and linkages a federated system would grant them. Credit card companies, for example. Some combination of mobile phone manufacturers, mobile network operators, and telcos. Various medical outfits, perhaps.

An identity layer that gives fair and reasonable access to a variety of players who jointly provide competition and consumer choice seems like a reasonable goal. But it's not clear that this is what either the UK's distastefully spelled "Midata" or the US's NSTIC (which attracted similar concerns when first announced) has in mind. What "federated identity" sounds like is the convenience of "single sign-on", which is great if you're working in a company and need to use dozens of legacy systems. When you're talking about identity verification for every type of transaction you do in your entire life, however, a single gateway is a single point of failure and, as Stephan Engberg, founder of the Danish company Priway, has often said, a single point of control. It's the Facebook cross-all-the-streams approach, embedded everywhere. Engberg points to a discussion paper inspired by two workshops he facilitated for the Danish National IT and Telecom Agency (NITA) in late 2010 that covers many of these issues.

Engberg, who describes himself as a "purist" when it comes to individual sovereignty, says the only valid privacy-protecting approach is to ensure that each time you go online on each device you start a new session that is completely isolated from all previous sessions and then have the choice of sharing whatever information you want in the transaction at hand. The EU's LinkSmart project, which Engberg was part of, created middleware to do precisely that. As sensors and RFID chips spread along with IPv6, which can give each of them its own IP address, linkages across all parts of our lives will become easier and easier, he argues.

We've seen often enough that people will choose convenience over complexity. What we don't know is what kind of technology will emerge to help us in this case. The devil, as so often, will be in the details.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 21, 2011

Printers on fire

It used to be that if you thought things were spying on you, you were mentally disturbed. But you're not paranoid if they're really out to get you, and new research at Columbia University, with funding from DARPA's Crash program, exposes how vulnerable today's devices are. Routers, printers, scanners - anything with an embedded system and an IP address.

Usually what's dangerous is monoculture: Windows is a huge target. So, argue Columbia computer science professor Sal Stolfo and PhD student Ang Cui, device manufacturers rely on security by diversity: every device has its own specific firmware. Cui estimates, for example, that there are 300,000 different firmware images for Cisco routers, varying by feature set, model, operating system version, hardware, and so on. Sure, you could attack one - but what's the payback? Especially compared to that nice, juicy Windows server over there?

"In every LAN there are enormous numbers of embedded systems in every machine that can be penetrated for various purposes," says Cui.

The payback is access to that nice, juicy server and, indeed, the whole network. Few update - or even check - firmware. So once inside, an attacker can lurk unnoticed until the device is replaced.

Cui started by asking: "Are embedded systems difficult to hack? Or are they just not low-hanging fruit?" There isn't, notes Stolfo, an industry providing protection for routers, printers, the smart electrical meters rolling out across the UK, or the control interfaces that manage conference rooms.

If there is, after seeing their demonstrations, I want it.

Their work is two-pronged: first demonstrate the need, then propose a solution.

Cui began by developing a rootkit for Cisco routers. Despite the diversity of firmware and each image's memory layout, routers are a monoculture in that they all perform the same functions. Cui used this insight to find the invariant elements and fingerprint them, making them identifiable in the memory space. From that, he can determine which image is in place and deduce its layout.

"It takes a millisecond."

Once in, Cui sets up a control channel over ping packets (ICMP) to load microcode, reroute traffic, and modify the router's behaviour. "And there's no host-based defense, so you can't tell it's been compromised." The amount of data sent over the control channel is too small to notice - perhaps a packet per second.

"You can stay stealthy if you want to."

You could even kill the router entirely by modifying the EEPROM on the motherboard. How much fun to be the army or a major ISP and physically connect to 10,000 dead routers to restore their firmware from backup?

They presented this at WOOT (Quicktime), and then felt they needed something more dramatic: printers.

"We turned off the motor and turned up the fuser to maximum." Result: browned paper and...smoke.

How? By embedding a firmware update in an apparently innocuous print job. This approach is familiar: embedding programs where they're not expected is a vector for viruses in Word and PDFs.

"We can actually modify the firmware of the printer as part of a legitimate document. It renders correctly, and at the end of the job there's a firmware update." It hasn't been done before now, Cui thinks, because there isn't a direct financial pay-off and it requires reverse-engineering proprietary firmware. But think of the possibilities.

"In a super-secure environment where there's a firewall and no access - the government, Wall Street - you could send a resume to print out." There's no password. The injected firmware connects to a listening outbound IP address, which responds by asking for the printer's IP address to punch a hole inside the firewall.

"Everyone always whitelists printers," Cui says - so the attacker can access any computer. From there, monitor the network, watch traffic, check for regular expressions like names, bank account numbers, and social security numbers, sending them back out as part of ping messages.

"The purpose is not to compromise the printer but to gain a foothold in the network, and it can stay for years - and then go after PCs and servers behind the firewall." Or propagate the first printer worm.

Stolfo and Cui call their answer a "symbiote", after biological symbiosis, in which two organisms attach to each other to mutual benefit.

The goal is code that works on an arbitrarily chosen executable about which you have very little knowledge. Emulating a biological symbiote, which finds places to attach to the host and extract resources, Cui's symbiote first calculates a secure checksum across all the static regions of the code, then finds random places where its code can be injected.

"We choose a large number of these interception points - and each time we choose different ones, so it's not vulnerable to a signature attack and it's very diverse." At each device access, the symbiote steals a little bit of the CPU cycle (like an RFID chip being read) and automatically verifies the checksum.

"We're not exploiting a vulnerability in the code," says Cui, "but a logical fallacy in the way a printer works." Adds Stolfo, "Every application inherently has malware. You just have to know how to use it."

Never mind all that. I'm still back at that printer smoking. I'll give up my bank account number and SSN if you just won't burn my house down.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


September 30, 2011

Trust exercise

When do we need our identity to be authenticated? Who should provide the service? Whom do we trust? And, to make it sustainable, what is the business model?

These questions have been debated ever since the early 1990s, when the Internet and the technology needed to enable the widespread use of strong cryptography arrived more or less simultaneously. Answering them is a genuinely hard problem (or it wouldn't be taking so long).

A key principle that emerged from the crypto-dominated discussions of the mid-1990s is that authentication mechanisms should be role-based and limited by "need to know"; information would be selectively unlocked and in the user's control. The policeman stopping my car at night needs to check my blood alcohol level and the validity of my driver's license, car registration, and insurance - but does not need to know where I live unless I'm in violation of one of those rules. Cryptography, properly deployed, can be used to protect my information, authenticate the policeman, and then authenticate the violation result that unlocks more data.
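
The roadside stop translates naturally into an attribute query: the verifier gets a yes/no answer, not the record. The toy below is my own sketch of the principle - a real system would use signed credentials and selective-disclosure cryptography rather than a plaintext lookup, and the data and role names are invented:

    # Toy illustration of role-based, need-to-know disclosure (my sketch;
    # real schemes use signed attribute credentials, not a plaintext record).
    # The policeman's query unlocks only the answers his role entitles him to.

    DRIVER_RECORD = {
        "licence_valid": True,
        "insurance_valid": True,
        "over_alcohol_limit": False,
        "home_address": "17 Example Road",   # hypothetical data
    }

    ROLE_ENTITLEMENTS = {
        "roadside_police": {"licence_valid", "insurance_valid", "over_alcohol_limit"},
    }

    def query(role: str, attribute: str):
        if attribute not in ROLE_ENTITLEMENTS.get(role, set()):
            raise PermissionError(f"{role} has no need to know {attribute}")
        return DRIVER_RECORD[attribute]

    print(query("roadside_police", "licence_valid"))      # True
    try:
        print(query("roadside_police", "home_address"))
    except PermissionError as e:
        print(e)                                          # need-to-know enforced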

Today's stored-value cards - London's Oyster travel card, or Starbucks' payment/wifi cards - when used anonymously do capture some of what the crypto folks had in mind. But the crypto folks also imagined that anonymous digital cash or identification systems could be supported by selling standalone products people installed. This turned out to be wholly wrong: many tried, all failed. Which leads to today, where banks, telcos, and technology companies are all trying to figure out who can win the pool by becoming the gatekeeper - our proxy. We want convenience, security, and privacy, probably in that order; they want security and market acceptance, also probably in that order.

The assumption is we'll need that proxy because large institutions - banks, governments, companies - are still hung up on identity. So although the question should be whom do we - consumers and citizens - trust, the question that ultimately matters is whom do *they* trust? We know they don't trust *us*. So will it be mobile phones, those handy devices in everyone's pockets that are online all the time? Banks? Technology companies? Google has launched Google Wallet, and Facebook has grand aspirations for its single sign-on.

This was exactly the question Barclaycard's Tom Gregory asked at this week's Centre for the Study of Financial Innovation round-table discussion (PDF). It was, of course, a trick, but he got the answer he wanted: out of banks, technology companies, and mobile network operators, most people picked banks. Immediate flashback.

The government representatives who attended Privacy International's 1997 Scrambling for Safety meeting assumed that people trusted banks and that therefore they should be the Trusted Third Parties providing key escrow. Brilliant! It was instantly clear that the people who attended those meetings didn't trust their banks as much as all that.

One key issue is that, as Simon Deane-Johns writes in his blog posting about the same event, "identity" is not a single, static thing; it is dynamic and shifts constantly as we add to the collection of behaviors and data representing it.

As long as we equate "identity" with "a person's name" we're in the same kind of trouble the travel security agencies are in when they try to predict who will become a terrorist on a particular flight. Like the browser fingerprint, we are more uniquely identifiable by the collection of our behaviors than we are by our names, as detectives who search for missing persons know. The target changes his name, his jobs, his home, and his wife - but if his obsession is chasing after trout he's still got a fishing license. Even if a link between a Starbucks card and its holder's real-world name is never formed, the more data the card's use feeds into the system, the more clearly recognizable as an individual he will be. The exact tag really doesn't matter in terms of understanding his established identity.

What I like about Deane-Johns' idea -

"the solution has to involve the capability to generate a unique and momentary proof of identity by reference to a broad array of data generated by our own activity, on the fly, which is then useless and can be safely discarded"

is two things. First, it has potential as a way to make impersonation and identity fraud much harder. Second is that implicit in it is the possibility of two-way authentication, something we've clearly needed for years. Every large organization still behaves as though its identity is beyond question whereas we - consumers, citizens, employees - need to be thoroughly checked. Any identity infrastructure that is going to be robust in the future must be built on the understanding that with today's technology anyone and anything can be impersonated.
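One way to read that - and this is my sketch, not Deane-Johns' actual design - is a one-time proof derived from a slice of recent activity, verified once and then thrown away:

```python
# Sketch of a momentary, activity-derived proof of identity. The activity
# fields and the shared-key arrangement are assumptions for illustration.
import hmac, hashlib, json, os, secrets, time

def momentary_proof(activity_events, key):
    """Derive a short-lived proof from a slice of recent activity."""
    payload = json.dumps({"events": activity_events,
                          "minute": int(time.time() // 60),
                          "nonce": secrets.token_hex(8)}, sort_keys=True)
    return payload, hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()

def verify_once(payload, proof, key, seen):
    """Accept the proof once, then remember it so it can't be replayed."""
    ok = hmac.compare_digest(
        proof, hmac.new(key, payload.encode(), hashlib.sha256).hexdigest())
    if ok and proof not in seen:
        seen.add(proof)          # after this, the proof is useless
        return True
    return False

key = os.urandom(32)             # assumed to be established out-of-band
seen = set()
payload, proof = momentary_proof(["card@coffee_shop", "oyster@tube"], key)
print(verify_once(payload, proof, key, seen))   # True
print(verify_once(payload, proof, key, seen))   # False: already used, safely discarded
```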

As an aside, it was remarkable how many people at this week's meeting were more concerned about having their Gmail accounts hacked than their bank accounts. My reasoning would be that the stakes with the bank are higher: I'd rather lose my email reputation than my house. Their reasoning is that the banking industry is more responsive to customer problems than technology companies are. That truly represents a shift from 1997, when technology companies were smaller and more responsive.

More to come on these discussions...


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 23, 2011

Your grandmother's phone

In my early 20s I had a friend who was an expert at driving cars with...let's call them quirks. If he had to turn the steering wheel 15 degrees to the right to keep the car going straight while peering between smears left by the windshield wipers and pressing just the exact right amount on the brake pedal, no problem. This is the beauty of humans: we are adaptable. That characteristic has made us the dominant species on the planet, since we can adapt to changes of habitat, food sources, climate (within reason), and cohorts. We also adapt to our tools, which is why technology designers get away with flaws like the iPhone's "death grip". We don't like it - but we can deal with it.

At least, we can deal with it when we know what's going on. At this week's Senior Market Mobile, the image that stuck in everyone's mind came early in the day, when Cambridge researchers Ian Hosking and Mike Bradley played a video clip of a 78-year-old woman trying to figure out how to get past an iPad's locked screen. Was it her fault that it seemed logical to her to hold it in one hand while jabbing at it in frustration? As Donald Norman wrote 20 years ago, for an interface to be intuitive it has to match the user's mental model of how it works.

That 78-year-old's difficulties, when compared with the glowing story of the 100-year-old who bonded instantly with her iPad, make another point: age is only one aspect of a person's existence - and one whose relevance they may reject. If you're having trouble reading small type, remembering the menu layout, pushing the buttons, or hearing a phone call, what matters isn't that you're old but that you have a vision impairment, cognitive difficulties, less dextrous fingers, or hearing loss. You don't have to be old to have any of those things - and not all old people have them.

For those reasons, the design decisions intended to aid seniors - who, my God, are defined as anyone over 55! - aid many other people too. All of these points were made with clarity by Mark Beasley, whose company specializes in marketing to seniors - you know, people who, unlike predominantly 30-something designers and marketers, don't think they're old and who resent being lumped together with a load of others with very different needs on the basis of age. And who think it's not uncool to be over 50. (How ironic, considering that when the Baby Boomers were 18 they minted the slogan, "Never trust anyone over 30.")

Besides physical attributes and capabilities, the cultural background of a target audience matters more than their age per se. We who learned to type on manual typewriters bash keyboards a lot harder than those who grew up with computers. Those who grew up with the phone grudgingly sited in the hallway, using it only for the briefest of conversations, are less likely to be geared toward settling in for a long, loud, intimate conversation on a public street.

Last year at this event, Mobile Industry Review editor Ewan McLeod lambasted the industry because even the iPhone did not effectively serve his parents' greatest need: an easy way to receive and enjoy pictures of their grandkids. This year, Stuart Arnott showed off a partial answer, Mindings, a free app for Android tablets that turns them into smart display frames. You can send them pictures or text messages or, in Arnott's example, a reminder to take medication that, when acknowledged by a touch, goes on to display the picture or message the owner really wants to see.

Another project in progress, Threedom, is an attempt to create an Android interface with only three buttons, using big icons and type to provide the same functionality much more simply.

The problem with all of this - which Arnott seems to have grasped with Mindings - is that so many of these discussions focus on the mobile phone as a device in isolation. But that's not really learning the lesson of the iPod/iPhone/iPad, which is that what matters is the ecology surrounding the device. It is true that a proportion of today's elderly do not use computers or understand why they suddenly need a mobile phone. But tomorrow's elderly will be radically different. Depending on class and profession, people who are 60 now are likely to have spent many years of their working lives using computers and mobile phones. When they reach 86, what will dictate their choice of phone will be only partly whatever impairments age may bring. A much bigger issue is going to be the legacy and other systems that the phone has to work with: implantable electronic medical devices, smart electrical meters, ancient software in use because it's familiar (and has too much data locked inside it), maybe even that smart house they keep telling us we're going to have one of these days. Those phones are going to have to do a lot more than just make it easy to call your son.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 22, 2011

Face to face

When, six weeks or so back, Facebook implemented facial recognition without asking anyone much in advance, Tim O'Reilly expressed the opinion that it is impossible to turn back the clock and pretend that facial recognition doesn't exist or can be stopped. We need, he said, to stop trying to control the existence of these technologies and instead concentrate on controlling the uses to which collected data might be put.

Unless we're prepared to ban face recognition technology outright, having it available in consumer-facing services is a good way to get society to face up to the way we live now. Then the real work begins, to ask what new social norms we need to establish for the world as it is, rather than as it used to be.

This reminds me of the argument that we should be teaching creationism in schools in order to teach kids critical thinking: it's not the only, or even best, way to achieve the object. If the goal is public debate about technology and privacy, Facebook isn't a good choice to conduct it.

The problem with facial recognition, unlike a lot of other technologies, is that it's retroactive, like a compromised private cryptography key. Once the key is known, you haven't just unlocked the few messages you're interested in but everything ever encrypted with that key. Accurate facial recognition, suddenly deployed, means that the passers-by in holiday photographs, CCTV images, and old TV footage of demonstrations are all much more easily matched to today's tagged, identified social media sources. It's a step change, and it's happening very quickly after a long period of doesn't-work-as-hyped. So what was a low-to-moderate privacy risk five years ago is suddenly a much higher risk - and one that can't be withdrawn with any confidence by deleting your account.

There's a second analogy here between what's happening with personal data and what's happening to small businesses with respect to hacking and financial crime. "That's where the money is," the bank robber Willie Sutton explained when asked why he robbed banks. But banks are well defended by large security departments. Much simpler to target weaker links, the small businesses whose money is actually being stolen. These folks do not have security departments and have not yet assimilated Benjamin Woolley's 1990s observation that cyberspace is where your money is. The democratization of financial crime has a more direct personal impact because the targets are closer to home: municipalities, local shops, churches, all more geared to protecting cash registers and collection plates than to securing computers, routers, and point-of-sale systems.

The analogy to personal data is that until relatively recently most discussions of privacy invasion similarly focused on celebrities. Today, most people can be studied as easily as famous, well-documented people if something happens to make them interesting: the democratization of celebrity. And there are real consequences. Canada, for example, is doing much more digging at the border, banning entry based on long-ago misdemeanors. We can warn today's teens that raiding a nearby school may someday limit their freedom to travel; but today's 40-somethings can't make an informed choice retroactively.

Changing this would require the US to decide at a national level to delete such data; we would have to trust them to do it; and other nations would have to agree to do the same. But the motivation is not there. Judith Rauhofer, at the online behavioral advertising workshop she organised a couple of weeks ago, addressed exactly this point when she noted that increasingly the mantra of governments bent on surveillance is, "This data exists. It would be silly not to use it."

The corollary, and the reason O'Reilly is not entirely wrong, is that governments will also say, "This *technology* exists. It would be silly not to use it." We can ban social networks from deploying new technologies, but we will still be stuck with them when it comes to governments and law enforcement. In this, government and business interests align perfectly.

So what, then? Do we stop posting anything online on the basis of the old spy motto "Never volunteer information", thereby ending our social participation? Do we ban the technology (which does nothing to stop the collection of the data)? Do we ban collecting the data (which does nothing to stop the technology)? Do we ban both and hope that all the actors are honest brokers rather than shifty folks trading our data behind our backs? What happens if thieves figure out how to use online photographs to break into systems protected by facial recognition?

One common suggestion is that social norms should change in the direction of greater tolerance. That may happen in some aspects, although Anders Sandberg has an interesting argument that transparency may in fact make people more judgmental. But if the problem of making people perfect were so easily solved we wouldn't have spent thousands of years on it with very little progress.

I don't like the answer "It's here, deal with it." I'm sure we can do better than that. But these are genuinely tough questions. The start, I think, has to be building as much user control into technology design (and its defaults) as we can. That's going to require a lot of education, especially in Silicon Valley.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 8, 2011

The grey hour

There is a fundamental conundrum that goes like this. Users want free information services on the Web. Advertisers will support those services if users will pay in personal data rather than money. Are privacy advocates spoiling a happy agreement or expressing a widely held concern that just hasn't found expression yet? Is it paternalistic and patronizing to say that the man on the Clapham omnibus doesn't understand the value of what he's giving up? Is it an expression of faith in human nature to say that on the contrary, people on the street are smart, and should be trusted to make informed choices in an area where even the experts aren't sure what the choices mean? Or does allowing advertisers free rein mean the Internet will become a highly distorted, discriminatory, immersive space where the most valuable people get the best offers in everything from health to politics?

None of those questions are straw men. The middle two are the extreme end of the industry point of view as presented at the Online Behavioral Advertising Workshop sponsored by the University of Edinburgh this week. That extreme shouldn't be ignored; Kimon Zorbas from the Internet Advertising Bureau, who voiced those views, also genuinely believes that regulating behavioral advertising is a threat to European industry. Can you prove him wrong? If you're a politician intent on reelection, hear that pitch, and can't document harm, do you dare to risk it?

At the other extreme end are the views of Jeff Chester, from the Center for Digital Democracy, who laid out his view of the future both here and at CFP a few weeks ago. If you read the reports the advertising industry produces for its prospective customers, they're full of neuroscience and eyeball tracking. Eventually, these practices will lead, he argues, to a highly discriminatory society: the most "valuable" people will get the best offers - not just in free tickets to sporting events but the best access to financial and health services. Online advertising contributed to the subprime loan crisis and the obesity crisis, he said. You want harm?

It's hard to assess the reality of Chester's argument. I trust his research into the documents advertising companies produce for their prospective customers. What isn't clear is whether the neuroscience these companies claim actually works. Certainly, one participant here says real neuroscientists heap scorn on the whole idea - and I am old enough to remember the mythology surrounding subliminal advertising.

Accordingly, the discussion here seems to me less of a single spectrum and more like a triangle, with the defenders of online behavioural advertising at one point, Chester and his neuroscience at another, and perhaps Judith Rauhofer, the workshop's organizer, at a third, with a lot of messy confusion in the middle. Upcoming laws, such as the revision of the EU ePrivacy Directive and various other regulatory efforts, will have to create some consensual order out of this triangular chaos.

The fourth episode of Joss Whedon's TV series Dollhouse, "The Gray Hour", had that week's characters enclosed inside a vault. They have an hour - the window while the security system reboots - to accomplish their mission of theft. Is this online behavioral advertising's grey hour? Their opportunity to get ahead before we realize what's going on?

A persistent issue is definitely technology design.

One of Rauhofer's main points is that the latest mantra is, "This data exists, it would be silly not to take advantage of it." This is her answer to one of those middle positions, which holds that we should not be regulating the collection of data but simply its use. Her objection makes sense to me: no one can abuse data that has not been collected. And what does a privacy policy mean when the company that is actually collecting the data and compiling profiles is completely hidden?

One help would be teaching computer science students ethics and responsible data practices. The science fiction writer Charlie Stross noted the other day that the average age of entrepreneurs in the US is roughly ten years younger than in the EU. The reason: health insurance. Isn't it possible that starting up at a more mature age leads to a different approach to the social impact of what you're selling?

No one approach will solve this problem within the time we have to solve it. On the technology side, defaults matter. The "software choice architect", in researcher Chris Soghoian's phrase, is rarely the software developer; more usually it's the legal or marketing department. The three biggest browser manufacturers, the ones most funded by advertising, not-so-mysteriously have the least privacy-friendly default settings. Advertising is becoming an arms race: first cookies, then Flash cookies, now online behavioral advertising, browser fingerprinting, geolocation, comprehensive profiling.

The law also matters. Peter Hustinx, lecturing last night, believes existing principles are right; they just need stronger enforcement and better application.

Consumer education would help - but for that to be effective we need far greater transparency from all these - largely American - companies.

What harm can you show has happened? Zorbas challenged. Rauhofer's reply: you do not have to prove harm when your house is bugged and constantly wiretapped. "That it's happening is the harm."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 17, 2011

If you build it...

Lawrence Lessig once famously wrote that "Code is law". Today, at the last day of this year's Computers, Freedom, and Privacy, Ross Anderson's talk about the risks of centralized databases suggested a corollary: Architecture is policy. (A great line and all mine, so I thought, until reminded that only last year CFP had an EFF-hosted panel called exactly that.)

You may *say* that you value patient (for example) privacy. And you may believe that your role-based access rules will be sufficient to protect a centralized database of personal health information (for example), but do the math. The NHS's central database, Anderson said, includes data on 50 million people that is accessible by 800,000 people - about the same number as had access to the diplomatic cables that wound up being published by Wikileaks. And we all saw how well that worked. (Perhaps the Wikileaks Unit could be pressed into service as a measure of security risk.)
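To put rough numbers on "do the math" - the misuse rate below is a purely hypothetical assumption of mine, not a figure Anderson cited:

```python
# Illustrative arithmetic only; the misuse rate is a made-up number.
insiders = 800_000                       # people with access to the central database
hypothetical_misuse_rate = 1 / 10_000    # assumed: one insider in 10,000 per year

print(insiders * hypothetical_misuse_rate)   # 80 potential snoops or leaks a year
```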

So if you want privacy-protective systems, you want the person vendors build for - "the man with the checkbook" - to be someone who understands what policies will actually be implemented by your architecture and who will be around the table at the top level of government, where policy is being drafted. When the man with the checkbook is a doctor, you get a very different, much more functional, much more privacy-protective system. When governments recruit and listen to a CIO, you do not get a giant centralized, administratively convenient Wikileaks Unit.

How big is the threat?

Assessing that depends a lot, said Bruce Schneier, on whether you accept the rhetoric of cyberwar (Americans, he noted, are only willing to use the word "war" when there are no actual bodies involved). If we are at war, we are a population to be subdued; if we are in peacetime we are citizens to protect. The more the rhetoric around cyberwar takes over the headlines, the harder it will be to get privacy protection accepted as an important value. So many other debates all unfold differently depending whether we are rhetorically at war or at peace: attribution and anonymity; the Internet kill switch; built-in and pervasive wiretapping. The decisions we make to defend ourselves in wartime are the same ones that make us more vulnerable in peacetime.

"Privacy is a luxury in wartime."

Instead, "This" - Stuxnet, attacks on Sony and Citibank, state-tolerated (if not state-sponsored) hacking - "is what cyberspace looks like in peacetime." He might have, but didn't, say, "This is the new normal." But if on the Internet in 1995 no one knew you were a dog; on the Internet in 2011 no one knows whether your cyberattack was launched by a government-sponsored military operation or a couple of guys in a Senegalese cybercafé.

Why Senegalese? Because earlier, Mouhamadou Lo, a legal advisor from the Computing Agency of Senegal, had explained that cybercrime affects everyone. "Every street has two or three cybercafés," he said. "People stay there morning to evening and send spam around the world." And every day in his own country there are one or two victims. "It shows that cybercrime is worldwide."

And not only crime. The picture of a young Senegalese woman, posted on Facebook, appeared in the press in connection with the Strauss-Kahn affair because it seemed to correspond to a description given of the woman in the case. She did nothing wrong; but there are still consequences back home.

Somehow I doubt the solution to any of this will be found in the trend the ACLU's Jay Stanley and others highlighted towards robot policing. Forget black helicopters and CCTV; what about infrared cameras that capture private moments in the dark, or helicopters the size of hummingbirds that "hover and stare"? The mayor of Ogden, Utah, wants blimps over his city, and, as Vernon M Keenan, director of the Georgia Bureau of Investigation, put it, "Law enforcement does not do a good job of looking at new technologies through the prism of civil liberties."

Imagine, said the ACLU's Jay Stanley: "The chilling prospect of 100 percent enforcement."

Final conference thoughts, in no particular order:

- This is the first year of CFP (and I've been going since 1994) where Europe and the UK are well ahead on considering a number of issues. One was geotracking (Europe has always been ahead in mobile phones); but also electronic health care records and how to manage liability for online content. "Learn from our mistakes!" pleaded one Dutch speaker (re health records).

- #followfriday: @sfmnemonic; @privacywonk; @ehasbrouck; @CenDemTech; @openrightsgroup; @privacyint; @epic; @cfp11.

- The market in secondary use of health care data is now $2 billion (PricewaterhouseCoopers via Latanya Sweeney).

- Index on Censorship has a more thorough write-up of Bruce Schneier's talk.

- Today was IBM's 100th birthday.

- This year's chairs, Lillie Coney (EPIC) and Jules Polonetsky, did an exceptional job of finding a truly diverse range of speakers. A rarity at technology-related conferences.

- Join the weekly Twitter #privchat, Tuesdays at noon Eastern US time, hosted by the Center for Democracy and Technology.

- Have a good year, everybody! See you at CFP 2012 (and here every Friday until then).

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 10, 2011

The creepiness factor

"Facebook is creepy," said the person next to me in the pub on Tuesday night.

The woman across from us nodded in agreement and launched into an account of her latest foray onto the service. She had, she said, uploaded a batch of 15 photographs of herself and a friend. The system immediately tagged all of the photographs of the friend correctly. It then grouped the images of her and demanded to know, "Who is this?"

What was interesting about this particular conversation was that these people were not privacy advocates or techies; they were ordinary people just discovering their discomfort level. The sad thing is that Facebook will likely continue to get away with this sort of thing: it will say it's sorry, modify some privacy settings, and people will gradually get used to the convenience of having the system save them the work of tagging photographs.

In launching its facial recognition system, Facebook has done what many would have thought impossible: it has rolled out technology that just a few weeks ago *Google* thought was too creepy for prime time.

Wired UK has a set of instructions for turning tagging off. But underneath, the system will, I imagine, still recognize you. What records are kept of this underlying data and what mining the company may be able to do on them is, of course, not something we're told about.

Facebook has had to rein in new elements of its service so many times now - the Beacon advertising platform, the many revamps to its privacy settings - that the company's behavior is beginning to seem like a marketing strategy rather than a series of bungling missteps. The company can't be entirely privacy-deaf; it numbers among its staff the open rights advocate and former MP Richard Allan. Is it listening to its own people?

If it's a strategy it's not without antecedents. Google, for example, built its entire business without TV or print ads. Instead, every so often it would launch something so cool everyone wanted to use it that would get it more free coverage than it could ever have afforded to pay for. Is Facebook inverting this strategy by releasing projects it knows will cause widely covered controversy and then reining them back in only as far as the boundary of user complaints? Because these are smart people, and normally smart people learn from their own mistakes. But Zuckerberg, whose comments on online privacy have approached arrogance, is apparently justified, in that no matter what mistakes the company has made, its user base continues to grow. As long as business success is your metric, until masses of people resign in protest, he's golden. Especially when the IPO moment arrives, expected to be before April 2012.

The creepiness factor has so far done nothing to hurt its IPO prospects - which, in the absence of an actual IPO, seem to be rubbing off on the other social media companies going public. Pandora (net loss last quarter: $6.8 million) has even increased the number of shares on offer.

One thing that seems to be getting lost in the rush to buy shares - LinkedIn popped to over $100 on its first day, and has now settled back to $72 and change (a price/earnings ratio of 1,076) - is that buying first-day shares isn't what it used to be. Even during the millennial technology bubble, buying shares at the launch of an IPO was approximately like joining a queue at midnight to buy the new Apple whizmo on the first day, even though you know you'll be able to get it cheaper and debugged in a couple of months. Anyone could have gotten much better prices on Amazon shares for some months after that first-day bonanza, for example (and either way, in the long term, you'd have profited handsomely).

Since then, however, a new game has arrived in town: private exchanges, where people who meet a few basic criteria for being able to afford to take risks, trade pre-IPO shares. The upshot is that even more of the best deals have already gone by the time a company goes public.

In no case is this clearer than with the Groupon IPO, about which hardly anyone has anything good to say. Investors buying in would be the greater fools; a co-founder's past raises questions; and its business model is not sustainable.

Years ago, Roger Clarke predicted that social networks, then a brand-new concept, would inevitably become data abusers simply because they had no other viable business model. As powerful as the temptation to do this has been while these companies have been growing, it seems clear the temptation can only become greater when they have public markets and shareholders to answer to. New technologies are going to exacerbate this: performing accurate facial recognition on user-uploaded photographs wasn't possible when the first pictures were being uploaded. What capabilities will these networks be able to deploy in the future to mine and match our data? And how much will they need to do it to keep their profits coming?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


April 29, 2011

Searching for reality

They say that every architect has, stuck in his desk drawer, a plan for the world's tallest skyscraper; probably every computer company similarly has a plan for the world's fastest supercomputer. At one time, that particular contest was always won by Seymour Cray. Currently, the world's fastest computer is Tianhe-1A, in China. But one day soon, it's going to be Blue Waters, an IBM-built machine filling 9,000 square feet at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.

It's easy to forget - partly because Champaign-Urbana is not a place you visit by accident - how mainstream-famous NCSA and its host, UIUC, used to be. NCSA is the place from which Mosaic emerged in 1993. UIUC was where Arthur C. Clarke's HAL was turned on, on January 12, 1997. Clarke's choice was not accidental: my host, researcher Robert McGrath, tells me that Clarke visited here and saw the seminal work going on in networking and artificial intelligence. And somewhere he saw the first singing computer, an IBM 7094 haltingly rendering "Daisy Bell." (Good news for IBM: at that time they wouldn't have had to pay copyright clearance fees on a song that was, in 1961, 69 years old.)

So much was invented here: Telnet, for example.

"But what have they done for us lately?" a friend in London wondered.

NCSA's involvement with supercomputing began when Larry Smarr, having worked in Europe and admired the access non-military scientists had to high-performance computers, wrote a letter to the National Science Foundation proposing that the NSF should fund a supercomputing center for use by civilian scientists. They agreed, and the first version of NCSA was built in 1986. Typically, a supercomputer is commissioned for five years; after that it's replaced with the fastest next thing. Blue Waters will have more than 300,000 cores, built from 8-core processors, and be capable of a sustained rate of 1 petaflop and a peak rate of 10 petaflops. The transformer room underneath can provide 24 megawatts of power - as energy-efficiently as possible. Right now, the space where Blue Waters will go is a large empty white space broken up by black plug towers. It looks like a set from a 1950s science fiction film.
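For the curious, the peak figure is roughly the product of core count, clock speed, and floating-point operations per cycle; the clock rate and flops-per-cycle values in this sketch are my illustrative assumptions, not published Blue Waters specifications:

```python
# Back-of-the-envelope peak-performance arithmetic; the clock rate and
# flops-per-cycle below are illustrative assumptions, not official figures.
cores = 300_000
clock_hz = 4.0e9            # assumed roughly 4 GHz per core
flops_per_cycle = 8         # assumed 8 double-precision flops per core per cycle

peak_flops = cores * clock_hz * flops_per_cycle
print(peak_flops / 1e15, "petaflops peak")   # ~9.6, in the 10-petaflop ballpark
```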

On the consumer end, we're at the point now where a five-year-old computer pretty much answers most normal needs. Unless you're a gamer or a home software developer, the pressure to upgrade is largely off. But this is nowhere near true at the high end of supercomputing.

"People are never satisfied for long," says Tricia Barker, who showed us around the facility. "Scientists and engineers are always thinking of new problems they want to solve, new details they want to see, and new variables they want to include." Planned applications for Blue Waters include studying storms to understand why some produce tornadoes and some don't. In the 1980s, she says, the data points were kilometers apart; Blue Waters will take the mesh down to 10 meters.

"It's why warnings systems are so hit and miss," she explains. Also on the list are more complete simulations to study climate change.

Every generation of supercomputers gets closer to simulating reality and increases the size of the systems we can simulate in a reasonable amount of time. How much further can it go?

They speculate, she said, about how, when, and whether exaflops can be reached: 2018? 2020? At all? Will the power requirements outstrip what can reasonably be supplied? How big would it have to be? And could anyone afford it?

In the end, of course, it's all about the data. The 500 petabytes of storage Blue Waters will have is only a small piece of the gigantic data sets that science is now producing. Across campus, also part of NCSA, senior research scientist Ray Plante is part of the Large Synoptic Survey Telescope project, which, when it gets going, will capture a third of the sky every night with a 3-gigapixel camera with a wide field of view. The project will allow astronomers to see changes over a period of days, allowing them to look more closely at phenomena such as bursters and supernovae, and study dark energy.

Astronomers have led the way in understanding the importance of archiving and sharing data, partly because the telescopes are so expensive that scientists have no choice about sharing them. More than half the Hubble telescope papers, Plante says, are based on archival research, which means research conducted on the data after a short period in which research is restricted to those who proposed (and paid for) the project. In the case of LSST, he says, there will be no proprietary period: the data will be available to the whole community from Day One. There's a lesson here for data hogs if they care to listen.

Listening to Plante - and his nearby colleague Joe Futrelle - talk about the issues involved in storing, studying, and archiving these giant masses of data shows some of the issues that lie ahead for all of us. Many of today's astronomical studies rely on statistics, which in turn requires matching data sets that have been built into catalogues without necessarily considering who might in future need to use them: opening the data is only the first step.

So in answer to my friend: lots. I saw only about 0.1 percent of it.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 15, 2011

The open zone

This week my four-year-old computer had a hissy fit and demanded, more or less simultaneously, a new graphics card, a new motherboard, and a new power supply. It was the power supply that was the culprit: when it blew it damaged the other two pieces. I blame an incident about six months ago when the power went out twice for a few seconds each time, a few minutes apart. The computer's always been a bit fussy since.

I took it to the tech guys around the corner to confirm the diagnosis, and we discussed which replacements to order and where to order them from. I am not a particularly technical person, and yet even I can repair this machine by plugging in replacement parts and updating some software. (It's fine now, thank you.)

Here's the thing: at no time did anyone say, "It's four years old. Just get a new one." Instead, the tech guys said, "It's a good computer with a good processor. Sure, replace those parts." A watershed moment: the first time a four-year-old computer is not dismissed as obsolete.

As if by magic, confirmation turned up yesterday, when the Guardian's Charles Arthur asked whether the PC market has permanently passed its peak. Arthur goes on to quote Jay Chou, a senior research analyst at IDC, suggesting that we are now in the age of "good-enough computing" and that computer manufacturers will need to find ways to create a "compelling user experience". Apple is the clear leader in that arena, although it's likely that if I'd had a Mac instead of a PC it would have been neither so easy nor so quick and inexpensive to fix my machine and get back to work on it. Macs are wonders of industrial design, but as I noted in 2007 when I built this machine, building a PC is now a color-by-numbers affair, assembled from subsystem pieces that plug together in only one way. What it lacks in elegance compared to a Mac is more than made up for by my being able to repair it myself.

But Chou is likely right that this is not the way the world is going.

In his 1998 book The Invisible Computer, usability pioneer Donald Norman projected a future of information appliances, arguing that computers would become invisible because they would be everywhere. (He did not, however, predict the ubiquitous 20-second delay that would accompany this development. You know, it used to be you could turn something on and it would work right away because it didn't have to load software into its memory?) For his model, Norman took electric motors: in the early days you bought one electric motor and used it to power all sorts of variegated attachments; later (now) you found yourself owning dozens of electric motors, all hidden inside appliances.

The trade-off is pretty much the same: the single electric motor with attachments was much more repairable by a knowledgeable end user than today's sealed black-box appliances are. Similarly, I can rebuild my PC, but I can only really replace the hard drive on my laptop and the battery on my smart phone. iPhone users can't even do that. Norman, whose interest is usability, doesn't - or didn't, since he's written other books since - see this as necessarily a bad deal for consumers, who just want their technology to work intuitively so they can use it to get stuff done.

Jonathan Zittrain, though, has generally taken the opposite view, arguing in his book The Future of the Internet - and How to Stop It and in talks such as the one he gave at last year's Web science meeting that the general-purpose computer, which he dates to 1977, is dying. With it, to some extent, is going the open Internet; it was at that point that, to illustrate what he meant by curated content, he did a nice little morph from the ultra-controlled main menu of CompuServe circa 1992 to today's iPhone home screen.

"How curated do we want things to be?" he asked.

It's the key question. Zittrain's view, backed up by Tim Wu in The Master Switch is that security and copyright may be the levers used to close down general-purpose computers and the Internet, leaving us with a corporately-owned Internet that runs on black boxes to which individual consumers have little or no access. This is, ultimately, what the "Open" in Open Rights Group seems to me to be about: ensuring that the most democratic medium ever invented remains a democratic medium.

Clearly, there are limits. The earliest computer kits were open - but only to the relatively small group of people with - or willing to acquire - considerable technical skill. My computer would not be more open to me if I had to get out a soldering iron to fix my old motherboard and code my own operating system. Similarly, the skill required to deal with security threats like spam and malware attacks raises the technical bar of dealing with computers to the point where they might as well be the black boxes Zittrain fears. But somewhere between the soldering iron and the point-and-click of a TV remote control there has to be a sweet spot where the digital world is open to the most people. That's what I hope we can find.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 8, 2011

Brought to book

JK Rowling is seriously considering releasing the Harry Potter novels as ebooks, while Amanda Hocking, who's sold a million or so ebooks, has signed a $2 million contract with St. Martin's Press. In the same week. It's hard not to conclude that ebooks are finally coming of age.

And in many ways this is a good thing. The economy surrounding the Kindle, Barnes and Noble's Nook, and other such devices is allowing more than one writer to find an audience for works that mainstream publishers might have ignored. I do think hard work and talent will usually out, and it's hard to believe that Hocking would not have found herself a good career as a writer via the usual routine of looking for agents and publishers. She would very likely have many fewer books published at this point, and probably wouldn't be in possession of the $2 million it's estimated she's made from ebook sales.

On the other hand, assuming she had made at least a couple of book sales by now, she might be much more famous: her blog posting explaining her decision notes that a key factor is that she gets a steady stream of complaints from would-be readers that they can't buy her books in stores. She expects to lose money on the St. Martin's deal compared to what she'd make from self-publishing the same titles. To fans of disintermediation, of doing away with gatekeepers and middle men and allowing artists to control their own fates and interact directly with their audiences, Hocking is a self-made hero.

And yet...the future of ebooks may not be so simply rosy.

This might be the moment to stop and suggest reading a little background on book publishing from the smartest author I know on the topic, science fiction writer Charlie Stross. In a series of blog postings he's covered common misconceptions about publishing, why the Kindle's 2009 UK launch was bad news for writers, and misconceptions about ebooks. One of Stross's central points: epublishing platforms are not owned by publishers but by consumer electronics companies - Apple, Sony, Amazon.

If there's one thing we know about the Net and electronic media generally it's that when the audience for any particular new medium - Usenet, email, blogs, social networks - gets to be a certain size it attracts abuse. It's for this reason that every so often I argue that the Internet does not scale well.

In a fascinating posting on Patrick and Theresa Nielsen-Hayden's blog Making Light, Jim Macdonald notes the case of Canadian author S K S Perry, who has been blogging on LiveJournal about his travails with a thief. Perry, having had no luck finding a publisher for his novel Darkside, had posted it for free on his Web site, where a thief copied it and issued a Kindle edition. Macdonald links this sorry tale (which seems now to have reached a happy-enough ending) with postings from Laura Hazard Owen and Mike Essex that predict a near future in which we are awash in recycled ebook...spam. As all three of these writers point out, there is no system in place to do the kind of copyright/plagiarism checking that many schools have implemented. The costs are low; the potential for recycling content vast; and the ease of gaming the ratings system extraordinary. And either way, the ebook retailer makes money.
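The kind of screening that's missing isn't exotic, either; a crude near-duplicate check can be sketched in a dozen lines (this is illustrative only, not any retailer's actual system):

```python
# Minimal near-duplicate check via word shingles and Jaccard similarity -
# a sketch of the sort of screening an ebook store could run, nothing more.
def shingles(text, n=5):
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

original = "It was a dark and stormy night and the manuscript would not sell"
knockoff = "It was a dark and stormy night and the manuscript would not sell anywhere"
print(jaccard(original, knockoff) > 0.5)   # True: flag for human review
```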

Macdonald's posting primarily considers this future with respect to the challenge for authors to be successful*: how will good books find audiences if they're tiny islands adrift in a sea of similar-sounding knock-offs and crap? A situation like that could send us all scurrying back into the arms of people who publish on paper. That wouldn't bother Amazon-the-bookseller; Apple and others without a stake in paper publishing are likely to care more (and promising authors and readers due care and diligence might help them build a better, differentiated ebook business).

There is a mythology that those who - like the Electronic Frontier Foundation or the Open Rights Group - oppose the extension and tightening of copyright are against copyright. This is not the case: very few people want to do away with copyright altogether. What most campaigners in this area want is a fairer deal for all concerned.

This week the issue of term extension for sound recordings in the EU revived when Denmark changed tack and announced it would support the proposals. It's long been my contention that musicians would be better served by changes in the law that would eliminate some of the less fair terms of typical contracts, that would provide for the reversion of rights to musicians when their music goes out of commercial availability, and that would alter the balance of power, even if only slightly, in favor of the musicians.

This dystopian projected future for ebooks is a similar case. It is possible to be for paying artists and even publishers and still be against the imposition of DRM and the demonization of new technologies. This moment, where ebooks are starting to kick into high gear, is the time to find better ways to help authors.

*Successful: an author who makes enough money from writing books to continue writing books.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

February 18, 2011

What is hyperbole?

This seems to have been a week for over-excitement. IBM gets an onslaught of wonderful publicity because it built a very large computer that won at the archetypal American TV game, Jeopardy. And Eben Moglen proposes the Freedom box, a more-or-less pocket-sized ("wall wart") computer you can plug in that will come up, configure itself, and be your Web server/blog host/social network/whatever, and will put you and your data beyond the reach of, well, everyone. "You get no spying for free!" he said in his talk outlining the idea for the New York Internet Society.

Now I don't mean to suggest that these are not both exciting ideas and that making them work is/would be an impressive and fine achievement. But seriously? Is "Jeopardy champion" what you thought artificial intelligence would look like? Is a small "wall wart" box what you thought freedom would look like?

To begin with Watson and its artificial buzzer thumb. The reactions display everything that makes us human. The New York Times seems to think AI is solved, although its editors focus on our ability to anthropomorphize an electronic screen with a smooth, synthesized voice and a swirling logo. (Like HAL, R2D2, and Eliza Doolittle, its status is defined by the reactions of the surrounding humans.)

The Atlantic and Forbes come across as defensive. The LA Times asks: how scared should we be? The San Francisco Chronicle congratulates IBM for suddenly becoming a cool place for the kids to work.

If, that is, they're not busy hacking up Freedom boxes. You could, if you wanted, see the past twenty years of net.wars as a recurring struggle between centralization and distribution. The Long Tail finds value in selling obscure products to meet the eccentric needs of previously ignored niche markets; eBay's value is in aggregating all those buyers and sellers so they can find each other. The Web's usefulness depends on the diversity of its sources and content; search engines aggregate it and us so we can be matched to the stuff we actually want. Web boards distributed us according to niche topics; social networks aggregated us. And so on. As Moglen correctly says, we pay for those aggregators - and for the convenience of closed, mobile gadgets - by allowing them to spy on us.

An early, largely forgotten net.skirmish came around 1991 over the asymmetric broadband design that today is everywhere: a paved highway going to people's homes and a dirt track coming back out. The objection that this design assumed that consumers would not also be creators and producers was largely overcome by the advent of Web hosting farms. But imagine instead that symmetric connections were the norm and everyone hosted their sites and email on their own machines with complete control over who saw what.

This is Moglen's proposal: to recreate the Internet as a decentralized peer-to-peer system. And I thought immediately how much it sounded like...Usenet.

For those who missed the 1990s: invented and implemented in 1979 by three students, Tom Truscott, Jim Ellis, and Steve Bellovin, the whole point of Usenet was that it was a low-cost, decentralized way of distributing news. Once the Internet was established, it became the medium of transmission, but in the beginning computers phoned each other and transferred news files. In the early 1990s, it was the biggest game in town: it was where Linus Torvalds and Tim Berners-Lee announced their inventions, Linux and the World Wide Web.

It always seemed to me that if "they" - whoever they were going to be - seized control of the Internet we could always start over by rebuilding Usenet as a town square. And this is to some extent what Moglen is proposing: to rebuild the Net as a decentralized network of equal peers. Not really Usenet; instead a decentralized Web like the one we gave up when we all (or almost all) put our Web sites on hosting farms whose owners could be DMCA'd into taking our sites down or subpoena'd into turning over their logs. Freedom boxes are Moglen's response to "free spying with everything".

I don't think there's much doubt that the box he has in mind can be built. The Pogoplug, which offers a personal cloud and a sort of hardware social network, is most of the way there already. And Moglen's argument has merit: that if you control your Web server and the nexus of your social network law enforcement can't just make a secret phone call, they'll need a search warrant to search your home if they want to inspect your data. (On the other hand, seizing your data is as simple as impounding or smashing your wall wart.)
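The basic self-hosting piece is already nearly trivial - a few lines of Python will serve your own pages from a box at home. What it doesn't give you is the hard part Moglen is really proposing: zero-configuration setup, encryption, and federation with your friends' boxes.

```python
# Minimal self-hosted web server: serves the files in ./site from your own box.
# This is only the easy part of the Freedom box idea; secure, zero-config
# federation is the part that still needs building.
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

handler = partial(SimpleHTTPRequestHandler, directory="site")  # ./site holds your pages
HTTPServer(("0.0.0.0", 8080), handler).serve_forever()
```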

I can see Freedom boxes being a good solution for some situations, but like many things before it they won't scale well to the mass market because they will (like Usenet) attract abuse. In cleaning out old papers this week, I found a 1994 copy of Esther Dyson's Release 1.0 in which she demands a return to the "paradise" of the "accountable Net"; 'twill be ever thus. The problem Watson is up against is similar: it will function well, even engagingly, within the domain it was designed for. Getting it to scale will be a whole 'nother, much more complex problem.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


January 14, 2011

Face time

The history of the Net has featured many absurd moments, but this week was some sort of peak of the art. In the same week I read that a) based on a $450 million round of investment from Goldman Sachs, Facebook is now valued at $50 billion, higher than Boeing's market capitalization, and b) Facebook's founder, Mark Zuckerberg, is so tired of the stress of running the service that he plans to shut it down on March 15. As I seem to recall a CS Lewis character remarking irritably, "Why don't they teach logic in these schools?" If you have a company worth $50 billion and you don't much like running it any more, you sell the damn thing and retire. It's not like Zuckerberg even needs to wait to be Time's Man of the Year.

While it's safe to say that Facebook isn't going anywhere soon, it's less clear what its long-term future might be, and the users who panicked at the thought of the service's disappearance would do well to plan ahead. Because: if there's one thing we know about the history of the Net's social media it's that the party keeps moving. Facebook's half-a-billion-strong user base is, to be sure, bigger than anything else assembled in the history of the Net. But I think the future as seen by Douglas Rushkoff, writing for CNN last week, is more likely: Facebook, he argued, based on its arguably inflated valuation, is at the beginning of its end, as MySpace was when Rupert Murdoch bought it in 2005 for $580 million. (Though this says as much about Murdoch's Net track record as it does about MySpace: Murdoch bought the text-based Delphi at its peak moment, in late 1993.)

Back in 1999, at the height of the dot-com boom, the New Yorker published an article (abstract; full text requires subscription) comparing the then-spiking stock price of AOL with that of the Radio Corporation of America back in the 1920s, when radio was the hot, new democratic medium. RCA was selling radios that gave people unprecedented access to news and entertainment (including stock quotes); AOL was selling online accounts that gave people unprecedented access to news, entertainment, and their friends. The comparison, as the article noted, wasn't perfect, but the comparison chart the article was written around was, as the author put it, "jolly". It still looks jolly now, recreated some months later for this analysis of the comparison.

There is more to every company than just its stock price, and there is more to AOL than its subscriber numbers. But the interesting chart to study - if I had the ability to create such a chart - would be the successive waves of rising, peaking, and falling numbers of subscribers of the various forms of social media. In more or less chronological order: bulletin boards, Usenet, Prodigy, Genie, Delphi, CompuServe, AOL...and now MySpace, which this week announced extensive job cuts.

At its peak, AOL had 30 million subscribers; at the end of September 2010 it had 4.1 million in the US. As subscriber revenues continue to shrink, the company is changing its emphasis to producing content that will draw in readers from all over the Web - that is, it's increasingly dependent on advertising, like many companies. But the broader point is that at its peak a lot of people couldn't conceive that it would shrink to this extent, because of the basic principle of human congregation: people go where their friends are. When the friends gradually start to migrate to better interfaces, more convenient services, or simply sites their more annoying acquaintances haven't discovered yet, others follow. That doesn't necessarily mean death for the service they're leaving: AOL, like CIX, the WELL, and LiveJournal before it, may well find a stable size at which it remains sufficiently profitable to stay alive, perhaps even comfortably so. But it does mean it stops being the growth story of the day.

As several financial commentators have pointed out, the Goldman investment is good for Goldman no matter what happens to Facebook, and may not be ring-fenced enough to keep Facebook private. My guess is that even if Facebook has reached its peak it will be a long, slow ride down the mountain and between then and now at least the early investors will make a lot of money.

But long-term? Facebook is barely five years old. According to figures leaked by one of the private investors, its price-earnings ratio is 141. The good news is that if you're rich enough to buy shares in it you can probably afford to lose the money.

As far as I'm aware, little research has been done studying the Net's migration patterns. From my own experience, I can say that my friends lists on today's social media include many people I've known on other services (and not necessarily in real life) as the old groups reform in a new setting. Facebook may believe that because the profiles on its service are so complex, including everything from status updates and comments to photographs and games, users will stay locked in. Maybe. But my guess is that the next online party location will look very different. If email is for old people, it won't be long before Facebook is, too.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 19, 2010

Power to the people

We talk often about the fact that ten years of effort - lawsuits, legislation, technology - on the part of the copyright industries has made barely a dent in the amount of material available online as unauthorized copies. We talk less about the similar situation that applies to privacy despite years of best efforts by Privacy International, Electronic Privacy Information Center, Center for Democracy and Technology, Electronic Frontier Foundation, Open Rights Group, No2ID, and newcomer Big Brother Watch. The last ten years have built Google, and Facebook, and every organization now craves large data stores of personal information that can be mined. Meanwhile, governments are complaisant, possibly because they have subpoena power. It's been a long decade.

"Information is the oil of the 1980s," wrote Thomas McPhail and Brenda McPhail in 1987 in an article discussing the politics of the International Telecommunications Union, and everyone seems to take this encomium seriously.

William Heath spent his early career founding and running Kable, a consultancy specializing in government IT, where the question he kept returning to was how to create the ideal government for the digital era. For many months now he has been saying that there's a gathering wave of change. His idea is that the *new* new thing is technologies to give us back control and up-end the current situation, in which every organization behaves as if it owns all the information we give it. But it's their data only in exactly the same way that taxpayers' money belongs to the government. They call it customer relationship management; Heath calls the data we give them volunteered personal information, and proposes vendor relationship management instead.

Always one to put his effort where his mouth is (Heath helped found the Open Rights Group, the Foundation for Policy Research, and the Dextrous Web as well as Kable), he has set up not one, but two companies. The first, Ctrl-Shift, is a research and advisory business that helps organizations adjust and adapt to the power shift. The second, Mydex, is a platform now being prototyped in partnership with the Department for Work and Pensions and several UK councils (PDF). Set up as a community interest company, Mydex is asset-locked, to ensure that the company can't suddenly reverse course and betray its customers and their data.

The key element of Mydex is the personal data store, which is kept under each individual's own control. When you want to do something - renew a parking permit, change your address with a government agency, rent a car - you interact with the remote council, agency, or company via your PDS. Independent third parties verify the data you present. To rent a car, for example, you might present a token from the vehicle licensing bureau that authenticates your age and right to drive and another from your bank or credit card company verifying that you can pay for the rental. The rental company only sees the data you choose to give it.
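
To make that flow concrete, here is a minimal sketch in Python of how a personal data store might release only selected, third-party-attested claims to a relying party such as a car rental firm. The names (PersonalDataStore, VerifiedToken, and so on) are invented for illustration; Mydex's actual design is not published in this detail.

    # Illustrative sketch only: hypothetical names, not Mydex's real API.
    from dataclasses import dataclass

    @dataclass
    class VerifiedToken:
        issuer: str      # e.g. the vehicle licensing bureau, or a bank
        claim: str       # e.g. "holds_valid_driving_licence", "can_pay"
        signature: str   # stands in for a cryptographic attestation

    class PersonalDataStore:
        """Holds the individual's data; releases only chosen, verified claims."""
        def __init__(self):
            self.tokens = []

        def add_token(self, token):
            self.tokens.append(token)

        def present(self, wanted_claims):
            # The relying party sees only the claims the individual chooses
            # to release, attested by third parties - never the raw records.
            return [t for t in self.tokens if t.claim in wanted_claims]

    # Usage: renting a car without handing over your whole file.
    pds = PersonalDataStore()
    pds.add_token(VerifiedToken("licensing-bureau", "holds_valid_driving_licence", "sig1"))
    pds.add_token(VerifiedToken("bank", "can_pay", "sig2"))
    print(pds.present({"holds_valid_driving_licence", "can_pay"}))

The point of the sketch is simply that the rental company receives attestations about you, not copies of your underlying records.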

It's Heath's argument that such a setup would preserve individual privacy and increase transparency while simultaneously saving companies and governments enormous sums of money.

"At the moment there is a huge cost of trying to clean up personal data," he says. "There are 60 to 200 organisations all trying to keep a file on you and spending money on getting it right. If you chose, you could help them." The biggest cost, however, he says, is the lack of trust on both sides. People vanish off the electoral rolls or refuse to fill out the census forms rather than hand over information to government; governments treat us all as if we were suspected criminals when all we're trying to do is claim benefits we're entitled to.

You can certainly see the potential. Ten years ago, when they were talking about "joined-up government", MPs dealing with constituent complaints favored the notion of making it possible to change your address (for example) once and have the new information propagate automatically throughout the relevant agencies. Their idea, however, was a huge, central data store; the problem for individuals (and privacy advocates) was that centralized data stores tend to be difficult to keep accurate.

"There is an oft-repeated fallacy that existing large organizations meant to serve some different purpose would also be the ideal guardians of people's personal data," Heath says. "I think a purpose-created vehicle is a better way." Give everyone a PDS, and they can have the dream of changing their address only once - but maintain control over where it propagates.

There are, as always, key questions that can't be answered at the prototype stage. First and foremost is the question of whether and how the system can be subverted. Heath's intention is that we should be able to set our own terms and conditions for their use of our data - up-ending the present situation again. We can hope - but it's not clear that companies will see it as good business to differentiate themselves on the basis of how much data they demand from us when they don't now. At the same time, governments who feel deprived of "their" data can simply pass a law and require us to submit it.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 1, 2010

Duty of care

"Anyone who realizes how important the Web is," Tim Berners-Lee said on Tuesday, "has a duty of care." He was wrapping up a two-day discussion meeting at the Royal Society. The subject: Web science.

What is Web science? Even after two days, it's difficult to grasp, in part because defining it is a work in progress. Here are some of the disciplines that contributed: mathematics, philosophy, sociology, network science, and law, plus a bunch of much more directly Webby things that don't fit easily into categories. Which of course is the point: Web science has to cover much more than just the physical underpinnings of computers and network wires. Computer science or network science can use the principles of mathematics and physics to develop better and faster machines and study architectures and connections. But the Web doesn't exist without the people putting content and applications on it, and so Web science must be as much about human behaviour as about physics.

"If we are to anticipate how the Web will develop, we will require insight into our own nature," Nigel Shadbolt, one of the event's convenors, said on Monday. Co-convenor Wendy Hall has said, similarly, "What creates the Web is us who put things on it, and that's not natural or engineered.". Neither natural (biological systems) or engineered (planned build-out like the telecommunications networks), but something new. If we can understand it better, we can not only protect it better, but guide it better toward the most productive outcomes, just as farmers don't haphazardly interbreed species of corn but use their understanding to select for desirable traits.

The simplest contributions to understand, therefore, came (ironically) from the mathematicians. Particularly intriguing was the former chief scientist Robert May, whose analysis of how many nodes you must remove from a network to make it non-functional applied equally to the Web, epidemiology, and banking risk.

This is all happening despite the recent Wired cover claiming the "Web is dead". Dead? Facebook is a Web site; Skype, the app store, IM clients, Twitter, and the New York Times all reach users first via the Web even if they use their iPhones for subsequent visits (and how exactly did they buy those iPhones, hey?). Saying it's dead is almost exactly like the old joke about how no one goes to a particular restaurant any more because it's too crowded.

People who think the Web is dead have stopped seeing it. But the point of Web science is that for 20 years we've been turning what started as an academic playground into a critical infrastructure, and for government, finance, education, and social interaction to all depend on the Web it must have solid underpinnings. And it has to keep scaling - in a presentation on the state of deployment of IPv6 in China, Jianping Wu noted that Internet penetration in China is expected to jump from 30 percent to 70 percent in the next ten to 20 years. That means adding 400-900 million users. The Chinese will have to design, manage, and operate the largest infrastructure in the world - and finance it.

But that's the straightforward kind of scaling. IBMer Philip Tetlow, author of The Web's Awake (a kind of Web version of the Gaia hypothesis), pointed out that all the links in the world are a finite set; all the eyeballs in the world looking at them are a finite set...but all the contexts surrounding them...well, it's probably finite but it's not calculable (despite Pierre Levy's rather fanciful construct that seemed to suggest it might be possible to assign a URI to every human thought). At that level, Tetlow believes some of the neat mathematical tools, like Jennifer Chayes' graph theory, will break down.

"We're the equivalent of precision engineers," he said, when what's needed are the equivalent of town planners and urban developers. "And we can't build these things out of watches."

We may not be able to build them at all, at least not immediately. Helen Margetts outlined the constraints on the development of egovernment in times of austerity. "Web science needs to map, understand, and develop government just as for other social phenomena, and export back to mainstream," she said.

Other speakers highlighted gaps between popular mythology and reality. MIT's David Carter noted that, "The Web is often associated with the national and international but not the local - but the Web is really good at fostering local initiatives - that's something for Web science to ponder." Noshir Contractor, similarly, called out The Economist over the "death of distance": "More and more research shows we use the Web to have connections with proximate people."

Other topics will be far more familiar to net.wars readers: Jonathan Zittrain explored the ways the Web can be broken by copyright law, increasing corporate control (there was a lovely moment when he morphed the iPhone's screen into the old CompuServe main menu), the loss of uniformity so that the content a URL points to changes by geographic location. These and others are emerging points of failure.

We'll leave it to an unidentified audience question to sum up the state of Web science: "Nobody knows what it is. But we are doing it."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series

August 20, 2010

Naming conventions

Eric Schmidt, the CEO of Google, is not a stupid person, although sometimes he plays one for media consumption. At least, that's how it seemed this week, when the Wall Street Journal reported that he had predicted, apparently in all seriousness, that the accumulation of data online may result in the general right for young people to change their names on reaching adulthood in order to escape the embarrassments of their earlier lives.

As Danah Boyd commented in response, it is to laugh.

For one thing, every trend in national and international law is going toward greater, permanent trackability. I know the UK is dumping the ID card and many US states are stalling on Real ID, but try opening a new bank account in the US or Europe, especially if you're a newly arrived foreigner. It's true that it's not so long ago - 20 years, perhaps - that people, especially in California, did change their names at the drop of an acid tablet. I'm fairly sure, for example, that the woman I once knew as Dancingtree Moonwater was not named that by her parents. But those days are gone with the anti-money laundering regulations, the anti-terrorist laws, and airport security.

For another, when is he imagining the adulthood moment to take place? When they're 17 and applying to college and need to cite their past records of good works, community involvement, and academic excellence? When they're 21 and graduating from college and applying for jobs and need to cite their past records of academic excellence, good works, and community involvement? I don't know about you, but I suspect that an admissions officer/prospective employer would be deeply suspicious of a kid coming of age today who had, apparently, no online history at all. Even if that child is a Mormon.

Besides, changing your name doesn't change your identity (even if the change is because you got married). Investigators who track down people who've dropped out of their lives and fled to distant parts to start new ones often do so by, among other things, following their hobbies. You can leave your spouse, abandon your children, change jobs, and move to a distant location - but it isn't so easy to shake a passion for fly-fishing or 1957 Chevys. The right to reinvent yourself, as Action on Rights for Children's Terri Dowty pointed out during the campaign against the child-tracking database ContactPoint, is an important one. But that means letting minor infractions and youthful indiscretions fade into the mists of time, not to be pulled out and laughed at until, say, 30 years hence, rather than being recorded in a database that thinks it "knows" you.

I think Schmidt knows all this perfectly well. And I think if such an infrastructure - turn 16, create a new identity - were ever to be implemented the first and most significant beneficiary would be...Google. I would expect most people's search engine use to provide as individual a fingerprint as, well, fingerprints. (This is probably less true for journalists, who research something different every week and therefore display the database equivalent of multiple personality disorder.)

Clearly if the solution to young people posting silly stuff online where posterity can bite them on the ass is a change of name the only way to do it is to assign kids online-only personas at birth that can be retired when they reach an age of reason. But in such a scenario, some kids would wind up wanting to adopt their online personas as their real ones because their online reputation has become too important in their lives. In the knowledge economy, as plenty of others have pointed out, reputation is everything.

This is, of course, not a new problem. As usual. When, in 1995, DejaNews (bought by Google some years back to form the basis of the Google Groups archive) was created, it turned what had been ephemeral Usenet postings into a permanent archive. If you think people post stupid stuff on Facebook now, when they know their friends and families are watching, you should have seen the dumb stuff they posted on Usenet when they thought they were in the online equivalent of Benidorm, where no one knew them and there were no consequences. Many of those Usenet posters were students. But I also recall the newly appointed CEO of a public company who went around the WELL deleting all his old messages. Didn't mean there weren't copies...or memories.

There is a genuine issue here, though, and one that a very smart friend with a 12-year-old daughter worries about regularly: how do you, as a parent, guide your child safely through the complexities of the online world and ensure that your child has the best possible options for her future while still allowing her to function socially with her peers? Keeping her offline is not an answer. Neither are facile statements from self-interested CEOs who, insulated by great wealth and technological leadership, prefer to pretend to themselves that these issues have already been decided in their favor.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 9, 2010

The big button caper

There's a moment early in the second season of the TV series Mad Men when one of the Sterling Cooper advertising executives looks out the window and notices, in a tone of amazement, that young people are everywhere. What he was seeing was, of course, the effect of the baby boom. The world really *was* full of young people.

"I never noticed it," I said to a friend the next day.

"Well, of course not," he said. "You were one of them."

Something like this will happen to today's children - they're going to wake up one day and think the world is awash in old people. This is a fairly obvious consequence of the demographic bulge of the Baby Boomers, which author Ken Dychtwald has compared to "a pig going through a python".

You would think that mobile phone manufacturers and network operators would be all over this: carrying a mobile phone is an obvious safety measure for an older, perhaps infirm or cognitively confused person. But apparently the concept is more difficult to grasp than you'd expect, and so Simon Rockman, the founder and former publisher of What Mobile and now working for the GSM Association, convened a senior mobile market conference on Tuesday.

Rockman's pitch is that the senior market is a business opportunity: unlike other market sectors it's not saturated; older users are less likely to be expensive data users, and they are more loyal. The margins are better, he argues, even if average revenue per user is low.

The question is, how do you appeal to this market? To a large extent, seniors are pretty much like everyone else: they want gadgets that are attractive, even cool. They don't want the phone equivalent of support stockings. Still, many older people do have difficulties with today's ultra-tiny buttons, icons, and screens, iffy sound quality, and complex menu structures. Don't we all?

It took Ewan MacLeod, the editor of Mobile Industry Review, to point out the obvious. What is the killer app for most seniors in any device? Grandchildren, pictures of. MacLeod has a four-week-old son and a mother whose desire to see pictures apparently could only be fully satisfied by a 24-hour video feed. Industry inadequacy means that MacLeod is finding it necessary to write his own app to make sending and receiving pictures sufficiently simple and intuitive. This market, he pointed out, isn't even price-sensitive. Tell his mother she'll need to spend £60 on a device so she can see daily pictures of her grandkids, and she'll say, "OK." Tell her it will cost £500, and she'll say..."OK."

I bet you're thinking, "But the iPhone!" And to some extent you're right: the iPhone is sleek, sexy, modern, and appealing; it has a zoom function to enlarge its display fonts, and it is relatively easy to use. And so MacLeod got all the grandparents onto iPhones. But he's having to write his own app to easily organize and display the photos the phones receive: the available options are "Rubbish!"

But even the iPhone has problems (even if you're not left-handed). Ian Hosking, a senior research associate at the Cambridge Engineering Design Centre, overlaid his visual impairment simulation software so it was easy to see. Lack of contrast means the iPhone's white on black type disappears unreadably with only a small amount of vision loss. Enlarging the font only changes the text in some fields. And that zoom feature, ah, yes, wonderful - except that enabling it requires you to double-tap and then navigate with three fingers. "So the visual has improved, but the dexterity is terrible."

Oops.

In all this you may have noticed something: that good design is good design, and a phone design that accommodates older people will also most likely be a more usable phone for everyone else. These are principles that have not changed since Donald Norman formulated them in his classic 1988 book The Design of Everyday Things. To be sure, there is some progress. Evelyne Pupeter-Fellner, co-founder of Emporia, for example, pointed out the elements of her company's designs that are quietly targeted at seniors: the emergency call system that automatically dials, in turn, a list of selected family members or friends until one answers; the ringing mechanism that lights up the button to press to answer. The radio you can insert the phone into that will turn itself down and answer the phone when it rings. The design that lets you attach it to a walker - or a bicycle. The single-function buttons. Doro's phones were similarly praised.

And yet it could all be so different - if we would only learn from Japan, where nearly 86 percent of seniors have - and use data on - mobile phones, according to Kei Shimada, founder of Infinita.

But in all the "beyond big buttons" discussion and David Doherty's proposition that health applications will be the second killer app, one omission niggled: the aging population is predominantly female, and the older the cohort the more that is true.

Who are least represented among technology designers and developers?

Older women.

I'd call that a pretty clear mismatch. Somewhere between those who design and those who consume lies your problem.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 18, 2010

Things I learned at this year's CFP

- There is a bill in front of Congress to outlaw the sale of anonymous prepaid SIMs. The goal seems to be some kind of fraud and crime prevention. But, as Ed Hasbrouck points out, the principal people who are likely to be affected are foreign tourists and the Web sites that sell prepaid SIMs to them.

- Robots are getting near enough in researchers' minds for them to be spending significant amounts of time considering the legal and ethical consequences in real life - not in Asimov's fictional world where you could program in three safety laws and your job was done. Ryan Calo points us at the work of Stanford student Victoria Groom on human-robot interaction. Her dissertation research, not yet on the site, discovered that humans allocate responsibility for success and failure proportionately according to how anthropomorphic the robot is.

- More than 24 percent of tweets - and rising sharply - are sent by automated accounts, according to Miranda Mowbray at HP labs. Her survey found all sorts of strange bots: things that constantly update the time, send stock quotes, tell jokes, the tea bot that retweets every mention of tea...

- Google's Kent Walker, the 1997 CFP chair, believes that censorship is as big a threat to democracy as terrorism, and says that open architectures and free expression are good for democracy - and coincidentally also good for Google's business.

- Microsoft's chief privacy strategist, Peter Cullen, says companies must lead in privacy to lead in cloud computing. Not coincidentally, others at the conference note that US companies are losing business to Europeans in cloud computing because EU law prohibits the export of personal data to the US, where data protection is insufficient.

- It is in fact possible to provide wireless that works at a technical conference. And good food!

- The Facebook Effect is changing the attitude of other companies about user privacy. Lauren Gelman, who helps new companies with privacy issues, noted that because start-ups all see Facebook's success and want to be the next 400 million-user environment, there was a strong temptation to emulate Facebook's behavior. Now, with the angry cries mounting from consumers, she's having to spend less effort convincing them of the pushback companies will get if they change their policies and defy their users' expectations. Even so, it's important to ensure that start-ups include privacy in their budgets so that it doesn't become an afterthought. In this respect, she makes me realize, privacy in 2010 is at the stage that usability was in the early 1990s.

- All new program launches come through the office of the director of Yahoo!'s business and human rights program, Ebele Okabi-Harris. "It's very easy for the press to focus on China and particular countries - for example, Australia last year, with national filtering," she said, "but for us as a company it's important to have a structure around this because it's not specific to any one region." It is, she added later, a "global problem".

- We should continue to be very worried about the database state because the ID cards repeal act continues the trend toward data sharing among government departments and agencies, according to Christina Zaba from No2ID.

- Information brokers and aggregators, operating behind the scenes, are amassing incredible amounts of detail about Americans, and it can require a great deal of work to remove one's information from these systems. The main customers of these systems are private investigators, debt collectors, media, law firms, and law enforcement. The Privacy Rights Clearinghouse sees many disturbing cases, as Beth Givens outlined, as does Pam Dixon's World Privacy Forum.

- I always knew - or thought I knew - that the word "robot" was not coined by Asimov but by Karel Capek for his play R.U.R. (for "Rossum's Universal Robots"; coincidentally, I also know that playing a robot in same was Michael Caine's first acting job). But Twitterers tell me that this isn't quite right. The word is derived from the Czech word "robota", "compulsory work for a feudal landlord". And that it was actually coined by Capek's older brother, Josef.

- There will be new privacy threats emerging from automated vehicles, other robots, and voicemail transcription services, sooner rather than later.

- Studying the inner workings of an organization like the International Civil Aviation Organization is truly difficult because the time scales - ten years to get from technical proposals to mandated standard, which is when the public becomes aware of them - are a profound mismatch for the attention span of the media and those who fund NGOs. Anyone who feels like funding an observer to represent civil society at ICAO should get in touch with Edward Hasbrouck.

- A lot of our cybersecurity problems could be solved by better technology.

- Lillie Coney has a great description of deceptive voting practices designed to disenfranchise the opposition: "It's game theory run amok!"

- We should not confuse insecure networks (as in vulnerable computers and flawed software) with unsecured networks (as in open wi-fi).

- Next year's conference chairs are EPIC's Lillie Coney and Jules Polonetsky. It will be in Washington, DC, probably the second or third week in June. Be there!

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 12, 2010

The cost of money

Everyone except James Allan scrabbled in the bag Joe DiVanna brought with him to the Digital Money Forum (my share: a well-rubbed 1908 copper penny). To be fair, Allan had already left by then. But even if he hadn't he'd have disdained the bag. I offered him my pocketful of medium-sized change and he looked as disgusted as if it were a handkerchief full of snot. That's what living without cash for two years will do to you.

Listen, buddy, like the great George Carlin said, your immune system needs practice.

People in developed countries talk a good game about doing away with cash in favor of credit cards, debit cards, and Oyster cards, but the reality, as Michael Salmony pointed out, is that 80 percent of payments in Europe are...cash. Cash seems free to consumers (where cards have clearer charges), but costs European banks €84 billion a year. Less visibly, banks also benefit (when the shadow economy hoards high-value notes it's an interest-free loan), and governments profit from seigniorage (when people buy but do not spend coins).

"Any survey about payment methods," Salmony said Wednesday, "reveals that in all categories cash is the preferred payment method." You can buy a carrot or a car; it costs you nothing directly; it's anonymous, fast, and efficient. "If you talk directly to supermarkets, they all agree that cash is brilliant - they have sorting machines, counting machines...It's optimized so well, much better than cards."

The "unbanked", of course, such as the London migrants Kavita Datta studies, have no other options. Talk about the digital divide, this is the digital money divide: the cashless society excludes people who can't show passports, can't prove their address, or are too poor to have anything to bank with.

"You can get a job without a visa, but not without a bank account," one migrant worker told her. Electronic payments, ain't they grand?

But go to Africa, Asia, or South America, and everything turns upside down. There, too, cash is king - but there, unlike here with banks and ATMs on every corner and a fully functioning system of credit cards and other substitutes, cash is a terrible burden. Of the 2.6 billion people living on less than $2 a day, said Ignacio Mas, fewer than 10 percent have access to formal financial services. Poor people do save, he said, but their lack of good options means they save in bad ways.

They may not have banks, but most do have mobile phones, and therefore digital money means no long multi-bus rides to pay bills. It means being able to send money home at low cost. It means saving money that can't be easily stolen. In Ghana 80 percent of the population have no access to financial services - but 80 percent are covered by MTN, which is partnering with the banks to fill the gap. In Pakistan, Tameer Microfinance Bank partnered with Telenor to launch Easypaisa, which did 150,000 transactions in its first month and expects a million by December. One million people produce milk in Pakistan; Nestle pays them all painfully by check every month. The opportunity in these countries to leapfrog traditional banking and head into digital payments is staggering, and our banks won't even care. The average account balance of Kenya's M-Pesa customers is...$3.

When we're not destroying our financial system, we have more choices. If we're going to replace cash, what do we replace it with and what do we need? Really smart people to figure out how to do it right - like Isaac Newton, said Thomas Levenson. (Really. Who knew Isaac Newton had a whole other life chasing counterfeiters?) Law and partnership protocols and banks to become service providers for peer-to-peer finance, said Chris Cook. "An iTunes moment," said Andrew Curry. The democratization of money, suggested conference organizer David Birch.

"If money is electronic and cashless, what difference does it make what currency we use?" Why not...kilowatt hours? You're always going to need to heat your house. Global warming doesn't mean never having to say you're cold.

Personally, I always thought that if our society completely collapsed, it would be an excellent idea to have a stash of cigarettes, chocolate, booze, and toilet paper. But these guys seemed more interested in the notion of Facebook units. Well, why not? A currency can be anything. Second Life has Linden dollars, and people sell virtual game world gold for real money on eBay.

I'd say for the same reason that most people still walk around with notes in their wallet and coins in their pocket: we need to take our increasing abstraction step by step. Many have failed with digital cash, despite excellent technology, because they asked people to put "real" money into strange units with no social meaning and no stored trust. Birch is right: storing value in an Oyster card is no different than storing value in Beenz. But if you say that money is now so abstract that it's a collective hallucination, then the corroborative details that give artistic verisimilitude to an otherwise bald and unconvincing currency really matter.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series.

September 11, 2009

Public broadcasting

It's not so long ago - 2004, 2005 - that the BBC seemed set to be the shining champion of the Free World of Content, functioning in opposition to *AA (MPAA, RIAA) and general entertainment industry desire for total content lockdown. It proposed the Creative Archive; it set up BBC Backstage; and it released free recordings of the classics for download.

But the Creative Archive released some stuff and then ended the pilot in 2006, apparently because much of the BBC's content doesn't really belong to it. And then came the iPlayer. The embedded DRM, along with its initial Windows-only specification (though the latter has since changed), made the BBC look like less of a Free Culture hero.

Now, via the consultative offices of Ofcom we learn that the BBC wants to pacify third-party content owners by configuring its high-definition digital terrestrial services - known to consumers as Freeview HD - to implement copy protection. This request is, of course, part of the digital switchover taking place across the country over the next four years.

The thing is, the conditions under which the BBC was granted the relevant broadcasting licenses require that content be broadcast free-to-air. That is, unencrypted, which of course means no copy protection. So the BBC's request is to be allowed instead to make the stream unusable to outsiders by compressing the service information data using in-house-developed lookup tables. Under the proposal, the BBC will make those tables available free of charge to manufacturers who agree to its terms. Or, pretty clearly, the third party rights holders' terms.

This is the kind of hair-splitting the American humorist Jean Kerr used to write about when she detailed conversations with her children. She didn't think, for example, to include in the long list of things they weren't supposed to do when they got up first on a Sunday morning, the instruction not to make flour paste and glue together all the pages of the Sunday New York Times. "Now, of course, I tell them."

When the BBC does it, it's not so funny. Nor is it encouraging in the light of the broader trend toward claiming intellectual property protection in metadata when the data itself is difficult to restrict. Take, for example, the MTA's Metro-North Railroad, which runs commuter trains (on which Meryl Streep and Robert de Niro so often met in the 1984 movie Falling in Love) from New York City up both sides of the Hudson River to Connecticut. MTA has been issuing cease-and-desist orders to the owner of StationStops, a Web site and iPhone schedule app dedicated to the Metro-North trains, claiming that it owns the intellectual property rights in its scheduling data. If it were in the UK, the Guardian's Free Our Data campaign would be all over it.

In both cases - and many others - it's hard to understand the originating organisation's complaint. Metro-North is in the business of selling train tickets; the BBC is supposed to measure its success in 1) the number of people who consume its output; 2) the educational value of its output to the license fee-paying public. Promulgating schedule data can only help Metro-North, which is not a commercial company but a public benefit corporation owned by the State of New York. It's not going to make much from selling data licenses.

The BBC's stated intention is to prevent perfect, high-definition copies of broadcast material from escaping into the hands of (evil) file-sharers. The alternative, it says, would be to amend its multiplex license to allow it to encrypt the data streams. Which, they hasten to add, would require manufacturers to amend their equipment, which they certainly would not be able to do in time for the World Cup next June. Oh, the horror!

Fair enough, the consumer revolt if people couldn't watch the World Cup in HD because their equipment didn't support the new encryption standard would indeed be quite frightening to behold. But the BBC has a third alternative: tell rights holders that the BBC is a public service broadcaster, not a policeman for hire.

Manufacturers will still have to modify equipment under the more "modest" system information compression scheme: they will have to have a license. And it seems remarkably unlikely that licenses would be granted to the developers of open source drivers or home-brew devices such as Myth TV, and of course it couldn't be implemented retroactively in equipment that's already on the market. How many televisions and other devices will it break in your home?

Up until now, in contrast to the US situation, the UK's digital switchover has been pretty gentle and painless for a lot of people. If you get cable or satellite, at some point you got a new set-top box (mine keep self-destructing anyway); if you receive all your TV and radio over the air you attached a Freeview box. But this is the broadcast flag and the content management agenda all over again.

We know why rights holders want this. But why should the BBC adopt their agenda? The BBC is the best-placed broadcasting and content provider organisation in the world to create a parallel, alternative universe to the strictly controlled one the commercial entertainment industry wants. It is the broadcaster that commissioned a computer to educate the British public. It is the broadcaster that belongs to the people. Reclaim your heritage, guys.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

September 4, 2009

Nothing ventured, nothing lost

What does a venture capitalist do in a recession?

"Panic." Hermann Hauser says, then laughs. It is, in fact, hard to imagine him panicking if you've heard the stories he tells about his days as co-founder of Acorn Computers. He's quickly on to his real, more measured, view.

"It's just the bottom of the cycle, and people my age have been through this a number of times before. Though many people are panicking, I know that normally we come out the other end. If you just look at the deals I'm seeing at the moment, they're better than any deals I've seen in my entire life." The really positive thing, he says, is that, "The speed and quality of innovation are speeding up and not slowing down. If you believe that quality of innovation is the key to a successful business, as I do, then this is a good era. We have got to go after the high end of innovation - advanced manufacturing and the knowledge-based economy. I think we are quite well placed to do that." Fortunately, Amadeus had just raised a fund when the recession began, so it still has money to invest; life is, he admits, less fun for "the poor buggers who have to raise funds."

Among the companies he is excited about is Plastic Logic, which is due to release its first product next year, a competitor to the Kindle that will have a much larger screen, be much lighter, and will also be a computing platform with 3g, Bluetooth, and Wi-fi all built in, all built on plastic transistors that will be green to produce, more responsive than silicon - and sealed against being dropped in the bath water. "We have the world beat," he says. "It's just the most fantastic thing."

Probably if you ask any British geek above the age of 39, an Acorn BBC Micro figured prominently in their earliest experiences with computing. Hauser was and is not primarily a technical guy - although his idea of exhilarating vacation reading is Thermal Physics, by Charles Kittel and Herbert Kroemer - but picking the right guys to keep supplied with tea and financing is a rare skill, too.

"As I go around the country, people still congratulate me on the BBC Micro and tell me how wonderful it was. Some are now professors in computer science and what they complain about is that as people switched over to PCs - on the BBC Micro everybody knew how to program. The main interface was a programming interface, and it was so easy to program in BASIC everybody did it. Kids have no clue what programming is about - they just surf the Net. Nobody really understands any more what a computer does from the transistor up. It's a dying breed of people who actually know that all this is built on CMOS gates and can build it up from there."

Hauser went on to found an early effort in pen computing - "the technology wasn't good enough" and "the basic premise that I believed in, that pen computing would be important because everybody knew how to wield a pen just wasn't true" - and then the venture capital fund Amadeus, through which he helped fund, among others, leading Bluetooth chip supplier CSR. Britain, he says, is a much more hospitable environment now than it was when he was trying to make his Cambridge bank manager understand Acorn's need for a £1 million overdraft. Although, he admits now, "I certainly wouldn't have invested in myself." And would have missed Acorn's success.

"I think I'm the only European who's done four billion-dollar companies," he says. "Of course I've failed a lot. I assume that more of my initiatives that I've founded finally failed than finally succeeded."

But times have changed since consultants studied Acorn's books and told them to stop trading immediately because they didn't understand how technology companies worked. "All the building blocks you need to have to have a successful technology cluster are now finally in place," he says. "We always had the technology, but we always lacked management, and we've grown our own entrepreneurs now in Britain." He calls Stan Boland, CEO of 3G USB stick manufacturer Icera and Acorn's last managing director, a "rock star" and "one of the best CEOs I have come across in Europe or the US." In addition, he says, "There is also a chance of attracting the top US talent, for the first time." However, "The only thing I fear and that we have to be careful about is that the relative decline doesn't turn into an absolute decline."

One element of Britain's changing climate with respect to technology investment that Hauser is particularly proud of is helping create tax credits and taper relief for capital gains through his work on Peter Mandelson's advisory panel on new industry and new jobs. "The reason I have done it is that I don't believe in the post-industrial society. We have to have all parts of industry in our country."

Hauser's latest excitement is stem cells; he's become the fourth person in the world to have his entire genome mapped. "It's the beginning of personal medicine."

The one thing that really bemuses him is being given lifetime achievement awards. "I have lived in the future all my life, and I still do. It's difficult to accept that I've already created a past. I haven't done yet the things I want to do!"


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

June 13, 2009

Futures

"What is the future of computers, freedom, and privacy?" a friend asked over lunch, apparently really wanting to know. This was ten days ago, and I hesitated before finding an out.

"I don't know," I said. "I haven't been to the conference yet.

Now I have been to the conference, at least this year's instance of it, and I still don't really know how to answer this question. As always, I've come away with some ideas to follow up, but mostly the sense of a work in progress. How do some people manage to be such confident futurologists?

I don't mean science fiction writers: while they're often confused with futurologists - Arthur C. Clarke's track record in predicting communications satellites notwithstanding - they're not, really. They're storytellers who take our world, change a few variables, and speculate. I also don't mean trend-spotters, who see a few instances of something and generalize from there, or pundits, who are just very, very good at quotables.

Futurologists are good at the backgrounds science fiction writers use - but not good at coming up with stories. They're not, as I had it explained to me once, researchers, because they dream rather than build things. The smart ones have figured out that dramatic predictions get more headlines - and funding - than mundane ones and they have a huge advantage over urban planners and actuaries: they don't have to be right, just interesting. (Whereas, a "psychic seer" like Nostradamus doesn't even have to be interesting as long as his ramblings are vague enough to be reinterpretable every time some new major event comes along.)

It's perennially intriguing how much of the past our images of the future throw away: changing fashions in clothing, furniture, and lifestyles leave no trace. Take, for example, Popular Mechanics' 1950 predictions for 2000. Some of that article is prescient: converging televisions and telephones, for example. Some extrapolates from then-new technologies such as X-rays, plastics, and frozen foods. But far more of it is a reminder of how much better the future was in the past: family helicopters, solar power in real, widespread use, cheap housing. And yet even more of it reflects the constrained social roles of the 1950s: the assumption that all those synthetic plastic fabrics, furniture, and finishings would be hosed down by...the woman of the house.

I'll bet the guy who wrote that had a wife who was always complaining about having to do all the housework. And didn't keep his books at home. Or family heirlooms, personal memorabilia, or silly gewgaws picked up on that trip to Pittsburgh. I'm not entirely clear why anyone would find frozen milk and candy made from sawdust appealing, though I suppose home cooking is indeed going out of style.

But my friend's question was serious: I can't answer it by throwing extravagantly wild imaginings at it for their entertainment value. Plus, he's probably most interested in his lifetime and that of his children, and it's a simple equation that the farther out the future you're predicting the less plausible you have to be.

It's not hard to guess that computing power will continue to grow, even if it doesn't continue to keep pace with Moore's Law and is counterbalanced by the weight of Page's Law. What *is* hard to guess is how people will want to use it. To most of the generation writing the future in the 1950s, when World War II and the threat of Nazism were fresh, it was probably inconceivable that the citizens of democratic countries would be so willing to allow so many governments to track them in detail. As inconceivable, I suppose, as that the pill would come along a few years later and wipe away the social order they believed was nature's way. Orwell, of course, foresaw the possibilities of a surveillance society, but he imagined the central control of a giant government, not a society where governments rely on commercial companies to fill out their dossiers on citizens.

I find it hard to imagine dramatic futures in part because I do believe most people want to hold onto at least parts of their past, and therefore that any future we construct will be more like Terry Gilliam's movies than anything else, festooned with bizarre duct work and populated by junk that's either come back into fashion or that we simply forgot to throw away. And there are plenty of others around to predict the apocalypse (we run out of energy, three-quarters of the world's population dies, economic and environmental collapse, will you burn that computer or sit on it?) or its opposite (we find the Singularity, solve our energy problems, colonize space, and fix biology so we live forever). Neither seems to me the most likely.

I doubt my friend would have been satisfied with the answer: "More of the same, only different." But my guess is that the battle to preserve privacy will continue for a long time. Every increase in computing power makes greater surveillance possible, and 9/11 provided the seeming justification that overrode the fading memory of what was at stake in World War II. It won't be until an event with that kind of impact reminds people of the risk you take when you allow "If you have nothing to hide, you have nothing to fear" to become society's mantra that the mainstream will fight to take back their privacy.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk (but please turn off HTML).

May 29, 2009

Three blind governments

I spent my formative adult years as a musician. And even so, if I were forced to sacrifice one of my senses, as a practical matter I would choose to keep my sight rather than my hearing: as awful and isolating as it would be to be deaf, it would be far, far worse to be blind.

Lack of access to information, and therefore to both employment and entertainment, is the key reason. How can anyone participate in the "knowledge economy" if they can't read?

Years ago, when I was writing a piece about disabled access to the Net, the Royal National Institute for the Blind put me in touch with Peter Brasher, a consultant who was particularly articulate on the subject of disabled access to computing.

People tend to make the assumption - as I did - that the existence of Braille editions and talking books meant that blind and partially sighted people were catered for reasonably well. In fact, he said, only 8 percent of the blind population can read Braille; its use is generally confined to those who are blind from childhood (although see here for a counterexample). But far and away the majority of vision loss comes later in life. It's entirely possible that the percentage of Braille readers is now considerably less; today's kids are more likely to be taught to rely on technology - text-to-speech readers, audio books, and so on. From 50 percent in the 1950s, the percentage of blind American children learning Braille has dropped to 10 percent.

There's a lot of concern about this which can be summed up by this question: if text-to-speech technology and audio books are so great, why aren't sighted kids told to use them instead of bothering to learn to read?

But the bigger issue Brasher raised was one of independence. Typically, he said, the availability of books in Braille depends on someone with an agenda, often a church. The result for an inquisitive reader is a constant sense of limits. Then computers arrived, and it became possible to read anything you wanted of your own choice. And then graphical interfaces arrived and threatened to take it all away again; I wrote here about what it's like to surf the Web using the leading text-to-speech reader, JAWS. It's deeply unpleasant, difficult, tiring, and time-consuming.

When we talk about people with limited ability to access books - blind, partially sighted; in other cases fully sighted but physically disabled - we are talking about an already deeply marginalized and underserved population. Some of the links above cite studies that show that unemployment among the Braille-reading blind population is 44 percent - and 77 percent among blind non-Braille readers. Others make the point that inability to access printed information interferes with every aspect of education and employment.

And this is the group that this week's meeting of the Standing Committee on Copyright and Related Rights at the World Intellectual Property Organization has convened to consider. Should there be a blanket exception to allow the production of alternative formats of books for the visually impaired and disabled?

The proposal, introduced by Brazil, Paraguay, and Ecuador, seems simple enough, and the cause unarguable. The World Blind Union estimates that 95 percent of books never become available in alternative formats, and when they do it's after some delay. As Brasher said nearly 15 years ago, such arrangements depend on the agendas of charitable organizations.

The culprit, as in so many net.wars, is copyright law. The WBU published arguments for copyright reform (DOC) in 2004. Amazon's Kindle is a perfect example of the problem: bowing to the demands of publishers, text-to-speech can be - and is being - turned off in the Kindle. The Kindle - any ebook reader with speech capabilities - ought to have been a huge step forward for disabled access to books.

And now, according to Twitterers present at WIPO, the US, Canada, and the EU are arguing against the idea of this exemption. (They're not the only ones; elsewhere, the Authors Guild has argued that exemptions should be granted by special license and registration, something I'd certainly be unhappy about if I were blind.)

Governments, particularly democratic ones, are supposed to be about ensuring equal opportunities for all. They are supposed to be about ensuring fair play. What about the Americans with Disabilities Act, the EU's charter of fundamental human rights, and Canada's human rights act? Can any of these countries seriously argue that the rights of publishers and copyright holders trump the needs of a seriously disadvantaged group of people that every single one of us is at risk of joining?

While it's clear that text-to-speech and audio books don't solve every problem, and while the US is correct to argue that copyright is only one of a number of problems confronting the blind, when the WBU argues that copyright poses a significant barrier to access shouldn't everyone listen? Or are publishers confused by the stereotypical image of the pirate with the patch over one eye?

If governments and rightsholders want us to listen to them about other aspects of copyright law, they need to be on the right side of this issue. Maybe they should listen to their own marketing departments about the way it looks when rich folks kick people who are already disadvantaged - and then charge for the privilege.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or email netwars@skeptic.demon.co.uk (but please turn off HTML).

April 11, 2009

Statebook of the art

The bad thing about the Open Rights Group's new site, Statebook, is that it looks so perfectly simple to use that the government may decide it's actually a good idea to implement something very like it. And, unfortunately, that same simplicity may also create the illusion in the minds of the untutored who still populate the ranks of civil servants and politicians that the technology works and is perfectly accurate.

For those who shun social networks and all who sail in her: Statebook's interface is an almost identical copy of that of Facebook. True, on Facebook the applications you click on to add are much more clearly pointless wastes of time, like making lists of movies you've liked to share with your friends or playing Lexulous (the reinvention of the game formerly known as Scrabulous until Hasbro got all huffy and had it shut down).

Politicians need to resist the temptation to believe it's as easy as it looks. The interfaces of both the fictional Statebook and the real Facebook look deceptively simple. In fact, although friends tell me how much they like the convenience of being able to share photos with their friends in a convenient single location, and others tell me how much they prefer Facebook's private messaging to email, Facebook is unwieldy and clunky to use, requiring a lot of wait time for pages to load even over a fast broadband connection. Even if it weren't, though, one of the difficulties with systems attempting to put EZ-2-use front ends on large and complicated databases is that they deceive users into thinking the underlying tasks are also simple.

A good example would be airline reservations systems. The fact is that underneath the simple searching offered by Expedia or Travelocity lies some extremely complex software; it prices every itinerary rather precisely depending on a host of variables. These include not just the obvious things like the class of cabin, but the time of day, the day of the week, the time of year, the category of flyer, the routing, how far in advance the ticket is being purchased, and the number of available seats left. Only some of this is made explicit; frequent flyers trying to maximize their miles per dollar despair while trying to dig out arcane details like the class of fare.

In his 1988 book The Design of Everyday Things, Donald Norman wrote about the need to avoid confusing the simplicity or complexity of an interface with the characteristics of the underlying tasks. He also writes about the mental models people create as they attempt to understand the controls that operate a given device. His example is a refrigerator with two compartments and two thermostatic controls. An uninformed user naturally assumes each thermostat controls one compartment, but in his example, one control sets the thermostat and the other directs the proportion of cold air that's sent to each compartment. The user's mental model is wrong and, as a consequence, attempts that user makes to set the temperature will also, most likely, be wrong.

In focusing on the increasing quantity and breadth of data the government is collecting on all of us, we've neglected to think about how this data will be presented to its eventual users. We have warned about the errors that build up in very large databases that are compiled from multiple sources. We have expressed concern about surveillance and about its chilling impact on spontaneous behaviour. And we have pointed out that data is not knowledge; it is very easy to take even accurate data and build a completely false picture of a person's life. Perhaps instead we should be focusing on ensuring that the software used to query these giant databases-in-progress teaches users not to expect too much.

As an everyday example of what I mean, take the automatic line-calling system used in tennis since 2005, Hawkeye. Hawkeye is not perfectly accurate. Its judgements are based on reconstructions that put together the video images and timing data from four or more high-speed video cameras. The system uses the data to calculate the three-dimensional flight of the ball; it incorporates its knowledge of the laws of physics, its model of the tennis court, and its database of the rules of the game in order to judge whether the ball is in or out. Its official margin for error is 3.6mm.

A study by two researchers at Cardiff University disputed that number. But more relevant here, they pointed out that the animated graphics used to show the reconstructed flight of the ball and the circle indicating where it landed on the court surface are misleading because they look to viewers as though they are authoritative. The two researchers, Harry Collins and Robert Evans, proposed that in the interests of public education the graphic should be redesigned to display the margin for error and the level of confidence.

This would be a good approach for database matches, too, especially since the number of false matches and errors will grow with the size of the databases. A real-life Statebook that doesn't reflect the uncertainty factor of each search, each match, and each interpretation next to every hit would indeed be truly dangerous.
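
As a toy illustration of that last point - invented numbers and field names, not any real system - a query interface could simply refuse to show a match without its confidence attached:

    # Illustrative sketch: every database match carries its confidence,
    # rather than being displayed as a bare, seemingly authoritative hit.
    from dataclasses import dataclass

    @dataclass
    class Match:
        record_id: str
        score: float  # 0.0-1.0, from whatever matching algorithm is in use

    def display(matches, threshold=0.5):
        for m in matches:
            if m.score < threshold:
                continue
            # The uncertainty travels with the result, Hawkeye-style.
            print(f"{m.record_id}: possible match, confidence {m.score:.0%}")

    display([Match("person-0042", 0.72), Match("person-0108", 0.41)])

Trivial as it is, even that much would remind the person reading the screen that a hit is a probability, not a fact.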

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

April 3, 2009

Copyright encounters of the third dimension

Somewhere around 2002, it occurred to me that the copyright wars we're seeing over digitised intellectual property - music, movies, books, photographs - might, in the not-unimaginable future, be repeated, this time with physical goods. Even if you don't believe that molecular manufacturing will ever happen, 3D printing and rapid prototyping machines offer the possibility of making large numbers of identical copies of physical goods that until now were difficult to replicate without investing in and opening a large manufacturing facility.

Lots of people see this as a good thing. Although: Chris Phoenix, co-founder of the Center for Responsible Nanotechnology, likes to ask, "Will we be retired or unemployed?"

In any case, I spent some years writing a book proposal that never went anywhere, and then let the idea hang around uselessly, like a human in a world where robots have all the jobs.

Last week, at the University of Edinburgh's conference on governance of new technologies (which I am very unhappy to have missed), RAF engineer turned law student Simon Bradshaw presented a paper on the intellectual property consequences of "low-cost rapid prototyping". If only I'd been a legal scholar...

It turns out that as a legal question rapid prototyping has barely been examined. Bradshaw found nary a reference in a literature search. Probably most lawyers think this stuff is all still just science fiction. But, as Bradshaw does, make some modest assumptions, and you find that perhaps three to five years from now we could well be having discussions about whether Obama was within the intellectual property laws to give the Queen a printed-out, personalized iPod case designed to look like Elvis, whose likeness and name are trademarked in the US. Today's copyright wars are going to seem so *simple*.

Bradshaw makes some fairly reasonable assumptions about this timeframe. Until recently, you could pay anywhere from $20,000 to $1.5 million for a fabricator/3D printer/rapid prototyping machine. But prices and sizes are dropping and functionality is going up. Bradshaw puts today's situation on a par with the state of personal computers in the late 1970s, the days of the Commodore PET, the Apple II, and home kits like the Sinclair MK14. Let's imagine, he says, the world of the second-generation fabricator: the size of a color laser printer, costing $1,000 or less, fed with readily available plastic, better than 0.1mm resolution (and in color), a 20cm cube maximum build size, and programmable by enthusiasts.

As the UK Intellectual Property Office will gladly tell you, there are four kinds of IP law: copyright, patent, trademark, and design. Of these, design is by far the least known; it's used to protect what the US likes to call "trade dress", that is, the physical look and feel of a particular item. Apple, for example, which rarely misses a trick when it comes to design, applied for a trademark on the iPhone's design in the US, and most likely registered it under the UK's design right as well. Why not? Registration is cheap (around £200), and the iPhone design was genuinely innovative.

As Bradshaw analyzes it, all four of these types of IP law could apply to objects created using 3D printing, rapid prototyping, fabricating...whatever you want to call it. And those types of law will interact in bizarre and unexpected ways - and, of course, differently in different countries.

For example: in the UK, a registered design can be copied if it's done privately and for non-commercial use. So you could, in the privacy of your home, print out copies of a test-tube stand (in Bradshaw's example) whose design is registered. You could not do it in a school to avoid purchasing them.

Parts of the design right are drafted so as to prevent manufacturers from using the right to block third parties from making spare parts. So using your RepRap to make a case for your iPod is legal as long as you don't copy any copyrighted material that might be floating around on the surface of the original. Make the case without Elvis.

But when is an object just an object and when is it a "work of artistic merit"? Because if what you just copied is a sculpture, you're in violation of copyright law. And here, Bradshaw says, copyright law is unhelpfully unclear. Some help has come from the recent ruling in Lucasfilm v Ainsworth, the case about the stormtrooper helmets copied from the first Star Wars movie. Is a 3D replica of a 2D image a derivative work?

Unsurprisingly, it looks like US law is less forgiving. In the helmet case, US courts ruled in favor of Lucasfilm; UK courts drew a distinction between objects that had been created for artistic purposes in their own right and those that hadn't.

And that's all without even getting into the fact that if everyone has a fabricator, there are whole classes of items that might no longer be worth selling. In that world, what's going to be worth paying for is the designs that drive the fabricators. Think knitted Dr Who puppets, only in 3D.

It's all going to be so much fun, dontcha think?

Update (1/26/2012): Simon Bradshaw's paper is now published here.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

December 5, 2008

Saving seeds

The 17 judges of the European Court of Human Rights ruled unanimously yesterday that the UK's DNA database, which contains more than 3 million DNA samples, violates Article 8 of the European Convention on Human Rights. The key factor: retaining, indefinitely, the DNA samples of people who have committed no crime.

It's not a complete win for objectors to the database, since the ruling doesn't say the database shouldn't exist, merely that DNA samples should be removed once their owners have been acquitted in court or the charges have been dropped. England, the court said, should copy Scotland, which operates such a policy.

The UK comes in for particular censure, in the form of the note that "any State claiming a pioneer role in the development of new technologies bears special responsibility for striking the right balance..." In other words, before you decide to be the first on your block to use a new technology and show the rest of the world how it's done, you should think about the consequences.

Because it's true: this is the kind of technology that makes surveillance and control-happy governments the envy of other governments. For example: lacking clues to lead them to a serial killer, the Los Angeles Police Department wants to copy Britain and use California's DNA database to search for genetic profiles similar enough to belong to a close relative. The French DNA database, FNAEG, was proposed in 1996, created in 1998 for sex offenders, implemented in 2001, and broadened to other criminal offenses after 9/11 and again in 2003: a perfect example of function creep. But the French DNA database is a fiftieth the size of the UK's, and Austria's, the next on the list, is even smaller.

There are some wonderful statistics about the UK database. DNA samples from more than 4 million people are included on it. Probably 850,000 of them are innocent of any crime. Some 40,000 are children between the ages of 10 and 17. The government (according to the Telegraph) has spent £182 million on it between April 1995 and March 2004. And there have been suggestions that it's too small. When privacy and human rights campaigners pointed out that people of color are disproportionately represented in the database, one of England's most experienced appeals court judges, Lord Justice Sedley, argued that every UK resident and visitor should be included on it. Yes, that's definitely the way to bring the tourists in: demand a DNA sample. Just look how they're flocking to the US to give fingerprints, and how many more flooded in when they upped the number to ten earlier this year. (And how little we're getting for it: in the first two years of the program, fingerprinting 44 million visitors netted 1,000 people with criminal or immigration violations.)

At last week's A Fine Balance conference on privacy-enhancing technologies, there was a lot of discussion of the key technique of data minimization. That is the principle that you should not collect or share more data than is actually needed to do the job. Someone checking whether you have the right to drive, for example, doesn't need to know who you are or where you live; someone checking you have the right to borrow books from the local library needs to know where you live and who you are but not your age or your health records; someone checking you're the right age to enter a bar doesn't need to care if your driver's license has expired.
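Put in software terms, data minimization means answering the narrow question that was actually asked instead of handing over the whole record. Here is a toy sketch of the bar-entry example - not any real credential scheme; the record and field names are made up:

    # Toy illustration of data minimisation: answer "old enough?" without
    # revealing the birth date, name, or address held in the record.
    from datetime import date

    RECORD = {  # what the issuer holds; the verifier never needs to see it all
        "name": "A. Example",
        "date_of_birth": date(1990, 5, 17),
        "address": "1 Example Street",
    }

    def is_over(record: dict, years: int, today: date) -> bool:
        dob = record["date_of_birth"]
        had_birthday = (today.month, today.day) >= (dob.month, dob.day)
        age = today.year - dob.year - (0 if had_birthday else 1)
        return age >= years

    # The bar learns a single bit - yes or no - not who you are or your exact age.
    print(is_over(RECORD, 18, date.today()))

The cryptographic systems discussed at the conference aim to do this far more cleverly, so that even the yes/no answers can't be linked together across checks, but the principle is the same.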

This is an idea that's been around a long time - I think I heard my first presentation on it in about 1994 - but whose progress towards a usable product has been agonizingly slow. IBM's PRIME project, which Jan Camenisch presented, and Microsoft's purchase of Credentica (which wasn't shown at the conference) suggest that the mainstream technology products may finally be getting there. If only we can convince politicians that these principles are a necessary adjunct to storing all the data they're collecting.

What makes the DNA database more than just a high-tech fingerprint database is that over time the DNA stored in it will become increasingly revealing of intimate secrets. As Ray Kurzweil kept saying at the Singularity Summit, Moore's Law is hitting DNA sequencing right now; the cost is accordingly plummeting by factors of ten. When the database was set up, it was fair to characterize DNA as a high-tech version of fingerprints or iris scans. Five - or 15, or 25, we can't be sure - years from now, we will have learned far more about interpreting genetic sequences. The coded, unreadable messages we're storing now will be cleartext one day, and anyone allowed to consult the database will be privy to far more intimate information about our bodies, ourselves than we think we're giving them now.

Unfortunately, the people in charge of these things typically think it's not going to affect them. If the "little people" have no privacy, well, so what? It's only when the powers they've granted are turned on them that they begin to get it. If a conservative is a liberal who's been mugged, and a liberal is a conservative whose daughter has needed an abortion, and a civil liberties advocate is a politician who's been arrested...maybe we need to arrest more of them.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 21, 2008

The art of the impossible

So the question of last weekend very quickly became: how do you tell plausible fantasy from wild possibility? It's a good conversation starter.

One friend had a simple assessment: "They are all nuts," he said, after glancing over the weekend's program. The problem is that 150 years ago anyone predicting today's airline economy class would also have sounded nuts.

Last weekend's (un)conference was called Convergence, but the description tried to convey the sense of danger of crossing the streams. The four elements that were supposed to converge: computing, biotech, cognitive technology, and nanotechnology. Or, as the four-colored conference buttons and T-shirts had it, biotech, infotech, cognotech, and nanotech.

Unconferences seem to be the current trend. I'm guessing, based on very little knowledge, that it was started by Tim O'Reilly's FOO camps or possibly the long-running invitation-only Hackers conference. The basic principle is: collect a bunch of smart, interesting, knowledgeable people and they'll construct their own program. After all, isn't the best part of all conferences the hallway chats and networking, rather than the talks? Having been to one now (yes, a very small sample), I think in most cases I'm going to prefer the organized variety: there's a lot to be said for a program committee that reviews the proposals.

The day before, the Center for Responsible Nanotechnology ran a much smaller seminar on Global Catastrophic Risks. It made a nice counterweight: the weekend was all about wild visions of the future; the seminar was all about the likelihood of our being wiped out by biological agents, astronomical catastrophe, or, most likely, our own stupidity. Favorite quote of the day, from Anders Sandberg: "Very smart people make very stupid mistakes, and they do it with surprising regularity." Sandberg learned this, he said, at Oxford, where he is a philosopher at the Future of Humanity Institute.

Ralph Merkle, co-inventor of public key cryptography, now working on diamond mechanosynthesis, said to start with physics textbooks, most notably the evergreen classic by Halliday and Resnick. You can see his point: if whatever-it-is violates the laws of physics it's not going to happen. That at least separates the kinds of ideas flying around at Convergence and the Singularity Summit from most paranormal claims: people promoting dowsing, astrology, ghosts, or ESP seem to be about as interested in the laws of physics as creationists are in the fossil record.

A sidelight: after years of The Skeptic, I'm tempted to dismiss as fantasy anything where the proponents tell you that it's just your fear that's preventing you from believing their claims. I've had this a lot - ghosts, alien spacecraft, alien abductions, apparently these things are happening all over the place and I'm just too phobic to admit it. Unfortunately, the behavior of adherents to a belief just isn't evidence that it's wrong.

Similarly, an idea isn't wrong just because its requirements are annoying. Do I want to believe that my continued good health depends on emulating Ray Kurzweil and taking 250 pills a day and a load of injections weekly? Certainly not. But I can't prove it's not helping him. I can, however, joke that it's like those caloric restriction diets - doing it makes your life *seem* longer.

Merkle's other criterion: "Is it internally consistent?" This one's harder to assess, particularly if you aren't a scientific expert yourself.

But there is the technique of playing the man instead of the ball. Merkle, for example, is a cryonicist and is currently working on diamond mechanosynthesis. Put more simply, he's busy designing the tools that will be needed to build things atom by atom when - if - molecular manufacturing becomes a reality. If that sounds nutty, well, Merkle has earned the right to steam ahead unworried because his ideas about cryptography, which have become part of the technology we use every day to protect ecommerce transactions, were widely dismissed at first.

Analyzing language is also open to the scientifically less well-educated: do the proponents of the theory use a lot of non-standard terms that sound impressive but on inspection don't seem to mean anything? It helps if they can spell, but that's not a reliable indicator - snake oil salesmen can be very professional, and some well-educated excellent scientists can't spell worth a damn.

The Risks seminar threw out a useful criterion for assessing scenarios: would it make a good movie? If your threat to civilization can be easily imagined as a line delivered by Bruce Willis, it's probably unlikely. It's not a scientifically defensible principle, of course, but it has a lot to recommend it. In human history, what's killed the most people while we're worrying about dramatic events like climate change and colliding asteroids? Wars and pandemics.

So, where does that leave us? Waiting for deliverables, of course. Even if a goal sounds ludicrous, working towards it may still produce useful results. Aubrey de Grey's project of "curing aging" by developing techniques for directly repairing damage (or SENS, for Strategies for Engineered Negligible Senescence) seems a case in point. And life extension is the best hope for all of these crazy ideas. Because, let's face it: if it doesn't happen in our lifetime, it was impossible.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 7, 2008

Reality TV

The Xerox machine in the second season of Mad Men has its own Twitter account, as do many of the show's human characters. Other TV characters have MySpace pages and Facebook groups, and of course they're all, legally or illegally, on YouTube.

Here at the American Film Institute's Digifest in Hollywood - really Hollywood, with the stars on the sidewalks and movie theatres everywhere - the talk is all of "cross-platform". This event allows the AFI's Digital Content Lab to show off some of the projects it's fostered over the last year, and the audience is full of filmmakers, writers, executives, and owners of technology companies, all trying to figure out digital television.

One of the more timely projects is a remix of the venerable PBS Newshour with Jim Lehrer. A sort of combination of Snopes, Wikipedia, and any number of online comment sites, The Fact Project aims to enable collaboration between the show's journalists and the public. Anyone can post a claim or a bit of rhetoric and bring in supporting or refuting evidence; the show's journalistic staff weigh in at the end with a Truthometer rating and the discussion is closed. Part of the point, said the project's head, Lee Banville, is to expose to the public the many small but nasty claims that are made in obscure but strategic places - flyers left on cars in supermarket parking lots, or radio spots that air maybe twice on a tiny local station.

The DCL's counterpart in Australia showed off some other examples. Areo, for example, takes TV sets and footage and turns them into game settings. More interesting is the First Australians project, which in the six-year process of filming a TV documentary series created more than 200 edited mini-documentaries telling each interviewee's story. Or the TV movie Scorched, which even before release created a prequel and sequel by giving a fictional character her own Web site and YouTube channel. The premise of the film itself was simple but arresting. It was based on one fact, that at one point Sydney had no more than 50 weeks of water left, and one what-if - what if there were bush fires? The project eventually included a number of other sites, including a fake government department.

"We go to islands that are already populated," said the director, "and pull them into our world."

HBO's Digital Lab group, on the other hand, has a simpler goal: to find an audience in the digital world it can experiment on. Last month, it launched a Web-only series called Hooking Up. Made for almost no money (and it looks it), the show is a comedy series about the relationship attempts of college kids. To help draw larger audiences, the show cast existing Web and YouTube celebrities such as LonelyGirl15, KevJumba, and sxePhil. The show has pulled in 46,000 subscribers on YouTube.

Finally, a group from ABC is experimenting with ways to draw people to the network's site via what it calls "viewing parties" so people can chat with each other while watching, "live" (so to speak), hit shows like Grey's Anatomy. The interface the ABC party group showed off was interesting. They wanted, they said, to come up with something "as slick as the iPhone and as easy to use as AIM". They eventually came up with a three-dimensional spatial concept in which messages appear in bubbles that age by shrinking in size. Net old-timers might ask churlishly what's so inadequate about the interface of IRC or other types of chat rooms where messages appear as scrolling text, but from ABC's point of view the show is the centrepiece.

At least it will give people watching shows online something to do during the ads. If you're coming from a US connection, the ABC site lets you watch full episodes of many current shows; the site incorporates limited advertising. Perhaps in recognition that people will simply vanish into another browser window, the ads end with a button to click to continue watching the show and the video remains on pause until you click it.

The point of all these initiatives is simple and the same: to return TV to something people must watch in real-time as it's broadcast. Or, if you like, to figure out how to lure today's 20- and 30-somethings into watching television; Newshour's TV audience is predominantly 50- and 60-somethings.

ABC's viewing party idea is an attempt - as the team openly said - to recreate what the network calls "appointment TV". I've argued here before that as people have more and more choices about when and where to watch their favourite scripted show, sports and breaking news will increasingly rule television because they are the only two things that people overwhelmingly want to see in real time. If you're supported by advertising, that matters, but success will depend on people's willingness to stick with their efforts once the novelty is gone. The question to answer isn't so much whether you can compete with free (cue picture of a bottle of water) but whether you can compete with freedom (cue picture of evil file-sharer watching with his friends whenever he wants).


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 31, 2008

Machine dreams

Just how smart are humans anyway? Last week's Singularity Summit spent a lot of time talking about the exact point at which computer processing power would match that of the human brain, but that's only the first step. There's the software to make the hardware do stuff, and then there's the whole question of consciousness. At that point, you've strayed from computer science into philosophy and you might as well be arguing about angels on the heads of pins. Of course everyone hopes they'll be alive to see these questions settled, but in the meantime all we have is speculation and the snide observation that it's typical that a roomful of smart people would think that all problems can be solved by more intelligence.

So I've been trying to come up with benchmarks for what constitutes artificial intelligence, and the first thing I think is that the Turing test is probably too limited. In it, a judge has to determine which of two typing correspondents is the machine and which the human. That's fine as far as it goes, but one of the consistent threads that run through all this is a noticeable disdain for human bodies.

While our brain power is largely centralized, it still seems to me likely that both the brain's grey matter and the rest of our bodies are an important part of the substrate. How we move through space, how our bodies react and feed our brains is part and parcel of how our minds work, however much we may wish to transcend biology. The fact that we can watch films of bonobos and chimpanzees and recognise our own behaviour in their interactions should show us that we're a lot closer to most animal species than we think - and a lot further from most machines.

For that sort of reason, the Turing test seems limited. A computer passes that test if, when paired against a human, the judge can't tell which is which. At the moment, it seems clear the winner is going to be spambots - some spam messages are already devised cleverly enough to fool even Net-savvy individuals into opening them sometimes. But they're hardly smart - they're just programmed that way. And a lot depends on the capability of the judge - some people even find Eliza convincing, though it's incredibly easy to send off-course into responses that are clearly those of a machine. Find a judge who wants to believe and you're into the sort of game that self-styled psychics like to play.

Nor can we judge a superhuman intelligence by the intractable problems it solves. One of the more evangelist speakers last weekend talked about being able to instantly create tall buildings via nanotechnology. (I was, I'm afraid, irresistibly reminded of that Bugs Bunny cartoon where Marvin pours water on beans to produce instant Martians to get rid of Bugs.) This is clearly just silly: you're talking about building a gigantic building out of molecules. I don't care how many billions of nanobots you have, the sheer scale means it's going to take time. And, as Kevin Kelly has written, no matter how smart a machine is, figuring out how to cure cancer or roll back aging won't be immediate either because you can't really speed up the necessary experiments. Biology takes time.

Instead, one indicator might be variability of response; that is, that feeding several machines the same input - or giving the same machine the same input at different times - produces different, equally valid interpretations. If, for example, you give a 10th grade class Jane Austen's Pride and Prejudice to read and report on, different students might with equal legitimacy describe it as a historical account of the economic forces affecting 18th century women, a love story, the template for romantic comedy, or even the story of the plain sister in a large family whose talents were consistently overlooked until her sisters got married.

In The Singularity Is Near, Ray Kurzweil laments that each human must read a text separately and that knowledge can't be quickly transferred from one to another the way a speech recognition program can be loaded into a new machine in seconds - but that's the point. Our strength is that our intelligences are all different, and we aren't empty vessels into which information is poured but stews in which new information causes varying chemical reactions.

You might argue that search engines can already do this, in that you don't get the same list of hits if you type the same keywords into Google versus Yahoo! versus Ask.com, and if you come back tomorrow you may get a different response from any one of them. That's true. It isn't the kind of input I had in mind, but fair enough.

The other benchmark that's occurred to me so far is that machines will be getting really smart when they get bored.

ZDNet UK editor Rupert Goodwins has a variant on this from when he worked at Sinclair Research. "If it went out one evening, drank too much, said the next morning, 'never again' and repeated the exercise immediately. Truly human." But see? There again: a definition of human intelligence that requires a body.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 24, 2008

Living by numbers

"I call it tracking," said a young woman. She had healthy classic-length hair, a startling sheaf of varyingly painful medical problems, and an eager, frequent smile. She spends some minutes every day noting down as many as 40 different bits of information about herself: temperature, hormone levels, moods, the state of the various medical problems, the foods she eats, the amount and quality of sleep she gets. Every so often, she studies the data looking for unsuspected patterns that might help her defeat a problem. By this means, she says she's greatly reduced the frequency of two of them and was working on a third. Her doctors aren't terribly interested, but the data helps her decide which of their recommendations are worth following.

And she runs little experiments on herself. Change a bunch of variables, track for a month, review the results. If something's changed, go back and look at each variable individually to find the one that's making the difference. And so on.

Of course, everyone with the kind of medical problem that medicine can't really solve - diabetes, infertility, allergies, cramps, migraines, fatigue - has done something like this for generations. Diabetics in particular have long had to track and control their blood sugar levels. What's different is the intensity - and the computers. She currently tracks everything in an Excel spreadsheet, but what she's longing for is good tools to help her with data analysis.
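As a sketch of the kind of analysis she is doing by hand, a first pass beyond the spreadsheet might look like this - the column names are invented and the data is random placeholder, not anyone's real log:

    # A minimal sketch of hunting for unsuspected patterns in daily self-tracking
    # data. Column names and values are invented placeholders.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    days = 90
    log = pd.DataFrame({
        "hours_sleep": rng.normal(7, 1, days),
        "caffeine_mg": rng.integers(0, 300, days),
        "migraine_severity": rng.integers(0, 5, days),
        "mood": rng.integers(1, 10, days),
    })

    # Correlation is a blunt instrument, but it is a starting point for deciding
    # which variable deserves a month-long single-variable experiment.
    print(log.corr()["migraine_severity"].sort_values())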

From what Gary Wolf, the organizer of this group, Quantified Self, says - about 30 people are here for its second meeting, after hours at Palo Alto's Institute for the Future to swap notes and techniques on personal tracking - getting out of the Excel spreadsheet is a key stage in every tracker's life. Each stage of improvement thereafter gets much harder.

Is this a trend? Co-founder Kevin Kelly thinks so, and so does the Washington Post, which covered this group's first meeting. You may not think you will ever reach the stage of obsession that would lead you to go to a meeting about it, but in fact, if the interviews I did with new-style health companies in the past year are any guide, we're going to be seeing a lot of this in the health side of things. Home blood pressure monitors, glucose tests, cholesterol tests, hormone tests - these days you can buy these things in Wal-Mart.

The key question is clearly going to be: who owns your health data? Most of the medical devices in development assume that your doctor or medical supplier will be the one doing the monitoring; the dozens of Web sites highlighted in that Washington Post article hope there's a business in helping people self-track everything from menstrual cycles to time management. But the group in Palo Alto are more interested in self-help: in finding and creating tools everyone can use, and in interoperability. One meeting member shows off a set of consumer-oriented prototypes - bathroom scale, pedometer, blood pressure monitor - that send their data to software on your computer to display and, prospectively, to a subscription Web site. But if you're going to look at those things together - charting the impact of how much you walk on your weight and blood pressure - wouldn't you also want to be able to put in the foods you eat? There could hardly be an area where open data formats will be more important.
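Open formats need not be complicated. A hypothetical minimal reading record - not any existing standard, just an illustration of how little is needed for a scale, a pedometer, and a food log to share one file - might be no more than:

    # Hypothetical minimal reading format (not an existing standard) showing how
    # little is needed for different devices and apps to share one data file.
    import json
    from datetime import datetime, timezone

    reading = {
        "source": "bathroom_scale",   # which device or app produced the value
        "metric": "weight_kg",
        "value": 72.4,
        "timestamp": datetime(2008, 10, 24, 7, 30, tzinfo=timezone.utc).isoformat(),
    }
    print(json.dumps(reading, indent=2))

Agree on something that simple and the charting tools, the correlations, and the subscription Web sites can compete on analysis instead of on locking up the data.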

All of that makes sense. I was less clear on the usefulness of an idea another meeting member has - he's doing a start-up to create it - a tiny, lightweight recording camera that can clip to the outside of a pocket. Of course, this kind of thing already has a grand old man in the form of Steve Mann, who has been recording his life with an increasingly small sheaf of devices for a couple of decades now. He was tired, this guy said, of cameras that are too difficult to use and too big and heavy; they get left at home and rarely used. This camera they're working on will have a wide-angle lens ("I don't know why no one's done this") and take two to five pictures a second. "That would be so great," breathes the guy sitting next to me.

Instantly, I flash on the memory of Steve Mann dogging me with flash photography at Computers, Freedom, and Privacy 2005. What happens when the police subpoena your camera? How long before insurance companies and marketing companies offer discounts as inducements to people to wear cameras and send them the footage unedited so they can study behavior they currently can't reach?

And then he said, "The 10,000 greatest minutes of your life that your grandchildren have to see," and all you can think is, those poor kids.

There is a certain inevitable logic to all this. If retailers, manufacturers, marketers, governments, and security services are all convinced they can learn from data mining us, why shouldn't we be able to gain insights by doing it ourselves?

At the moment, this all seems to be for personal use. But consider the benefits of merging it with Web 2.0 and social networks. At last you'll be able to answer the age-old question: why do we have sex less often than the Joneses?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

September 26, 2008

Wimsey's whimsy

One of the things about living in a foreign country is this: every so often the actual England I live in collides unexpectedly with the fictional England I grew up with. Fictional England had small, friendly villages with murders in them. It had lowering, thick fogs and grim, fantastical crimes solvable by observation and thought. It had mathematical puzzles before breakfast in a chess game. The England I live in has Sir Arthur Conan Doyle's vehement support for spiritualism, traffic jams, overcrowding, and four million people who read The Sun.

This week, at the GikIII Workshop, in a break between Internet futures, I wandered out onto a quadrangle of grass so brilliantly and perfectly green that it could have been an animated background in a virtual world. Overlooking it were beautiful, stolid, very old buildings. It had a sign: Balliol College. I was standing on the quad where, "One never failed to find Wimsey of Balliol planted in the center of the quad and laying down the law with exquisite insolence to somebody." I know now that many real people came out of Balliol (three kings, three British prime ministers, Aldous Huxley, Robertson Davies, Richard Dawkins, and Graham Greene) and that those old buildings date to 1263. Impressive. But much more startling to be standing in a place I first read about at 12 in a Dorothy Sayers novel. It's as if I spent my teenaged years fighting alongside Angel avatars and then met David Boreanaz.

Organised jointly by Ian Brown at the Oxford Internet Institute and the University of Edinburgh's Script-ed folks, GikIII (pronounced "geeky") is a small, quirky gathering that studies serious issues by approaching them with a screw loose. For example: could we control intelligent agents with the legal structure the Ancient Romans used for slaves (Andrew Katz)? How sentient is a robot sex toy? Should it be legal to marry one? And if my sexbot rapes someone, are we talking lawsuit, deactivation, or prison sentence (Fernando Barrio)? Are RoadRunner cartoons all patent applications for devices thought up by Wile E. Coyote (Caroline Wilson)? Why is The Hound of the Baskervilles a metaphor for cloud computing (Miranda Mowbray)?

It's one of the characteristics of modern life that although questions like these sound as practically irrelevant as "how many angels, infinitely large, can fit on the head of a pin, infinitely small?", which may (or may not) have been debated here seven and a half centuries ago, they matter. Understanding the issues they raise matters in trying to prepare for the net.wars of the future.

In fact, Sherlock Holmes's pursuit of the beast is metaphorical; Mowbray was pointing out the miasma of legal issues for cloud computing. So far, two very different legal directions seem likely as models: the increasingly restrictive EULAs common to the software industry, and the service-level agreements common to network outsourcing. What happens if the cloud computing company you buy from doesn't pay its subcontractors and your data gets locked up in a legal battle between them? The terms and conditions in effect for Salesforce.com warn that the service has 30 days to hand back your data if you terminate, a long time in business. Mowbray suggests that the most likely outcome is EULAs for the masses and SLAs at greater expense for those willing to pay for them.

On social networks, of course, there are only EULAs, and the question is whether interoperability is a good thing or not. If the data people put on social networks ("shouldn't there be a separate disability category for stupid people?" someone asked) can be easily transferred from service to service, won't that make malicious gossip even more global and permanent? A lot of the issues Judith Rauhofer raised in discussing the impact of global gossip are not new to Facebook: we have a generation of 35-year-olds coping with the globally searchable history of their youthful indiscretions on Usenet. (And WELL users saw the newly appointed CEO of a large tech company delete every posting he made in his younger, more drug-addled 1980s.) The most likely solution to that particular problem is time. People arrested as protesters and marijuana smokers in the 1960s can be bank presidents now; in a few years the work force will be full of people with Facebook/MySpace/Bebo misdeeds, and no one will care except as something to laugh at drunkenly on a late night out in the pub.

But what Lilian Edwards wants to know is this: if we have or can gradually create the technology to make "every ad a wanted ad" - well, why not? Should we stop it? Online marketing is at £2.5 billion a year according to Ofcom, and a quarter of the UK's children spend 22 hours a week playing computer games, where there is no regulation of industry ads and where Web 2.0 is funded entirely by advertising. When TV and the Internet roll together, when in-game is in-TV and your social network merges with megamedia, and MTV is fully immersive, every detail can be personalized product placement. If I grew up five years from now, my fictional Balliol might feature Angel driving across the quad in a Nissan Prairie past a billboard advertising airline tickets.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

September 5, 2008

Return of the browser wars

It was quiet, too quiet. For so long it's just been Firefox/Mozilla/Netscape, Internet Explorer, and sometimes Opera that it seemed like that was how it was always going to be. In fact, things were so quiet that it seemed vaguely surprising that Firefox had released a major update and even long-stagnant Internet Explorer has version 8 out in beta. So along comes Chrome to shake things up.

The last time there were as many as four browsers to choose among, road-testing a Web browser didn't require much technical knowledge. You loaded the thing up, pointed it at some pages, and if you liked the interface and nothing seemed hideously broken, that was it.

This time round, things are rather different. To really review Chrome you need to know your AJAX from your JavaScript. You need to be able to test for security holes, and then discover more security vulnerabilities. And the consequences when these things are wrong are so much greater now.

For various reasons, Chrome probably isn't for me, quite aside from its copy-and-paste EULA oops. Yes, it's blazingly fast and I appreciate that because it separates each tab or window into its own process it crashes more gracefully than its competitors. But the switching cost lies less in those characteristics than in the amount of mental retraining it takes to adapt your way of working to new quirks. And, admittedly based on very short acquaintance, Chrome isn't worth it now that I've reformatted Firefox 3's address bar into a semblance of the one in Firefox 2. Perhaps when Chrome is a little older and has replaced a few more of Firefox's most useful add-ons (or when I eventually discover that Chrome's design means it doesn't need them).

Chrome does not do for browsers what Google did for search engines. In 1998, Google's ultra-clean, quick-loading front page and search results quickly saw off competing, ultra-cluttered, wait-for-it portals like Altavista because it was such a vast improvement. (Ironically, Google now has all those features and more, but it's smart enough to keep them off the front page.)

Chrome does some cool things, of course, as anything coming out of Google always has. But its biggest innovation seems to be more completely merging local and global search, a direction in which Firefox 3 is also moving, although with fewer unfortunate consequences. And, as against that, despite the "incognito" mode (similar to IE8) there is the issue of what data goes back to Google for its coffers.

It would be nice to think that Chrome might herald a new round of browser innovation and that we might start seeing browsers that answer different needs than are currently catered for. For example: as a researcher I'd like a browser to pay better attention to archiving issues: a button to push to store pages with meaningful metadata as well as date and time, the URL the material was retrieved from, whether it's been updated since and if so how, and so on. There are a few offline browsers that sort of do this kind of thing, but patchily.
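None of this would require exotic technology. A rough sketch of what that "archive this page" button would have to capture - standard-library Python, fetching a page and saving it next to its retrieval metadata; the filenames are of course arbitrary - might be:

    # Rough sketch of an "archive this page" action: save the page alongside
    # retrieval metadata so the stored copy can be cited and checked later.
    import hashlib
    import json
    from datetime import datetime, timezone
    from urllib.request import urlopen

    def archive(url: str, stem: str) -> None:
        with urlopen(url) as response:
            body = response.read()
            content_type = response.headers.get("Content-Type")
        metadata = {
            "url": url,
            "retrieved_at": datetime.now(timezone.utc).isoformat(),
            "content_type": content_type,
            "sha256": hashlib.sha256(body).hexdigest(),  # detect later changes
        }
        with open(stem + ".html", "wb") as page:
            page.write(body)
        with open(stem + ".json", "w") as meta:
            json.dump(metadata, meta, indent=2)

    archive("http://example.com/", "example_archive")

Checking whether the page has been updated since is then a matter of fetching it again and comparing hashes; recording how it changed is the harder part, and the part where a browser maker could actually add value.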

The other big question hovering over Chrome is standards: Chrome is possible because the World Wide Web Consortium has done its work well. Standards and the existence of several competing browsers with significant market share have prevented any one company from seizing control and turning the Web into the kind of proprietary system Tim Berners-Lee resisted from the beginning. Chrome will be judged on how well it renders third-party Web pages, but Google can certainly tailor its many free services to work best with Chrome - not so different a proposition from the way Microsoft has controlled the desktop.

Because: the big thing Chrome does is bring Google out of the shadows as a competitor to Microsoft. In 1995, Business Week ran a cover story predicting that Java (write once, run on anything) and the Web (a unified interface) could "rewrite the rules of the software industry". Most of the predictions in that article have not really come true - yet - in the 13 years since it was published; or if they have it's only in modest ways. Windows is still the dominant operating system, and Larry Ellison's thin clients never made a dent in the market. The other big half of the challenge to Microsoft, GNU/Linux and the open-source movement, was still too small and unfinished.

Google is now in a position to deliver on those ideas. Not only are the enabling technologies in place but it's now a big enough company with reliable enough servers to make software as a Net service dependable. You can collaboratively process your words using Google Docs, coordinate your schedules with Google Calendar, and phone across the Net with Google Talk. I don't for one minute think this is the death of Microsoft or that desktop computing is going to vanish from the Earth. For one thing, despite the best-laid cables and best-deployed radios of telcos and men, we are still a long way off of continuous online connectivity. But the battle between the two different paradigms of computing - desktop and cloud - is now very clearly ready for prime time.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

June 6, 2008

The Digital Revolution turns 15

"CIX will change your life," someone said to me in 1991 when I got a commission to review a bunch of online systems and got my first modem. At the time, I was spending most or all of every day sitting alone in my house putting words in a row for money.

The Net, Louis Rossetto predicted in 1993, when he founded Wired, would change everybody's lives. He compared it to a Bengali typhoon. And that was modest compared to others of the day, who compared it favorably to the discovery of fire.

Today, I spend most or all of every day sitting alone in my house putting words in a row for money.

But yes: my profession is under threat, on the one hand from shrinkage of the revenues necessary to support newspapers and magazines - which is indeed partly fuelled by competition from the Internet - and on the other hand from megacorporate publishers who routinely demand ownership of the copyrights freelances used to resell for additional income - a practice that the Internet was likely to largely kill off anyway. Few have ever gotten rich from journalism, but freelance rates haven't budged in years; staff journalists get very modest raises, and in return they are required to work more hours a week and produce more words.

That embarrassingly solipsistic view aside, more broadly, we're seeing the Internet begin to reshape the entertainment, telecommunications, retail, and software industries. We're seeing it provide new ways for people to organize politically and challenge the control of information. And we're seeing it and natural laziness kill off our history: writers and students alike rely on online resources at the expense of offline archives.

Wired was, of course, founded to chronicle the grandly capitalized Digital Revolution, and this month, 15 years on, Rossetto looked back to assess the magazine's successes and failures.

Rossetto listed three failures and three successes. The three failures: history has not ended; Old Media are not dead (yet); and governments and politics still thrive. The three successful predictions: the long boom; the One Machine, a man/machine planetary consciousness; that technology would change the way we relate to each other and cause us to reinvent social institutions.

I had expected to see the long boom in the list of failures, and not just because it was so widely laughed at when it was published. Rossetto is fair to say that the original 1997 feature was not invalidated by the 2000 stock market bust. It wasn't about that (although one couldn't resist snickering about it as the NASDAQ tanked). Instead, what the piece predicted was a global economic boom covering the period 1980 to 2020.

Wrote Peter Schwartz and Peter Leyden, "We are riding the early waves of a 25-year run of a greatly expanding economy that will do much to solve seemingly intractable problems like poverty and to ease tensions throughout the world. And we'll do it without blowing the lid off the environment."

Rossetto, assessing it now, says, "There's a lot of noise in the media about how the world is going to hell. Remember, the truth is out there, and it's not necessarily what the politicians, priests, or pundits are telling you."

I think: 1) the time to assess the accuracy of an article outlining the future to 2020 is probably around 2050; 2) the writers themselves called it a scenario that might guide people through traumatic upheavals to a genuinely better world rather than a prediction; 3) that nonetheless, it's clear that the US economy, which they saw as leading the way, has suffered badly in the 2000s with the spiralling deficit and rising consumer debt; 4) that media alarm about the environment, consumer debt, government deficits, and poverty is hardly a conspiracy to tell us lies; and 5) that they signally underestimated the extent to which existing institutions would adapt to cyberspace (the underlying flaw in Rossetto's assumption that governments would be disbanding by now).

For example, while timing technologies is about as futile as timing the stock market, it's worth noting that they expected electronic cash to gain acceptance in 1998 and to be the key technology to enable electronic commerce, which they guessed would hit $10 billion by 2000. Last year it was close to $200 billion. Writing around the same time, I predicted (here) that ecommerce would plateau at about 10 percent of retail; I assumed this was wrong, but it seems that it hasn't even reached 4 percent yet, though it's obvious that, particularly in the copyright industries, the influence of online commerce is punching well above its statistical weight.

No one ever writes modestly about the future. What sells - and gets people talking - are extravagant predictions, whether optimistic or pessimistic. Fifteen years is a tiny portion even of human history, itself a blip on the planet. Tom Standage, writing in his 1998 book The Victorian Internet, noted that the telegraph was a far more radically profound change for the society of its day than the Internet is for ours. A century from now, the Internet may be just as obsolete. Rossetto, like the rest of us, will have to wait until he's dead to find out if his ideas have lasting value.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

January 11, 2008

Beyond biology

"Will we have enough food?"

Last Saturday (for an article in progress for the Guardian), I attended the monthly board meeting at Alcor, probably the largest of the several cryonics organizations. Cryonics: preserving a newly deceased person's body in the hope that medical technology will improve to the point where that person can be warmed up, revived, and cured.

I was the last to arrive at what I understand was an unusually crowded meeting: fifteen, including board members, staffers, and visitors. Hence the chair's anxious question.

The conference room has a window at one end that looks into a mostly empty concrete space at a line of giant cylinders, some gleaming steel, some dull aluminum. These "dewars" are essentially giant Thermos bottles, and they are the vessels in which cryopreserved patients are held. Each dewar can hold up to nine patients – four whole bodies, head down, and five neuro patients in a column down the middle.

There is a good reason to call these cryopreserved Alcor members "patients". If the cryonics dream ever comes to fruition, it will turn out that they were never really dead. And in any case, calling them patients has the same function as naming your sourdough starter: it reminds you that here is something that cannot survive without your responsible care.

To Alcor's board and staff, these are often personal friends. A number have their framed pictures on the board room wall, with the dates of their birth and cryopreservation. It was therefore a little eerie to realize that those visible dewars were, mostly, occupied.

I think the first time I ever heard of anything like cryonics was Woody Allen's movie Sleeper. Reading about it as a serious proposition came nearly 20 years later, in Ed Regis's 1992 book Great Mambo Chicken and the Transhuman Condition. Regis's book, which I reviewed for New Scientist, was a vivid ramble through the outer fringes of science, which he dubbed "fin-de-siècle hubris".

My view hasn't changed: since cremation and burial both carry a chance of revival of zero, cryonics has to do hardly anything to offer better odds, no matter how slight. But it remains a contentious idea. Isaac Asimov, for example, was against it, at least for himself. The science fiction I read as a teenager was filled with overpopulated earths covered in giant blocks of one-room apartments and people who lived on synthetic food because there was no longer the space or ability to grow enough of the real stuff. And we're going to add long-dead people as well?

That kind of issue comes up when you mention cryonics. Isn't it selfish? Or expensive? Or an imposition on future generations? What would the revived person live on, given their outdated skills? Supposing you wake up a slave?

Many of these issues have been considered, if not by cryonicists themselves for purely practical reasons then by sf writers. Robert A. Heinlein's 1957 book The Door Into Summer had its protagonist involuntarily frozen and deposited into the future with no assets and no employment prospects, given that his engineering background was 30 years out of date. Larry Niven's 1971 short story "Rammer" had its hero revived into the blanked body of a criminal and sent out as a spaceship pilot by a society that would have calmly vaped his personality and replaced it with the next one if he were found unsuitable (Niven was also, by the way, the writer who coined the descriptor "corpsicle" for the cryopreserved). Even Woody Allen's Miles Monroe woke up in danger.

The thing is, those aren't reasons for cryonicists not to try to make their dream a reality. They are arguments for careful thought on the part of the cryonics organizations who are offering cryopreservation and possible revival as services. And they do think about it, in part because the people running those organizations expect to be cryopreserved themselves. The scientist and Alcor board member Ralph Merkle, in an interview last year, pointed out that the current board chooses its successors with great care, "Because our lives will depend on selecting a good group to continue the core values."

Many of them are also bad arguments. Most people, given their health, want their lives to continue; if they didn't, we'd be awash in suicides. If overpopulation is the problem, having children is just as selfish a way of securing immortality as wanting longer life for oneself. If burdening future generations is the problem, doing so by being there is hardly worse than using up all the planet's resources in our lifetime, leaving our descendants to suffer the consequences unaided. Nor is being uncertain of the consequences a reason: human history is filled with technologies we've developed on the basis that we'd deal with the consequences as they arose. Some consequences were good, some bad; most technologies have a mix of the two.

After the board meeting ended, several of those present and I went on talking about just these issues over lunch.

"We won't be harder to deal with than a baby," one of them said. True, but there is a much bigger biological urge to reproduce than there is to revive someone who was pronounced dead a century or two ago.

"We are kind of going around biology," he admitted.

Only up to a point: there was enough food.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).