
July 8, 2022

Orphan consciousness

What if, Paul Bernal asked late in this year's Gikii, someone uploaded a consciousness and then we forgot where we got it from? Taking an analogy from copyrighted works whose owners are unknown - orphan works - an orphan consciousness. What rights would it have? Can it commit crimes? Is it murder to erase it? What if it met a fellow orphan consciousness and together they created a third? Once it's up there without a link to humanity, then what?

These questions annoyed me less than proposals for robot rights, partly because they're more obviously a thought experiment, and partly because they specifically derived from Greg Daniels' science fiction series Upload, which inspired many of this year's Gikii presentations. The gist: Nathan (Robbie Amell), whose lung is collapsing after an autonomous vehicle crash, is offered two choices: take his chances in the operating room, or have his consciousness uploaded into Lakeview, a corporately owned and run "paradise" where he can enjoy an afterlife in considerable comfort. His girlfriend, Ingrid (Allegra Edwards), begs him to take the afterlife, at her family's expense. As he's rushed into signing the terms and conditions, I briefly expected him to land at the waystation in Albert Brooks' 1991 film Defending Your Life.

Instead, he wakes in a very nice country club hotel where he struggles to find his footing among his fellow uploaded avatars and wrangle the power dynamics in his relationship with Ingrid. What is she willing to fund? What happens if she stops paying? (A spartan 2GB per day, we find later.) And, as Bernal asked, what are his neurorights?

Fiction, as Gikii proves every year (2021), provides fully-formed use cases through which to explore the developing ethics and laws surrounding emergent technologies. For the current batch - the Digital Markets Act (EU, passed this week), the Digital Services Act (ditto), the Online Safety bill (UK, pending), the Platform Work Directive (proposed, EU), the platform-to-business regulations (in force 2020, EU and UK), and, especially, the AI Act (pending, EU) - Upload couldn't be more on point.

Side note: in-person attendees got to sample the Icelandverse, a metaverse of remarkable physical reality and persistence.

Upload underpinned discussions of deception and consent laws (Burkhard Schäfer and Chloë Kennedy), corporate objectification (Mauricio Figueroa), and property rights - English law bans perpetual trusts. Can uploads opt out? Can they be murdered? Maybe, like copyright, give them death plus 70 years?

Much of this has direct relevance to the "metaverse", which Anna-Maria Piskopani called "just one new way to do surveillance capitalism". The show's perfect example: when sex fails to progress, Ingrid yells out, "Tech support!"

In life, Nora (Andy Allo), the "angel" who arrives to help, works in an open plan corporate dystopia where her co-workers gossip about the avatars they monitor. As in this year's other notable fictional world, Dan Erickson's Severance, the company is always watching, a real pandemic-accelerated trend. In our paper, Andelka Phillips and I noted that although the geofenced chip implanted in Severance's workers prevents their work selves ("innies") from knowing anything about their out-of-hours selves ("outies"), their employer has no such limitation. Modern companies increasingly expect omniscience.

Both series reflect the growing ability of cyber systems to effect change in the physical world. Lachlan Urquhart, Lilian Edwards, and Derek McAuley used the science fiction comedy film Ron's Gone Wrong to examine the effect of errors at scale. The film's damaged robot, Ron, is missing safety features and spreads its settings to its counterparts. Would the AI Act view Ron as high or low risk? It may be a distinction without a difference; McAuley reminded us that there will always be failures in the field. "A one-bit change can make changes of orders of magnitude." Then that chip ships by the billion, and can be embedded in millions of devices before the flaw is found. Rinse, repeat, and apply to autonomous vehicles.
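
McAuley's one-bit point is easy to see in a toy sketch (the value and the choice of bit here are mine, purely for illustration, not anything from the paper): flip a single bit of a stored floating-point number and watch the magnitude change.

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Return x with one bit (0 = least significant) of its
    64-bit IEEE 754 encoding inverted."""
    (as_int,) = struct.unpack("<Q", struct.pack("<d", x))
    return struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))[0]

reading = 30.0                      # a hypothetical sensor reading
corrupted = flip_bit(reading, 62)   # flip the exponent's top bit
print(reading, "->", corrupted)     # 30.0 -> ~1.67e-307
```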

In Japan, however, as Naomi Lindvedt explained, the design culture surrounding robots has been far more influenced by the rules written for Astro Boy in 1951 by creator Tezuka Osamu than by Asimov's Laws. These rules are more restrictive and prescriptive, and designers aim to create robots that integrate into society and are user-friendly.

In other quick highlights, Michael Veale noted the Deliveroo ads that show food moving by itself, as if there are no delivery riders, and observed that technology now enforces the exclusivity that used to be contractual, so that drivers never see customer names and contact information, and so can't easily make direct arrangements; Tima Otu Anwana and Paul Eberstaller examined the business relationship between OnlyFans and its creators; Sandra Schmitz-Berndt and Paula Contreras showed the difficulty of reporting cyber incidents given the multiple authorities and their inconsistent requirements; Adrian Aronsson-Storrier produced an extraordinary long-lost training video (Super-Betamax!) for a 500-year-old Swedish copyright cult; Helen Oliver discussed attitudes to privacy as revealed by years of UK high school students' entries for a competition to design fictional space stations; and Andy Phippen, based on his many discussions with kids, favors a harm reduction approach to online safety. "If the only horse in town is the Online Safety bill, nothing's going to change."


Illustrations: Image from the Icelandverse (by Inspired by Iceland).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 27, 2022

Well may the bogeyman come

It's only an accident of covid that this year's Computers, Privacy, and Data Protection conference - delayed from late January - coincided with the fourth anniversary of the EU's General Data Protection Regulation. Yet its failures and frustrations were on everyone's mind as they considered new legislation forthcoming from the EU: the Digital Services Act, the Digital Markets Act, and, especially, the AI Act.

Two main frustrations: despite GDPR, privacy invasions continue to expand, and, related, enforcement has been extremely limited. The first is obvious to everyone here. For the second: as Max Schrems explained in a panel on GDPR enforcement, none of the cross-border cases his NGO, noyb, filed on May 25, 2018, the day GDPR came into force, have been decided, and even decisions on simpler cases have failed to deal with broader questions.

In one of his examples, Spain rejected a complaint because it wasn't handling historic cases, and Austria claimed the case was solved because the organization involved had changed its procedures. "But my rights were violated then." There was no redress.

Schrems is the data protection bogeyman; because legal actions he has brought have twice struck down US-EU agreements to enable data flows, the possibility of "Schrems III" if the next version gets it wrong is frequently mentioned. This particular panel highlighted numerous barriers that block effective action.

Other speakers highlighted numerous gaps between countries that impede cross-border complaints: some authorities have tight deadlines that expire while other authorities are working to more leisurely schedules; there are many conflicts between national procedural laws; each data protection authority has its own approach and requirements; and every cross-border complaint must be time-consumingly translated into English, even when both relevant authorities speak, say, German. "Getting an answer to a two-minute question takes four months," Nina Herbort said, highlighting the common underlying problem: underresourcing.

"Weren't they designed to fail?" Finn Myrstad asked.

Even successful enforcement has largely been limited to levying fines - and despite some of the eye-watering numbers, they're still just a cost of doing business to major technology platforms.

"We have the tools for structural sanctions," Johnny Ryan said in a discussion on judicial actions. Some of that is beginning to happen. A day earlier, the UK'a Information Commissioner's Office fined Clearview AI £7.5 million and ordered it to delete the images it holds of UK residents. In February, Canada issued a similar order; a few weeks ago, Illinois permanently banned the company from selling its database to most private actors and businesses nationwide, and barred it from selling its service to any entity within Illinois for five years. Sanctions like these hurt more than fines as does requiring companies to delete the algorithms they've based on illegally acquired data.

Other suggestions included building sovereignty by ensuring that public procurement does not default to off-the-shelf products from a few foreign companies but is built on local expertise, advocated by Jan-Philipp Albrecht, the former MEP, who told a panel on the impact of Schrems II that he is now building up cloud providers using locally-built hardware and open source software for the province of Schleswig-Holstein. Quang-Minh Lepescheux suggested requiring transparency in how people are trained to use automated decision making systems and forcing technology providers to accept third-party testing. Cristina Caffarra, probably the only antitrust economist in sight, wants privacy advocates and antitrust lawyers to work together; the economists inside competition authorities insist that more data means better products, so it's good for consumers. Rebecca Slaughter wants to give companies the clarity they say they want (until they get it): clear, regularly updated rules banning a list of practices, with a catchall. Ryan also noted that some sanctions can vastly improve enforcement efficiency: there's nothing to investigate after banning a company from making acquisitions. Enforcing purpose limitation and banning the single "OK to everything" is more complicated but, "Purpose limitation is Kryptonite to Big Tech when it's misusing data."

Any and all of these are valuable. But new kinds of thinking are also needed. The more complex issue, and another major theme, was the limitations of focusing on personal data and individual rights. This was long predicted as a particular problem for genetic data - the former science journalist Tom Wilkie was first to point out the implications, sounding a warning in his book Perilous Knowledge, published in 1994, at the beginning of the Human Genome Project. Singling out individuals who have been harmed can easily obfuscate collective damage. The obvious example is Cambridge Analytica and Facebook: the damage to national elections can't be captured one Friends list at a time. Controls on the increasing use of aggregated data require protection at scale, and, perversely, monitoring for bias and discrimination requires data collection.

In response to a panel on harmful patterns in recent privacy proposals, an audience member suggested the African philosophy of ubuntu as a useful source of ideas for thinking about collective and, even more important, *interdependent* data. This is where we need to go. Many forms of data - including both genetic data and financial data - cannot be thought of any other way.


Illustrations: The Norwegian Consumer Council receives EPIC's International Privacy Champion award at CPDP 2022.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 1, 2021

Plausible diversions

If you want to shape a technology, the time to start is before it becomes fixed in the mindset of "'twas ever thus". This was the idea behind the creation of We Robot. At this year's event (see below for links to previous years), one clear example of this principle came from Thomas Krendl Gilbert and Roel I. J. Dobbe, whose study of autonomous vehicles coined "jaywalkification" for the way we've privileged cars. On the blank page in the lawbook, we chose to make it illegal for pedestrians to get in cars' way.

We Robot's ten years began with enthusiasm, segued through several depressed years of machine learning and AI, and this year seemingly arrived at a twist on Arthur C. Clarke's famous dictum. To wit: maybe any technology sufficiently advanced to seem like magic can be well enough understood that we can assign responsibility and liability. You could say it's been ten years of progressively removing robots' glamor.

Something like this was at the heart of the paper by Andrew Selbst, Suresh Venkatasubramanian, and I. Elizabeth Kumar, which uses the computer science staple of abstraction as a model for assigning responsibility for the behavior of complex systems. Weed out debates over the innards - is the system's algorithm unfair, or was the training data biased? - and aim at the main point: this employer chose this system that produced these results. No one needs to be inside its "black box" if you can understand its boundaries. In one analogy, it's not the manufacturer's fault if a coffee maker fails to produce drinkable coffee from poisoned water and ground acorns; it *is* their fault if the machine turns potable water and ground coffee into toxic sludge. Find the decision points, and ask: how were those decisions made?

Gilbert and Dobbe used two other novel coinages: "moral crumple zoning" (from Madeleine Claire Elish's paper at We Robot 2016) and "rubblization", for altering the world to assist machines. Exhibit A, which exemplifies all three, is the 2018 incident in which an Uber car in autonomous mode killed a pedestrian in Tempe, Arizona. She was jaywalking; she and the inattentive safety driver were moral crumple zoned; and the rubblized environment prioritized cars.

Part of Gilbert and Dobbe's complaint was that much discussion of autonomous vehicles focuses on the trolley problem, which has little relevance to how either humans or AIs drive cars. It's more useful to focus instead on how autonomous vehicles reshape public space as they begin to proliferate.

This reshaping issue also arose in two other papers: one on smart farming in East Africa by Laura Foster, Katie Szilagyi, Angeline Wairegi, Chidi Oguamanam, and Jeremy de Beer, and one by Annie Brett on the rapid yet largely overlooked expansion of autonomous vehicles in ocean shipping, exploration, and data collection. In the first case, part of the concern is the extension of colonization by framing precision agriculture and smart farming as more valuable than the local knowledge held by small farmers, the majority of whom are black women, and by viewing that knowledge as freely available for appropriation. As in the Western world, where manufacturers like John Deere and Monsanto claim intellectual property rights in seeds and knowledge that formerly belonged to farmers, the arrival of AI alienates local knowledge by stowing it in algorithms, software, sensors, and equipment, and makes the plants on which our continued survival depends into inert raw material. Brett, in her paper, highlights the growing gaps in international regulation as the Internet of Things goes maritime and changes what's possible.

A slightly different conflict - between privacy and the need to not be "mis-seen" - lies at the heart of Alice Xiang's discussion of computer vision. Elsewhere, Agathe Balayn and Seda Gürses make a related point in a new EDRi report that warns against relying on technical debiasing tweaks to datasets and algorithms at the expense of seeing the larger social and economic costs of these systems.

In a final example, Marc Canellas studied whole cybernetic systems and found they create gaps where it's impossible for any plaintiff to prove liability, in part because of the complexity and interdependence inherent in these systems. Canellas proposes that the way forward is to redefine intentional discrimination and apply strict liability. You do not, Cynthia Khoo observed in discussing the paper, have to understand the inner workings of complex technology to understand that the system is reproducing the same problems and the same long history, if you focus on the outcomes and not the process - especially if you know the process is rigged to begin with. The widespread ethos of "move fast and break things", Canellas noted, mostly encumbers people who are already vulnerable.

I like this overall approach of stripping away the shiny distraction of new technology and focusing on its results. If, as a friend says, Facebook accurately described setting up an account as "adding a line to our database" instead of "connecting with your friends", who would sign up? Similarly, don't let Amazon get cute about its new "Astro" comprehensive in-home data collector.

Many look at Astro and see instead the science fiction robot butler of decades hence. As Frank Pasquale noted, we tend to overemphasize the far future at the expense of today's decisions. In the same vein, Deborah Raji called robot rights a way of absolving people of their responsibility. Today's greater threat is that gig employers are undermining workers' rights, not whether robots will become sentient overlords. Today's problem is not that one day autonomous vehicles may be everywhere, but that the infrastructure needed to make partly-autonomous vehicles safe will roll over us. Or, as Gilbert put it: don't ask how you want cars to drive; ask how you want cities to work.


Previous years: 2013; 2015; 2016 workshop; 2017; 2018 workshop and conference; 2019 workshop and conference; 2020.

Illustrations: Amazon photo of Astro.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 23, 2021

Internet fragmentation as a service

"You spend most of your day telling a robot that you're not a robot. Think about that for two minutes and tell me you don't want to walk into the ocean," the comedian John Mulaney said in his 2018 comedy special, Kid Gorgeous. He was talking about captchas.

I was reminded of this during a recent panel at the US Internet Governance Forum hosted by Mike Nelson. Nelson's challenge to his panelists: imagine alternative approaches to governments' policy goals that won't damage the Internet. They talked about unintended consequences (and the exploitation thereof) of laws passed with good intentions, governments' demands for access to data, ransomware, content blocking, multiplying regional rulebooks, technical standards and interoperability, transparency, and rising geopolitical tensions, which cyberspace policy expert Melissa Hathaway suggested should be thought about by playing a mash-up of the games Risk and Settlers of Catan. The main topic: is the Internet at risk of fragmentation?

So much depends on what you mean by "fragmentation". No one mentioned the physical damage achievable by ten backhoes. Nor the domain name system that allows humans and computers to find each other; "splitting the root" (that is, the heart of the DNS) used to dominate such discussions. Nor captchas, but the reason Mulaney sprang to mind was that every day (in every way) captchas frustrate access. Saying that makes me privileged; in countries where Facebook is zero-rated but the rest of the Internet costs money people can't afford on their data plans, the Internet is as cloven as it can possibly be.

Along those lines, Steve DelBianco raised the idea of splintering-by-local-law, the most obvious example being the demand in many countries for data localization. DelBianco, however, cited Illinois' Biometric Information Privacy Act (2008), which has been used to sue platforms on behalf of unnamed users for automatically tagging their photos online. Result: autotagging is not available to Illinois users on the major platforms, and neither is the Google Nest and Amazon Ring doorbells' facility for recognizing and admitting friends and family. See also GDPR, noted above, which three and a half years after taking force still has US media sites blocking access by insisting that our European visitors are important to us.

You could also say that the social Internet is splintering along ideological lines as the extreme right continue to build their own media and channels. In traditional media, this was Roger Ailes' strategy. Online, the medium designed to connect people doesn't care who it connects or for what purpose. Commercial social media engagement algorithms have exacerbated this, as many current books make plain.

Nelson, whose Internet policy experience goes back to the Clinton administration, suggested that policy change is generally driven by a big event: 9/11, for example, which led promptly to the passage of the PATRIOT Act (US) and the Anti-Terrorism, Crime, and Security Act (UK), or the Colonial Pipeline hack that has made ransomware an urgent mainstream concern. So, he asked: what kind of short, sharp shock would cause the Internet to fracture? If you see data protection law as a vector, the 2013 Snowden revelations were that sort of event; a year earlier, GDPR had looked to be fading away.

You may be thinking, as I was, that we're literally soaking in global catastrophes: the COVID-19 pandemic, and climate change. Both are slow-burning issues, unlike the high-profile drivers of legislative panic Nelson was looking for, but both generate dozens of interim shocks.

I'm always amazed so little is said about climate change and the future of the Internet; the IT industry's emissions just keep growing. China's ban on cryptocurrency mining, which it attributes to environmental concerns, may be the first of many such limits on the use of computing power. Disruptions to electricity supplies - just yesterday, the UK's National Grid warned there may be blackouts this winter - don't "break" the Internet, but they do make access precarious.

So far, the pandemic's effect has mostly been to exacerbate ideological splits and accelerate efforts to curb the spread of misinformation via social media. It's also led to increased censorship in some places; early on, China banned virus-related keywords on WeChat, and this week the Indian authorities raided a newspaper that criticized the government's pandemic response. In addition, the exposure and exacerbation of social inequalities brought by the pandemic may, David Bray suggested in the panel, be contributing to the increase in cybercrime, as "failed states" struggle to rescue their economies. This week's revelations of the database of numbers of interest to NSO Group clients since 2016 don't fragment the Internet as a global communications system, but they might in the sense that some people may not be able to afford the risk of being on it.

This is where Mulaney comes in. Today, robots gatekeep web pages. Three trends seem likely to expand their role: age verification and online safety laws; covid passports, which are beginning to determine access to physical-world events; and the Internet of Things, which is bridging what's left of the divide between cyberspace and the real world. In the Internet-subsumed-into-everything of our future, "splitting the Internet" may no longer be meaningful as the purely virtual construct Nelson's panel was considering. In the cyber-physical world, Internet fragmentation must also be hybrid.


Illustrations: The IGF-USA panel in action.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 23, 2021

Fast, free, and frictionless

"I want solutions," Sinan Aral challenged at yesterday's Social Media Summit, "not a restatement of the problems". Don't we all? How many person-millennia have we spent laying out the issues of misinformation, disinformation, harassment, polarization, platform power, monopoly, algorithms, accountability, and transparency? Most of these have been debated for decades. The big additions of the last decade are the privatization of public speech via monopolistic social media platforms, the vastly increased scale, and the transmigration from purely virtual into physical-world crises like the January 6 Capitol Hill invasion and people refusing vaccinations in the middle of a pandemic.

Aral, who leads the MIT Initiative on the Digital Economy and is author of the new book The Hype Machine, chose his panelists well enough that some actually did offer some actionable ideas.

The issues, as Aral said, are all interlinked (see also 20 years of net.wars). Maria Ressa connected the spread of misinformation to system design that enables distribution and amplification at scale. These systems are entirely opaque to us even while we are open books to them, as Guardian journalist Carole Cadwalladr noted, adding that while US press outrage is the only pressure that moves Facebook to respond, it no longer even acknowledges questions from anyone at her newspaper. Cadwalladr also highlighted the Securities and Exchange Commission's complaint that says clearly: Facebook misled journalists and investors. This dismissive attitude also shows in the leaked email in which Facebook plans to "normalize" the leak of 533 million users' data.

This level of arrogance is the result of concentrated power, and countering it will require antitrust action. That in turn leads back to questions of design and free speech: what can we constrain while respecting the First Amendment? Where is the demarcation line between free speech and speech that, like crying "Fire!" in a crowded theater, can reasonably be regulated? "In technology, design precedes everything," Roger McNamee said; real change for platforms at global or national scale means putting policy first. His Exhibit A of the level of cultural change that's needed was February's fad, Clubhouse: "It's a brand-new product that replicates the worst of everything."

In his book, Aral opposes breaking up social media companies as was done in cases such as Standard Oil and AT&T. Zephyr Teachout agreed in seeing breakup, whether horizontal (Facebook divests WhatsApp and Instagram, for example) or vertical (Google forced to sell Maps), as just one tool.

The question, as Joshua Gans said, is, what is the desired outcome? As Federal Trade Commission nominee Lina Khan wrote in 2017, assessing competition by the effect on consumer pricing is not applicable to today's "pay-with-data-but-not-cash" services. Gans favors interoperability, saying it's crucial to restoring consumers' lost choice. Lock-in is your inability to get others to follow when you want to leave a service, a problem interoperability solves. Yes, platforms say interoperability is too difficult and expensive - but so did the railways and telephone companies, once. Break-ups were a better option, Albert Wenger added, when infrastructures varied; today's universal computers and data mean copying is always an option.

Unwinding Facebook's acquisition of WhatsApp and Instagram sounds simple, but do we want three data hogs instead of one, like cutting off one of the Lernaean Hydra's heads? One idea that emerged repeatedly is slowing "fast, free, and frictionless"; Yael Eisenstat wondered why we allow experimental technology to be deployed at global scale while demanding that policy be painfully perfected before it can be tried.

Marietje Schaake (Democrats 66-NL), a former MEP, explained the EU's proposed Digital Markets Act, which aims to improve fairness by setting rules and responsibilities up front, preempting the too-long process of punishing bad behavior after the fact. Current proposals would bar platforms from combining user data from multiple sources without permission, from self-preferencing, and from spying (say, Amazon exploiting marketplace sellers' data), and would require data portability and interoperability for ancillary services such as third-party payments.

The difficulty with data portability, as Ian Brown said recently, is that even services that let you download your data offer no way to use data you upload. I can't add the downloaded data from my current electric utility account to the one I switch to, or send my Twitter feed to my Facebook account. Teachout finds that interoperability isn't enough because "You still have acquire, copy, kill" and lock-in via existing contracts. Wenger argued that the real goal is not interoperability but programmability, citing open banking as a working example. That is also the open web, where a third party can write an ad blocker for my browser; Facebook, Google, and Apple built walled gardens instead. As Jared Sine told this week's antitrust hearing, "They have taken the Internet and moved it into the app stores."

Real change will require all four of the levers Aral discusses in his book - money, code, norms, and laws, which Lawrence Lessig's 1999 book, Code and Other Laws of Cyberspace, called market, software architecture, norms, and laws - pulling together. The national commission on democracy and technology Aral is calling for will have to be very broadly constituted in terms of disciplines and national representation. As Safiya Noble said, diversifying the engineers in development teams is important, but not enough: we need "people who know society and the implications of technologies" at the design stage.


Illustrations: Sinan Aral, hosting the summit.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 26, 2021

Curating the curators

One of the longest-running conflicts on the Internet surrounds whether and what restrictions should be applied to the content people post. These days, those rules are known as "platform governance", and this week saw the first conference by that name. In the background, three of the big four CEOs returned to Congress for more questioning, the EU is planning the Digital Services Act, the US looks serious about antitrust action, debate about revising Section 230 of the Communications Decency Act continues even though few understand what it does, and the UK continues to push "online harms".

The most interesting thing about the Platform Governance conference is how narrow it makes those debates look. The second-most interesting thing: it was not a law conference!

For one thing, which platforms? Twitter may be the most-studied, partly because journalists and academics use it themselves and data is more available; YouTube, Facebook, and subsidiaries WhatsApp and Instagram are the most complained-about. The discussion here included not only those three but less "platformy" things like Reddit, Tumblr, Amazon's livestreaming subsidiary Twitch, games, Roblox, India's ShareChat, labor platforms UpWork and Fiverr, edX, and even VPN apps. It's unlikely that the problems of Facebook, YouTube, and Twitter that governments obsess over are limited to them; they're just the most visible and, especially, the most *here*. Granting differences in local culture, business model, purpose, and platform design, human behavior doesn't vary that much.

For example, Jenny Domino reminded - again - that the behaviors now sparking debates in the West are not new or unique to this part of the world. What most agree *almost* happened in the US on January 6 *actually* happened in Myanmar with far less scrutiny despite a 2018 UN fact-finding mission that highlighted Facebook's role in spreading hate. We've heard this sort of story before, regarding Cambridge Analytica. In Myanmar and, as Sandeep Mertia said, India, the Internet of the 1990s never existed. Facebook is the only "Internet". Mertia's "next billion users" won't use email or the web; they'll go straight to WhatsApp or a local or newer equivalent, and stay there.

Mehitabel Glenhaber, whose focus was Twitch, used it to illustrate another way our usual discussions are too limited: "Moderation can escape all up and down the stack," she said. Near the bottom of the "stack" of layers of service, after the January 6 Capitol invasion Amazon denied hosting services to the right-wing chat app Parler; higher up the stack, Apple and Google removed Parler's app from their app stores. On Twitch, Glenhaber found a conflict between the site's moderation decision and the handling of that decision by two browser extensions that replace text with graphics, one of which honored the site's ruling and one of which overturned it. I had never thought of ad blockers as content moderators before, but of course they are, and few of us examine them in detail.

Separately, in a recent lecture on the impact of low-cost technical infrastructure, Cambridge security engineer Ross Anderson also brought up the importance of the power to exclude. Most often, he said, social exclusion matters more than technical; taking out a scammer's email address and disrupting all their social network is more effective than taking down their more easily-replaced website. If we look at misinformation as a form of cybersecurity challenge - as we should - that's an important principle.

One recurring frustration is our general lack of access to the insider view of what's actually happening. Alice Marwick is finding from interviews that members of Trust and Safety teams at various companies have a better and broader view of online abuse than even those who experience it. Their data suggests that rather than being gender-specific, harassment affects all groups of people; in niche groups the forms disagreements take can be obscure to outsiders. Most important, each platform's affordances are different; you cannot generalize from a peer-to-peer site like Facebook or Twitter to Twitch or YouTube, where the site's relationships are less equal and more creator-fan.

A final limitation in how we think about platforms and abuse is that the options are so limited: a user is banned or not, content stays up or is taken down. We never think, Sarita Schoenebeck said, about other mechanisms or alternatives to criminal justice such as reparative or restorative justice. "Who has been harmed?" she asked. "What do they need? Whose obligation is it to meet that need?" And, she added later, who is in power in platform governance, and what harms have they overlooked and how?

In considering that sort of issue, Bharath Ganesh found three separate logics in his tour through platform racism and the governance of extremism: platform, social media, and free speech. Mark Zuckerberg offers a prime example of the latter, the Silicon Valley libertarian insistence that the marketplace of ideas will solve any problems and that sees the First Amendment freedom of expression as an absolute right, not one that must be balanced against others - such as "freedom from fear". Watching the end of yesterday's Congressional hearings after the conference closed, you couldn't help thinking about that as Zuckerberg embarked on yet another pile of self-serving "Congressman..." answers rather than the simple "yes or no" he was asked to deliver.


Illustrations: Mark Zuckerberg, testifying in Congress on March 25, 2021.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 22, 2021

Surveillance without borders

This time last year, the Computers, Privacy, and Data Protection conference was talking about inevitable technology. Two thousand people from all over the world enclosed in two large but unventilated spaces arguing closely over buffets and snacks for four days! I remember occasional nods toward a shadow out there on the Asian horizon, but it was another few weeks before the cloud of dust indicating the coronavirus's gallop westward toward London became visible to the naked eye. This week marks a year since I've traveled more than ten miles from home.

The virus laughs at what we used to call "inevitable". It also laughs at what we think of as "borders".

The concept of "privacy" was always going to have to expand. Europe's General Data Protection Regulation came into force in May 2018; by CPDP 2019 the conference had already moved on to consider its limitations in a world where privacy invasion was going physical. Since then, Austrian lawyer Max Schrems has poked holes in international data transfers, police and others began rolling out automated facial recognition without the least care for public consent...and emergency measures to contain the public health crisis have overwhelmed hard-won rights.

This year two themes are emerging. First is that, as predicted, traditional ideas about consent simply do not work in a world where technology monitors and mediates our physical movements, especially because most citizens don't know to ask what the "legal basis for processing" is when their local bar demands their name and address for contact tracing and claims the would-be drinker has no discretion to refuse. Second is the need for enforcement. This is the main point Schrems has been making through his legal challenges to the Safe Harbor agreement ("Schrems I") and then to its replacement, the EU-US Privacy Shield agreement ("Schrems II"). Schrems is forcing data protection regulators to act even when they don't want to.

In his panel on data portability, Ian Brown pointed out a third problem: access to tools. Even where companies have provided the facility for downloading your data, none provide upload tools, not even archives for academic papers. You can have your data, but you can't use it anywhere. By contrast, he said, open banking is actually working well in the UK. EFF's Christoph Schmon added a fourth: the reality that it's "much easier to monetize hate speech than civil discourse online".

Artist Jonas Staal and lawyer Jan Fermon have an intriguing proposal for containing Facebook: collectivize it. In an unfortunately evidence-free mock trial, witnesses argued that it should be neither nationalized nor privately owned nor broken up, but transformed into a space owned and governed by its 2.5 billion users. Fermon found a legal basis in the right to self-determination, "the basis of all other fundamental rights". In reality, given Facebook's wide-ranging social effects, non-users, too, would have to become part-owners. Lawyers love governing things. Most people won't even read the notes from a school board meeting.

Schmon favored finding ways to make it harder to monetize polarization, chiefly through moderation. Jennifer Cobbe, in a panel on algorithm-assisted decision making, suggested stifling some types of innovation. "Government should be concerned with general welfare, public good, human rights, equality, and fairness" and adopt technology only where it supports those values. Transparency is only one part of the answer - and it must apply to all parts of systems such as those controlling whether someone stays in jail or is released on parole, not just the final decision making bit.

But the world in which these debates are taking place is also changing, and not just because of the coronavirus. In a panel on intelligence agencies and fundamental rights, for example, MEP Sophie in't Veld (NL) pointed out the difficulties of exercising meaningful oversight when talk begins about increasing cross-border cooperation. In her view, the EU pretends "national security" is outside its interests, but 20 years of legislation offers national security as a justification for bloc-wide action. The result is to leave national authorities to make their own decisions, and "There is little incentive for national authorities to apply safeguards to citizens from other countries." Plus, lacking an EU-wide definition of "national security", member states can claim "national security" for almost any exemption. "The walls between law enforcement and the intelligence agencies are crumbling."

A day later, Petra Molnar put this a different way: "Immigration management technologies are used as an excuse to infringe on people's rights". Molnar works to highlight the use of refugees and asylum-seekers as experimental subjects for new technologies - drones, AI lie detectors, automated facial recognition; meanwhile the technologies are blurring geographical demarcations, pushing the "border" away from its physical manifestation. Conversely, current UK policy moves the "border" into schools, rental offices, and hospitals by requiring teachers, landlords, and medical personnel to check immigration status.

Edin Omanovic pointed out a contributing factor: "People are concerned about the things they use every day" - like WhatsApp - "but not bulk data interception". Politicians have more to gain by signing off on more powers than from imposing limits - but the narrowness of their definition of "security" means that despite powers, access to technology, and top-class universities, "We've had 100,000 deaths because we were unprepared for the pandemic we knew was coming and possible."


Illustrations: Sophie in't Veld (via Arnfinn Petersen at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 25, 2020

The zero on the phone

Among the minor casualties of the pandemic has been the appearance of a Swiss prototype robot at this year's We Robot, the ninth year of this unique conference that crosses engineering, technology policy, and law to identify future conflicts and pre-emptively suggest solutions. The result was to leave the robots considered by this virtual We Robot remarkably (appropriately) abstract.

We Robot was founded to get a jump on the coming conflicts that robots will bring to law and policy, in part so that we don't repeat the Internet experience of rehashing the same arguments for decades on end. This year's event pre-empted the Internet experience in a new way: many authors drew on the failed optimism and cooperation of the 1990s to begin defining ways to ensure that robotics and AI do not follow the same path. Where at the beginning we were all eager to embrace robots, this year the disembodied AIs are being done *to* us.

In the one slight exception to this rule, Hallie Siegel's exploration of senior citizens' attitudes toward new technologies found that the seniors she studies are pragmatic, concerned about their privacy and autonomy, and only interested in technologies that provide benefits they really need.

Jason Millar and Elizabeth Gray drew directly on the Internet experience by comparing network neutrality to the issues surrounding the mapping software that controls turn-by-turn navigation systems in a discussion of "mobility shaping". Should navigation services be common carriers, as telephone lines are? The idea appeals to me, if only because the potential for physical control of where our vehicles are allowed to go seems so clear.

The theme of exploitation was particularly visible in the two papers on Africa. In the first, Arthur Gwagwa (Strathmore University, Nairobi), Erika Kraemer-Mbula, Nagla Rizk, Isaac Rutenberg, and Jeremy de Beer warn that the combination of foreign capital and local resources is likely to reproduce the power structures of previous forms of colonialism, an argument also seen recently in a paper by Abeba Birhane. Women in particular, who run the majority of start-ups in some African countries, may be ignored, and the authors suggest that a GDPR-like rule awarding individuals control over their own data could be crucial in creating value for, rather than extracting it from, Africa.

In the second, Laura Foster (Indiana University), Bram Van Wiele, and Tobias Schönwetter extracted a database of press stories about AI in Africa from LexisNexis, to find the familiar set of claims for new technology: happy, value-neutral disruption, yay! The failure of most of these articles to consider gender and race, they observed, doesn't make the emerging picture neutral, but serves to reinforce the default of the straight, white male.

One way we push back against AI/robot control is the "human in the loop" to whom the final decision is delegated. This human has featured in every We Robot conference, most notably in 2016 as Madeleine Elish's moral crumple zone. In his paper, Liam McCoy argues for the importance of meaningful control, because the middle ground, where the human is expected to solve the most complex situations - the ones where AI fails - without either support or authority, is truly dangerous. The middle ground may be profitable; at UK IGF a few weeks ago, Gus Hosein noted that automating dispute resolution is what's made GAFA rich. But in the higher stakes of cyber-physical systems, the human you summon by pushing zero has to be able to make a difference.

Silvia de Conca's idea of "human-centered legal design", which sought to give autonomous agents a duty of care as a way of filling the gap in liability that presently exists, and Cynthia Khoo's interest in vulnerable communities who are harmed by behavior that emerges from combined business models, platform scale, human nature, and algorithm design presented different methods of putting a human in the loop. Often, Khoo has found in investigating this idea, the potential harm was in fact known and simply ignored; how much can and should be foreseen when system parts interact in unexpected ways is a rising issue.

Several papers explored previously unnoticed vectors for bias and control. Sentiment analysis, last seen being called "the snake oil of 2011", and its successor, emotion analysis, which I first saw explored in the 1990s by Rosalind Picard at MIT, are creeping into AI systems. Some are particularly dubious: aggression detection systems and emotion recognition cameras.

Emily McBain-Ashfield and Jason Millar are the first I'm aware of to study how stereotyping gets into these systems. Yes, it's in the data - but the problem lies in the process of analyzing and tagging it. The authors found three methods of doing this: manual (human, slow), dictionary-based using seed words (automated), and crowdsourced (see also Mary L. Gray and Siddharth Suri's 2019 book, Ghost Work). All have problems; automating that sort of issue creates notoriously crude mistakes, and the participants in crowdsourcing may be from very different linguistic and cultural contexts.
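
As a minimal sketch of how crude the dictionary-based method can be, consider tagging text by counting matches against seed-word lists (the seed lists below are invented for illustration; they are not the authors'):

```python
# Dictionary-based emotion tagging: label text by whichever seed list
# it matches most. Negation, irony, and dialect all defeat it.
SEEDS = {
    "joy":   {"happy", "delighted", "love", "great"},
    "anger": {"furious", "hate", "awful", "terrible"},
}

def tag_emotion(text: str) -> str:
    words = set(text.lower().split())
    scores = {label: len(words & seeds) for label, seeds in SEEDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] else "neutral"

print(tag_emotion("I am not happy about this"))  # -> "joy"; the negation is ignored
```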

The discussant for this paper, Osonde Osoba, sounded appalled: "By having these AI models of emotion out in the wild in commercial products we are essentially sanctioning the unregulated experimentation on humans and their emotional processes without oversight or control."

Remedies have to contend, however, with the legacy infrastructure. Alice Xiang discovered a conflict between traditional anti-discrimination law, which bars decision making based on a set of protected classes, and the technical methods of mitigating algorithmic bias. "If we're not careful," she said, "the vast majority of approaches proposed in machine learning literature might actually be illegal if they are ever tested in court."

We Robot 2020 was the first to be held outside the US, and chairs Florian Martin-Bariteau, Jason Millar, and Katie Szilagyi set out to widen its international character and diversity. When the pandemic hit, the resulting exceptional breadth of location of authors and discussants made it infeasible to ask everyone to pretend they were in Ottawa's time zone. The conference therefore has recorded the authors' and discussants' conversations as if live - which means that you, too, can experience the originals. Just follow the links. We Robot events not already linked here: 2013; 2015; 2016 workshop; 2017; 2018 workshop and conference; 2019 workshop and conference.


Illustrations: Our robot avatars attend the conference for us on the We Robot 2020 poster.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 6, 2020

Mission creep

"We can't find the needles unless we collect the whole haystack," a character explains in the new play The Haystack, written by Al Blyth and in production at the Hampstead Theatre through March 7. The character is Hannah (Sarah Woodward), and she is director of a surveillance effort being coded and built by Neil (Oliver Johnstone) and Zef (Enyi Okoronkwo), familiarly geeky types whose preferred day-off activities are the cinema and the pub, rather than catching up on sleep and showers, as Hannah pointedly suggests. Zef has a girlfriend (and a "spank bank" of downloaded images) and is excited to work in "counter-terrorism". Neil is less certain, less socially comfortable, and, we eventually learn, more technically brilliant; he must come to grips with all three characteristics in his quest to save Cora (Rona Morison). Cue Fleabag: "This is a love story."

The play is framed by an encrypted chat between Neil and Denise, Cora's editor at the Guardian (Lucy Black). We know immediately from the technological checklist they run down in making contact that there has been a catastrophe, which we soon realize surrounds Cora. Even though we're unsure what it is, it's clear Neil is carrying a load of guilt, which the play explains in flashbacks.

As the action begins, Neil and Zef are waiting to start work as a task force seconded to Hannah's department to identify the source of a series of Ministry of Defence leaks that have led to press stories. She is unimpressed with their youth, attire, and casual attitude - they type madly while she issues instructions they've already read - but changes abruptly when they find the primary leaker in seconds. Two stories remain; because both bear Cora's byline she becomes their new target. Both like the look of her, but Neil is particularly smitten, and when a crisis overtakes her, he breaks every rule in the agency's book by grabbing a train to London, where, calling himself "Tom Flowers", he befriends her in a bar.

Neil's surveillance-informed "god mode" choices of Cora's favorite music, drinks, and food when he meets her recall the movie Groundhog Day, in which Phil (Bill Murray) slowly builds up, day by day, the perfect approach to the woman he hopes to seduce. In another cultural echo, the tense beginning is sufficiently reminiscent of the opening of Laura Poitras's film about Edward Snowden, CitizenFour, that I assumed Neil was calling from Moscow.

The requirement for the haystack, Hannah explains at the beginning of Act Two, is because the terrorist threat has changed from organized groups to home-grown "lone wolves", and threats can come from anywhere. Her department must know *everything* if it is to keep the nation safe. The lone-wolf theory is the one surveillance justification Blyth's characters don't chew over in the course of the play; for an evidence-based view, consult the VOX-Pol project. In a favorite moment, Neil and Hannah demonstrate the frustrating disconnect between technical reality and government targets. Neil correctly explains that terrorists are so rare that, given the UK's 66 million population, no matter how much you "improve" the system's detection rate it will still be swamped by false positives. Hannah, however, discovers he has nonetheless delivered. The false positive rate is 30% less! Her bosses are thrilled! Neil reacts like Alicia Florrick in The Good Wife after one of her morally uncomfortable wins.
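
Blyth never puts the arithmetic on stage, but Neil's base-rate argument is easy to work through with assumed numbers (mine, not the play's): say 100 real targets in a population of 66 million, screened by a system that is 99% accurate in both directions.

```python
population = 66_000_000
real_targets = 100            # assumed for illustration
sensitivity = 0.99            # P(flagged | target)
false_positive_rate = 0.01    # P(flagged | innocent)

innocents = population - real_targets
caught = real_targets * sensitivity
false_alarms = innocents * false_positive_rate

print(f"caught: {caught:,.0f}")                      # 99
print(f"false alarms: {false_alarms:,.0f}")          # ~660,000
print(f"after a 30% cut: {0.7 * false_alarms:,.0f}") # still ~462,000
```

Even Hannah's celebrated 30% improvement leaves hundreds of thousands of innocent people flagged for every real target found.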

Related: it is one of the great pleasures of The Haystack that its three female characters (out of a total of five) are smart, tough, self-reliant, ambitious, and good at their jobs.

The Haystack is impressively directed by Roxana Silbert. It isn't easy to make typing look interesting, but this play manages it, partly by the well-designed use of projections to show both the internal and external worlds they're seeing, and partly by carefully-staged quick cuts. In one section, cinema-style cross-cutting creates a montage that fast-forwards the action through six months of two key relationships.

Technically, The Haystack is impressive; Zef and Neil speak fluent Python, algorithms, and Bash scripts, and laugh realistically over a journalist's use of Hotmail and Word with no encryption ("I swear my dad has better infosec"), while the projections of their screens are plausible pieces of code, video games, media snippets, and maps. The production designers and Blyth, who has a degree in econometrics and a background as a research economist, have done well. There were just a few tiny nitpicks: Neil can't trace Cora's shut-down devices "without the passwords" (huh?); and although Neil and Zef also use Tor, at one point they use Firefox (maybe) and Google (doubtful). My companion leaned in: "They wouldn't use that." More startling, for me, the actors who play Neil and Zef pronounce "cache" as "cachet"; but this is the plaint of a sound-sensitive person. And that's it, for the play's 1:50 length (trust me; it flies by).

The result is an extraordinary mix of a well-plotted comic thriller that shows the personal and professional costs of both being watched and being the watcher. What's really remarkable is how many of the touchstone digital rights and policy issues Blyth manages to pack in. If you can, go see it, partly because it's a fine introduction to the debates around surveillance, but mostly because it's great entertainment.


Illustrations: Rona Morison, as Cora, in The Haystack.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 24, 2020

The inevitability narrative

"We could create a new blueprint," Woody Hartzog said in a rare moment of hope on Wednesday at this year's Computers, Privacy, and Data Protection, in a panel on facial recognition. He went on to stress the need to move outside the model of privacy of the last two decades: get consent, roll out technology. Not necessarily in that order.

A few minutes earlier, he had said, "I think facial recognition is the most dangerous surveillance technology ever invented - so attractive to governments and industry to deploy in many ways and so ripe for abuse, and the mechanisms we have so weak to confront the harms it poses that the only way to mitigate the harms is to ban it."

This week, a leaked draft white paper revealed that the EU is considering, as one of five options, banning the use of facial recognition in public places. In general, the EU has been pouring money into AI research, largely in pursuit of economic opportunity: if the EU doesn't develop its own AI technologies, the argument goes, Europe will have to buy them from China or the United States. Who wants to be sandwiched between those two?

This level of investment is not available to most of the world's countries, as Julia Powles elsewhere pointed out with respect to AI more generally. Her country, Australia, is destined to be a "technology importer and data exporter", no matter how the three-pronged race comes out. "The promises of AI are unproven, and the risks are clear," she said. "The real reason we need to regulate is that it imposes a dramatic acceleration on the conditions of the unrestrained digital extractive economy." In other words, the companies behind AI will have even greater capacity to grind us up as dinosaur bones and use the results to manipulate us to their advantage.

At this event last year there was a general recognition that, less than a year after the passage of the general data protection regulation, it wasn't going to be an adequate approach to the growth of tracking through the physical world. This year, the conference is awash in AI to a truly extraordinary extent. Literally dozens of sessions: if it's not AI in policing, it's AI and data protection, ethics, human rights, or algorithmic fairness, or AI embedded in autonomous vehicles. Hartzog's panel was one of at least half a dozen on facial recognition, which is AI plus biometrics plus CCTV and other cameras. As interesting are the omissions: in two full days I have yet to hear anything about smart speakers or Amazon Ring doorbells, both proliferating wildly in the soon-to-be non-EU UK.

These technologies are landing on us shockingly fast. This time last year, automated facial recognition wasn't even on the map. It blew up just last May, when Big Brother Watch pushed the issue into everyone's consciousness by launching a campaign to stop the police from using what is still a highly flawed technology. But we can't lean too heavily on the ridiculous - 98%! - inaccuracy of its real-world trials, because as it becomes more accurate it will become even more dangerous to anyone on the wrong list. Here, it has become clear that it's being rapidly followed by "emotional recognition", a build-out of technology pioneered 25 years ago at MIT by Rosalind Picard under the rubric "affective computing".

"Is it enough to ban facial recognition?" a questioner asked. "Or should we ban cameras?"

Probably everyone here is carrying at least two cameras (pause to count: two on the phone, one on the laptop).

Everyone here is also conscious that last week, Kashmir Hill broke the story that the previously unknown, Peter Thiel-backed company Clearview AI had scraped 3 billion facial images off social media and other sites to create a database that enables its law enforcement customers to grab a single photo and get back matches from dozens of online sites. As Hill reminds us, companies like Facebook have been able to do this since 2011, though at the time - just eight and a half years ago! - this was technology that Google (though not Facebook) thought was "too creepy" to implement.

In the 2013 paper A Theory of Creepy, Omer Tene and Jules Polonetsky cite three kinds of "creepy" that apply to new technologies or new uses: it breaks traditional social norms; it shows the disconnect between the norms of engineers and those of the rest of society; or applicable norms don't exist yet. AI often breaks all three. Automated, pervasive facial recognition certainly does.

And so it seems legitimate to ask: do we really want to live in a world where it's impossible to go anywhere without being followed? "We didn't ban dangerous drugs or cars," has been a recurrent rebuttal. No, but as various speakers reminded, we did constrain them to become much safer. (And we did ban some drugs.) We should resist, Hartzog suggested, "the inevitability narrative".

Instead, the reality is that, as Lokke Moerel put it, "We have this kind of AI because this is the technology and expertise we have."

One panel pointed us at the AI universal guidelines, and encouraged us to sign. We need that - and so much more.


Illustrations: Orwell's house at 22 Portobello Road, London, complete with CCTV camera.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 25, 2019

When we were

"These people changed the world," said Jeff Wilkins, looking out across a Columbus, Ohio ballroom filled with more than 400 people. "And they know it, and are proud of it."

At one time, all this was his.

Wilkins was talking about...CompuServe, which he co-founded in 1969. How does it happen, he asked, that more than 400 people show up to celebrate a company that hasn't really existed for the last 23 years? I can't say, but a group of people happier to see each other (and random outsiders) again would be hard to find. "This is the only reunion I go to," one woman said.

It's easy to forget - or never to have known - CompuServe's former importance. Circa 1993, where business cards and slides now display a Twitter handle, they showed a numbered CompuServe ID. My inclusion of mine (70007,5537) at the end of a Guardian article led a reader to complain that I should instead promote the small ISPs that CompuServe would kill when broadband arrived. In 1994, Aerosmith released a single on CompuServe, the first time a major label tried online distribution. It probably took five hours to download.

In Wilkins' story, he was studying electrical engineering at the University of Arizona when his father-in-law asked for help with data processing for his new insurance company. Wilkins and fellow grad students Sandy Trevor, John Goltz, Larry Shelley, and Doug Chinnock soon relocated to Columbus. It was, Wilkins said, Shelley who suggested starting a time-sharing company - "or should I say cloud computing?" Wilkins quipped, to applause and cheers.

Yes, he should. Everything new is old again.

In time-sharing, the fledgling company competed with GE and IBM. The information service started in 1979, as a way to occupy the computers during the empty evenings when the businesses had gone home. For the next 20 years, CompuServers invented everything for themselves: "GO" navigation commands, commercial email (first customer: HJ Heinz), live chat ("CB"), news wires, online games and virtual worlds (partnering with Fujitsu on a graphical MUD), shopping... The now-ubiquitous GIF was the brainchild of Steve Wilhite (it's pronounced "JIF"). The legend of CompuServe inventions is kept alive by Sandy Trevor and Dave Eastburn, whose Nuvocom "software archeology" business holds archives that have backed expert defense against numerous patent claims on technologies that CompuServe provably pioneered.

A panel reminisced about the CIS shopping mall. "We had an online stockbroker before anyone else thought about it," one said. Another remembered a call asking for a 30-minute meeting from the then-CEO of the nationwide flowers delivery service FTD. "I was too busy." (The CEO was Meg Whitman.) For CompuServe's 25th anniversary, the mall's travel agency collaborated on a three-day cruise with, as invited guests, the film critic Roger Ebert, who disseminated his movie reviews through the service and hosted the "Ask Roger Ebert" section in the Movies Forum, and his wife, Chaz. "That may have been the peak."

Mall stores paid an annual fee; curation ensured there weren't too many of any one category of store. Banners advertising products were such a novelty at the time - and often the liveliest, most visually attractive thing on the page - that as many as 25% of viewers clicked on them. Today, Amazon takes a percentage of transactions instead. "If we could have had a universal shopping cart, like Amazon," lamented one, "what might have been?"

Well, what? Could CompuServe now be under threat of a government-mandated breakup to separate its social media business, search, cloud provider, and shopping? Both CompuServe and AOL, whose speed to embrace graphical interfaces and aggressive marketing led it to first outstrip and then buy and dismantle CompuServe in the 1990s, would have had to cannibalize their existing businesses. Used to profits from access fees, both resisted the Internet's monthly subscription model.

One veteran openly admitted how profoundly he underestimated the threat of the Internet after surveying the rickety infrastructure designed by/for academics and students. "I didn't think that the Internet could survive in the reality of a business..." Instead, the information services saw their competition as each other. A contemporary view of the challenges is visible in this 1995 interview with Barry Berkov, the vice-president in charge of CIS.

However, CompuServe's closed approach left no opening for individuals' self-expression. The rising Internet stars that followed - Geocities, and later MySpace - were all about that, as are today's social media.

So many shifts have changed social media since then: from topic-centered to person-centered forums, from proprietary to open to centralized, from dial-up modems to pervasive connections, the massive, mobile-fueled ramp-up of scale and speed, along with the reconfiguration of business models and technical infrastructure. Some things have degraded: past postings on Twitter and Facebook are much harder to find, and unwanted noise is everywhere. CompuServe would have had to navigate each of those shifts without error. As we know now, they didn't make it.

And yet, for 20-odd years, a company of early 20-somethings 2,500 miles from Silicon Valley invented a prototype of today's world, at first unaware of the near-simultaneous first ARPAnet connection - the beginnings of the network they couldn't imagine would ever be trustworthy enough for businesses and governments to rely on. They may yet be proven right about that.


Illustrations: Jonathan Zittrain's mockup of the CompuServe welcome screen (left, with thanks) next to today's iPhone showing how little things have changed; the reunion banner.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 13, 2019

Matrices of numbers

The older man standing next to me was puzzled. "Can you drive it?"

He gestured at the VW Beetle-style car-like creation in front of us. Its exterior, except for the wheels and chassis, was stained glass. This car was conceived by the artist Dominic Wilcox, who surmised that by 2059 autonomous cars will be so safe that they will no longer need safety features such as bumpers and can be made of fragile materials. The sole interior furnishing, a bed, lets you sleep while in transit. In person, the car is lovely to look at. Utterly impractical today in 2019, and it always will be. The other cars may be safe, but come on: falling tree, extreme cold, hailstorm...kid with a baseball?

On being told no, it's an autonomous car that drives itself, my fellow visitor to the Science Museum's new exhibition, Driverless, looked dissatisfied. He appeared to prefer driving himself.

"It would look good with a light bulb inside it hanging at the back of the garden," he offered. It would. Bit big, though last week in San Francisco I saw a bigger superbloom.

"Driverless" is a modest exhibition by Science Museum standards, and unlike previous robot exhibitions, hardly any of these vehicles are ready for real-world use. Many are graded according to their project status: first version, early tests, real-world tests, in use. Only a couple were as far along as real-world tests.

Probably a third are underwater explorers. Among the exhibits: the (yellow submarine!) long-range Boaty McBoatface Autosub, which is meant to travel up to 2,000 km over several months, surfacing periodically to send information back to scientists. Both this and the underwater robot swarms are intended for previously unexplored hostile environments, such as underneath the Antarctic ice sheet.

Alongside these and Wilcox's Stained Glass Driverless Car of the Future was the Capri Mobility pod, the result of a project to develop on-demand vans that can shuttle up to four people along a defined route either through a pedestrian area or on public roads. Small Robot sent its Tom farm monitoring robot. And from Amsterdam came Roboat, a five-year research project to develop the first fleet of autonomous floating boats for deployment in Amsterdam's canals. These are the first autonomous vehicles I've seen that really show useful everyday potential for rethinking traditional shapes, forms, and functionality: their flat surfaces and side connectors allow them to be linked into temporary bridges a human can walk across.

There's also an app-controlled food delivery drone; the idea is you trigger it to drop your delivery from 20 meters up when you're ready to receive it. What could possibly go wrong?

On the fun side is Duckietown (again, sadly present only as an image), a project to teach robotics via a system of small, mobile robots that motor around a Lego-like "town" carrying small rubber ducks. It's compelling in the way model trains are, and it's seeking Kickstarter funding to make the hardware for wider distribution. This should have been the hands-on bit.

Previous robotics-related Science Museum exhibitions have asked as many questions as they answered. At that, this one is less successful. Drive.ai's car-mounted warning signs, for example, are meant to tell surrounding pedestrians what its cars are doing. But are we really going to allow cars onto public roads (or, even worse, pedestrian areas, like the Capri pods) to mow down people who don't see, don't understand, can't read, or willfully ignore the "GOING NOW; DON'T CROSS" sign? So we'll have to add sound: but do we want cars barking orders at us? Today, navigating the roads is a constant negotiation between human drivers, human pedestrians, and humans on other modes of transport (motorcycles, bicycles, escooters, skateboards...). Do we want a tomorrow where the cars have all the power?

In video clips, researchers and commentators like Noel Sharkey, Kathy Nothstine, and Natasha Merat discuss some of these difficulties. Merat has an answer for the warning sign: humans and self-driving cars will have to learn each other's capabilities in order to peacefully coexist. This is work we don't really see happening today, and that lack is part of why I tend to think Christian Wolmar is right in predicting that these cars are not going to be filling our streets any time soon.

The placard for the Starship Bot (present only as a picture) advises that it cannot see above knee height, to protect privacy, but doesn't discuss the issues raised when Edward Hasbrouck encountered one in action. I was personally disappointed, after the recent We Robot discussion of the "monstrous" Moral Machine and its generalized sibling the trolley problem, to see it included here with less documentation than on the web. This matters, because the most significant questions about autonomous vehicles are going to be things like: what data do they collect about the people and things around them? To whom are they sending it? How long will it be retained? Who has the right to see it? Who has the right to command where these cars go?

More important, Sharkey says in a video clip, we must disentangle autonomous and remote-controlled vehicles, which present very different problems. Remote-controlled vehicles have a human in charge whom we can directly challenge. By contrast, he said, we don't know why autonomous vehicles make the decisions they do: "They're just matrices of numbers."


Illustrations: Wilcox's stained glass car.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 31, 2019

Moral machines

What are AI ethics boards for?

I've been wondering about this for some months now, particularly in April, when Google announced the composition of its new Advanced Technology External Advisory Council (ATEAC) - and a week later announced its dissolution. The council was dropped after a media storm that began with a letter from 50 of Google's own employees objecting to the inclusion of Kay Coles James, president of the Heritage Foundation.

At The Verge, James Vincent suggests the boards are for "ethics washing" rather than instituting change. The aborted Google board, for example, was intended, as member Joanna Bryson writes, to "stress test" policies Google had already formulated.

However, corporations are not the only active players. The new Ada Lovelace Institute's research program is intended to shape public policy in this area. The AI Now Institute is studying social implications. Data & Society is studying AI use and governance. Altogether, Brent Mittelstadt counts 63 public-private initiatives, and says the principles they're releasing "closely resemble the four classic principles of medical ethics" - an analogy he finds uncertain.

Last year, when Steven Croft, the Bishop of Oxford, proposed ten commandments for artificial intelligence, I also tended to be dismissive: who's going to listen? What company is going to choose a path against its own financial interests? A machine learning expert friend has a different complaint: corporations are not the problem; governments are. No matter what companies decide, governments always demand carve-outs for intelligence and security services, and once they have them, game over.

I did appreciate Croft's contention that all commandments are aspirational. An agreed set of principles would at least provide a standard against which to measure technology and decisions. Principles might be particularly valuable for guiding academic researchers, some of whom currently regard social media as a convenient public laboratory.

Still, human rights law already supplies that sort of template. What can ethics boards do that the law doesn't already? If discrimination is already wrong, why do we need an ethics board to add that it's wrong when an algorithm does it?

At a panel kicking off this year's Privacy Law Scholars, Ryan Calo suggested an answer: "We need better moral imagination." In his view, a lot of the discussion of AI ethics centers on form rather than content: how should it be applied? Should there be a certification regime? Or perhaps compliance requirements? Instead, he proposed that we should be looking at how AI changes the affordances available to us. His analogy: retrieving the sailors left behind in the water after you destroyed their ship was an ethical obligation until the arrival of new technology - submarines - made it infeasible.

For Calo, too many conversations about AI avoid considering the content. As a frustrating example: "The primary problem around the ethics of driverless cars is not how they will reshape cities or affect people with disabilities and ownership structures, but whether they should run over the nuns or the schoolchildren."

As anyone who's ever designed a survey knows, defining the questions is crucial. In her posting, Bryson expresses regret that the intended board will not now be called into action to consider and perhaps influence Google's policy. But the fact that Google, not the board, was to devise policies and set the questions about them makes me wonder how effective it could have been. So much depends on who imagines the prospective future.

The current Kubrick exhibition at London's Design Museum pays considerable homage to Kubrick's vision and imagination in creating the mysterious and wonderful universe of 2001: A Space Odyssey. Both the technology and the furniture still look "futuristic" despite having been designed more than 50 years ago. What *has* dated is the women: they are still wearing 1960s stewardess uniforms and hats, and the one woman with more than a few lines spends them discussing her husband and his whereabouts; raising the secrecy surrounding the appearance of a monolith in a crater on the moon is left to the men. Calo found the same thing in rereading Isaac Asimov's Foundation trilogy: "Not one woman leader for four books," he said. "And people still smoke!" Yet they are surrounded by interstellar travel and mind-reading devices.

So while what these boards are doing now is not inspiring - as Helen Nissenbaum said in the same panel, "There are so many institutes announcing principles as if that's the end of the story" - maybe what they *could* do might be. What if, as Calo suggested, there are human and civil rights commitments AI allows us to make that were impossible before?

"We should be imagining how we can not just preserve extant ethical values but generate new ones based on affordances that we now have available to us," he said, suggesting as one example "mobility as a right". I'm not really convinced that our streets are going to be awash in autonomous vehicles any time soon, but you can see his point. If we have the technology to give independent mobility to people who are unable to drive themselves...well, shouldn't we? You may disagree on that specific idea, but you have to admit: it's a much better class of conversation.tw


Illustrations: Space Station receptionist from 2001: A Space Odyssey.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 17, 2019

Genomics snake oil

In 2011, as part of an investigation she conducted into the possible genetic origins of the streak of depression that ran through her family, the Danish neurobiologist Lone Frank had her genome sequenced and interviewed many participants in the newly-opening field of genomics that followed the first complete sequencing of the human genome. In her resulting book, My Beautiful Genome, she commented on the "Wild West" developing around retail genetic testing being offered to consumers over the web. Absurd claims such as using DNA testing to find your perfect mate or direct your child's education abounded.

This week, at an event organized by Breaking the Frame, New Zealand researcher Andelka M. Phillips presented the results of her ongoing study of the same landscape. The testing is just as unreliable, the claims even more absurd - choose your diet according to your DNA! find out what your superpower is! - and the number of companies she's collected has reached 289 while the cost of the tests has shrunk and the size of the databases has ballooned. Some of this stuff makes astrology look good.

To be perfectly clear: it's not, or not necessarily, the gene sequencing itself that's the problem. To be sure, the best lab cannot produce a reading that represents reality from poor-quality samples. And many samples are indeed poor, especially those snatched from bed sheets or excavated from garbage cans to send to sites promising surreptitious testing (I have verified these exist, but I refuse to link to them) to those who want to check whether their partner is unfaithful or whether their child is in fact a blood relative. But essentially, for health tests at least, everyone is using more or less the same technology for sequencing.

More crucial is the interpretation and analysis, as Helen Wallace, the executive director of GeneWatch UK, pointed out. For example, companies differ in how they identify geographical regions, how they frame populations, and in the makeup of their databases of reference contributions. This is how a pair of identical Canadian twins got varying and non-matching test results from five companies, one Ashkenazi Jew got six different ancestry reports, and, according to one study, up to 40% of DNA results from consumer genetic tests are false positives. As I type, the UK Parliament is conducting an inquiry into commercial genomics.

Phillips makes the data available to anyone who wants to explore it. Meanwhile, so far she's examined the terms of service and privacy policies of 71 companies, and finds them filled with technology company-speak, not medical information. They do not explain these services' technical limitations or the risks involved. Yet it's so easy to think of disastrous scenarios: this week, an American gay couple reported that their second child's birthright citizenship is being denied under new State Department rules. A false DNA test could make a child stateless.

Breaking the Frame's organizer, Dave King, believes that a subtle consequence of the ancestry tests - the things everyone was quoting in 2018 that tell you that you're 13% German, 1% Somalian, and whatever else - is to reinforce the essentially racist notion that "Germanness" has a biological basis. He also particularly disliked the services claiming they can identify children's talents; these claim, as Phillips highlighted, that testing can save parents money they might otherwise waste on impossible dreams. That way lies Gattaca and generations of children who don't get to explore their own abilities because they've already been written off.

Even more disturbing questions surround what happens with these large databases of perfect identifiers. In the UK, last October the Department of Health and Social Care announced its ambition to sequence 5 million genomes. Included was a plan to begin, in 2019, offering whole genome sequencing to all seriously ill children and adults with specific rare diseases or hard-to-treat cancers as part of their care. In other words, the most desperate people are being asked first, a prospect Phil Booth, coordinator of medConfidential, finds disquieting. As so much of this is still research, not medical care, he said, like the late, despised care.data, it "blurs the line around what is your data, and between what the NHS was and what some would like it to be". Exploitation of the nation's medical records as raw material for commercial purposes is not what anyone thought they were signing up for. And once you have that giant database of perfect identifiers...there's the Home Office, which has already been caught using the NHS to hunt illegal immigrants and DNA-testing immigrants.

So Booth asked this: why now? Genetic sequencing is 20 years old, and to date it has yet to come close to producing the benefits predicted for it. We do not have personalized medicine or, except in a very few cases (such as some breast cancers), drugs tailored to genetic makeup. "Why not wait until it's a better bet?" he asked. Instead of spending billions today - billions that, as an audience member pointed out, would produce better health more widely if spent on improving the environment, nutrition, and water - the proposal is to spend them on a technology that may still not be producing results 20 years from now. Why not wait, say, ten years and see if it's still worth doing?


Illustrations: DNA double helix (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 3, 2019

Reopening the source

"There is a disruption coming." Words of doom?

Several months back we discussed Michael Salmony's fear that the Internet is about to destroy science. Salmony reminded us that his comments came in a talk on the virtues of the open economy, and then noted the following dangers:

- Current quality-assurance methods (peer-review, quality editing, fact checking etc) are being undermined. Thus potentially leading to an avalanche of attention-seeking open garbage drowning out the quality research;
- The excellent high-minded ideals (breaking the hold of the big controllers, making all knowledge freely accessible etc) of OA are now being subverted by models that actually ask authors (or their funders) to spend thousands of dollars per article to get it "openly accessible". Thus again privileging the rich and well connected.

The University of Bath associate professor Joanna Bryson rather agreed with Salmony, also citing the importance of peer review. So I stipulate: yes, peer review is crucial for doing good science.

In a posting deploring the death of the monograph, Bryson notes that, like other forms of publishing, many academic publishers are small and struggle for sustainability. She also points to a Dutch presentation arguing that open access costs more.

Since she, as an academic researcher, has skin in this game, we have to give weight to her thoughts. However, many researchers dissent, arguing that academic publishers like Elsevier and Springer profit from an unfair and unsustainable business model. Either way, an existential crisis is rolling toward academic publishers like a giant spherical concrete cow.

So to yesterday's session on the ten-year future of research, hosted by European Health Forum Gastein and sponsored by Elsevier. The quote of doom we began with was voiced there.

The focal point was a report (PDF), the result of a study by Elsevier and Ipsos MORI. Their efforts eventually generated three scenarios: 1) "brave open world", in which open access publishing, collaboration, and extensive data sharing rule; 2) "tech titans", in which technology companies dominate research; 3) "Eastern ascendance", in which China leads. The most likely is a mix of the three. This is where several of us agreed that the mix is already our present. We surmised, cattily, that this was more an event looking for a solution to Elsevier's future. That remains cloudy.

The rest does not. For the last year I've been listening to discussions about how academic work can find greater and more meaningful impact. While journal publication remains essential for promotions and tenure within academia, funders increasingly demand that research produce new government policies, change public conversations, and provide fundamentally more effective practice.

Similarly, is there any doubt that China is leading innovation in areas like AI? The country is rising fast. As for "tech titans", while there's no doubt that these companies lead in some fields, it's not clear that they are following the lead of the great 1960s and 1970s corporate labs like Bell Labs, Xerox PARC, and IBM Watson, which invested in fundamental research with no connection to products. While Google, Facebook, and Microsoft researchers do impressive work, Google is the only one publicly showing off research that seems unrelated to its core business.

So how long is ten years? A long time in technology, sure: in 2009, Twitter, Android, and "there's an app for that" were new(ish), the iPad was a year from release, smartphones got GPS, netbooks were rising, and 3D was poised to change the world of cinema. "The academic world is very conservative," someone at my table said. "Not much can change in ten years."

Despite Sci-Hub, the push to open access is not just another Internet plot to make everything free. Much of it is coming from academics, funders, librarians, and administrators. In the last year, the University of California dropped Elsevier rather than modify its open access policy or pay extra for the privilege of keeping it. Research consortia in Sweden, Germany, and Hungary have had similar disputes; a group of Norwegian institutions recently agreed to pay €9 million a year to cover access to Elsevier's journals and the publishing costs of its expected 2,000 articles.

What is slow to change is incentives within academia. Rising scholars are judged much as they were 50 years ago: how much have they published, and where? The conflict means that younger researchers whose work has immediate consequences find themselves forced to choose between prioritizing career management - via journal publication - and more immediately effective efforts such as training workshops and newspaper coverage to alert practitioners in the field to new problems and solutions. Choosing the latter may help tens of thousands of people - at the cost of a "You haven't published" stall to their careers. Equally difficult, today's structure of departments and journals is poorly suited to the increasing range of multi-, inter-, and trans-disciplinary research. Where such projects can find publication remains a conundrum.

All of that is without considering other misplaced or perverse incentives in the present system: novel ideas struggle to emerge, replication largely either does not happen or fails, and journal impact factors are overvalued. The Internet has opened up beneficial change: Ben Goldacre's COMPare project to identify dubious practices such as outcome switching and misreported findings, the push to publish data sets, and preprint servers that give much wider access to new work. It may not be all good; but it certainly isn't all bad.


Illustrations: A spherical cow jumping over the moon (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 18, 2019

Math, monsters, and metaphors

"My iPhone won't stab me in my bed," Bill Smart said at the first We Robot, attempting to explain what was different about robots - but eight years on, We Robot seems less worried about that than about the brains of the operation. That is, AI, which conference participant Aaron Mannes described as "a pile of math that can do some stuff".

But the math needs data to work on, and so a lot of the discussion goes toward possible consequences: delivery drones displaying personalized ads (Ryan Calo and Stephanie Ballard); the wrongness of researchers who defend their habit of scraping publicly posted data by saying it's "the norm" when their unwitting experimental subjects have never given permission; the unexpected consequences of creating new data sources in farming (Solon Barocas, Karen Levy, and Alexandra Mateescu); and how to incorporate public values (Alicia Solow-Neiderman) into the control of...well, AI, but what is AI without data? It's that pile of math. "It's just software," Bill Smart (again) said last week. Should we be scared?

The answer seems to be "sometimes". Two types of robots were cited for "robotic space colonialism" (Kristen Thomasen), because they are here enough and now enough for legal cases to be emerging. These are 1) drones, and 2) delivery robots. Mostly. Mason Marks pointed out Amazon's amazing Kiva robots, but they're working in warehouses, where their impact is more a result of the workings of capitalism than of AI. They don't scare people in their homes at night or appropriate sidewalk space like delivery robots, which Paul Colhoun described as "unattended property in motion carrying another person's property". Which sounds like they might be sort of cute and vulnerable, until he continues: "What actions may they take to defend themselves?" Is this a new meaning for move fast and break things?

Colhoun's comment came during a discussion of using various forecasting methods - futures planning, design fiction, the futures wheel (which someone suggested might provide a usefully visual alternative to privacy policies) - that led Cindy Grimm to pinpoint the problem of when you regulate. Too soon, and you risk constraining valuable technology. Too late, and you're constantly scrambling to revise your laws while being mocked by technical experts calling you an idiot (see 25 years of Internet regulation). Still, I'd be happy to pass a law right now barring drones from advertising and data collection and damn the consequences. And then be embarrassed; as Levy pointed out, other populations have a lot more to fear from drones than being bothered by some ads...

The question remains: what, exactly do you regulate? The Algorithmic Accountability Act recently proposed by Senators Cory Booker (D-NJ) and Ron Wyden (D-OR) would require large companies to audit machine learning systems to eliminate bias. Discrimination is much bigger than AI, said conference co-founder Michael Froomkin in discussing Alicia Solow-Neiderman's paper on regulating AI, but special to AI is unequal access to data.

Grimm also pointed out that there are three different aspects: writing code (referring back to Petros Terzis's paper proposing to apply the regime of negligence laws to coders); collecting data; and using data. While this is true, it doesn't really capture the experience Abby Jacques suggested could be a logical consequence of following the results collected by MIT's Moral Machine: save the young, fit, and wealthy, but splat the old, poor, and infirm. If, she argued, you followed the mandate of the popular vote, old people would be scrambling to save themselves in parking lots while kids ran wild knowing the cars would never hit them. An entertaining fantasy spectacle, to be sure, but not quite how most of us want to live. As Jacques tells it, the trolley problem the Moral Machine represents is basically a metaphor that has eaten its young. Get rid of it! This was a rare moment of near-universal agreement. "I've been longing for the trolley problem to die," robotics pioneer Robin Murphy said. Jacques herself was more measured: "Philosophers need to take responsibility for what happens when we leave our tools lying around."

The biggest thing I've learned in all the law conferences I go to is that law proceeds by analogy and metaphor. You see this everywhere: Kate Darling is trying to understand how we might integrate robots into our lives by studying the history of domesticating animals; Ian Kerr and Carys Craig are trying to deromanticize "the author" in discussions of AI and copyright law; the "property" in "intellectual property" draws an uncomfortable analogy to physical objects; and Hideyuki Matsumi is trying to think through robot registration by analogy to Japan's Koseki family registration law.

Getting the metaphors right is therefore crucial, which explains, in turn, why it's important to spend so much effort understanding what the technology can really do and what it can't. You have to stop buying the images of driverless cars to produce something like the "handoff model" proposed by Jake Goldenfein, Deirdre Mulligan, and Helen Nissenbaum to explore the permeable boundaries between humans and the autonomous or connected systems driving their cars. Similarly, it's easy to forget, as Mulligan said in introducing her paper with Daniel N. Kluttz, that in "machine learning" algorithms learn only from the judgments at the end; they never see the intermediary reasoning stages.

So metaphor matters. At this point I had a blinding flash of realization. This is why no one can agree about Brexit. *Brexit* is a trolley problem. Small wonder Jacques called the Moral Machine a "monster".

Previous We Robot events as seen by net.wars: 2018 workshop and conference; 2017; 2016 workshop and conference; 2015; 2013; and 2012. We missed 2014.

Illustrations: The Moral Labyrinth art installation, by Sarah Newman and Jessica Fjeld, at We Robot 2019; Google driverless car.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

April 12, 2019

The Algernon problem

Last week we noted that it may be a sign of a maturing robotics industry that it's possible to have companies specializing in something as small as fingertips for a robot hand. This week, the workshop day kicking off this year's We Robot conference provides a different reason to think the same thing: more and more disciplines are finding their way to this cross-the-streams event. This year, joining engineers, computer scientists, lawyers, and the odd philosopher are sociologists, economists, and activists.

The result is oddly like a meeting of the Research Institute for the Science of Cyber Security, where a large part of the point from the beginning has been that human factors and economics are as important to good security as technical knowledge. This was particularly true in the face-off between the economist Rob Seamans and the sociologist Beth Bechky, which pitted quantitative "things we can count" against qualitative "study the social structures" thinking. The range of disciplines needed to think about what used to be "computer" security keeps growing as the ways we use computers become more complex; robots are computer systems whose mechanical manifestations interact with humans. This move has to happen.

One sign is a change in language. Madeleine Elish, currently in the news for her newly published 2016 We Robot paper, Moral Crumple Zones, said she's trying to replace the term "deploying" with "integrating" for arriving technologies. "They are integrated into systems," she explained, "and when you say 'integrate' it implies into what, with whom, and where." By contrast, "deployment" is military-speak, devoid of context. I like this idea, since by 2015 it was clear from a machine learning conference at the Royal Society that many had begun seeing robots as partners rather than replacements.

Later, three Japanese academics - the independent researcher Hideyuki Matsumi, Takayuki Kato, and Fumio Shimpo - tried to explain why Japanese people like robots so much - more, it seems, than "we" do (whoever "we" are). They suggested three theories: the influence of TV and manga; the influence of the mainstream Shinto religion, which sees a spirit in everything; and the Japanese government's strategy to make the country a robotics powerhouse. The latter has produced a 356-page guideline for research development.

"Japanese people don't like to draw distinctions and place clear lines," Shinto said. "We think of AI as a friend, not an enemy, and we want to blur the lines." Shimpo had just said that even though he has two actual dogs he wants an Aibo. Kato dissented: "I personally don't like robots."

The MIT researcher Kate Darling, who studies human responses to robots, pointed to studies finding that autistic kids respond well to robots. "One theory is that they're social, but not too social." An experiment that placed these robots in homes for 30 days last summer had "stellar results". But: when the robots were removed at the end of the experiment, follow-up studies found that the kids were losing the skills the robots had brought them. The story evokes the 1959 Daniel Keyes story Flowers for Algernon, but then you have to ask: what were the skills? Did they matter to the children, or just to the researchers? And how is "success" defined?

The opportunities anthropomorphization opens for manipulation are an issue everywhere. Woody Hartzog called the tendency to believe what the machine says "automation bias", but that understates the range of motivations: you may believe the machine because you like it, because it's manipulated you, or because you're working in a government benefits agency where you can't be sure you won't get fired if you defy the machine's decision. Would that everyone could see Bill Smart and Cindy Grimm follow up their presentation from last year to show: AI is just software; it doesn't "know" things; and it's the complexity that gets you. Smart hates the term "autonomous" for robots "because in robots it means deterministic software running on a computer. It's teleoperation via computer code."

This is the "fancy hammer" school of thinking about robots, and it can be quite valuable. Kevin Bankston soon demonstrated this: "Science fiction has trained us to worry about Skynet instead of housing discrimination, and expect individual saviors rather than communities working together to deal with community problems." AI is not taking our jobs; capitalists are using AI to take our jobs - a very different problem. As long as we see robots and AI as autonomous, we miss that instead they ares agents carrying out others' plans. This is a larger example of a pervasive problem with smartphones, social media sites, and platforms generally: they are designed to push us to forget the data-collecting, self-interested, manipulative behemoth behind them.

Returning to Elish's comment, we are one of the things robots integrate with. At the moment, this is taking the form of making random people research subjects: the pedestrian killed in Arizona by a supposedly self-driving car, the hapless prisoners whose parole is decided by it's-just-software, the people caught by the Metropolitan Police's staggeringly flawed facial recognition, the homeless people who feel threatened by security robots, the Caltrain passengers sharing a platform with an officious delivery robot. Did any of us ask to be experimented on?


Illustrations: Cliff Robertson in Charly, the movie version of "Flowers for Algernon".

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 1, 2019

Beyond data protection

For the group assembled this week in Brussels for Computers, Privacy, and Data Protection, the General Data Protection Regulation that came into force in May 2018 represented the culmination of years of effort. The mood, however, is not so much self-congratulatory as "what's next?".

The first answer is a lot of complaints. An early panel featured a number of these. Max Schrems, never one to shirk, celebrated GDPR day in 2018 by joining with La Quadrature du Net to file two complaints against Google, WhatsApp, Instagram, and Facebook over "forced consent". Last week, he filed eight more complaints against Amazon, Apple, Spotify, Netflix, YouTube, SoundCloud, DAZN, and Flimmit regarding their implementation of subject access rights. A day or so later, the news broke: the French data protection regulator, CNIL, has fined Google €50 million (PDF) on the basis of their complaint - the biggest fine so far under the new regime that sets the limit at 4% of global turnover. Google is considering an appeal.

It's a start. We won't know for probably five years whether GDPR will have the intended effect of changing the balance of power between citizens and data-driven companies (though one site is already happy to call it a failure). Meanwhile, one interesting new development is Apple's crackdown on Facebook and then Google for abusing its enterprise app system to collect comprehensive data on end users. While Apple is certainly far less dependent on data collection than the rest of GAFA/FAANG, this action is a little like those types of malware that download anti-virus software to clean your system of the competition.

The second - more typical of a conference - is to stop and think: what doesn't GDPR cover? The answers are coming fast: AI, automated decision-making, household or personal use of data, and (oh, lord) blockchain. And, a questioner asked late on Wednesday, "Is data protection privacy, data, or fairness?"

Several of these areas are interlinked: automated decision-making is currently what we mean when we say "AI", and we talk a lot about the historical bias stored in data and the discrimination that algorithms derive from training data and bake into their results. Discussions of this problem, Ansgar Koene said, tend to portray accuracy and fairness as a tradeoff, with accuracy presented as a scientifically neutral reality and fairness as a fuzzy human wish. Instead, he argued, accuracy depends on values we choose to judge it by. Why shouldn't fairness just be one of those values?

A bigger limitation - which we've written about here since 2015 - is that privacy law tends to focus on the individual. Seda Gürses noted that focusing on the algorithm - how to improve it and reduce its bias - similarly ignores the wider context and network externalities. Optimize the Waze algorithm so each driver can reach their destination in record time, and the small communities whose roads were not built for speedy cut-throughs bear the costs of the extra traffic, noise, and pollution those drivers generate. Next-generation privacy will have to reflect that wider context; as Dennis Hirsch put it, social protection rather than individual control. As Schrems' and others' complaints show, individual control is rarely ours on today's web in any case.

Privacy is not the only regulation that suffers from that problem. At Tuesday's pre-conference Privacy Camp, several speakers deplored the present climate in which platforms' success in removing hate speech, terrorist content, and unauthorized copyright material is measured solely in numbers: how many pieces, how fast. Such a regime does not foster thoughtful consideration, nuance, respect for human rights, or the creation of a robust system of redress for the wrongly accused. "We must move away from the idea that illegal content can be perfectly suppressed and that companies are not trying hard enough if they aren't doing it," Mozilla Internet policy manager Owen Bennett said, going on to advocate for a wider harm reduction approach.

The good news, in a way, is that privacy law has fellow warriors: competition, liability, and consumer protection law. The first two of those, said Mireille Hildebrandt, need to be rethought, in part because some problems will leave us no choice. She cited, for example, the energy market: as we are forced to move to renewables, both supply and demand will fluctuate enormously. "Without predictive technology I don't see how we can solve it." Continuously predicting the energy use of each household will, she wrote in a 2013 paper (PDF), pose new threats to privacy, data protection, non-discrimination, and due process.

One of the more interesting new (to me, at least) players on this scene is Algorithm Watch, which has just released a report on algorithmic decision-making in the EU that recommends looking at other laws relevant to specific types of decisions, such as applying equal pay legislation to the gig economy. Data protection law doesn't have to do it all.

Some problems may not be amenable to law at all. Paul Nemitz posed this question: given that machine learning training data is always historical, and that the machines are therefore perforce backward-looking, how do we as humans retain the drive to improve if we leave all our decisions to machines? No data protection law in the world can solve that.

Illustrations: The CPDP 2019 welcome sign in Brussels.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 9, 2018

Escape from model land

"Models are best for understanding, but they are inherently wrong," Helen Dacre said, evoking robotics engineer Bill Smart on sensors. Dacre was presenting a tool that combines weather forecasts, air quality measurements, and other data to help airlines and other stakeholders quickly assess the risk of flying after a volcanic eruption. In April 2010, when Iceland's Eyjafjallajökull blew its top, European airspace shut down for six days at an estimated overall cost of £1.1 billion. Since then, engine manufacturers have studied the effect of atmospheric volcanic ash on aircraft engines, and are finding that a brief excursion through peak levels of concentration is less damaging than prolonged exposure at lower levels. So, do you fly?

This was one of the projects presented at this week's conference of the two-year-old network Challenging Radical Uncertainty in Science, Society and the Environment (CRUISSE). To understand "radical uncertainty", start with Frank Knight, who in 1921 differentiated between "risk", where the outcomes are unknown but the probabilities are known, and uncertainty, where even the probabilities are unknown. Timo Ehrig summed this up as "I know what I don't know" versus "I don't know what I don't know", evoking Donald Rumsfeld's "unknown unknowns". In radical uncertainty decisions, existing knowledge is not relevant because the problems are new: the discovery of metal fatigue in airline jets; the 2008 financial crisis; social media; climate change. The prior art, if any, is of questionable relevance. And you're playing with live ammunition - real people's lives. By the million, maybe.

How should you change the planning system to increase the stock of affordable housing? How do you prepare for unforeseen cybersecurity threats? What should we do to alleviate the impact of climate change? These are some of the questions that interested CRUISSE founders Leonard Smith and David Tuckett. Such decisions are high-impact, high-visibility, with complex interactions whose consequences are hard to foresee.

It's the process of making them that most interests CRUISSE. Smith likes to divide uncertainty problems into weather and climate. With "weather" problems, you make many similar decisions based on changing input; with "climate" problems your decisions are either a one-off or the next one is massively different. Either way, with climate problems you can't learn from your mistakes: radical uncertainty. You can't reuse the decisions; but you *could* reuse the process by which you made the decision. They are trying to understand - and improve - those processes.

This is where models come in. This field has been somewhat overrun by a specific type of thinking they call OCF, for "optimum choice framework". The idea there is that you build a model, stick in some variables, and tweak them to find the sweet spot. For risks, where the probabilities are known, that can provide useful results - think cost-benefit analysis. In radical uncertainty...see above. But decision makers are tempted to build a model anyway. Smith said, "You pretend the simulation reflects reality in some way, and you walk away from decision making as if you have solved the problem." In his hand-drawn graphic, this is falling off the "cliff of subjectivity" into the "sea of self-delusion".

Uncertainty can come from anywhere. Kris de Meyer is studying what happens if the UK's entire national electrical grid crashes. Fun fact: it would take seven days to come back up. *That* is not uncertain. Nor are the consequences: nothing functioning, dark streets, no heat, no water after a few hours for anyone dependent on pumping. Soon, no phones unless you still have copper wire. You'll need a battery or solar-powered radio to hear the national emergency broadcast.

The uncertainty is this: how would 65 million modern people react in an unprecedented situation where all the essentials of life are disrupted? And, the key question for the policy makers funding the project, what should government say? *Don't* fill your bathtub with water so no one else has any? *Don't* go to the hospital, which has its own generators, to charge your phone?

"It's a difficult question because of the intention-behavior gap," de Meyer said. De Meyer is studying this via "playable theater", an effort that starts with a story premise that groups can discuss - in this case, stories of people who lived through the blackout. He is conducting trials for this and other similar projects around the country.

In another project, Catherine Tilley is investigating the claim that machines will take all our jobs. Tilley finds two dominant narratives. In one, jobs will change, not disappear, and automation will bring more of them, enhanced productivity, and new wealth. In the other, we will be retired...or unemployed. The numbers in these predictions are very large, but conflicting, so they can't all be right. What do we plan for education and industrial policy? What investments do we make? Should we prepare for mass unemployment, and if so, how?

Tilley identified two common assumptions: tasks that can be automated will be; automation will be used to replace human labor. But interviews with ten senior managers who had made decisions about automation found otherwise. Tl;dr: sectoral, national, and local contexts matter, and the global estimates are highly uncertain. Everyone agrees education is a partial solution - "but for others, not for themselves".

Here's the thing: machines are models. They live in model land. Our future depends on escaping.


Illustrations: David Tuckett and Lenny Smith.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 25, 2018

The Rochdale hypothesis

First, open a shop. Thus the pioneers of Rochdale, Lancashire, began the process of building their town. Faced with the loss of jobs and income brought by the Industrial Revolution, a group of 28 people, about half of them weavers, designed the set of Rochdale principles, and set about finding £1 each to create a cooperative that sold a few basics. Ten years later, Wikipedia tells us, Britain was home to thousands of imitators: cooperatives became a movement.

Could Rochdale form the template for building a public service internet?

This was the endpoint of a day-long discussion held as part of MozFest and led by a rogue band from the BBC. Not bad, considering that it took us half the day to arrive at three key questions: What is public? What is service? What is internet?

Pause.

To some extent, the question's phrasing derives from the BBC's remit as a public service broadcaster. "Public service" is the BBC's actual mandate; broadcasting, the activity it's usually identified with, is only the means by which it fulfills that mission. There might be - are - other choices. To educate, to inform, to entertain: that is its mandate. None of those words says radio or TV.

Probably most of the BBC's many global admirers don't realize how broadly the BBC has interpreted that. In the 1980s, it commissioned a computer - the Acorn, which spawned ARM, whose chips today power smartphones - and a series of TV programs to teach the nation about computing. In the early 1990s, it created a dial-up Internet Service Provider to help people get online. Some ten or 15 years ago I contributed to an online guide to the web for an audience with little computer literacy. This kind of thing goes way beyond what most people - for example, Americans - mean by "public broadcasting".

But, as Bill Thompson explained in kicking things off, although 98% of the public has some exposure to the BBC every week, the way people watch TV is changing. Two days later, the Guardian reported that the broadcasting regulator, Ofcom, believes the BBC is facing an "existential crisis" because the younger generation watches significantly less television. An eighth of young people "consume no BBC content" in any given week. When everyone can access the best of TV's back catalogue on a growing array of streaming services, and technology giants like Netflix and Amazon are spending billions to achieve worldwide dominance, the BBC must change to find new relevance.

So: the public service Internet might be a solution. Not, as Thompson went on to say, the Internet to make broadcasting better, but the Internet to make *society* better. Few other organizations in the world could adopt such a mission, but it would fit the BBC's particular history.

Few of us are happy with the Internet as it is today. Mozilla's 2018 Internet Health Report catalogues problems: walled gardens, constant surveillance to exploit us by analyzing our data, widespread insecurity, and increasing censorship.

So, again: what does a public service Internet look like? What do people need? How do you avoid the same outcome?

"Code is law," said Thompson, citing Lawrence Lessig's first book. Most people learned from that book that software architecture could determine human behaviour. He took a different lesson: "We built the network, and we can change it. It's just a piece of engineering."

Language, someone said, has its limits when you're moving from rhetoric to tangible service. Canada, they said, renamed the Internet "basic service" - but it changed nothing. "It's still concentrated and expensive."

Also: how far down the stack do we go? Do we rewrite TCP/IP? Throw out the web? Or start from outside and try to blow up capitalism? Who decides?

At this point an important question surfaced: who isn't in the room? (All but about 30 of the world's population, but don't get snippy.) Last week, the Guardian reported that the growth of Internet access is slowing - a lot. UN data to be published next month by the Web Foundation shows growth dropped from 19% in 2007 to less than 6% in 2017. The report estimates that it will be 2019, two years later than expected, before half the world is online, and large numbers may never get affordable access. Most of the 3.8 billion unconnected are rural poor, largely women, and they are increasingly marginalized.

The Guardian notes that many see no point in access. There's your possible starting point. What would make the Internet valuable to them? What can we help them build that will benefit them and their communities?

Last week, the New York Times suggested that conflicting regulations and norms are dividing the Internet into three: Chinese, European, and American. They're thinking small. Reversing the Internet's increasing concentration and centralization can't be done by blowing up the center, because the center will fight back. But decentralizing by building cooperatively at the edges...that is a perfectly possible future consonant with its past, even if we can't really force clumps of hipsters to build infrastructure in former industrial towns by luring them there with cheap housing. Cue Thompson again: he thought of this before, and he can prove it: here's his 2000 manifesto on e-mutualism.

Building public networks in the many parts of Britain where access is a struggle...that sounds like a public service remit to me.

Illustrations: The Unity sculpture, commemorating the 150th anniversary of the Rochdale Pioneers (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 18, 2018

Not the new oil

"Does data age like fish or like wine?" the economist Diane Coyle asked last week. It was one of a long list of questions she suggested researchers need to answer, in a presentation at the newly created Ada Lovelace Institute, which is being set up to answer exactly this sort of question. More important, the meeting generally asked: how can data best be used to serve the common good?

This is a relatively new way of looking at things that has been building up over the last year or two - active rather than passive, social rather than economic, and requiring a different approach from traditional discussions of individual privacy. That might mean stewardship - management as a public good - rather than governance according to legal or quasi-legal rules; and a new paradigm for privacy, which for the last few decades has been cast as an individual right rather than a social compact. As we have argued here before, it is long since time to change that last bit, a point made by Ivana Bartoletti, head of the data privacy and data protection practice for GemServ.

One of the key questions for Coyle, as an economist, is how to value data - hence the question about how it ages. In one effort, she tried to get price and volume statistics from cloud providers, and found no agreement on how they thought about their business or how they made the decision to build a new data center. Bytes are the easiest to measure - but that's not how they do it. Some thought about the number of data records, or computations per second, but these measures are insufficient without knowing the content.

"Forget 'the new oil'," she said; the characteristics are too different. Well, that's good news in a sense; if data is not the new oil then we don't have to be dinosaur bones or plankton. But given how many businesses have spent the last 20 years building their plans on the presumption that data *is* the new oil, getting them to change that view will be an uphill slog. Coyle appears willing to try: data, she said, is a public good, non-rivalrous in use, and, like many digital goods, with high fixed but low marginal costs. She went on to say, however, that personal data is not valuable, citing the small price you get if you divide Facebook's profits across its many users.

This is, of course, not really true, any more than you can decide between wine and fish: data's value depends on the beholder, the beholder's purpose, the context, and a host of other variables. The same piece of data may be valueless at times and highly valuable at others. A photograph of Brett Kavanaugh and Christine Blasey Ford on that bed in 1982, for example, would have been relatively valueless at the time, and yet be worth a fortune now, whether to suppress or to publish. The economic value might increase as long as it was kept secret and diminish rapidly once it was made public, whereas the social value is zero while it's secret but huge once it's public. As commodities go, data is weird. Coyle invoked Erwin Schrödinger: you don't know what you've got until you look at it. And even then, you have to keep looking as circumstances change.

That was the opening gambit, but a split rapidly surfaced in the panel, which also included Emma Prest, the executive director of DataKind. Prest and Bartoletti raised issues of consent and ethics, and data turned from a public good into a matter of human rights.

If you're a government or a large company focused on economic growth, then viewing data as a social good means wringing as much profit as you can out of it. That to date has been the direction, leading to amassing giant piles of the stuff and enabling both open and secret trades in surveillance and tracking. One often-proposed response is to apply intellectual property rights; the EU tried something like this in 1996 when it passed the Database Directive, generally unloved today, which gives organizations rights in the databases they compile. It doesn't give individuals property rights over "my" data. As tempting as IP rights might be, one problem is that a lot of data is collaboratively created. "My" medical record is a composite of information I have given doctors and their experience and knowledge-based interpretation. Shouldn't they get an ownership share?

Of course someone - probably a security someone - will be along shortly to point out that ethics, rights, and public goods are not things criminals respect. But this isn't about bad guys. Oil or not, data has always also been a source of power. In that sense, it's heartening to see that so many of these conversations - at the nascent Ada Lovelace Institute, at the St Paul's Institute (PDF), at the LSE, and at Data & Society, to name just a few - are taking place. If AI is about data, robotics is at least partly about AI in a mobile substrate. Eventually, these discussions of the shape of the future public sphere will be seen for what they are: debates over the future distribution of power. Don't tell Whitehall.


Illustrations: Ada Lovelace.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 21, 2018

Facts are screwed

"Fake news uses the best means of the time," Paul Bernal said at last week's gikii conference, an annual mingling of law, pop culture, and technology. Among his examples of old media turned to propaganda purposes: hand-printed woodcut leaflets, street singers, plays, and pamphlets stuck in cracks in buildings. The big difference today is data mining, profiling, targeting, and the real-time ability to see what works and improve it.

Bernal's most interesting point, however, is that like a magician's plausible diversion, the surface fantasy story may stand in front of an earlier fake news story that is never questioned. His primary example was Vlad the Impaler, the historical figure who is thought to have inspired Dracula. Vlad's fame as a vicious and profligate killer derives from those woodcut leaflets. Bernal suggests the reasons: a) Vlad had many enemies who wrote against him, some of it true, most of it false; b) most of the stories were published ten to 20 years after he died; and c) there was a whole complicated thing about the rights to Transylvanian territory.

"Today, people can see through the vampire to the historical figure, but not past that," he said.

His main point was that governments' focus on content to defeat fake news is relatively useless. A more effective approach would have us stop getting our news from Facebook. Easy for me personally, but hard to turn into public policy.

Soon afterwards, Judith Rauhofer outlined a related problem: because Russian bots are aimed at exacerbating existing divisions, almost anyone can fall for one of the fake messages. Spurred on by a message from the Tumblr powers that be advising that she had shared a small number of messages that were traced to now-closed Russian accounts, Rauhofer investigated. In all, she had shared 18 posts - and these had been reblogged 2.7 million times, and are still being recirculated. The focus on paid ads means there is relatively little research on organic and viral sharing of influential political messages. Yet these reach vastly bigger audiences and are far more trusted, especially because people believe they are not being influenced by them.

In the particular case Rauhofer studied, "There are a lot of minority groups under attack in the US, the UK, Germany, and so on. If they all united in their voting behavior and political activity they would have a chance, but if they're fighting each other that's unlikely to happen." Divide and conquer, in other words, works as well as it ever has.

The worst part of the whole thing, she said, is that looking over those 18 posts, she would absolutely share them again and for the same reason: she agreed with them.

Rauhofer's conclusion was that the combination of prioritization - that is, the ordering of what you see according to what the site believes you're interested in - and targeting form "a fail-safe way of creating an environment where we are set against each other."

So in Bernal's example, an obvious fantasy masks an equally untrue - or at least wildly exaggerated - story, while in Rauhofer's the things you actually believe can be turned into weapons of mass division. Both scenarios require much more nuance and, as we've argued here before, many more disciplines to solve than are currently being deployed.

Andrea Matwyshyn provided five mini-fables as a way of illustrating five problems to consider when designing AI - or, as she put it, five stories of "future AI failure". These were:

- "AI inside" a product can mean sophisticated machine learning algorithms or a simple regression analysis; you cannot tell from the outside what is real and what's just hype, and the specifics of design matter. When Google's algorithm tagged black people as "gorillas", the company "fixed" the algorithm by removing "gorilla" from its list of possible labels. The algorithm itself wasn't improved.

- "Pseudo-AI" has humans doing the work of bots. Lots of historical examples for this one, most notably the mechanical Turk; Matwyshyn chose the fake autonomaton the Digesting Duck.

- Decisions that bring short-term wins may also bring long-term losses in the form of unintended negative consequences that haven't been thought through. Among Matwyshyn's examples were a number of cases where human interaction changed the analysis, such as the failures of Google Flu Trends and Microsoft's Tay bot.

- Minute variations or errors in implementation or deployment can produce very different results than intended. Matwyshyn's prime example was a pair of electronic hamsters she thought could be set up to repeat each other's words to form a recursive loop. Perhaps responding to harmonics less audible to humans, they instead screeched unintelligibly at each other. "I thought it was a controlled experiment," she said, "and it wasn't."

- There will always be system vulnerabilities and unforeseen attacks. Her example was squirrels that eat power lines, but the backhoe is the traditional example.
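
To illustrate the "fix" in the first fable, here is a reconstruction in miniature - my sketch, not Google's actual code - of what suppressing an output label looks like; the blocklist and the scores are invented:

```python
# Hypothetical output blocklist: the "fix" lives outside the model.
BLOCKED_LABELS = {"gorilla"}

def label_image(scores: dict) -> str:
    """Return the top-scoring label that isn't blocked."""
    allowed = {label: s for label, s in scores.items()
               if label not in BLOCKED_LABELS}
    return max(allowed, key=allowed.get)

# The classifier still computes the same scores; one is just never shown.
print(label_image({"gorilla": 0.91, "person": 0.60, "poodle": 0.05}))  # person
```

The point survives the simplification: the model's behavior is unchanged; only the reporting is.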

To prevent these situations, Matwyshyn emphasized disclosure about code; verification in the form of third-party audits; substantiation in the form of evidence to back up the claims that are made; anticipation - that is, liability and good corporate governance; and remediation - again a function of good corporate governance.

"Fail well," she concluded. Words for our time.


Illustrations: Woodcut of Vlad, with impaled enemies.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 8, 2018

Block that metaphor

My favourite new term from this year's Privacy Law Scholars conference is "dishonest anthropomorphism". The term appeared in a draft paper written by Brenda Leung and Evan Selinger as part of a proposal for its opposite, "honest anthropomorphism". The authors' goal was to suggest a taxonomy that could be incorporated into privacy by design theory and practice, so that as household robots are developed and deployed they are less likely to do us harm. Not necessarily individual "harm" as in Isaac Asimov's Laws of Robotics, which tended to see robots as autonomous rather than as projections of their manufacturers into our personal space, and therefore glossed over this more intentional and diffuse kind of deception. Pause to imagine that Facebook goes into making robots and you can see what we're talking about here.

"Dishonest anthropomorphism" derives from an earlier paper, Averting Robot Eyes by Margo Kaminski, Matthew Rueben, Bill Smart, and Cindy Grimm, which proposes "honest anthropomorphism" as a desirable principle in trying to protect people from the privacy problems inherent in admitting a robot, even something as limited as a Roomba, into your home. (At least three of these authors are regular attendees at We Robot since its inception in 2012.) That paper categorizes three types of privacy issues that robots bring: data privacy, boundary management, and social/relational.

The data privacy issues are substantial. A mobile phone or smart speaker may listen to or film you, but it has to stay where you put it (as Smart has memorably put it, "My iPad can't stab me in my bed"). Add movement and processing, and you have a roving spy that can collect myriad kinds of data to assemble an intimate picture of your home and its occupants. "Boundary management" refers to capabilities humans may not realize their robots have and therefore don't know to protect themselves against - thermal sensors that can see through walls, for example, or eyes that observe us even when the robot is apparently looking elsewhere (hence the title).

"Social/relational" refers to the our social and cultural expectations of the beings around us. In the authors' examples, unscrupulous designers can take advantage of our inclination to apply our expectations of other humans to entice us into disclosing more than we would if we truly understood the situation. A robot that mimics human expressions that we understand through our own muscle memory may be highly deceptive, inadvertently or intentionally. Robots may also be given the capability of identifying micro-reactions we can't control but that we're used to assuming go unnoticed.

A different session - discussing research by Marijn Sax, Natalie Helberger, and Nadine Bol - provided a worked example, albeit one without the full robot component: they've been studying mobile health apps. Most of these are obviously aimed at encouraging behavioral change - walk 10,000 steps, lose weight, do yoga. What the authors argue is that the apps are more aimed at effecting economic change than at encouraging health, an aspect often obscured from users. Quite apart from the wrongness of using an app marketed to improve your health as a vector for potentially unrelated commercial interests, the health framing itself may be questionable. For example, the famed 10,000 steps some apps push you to take daily has no evidentiary basis in medicine: the number was likely picked as a Japanese marketing term in the 1960s. These apps may also be quite rigid; in one case that came up during the discussion, an injured nurse found she couldn't adapt the app to help her follow her doctor's orders to stay off her feet. In other words, they optimize one thing, which may or may not have anything to do with health or even health's vaguer cousin, "wellness".

Returning to dishonest anthropomorphism, one suggestion was to focus on abuse rather than dishonesty; there are already laws that bar unfair practices and deception. After all, the entire discipline of user design is aimed at nudging users into certain behaviors and discouraging others. With more complex systems, even if the aim is to make the user feel good it's not simple: the same user will react differently to the same choice at different times. Deciding which points to single out in order to calculate benefit is as difficult as trying to decide where to begin and end a movie story, which the screenwriter William Goldman has likened to deciding where to cut a piece of string. The use of metaphor was harmless when we were talking desktops and filing cabinets; much less so when we're talking about a robot cat that closely emulates a biological cat and leads us into the false sense that we can understand it in the same way.

Deception is becoming the theme of the year, perhaps partly inspired by Facebook and Cambridge Analytica. It should be a good thing. It's already clear that neither the European data protection approach nor the US consumer protection approach will be sufficient in itself to protect privacy against the incoming waves of the Internet of Things, big data, smart infrastructure, robots, and AI. As the threats to privacy expand, the field itself must grow in new directions. What made these discussions interesting is that they're trying to figure out which ones.

Illustrations: Recreation of the oldest known robot design (from the Ancient Greek Technology exhibition).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 1, 2018

The three IPs

Against last Friday's date history will record two major European events. The first, as previously noted, is the entry into force of the General Data Protection Regulation, which is currently inspiring a number of US news sites to block Europeans. The second is the amazing Irish landslide vote to repeal the 8th amendment to the country's constitution, which barred legislators from legalizing abortion. The vote led the MEP Luke Ming Flanagan to comment that, "I always knew voters were not conservative - they're just a bit complicated."

"A bit complicated" sums up nicely most people's views on privacy; it captures perfectly the cognitive dissonance of someone posting on Facebook that they're worried about their privacy. As Merlin Erroll commented, terrorist incidents help governments claim that giving them enough information will protect you. Countries whose short-term memories include human rights abuses set their balance point differently.

The occasion for these reflections was the 20th birthday of the Foundation for Information Policy Research. FIPR head Ross Anderson noted on Tuesday that FIPR isn't a campaigning organization, "But we provide the ammunition for those who are."

Led by the late Caspar Bowden, FIPR was most visibly activist in the late 1990s lead-up to the passage of the now-replaced Regulation of Investigatory Powers Act (2000). FIPR in general and Bowden in particular were instrumental in making the final legislation less dangerous than it could have been. Since then, FIPR helped spawn the 15-year-old European Digital Rights and UK health data privacy advocate medConfidential.

Many speakers noted how little the debates have changed, particularly regarding encryption and surveillance. In the case of encryption, this is partly because mathematical proofs are eternal, and partly because, as Yes, Minister co-writer Antony Jay said in 2015, large organizations such as governments always seek to impose control. "They don't see it as anything other than good government, but actually it's control government, which is what they want." The only change, as Anderson pointed out, is that because today's end-to-end connections are encrypted, the push for access has moved to people's phones.

Other perennials include secondary uses of medical data, which Anderson debated in 1996 with the British Medical Association. Among significant new challenges, Anderson, like many others, noted the problems of safety and sustainability. The need to patch devices that can kill you changes our ideas about the consequences of hacking. How do you patch a car over 20 years? he asked. One might add: how do you stop a botnet of pancreatic implants without killing the patients?

We've noted here before that built infrastructure tends to attract more of the same. Today, said Duncan Campbell, 25% of global internet traffic transits the UK; Bude, Cornwall remains the critical node for US-EU data links, as in the days of the telegraph. As Campbell said, the UK's traditional position makes it perfectly placed to conduct global surveillance.

One of the most notable changes in 20 years: there were no fewer than two speakers whose open presence would once have been unthinkable: Ian Levy, the technical director of the National Cyber Security Centre, the defensive arm of GCHQ, and Anthony Finkelstein, the government's chief scientific advisor for national security. You wouldn't have seen them even ten years ago, when GCHQ was deploying its Mastering the Internet plan, known to us courtesy of Edward Snowden. Levy made a plea to get away from the angels-versus-demons school of debate.

"The three horsemen, all with the initials 'IP' - intellectual property, Internet Protocol, and investigatory powers - bind us in a crystal lattice," said Bill Thompson. The essential difficulty he was getting at is that it's not that organizations like Google DeepMind and others have done bad things, but that we can't be sure they haven't. Being trustworthy, said medConfidential's Sam Smith, doesn't mean you never have to check the infrastructure but that people *can* check it if they want to.

What happens next is the hard question. Onora O'Neill suggested that our shiny, new GDPR won't work, because it's premised on the no-longer-valid idea that personal and non-personal data are distinguishable. Within a decade, she said, new approaches will be needed. Today, consent is already largely a façade; true consent requires understanding and agreement.

She is absolutely right. Even today's "smart" speakers pose a challenge: where should my Alexa-enabled host post the privacy policy? Is crossing their threshold consent? What does consent even mean in a world where sensors are everywhere and how the data will be used and by whom may be murky? Many of the laws built up over the last 20 years will have to be rethought, particularly as connected medical devices pose new challenges.

One of the other significant changes will be the influx of new and numerous stakeholders whose ideas about what the internet is are very different from those of the parties who have shaped it to date. The mobile world, for example, vastly outnumbers us; the Internet of Things is being developed by Asian manufacturers from a very different culture.

It will get much harder from here, I concluded. In response, O'Neill was not content. It's not enough, she said, to point out problems. We must propose at least the bare bones of solutions.


Illustrations: 1891 map of telegraph lines (via Wikimedia)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


April 20, 2018

Deception

"Why are robots different?" 2018 co-chair Mark Lemley asked repeatedly at this year's We Robot. We used to ask this in the late 1990s when trying to decide whether a new internet development was worth covering. "Would this be a story if it were about telephones?" Tom Standage and Ben Rooney frequently asked at the Daily Telegraph.

The obvious answer is physical risk and our perception of danger. The idea that autonomously moving objects may be dangerous is deeply biologically hard-wired. A plant can't kill you if you don't go near it. Or, as Bill Smart put it at the first We Robot in 2012, "My iPad can't stab me in my bed." Autonomous movement fools us into thinking things are smarter than they are.

It is probably not much consolation to the driver of the crashed autopiloting Tesla or his bereaved family that his predicament was predicted two years ago at We Robot 2016. In a paper, Madeleine Elish called the humans in these partnerships "Moral Crumple Zones" because, she argued, in a human-machine partnership the human takes all the pressure, like the crumple zone in a car.

Today, Tesla is fulfilling her prophecy by blaming the driver for not getting his hands onto the steering wheel fast enough when commanded. (Other prior art on this: Dexter Palmer's brilliant 2016 book Version Control.)

As Ian Kerr pointed out, the user's instructions are self-contradictory. The marketing brochure uses the metaphors "autopilot" and "autosteer" to seduce buyers into envisioning a ride of relaxed luxury while the car does all the work. But the legal documents and user manual supplied with the car tell you that you can't rely on the car to change lanes, and you must keep your hands on the wheel at all times. A computer ingesting this would start smoking.

Granted, no marketer wants to say, "This car will drive itself in a limited fashion, as long as you watch the road and keep your hands on the steering wheel." The average consumer reading that says, "Um...you mean I have to drive it?"

The human as moral crumple zone also appears in analyses of the Arizona Uber crash. Even-handedly, Brad Templeton points plenty of blame at Uber and its decisions: the car's LIDAR should have spotted the pedestrian crossing the road in time to stop safely. He then writes, "Clearly there is a problem with the safety driver. She is not doing her job. She may face legal problems. She will certainly be fired." And yet humans are notoriously bad at the job required of her: monitoring a machine. Safety drivers are typically deployed in pairs to split the work - but also to keep each other attentive.

The larger We Robot discussion was partly about public perception of risk, based on a paper (PDF) by Aaron Mannes that discussed how easy it is to derail public trust in a company or new technology when statistically less-significant incidents spark emotional public outrage. Self-driving cars may in fact be safer overall than human drivers despite the fatal crash in Arizona; Mannes also mentioned Three Mile Island, which made the public much more wary of nuclear power, and the Ford Pinto, which spent the 1970s occasionally catching fire.

Mannes suggested that if you have that trust relationship you may be able to survive your crisis. Without it, you're trying to win the public over on "Frankenfoods".

So much was funnier and more light-hearted seven years ago, as a long-time attendee pointed out; the discussions have darkened steadily year by year as theory has become practice and we can no longer think the problems are as far away as the Singularity.

In San Francisco, delivery robots cause sidewalk congestion and make some homeless people feel surveilled; in Chicago and Durham we risk embedding automated unfairness into criminal justice; the egregious extent of internet surveillance has become clear; and the world has seen its first self-driving car road deaths. The last several years have been full of fear about the loss of jobs; now the more imminent dragons are becoming clearer. Do you feel comfortable in public spaces when there's a mobile unit pointing some of its nine cameras at you?

Karen Levy finds that truckers are less upset about losing their jobs than about automation invading their cabs, ostensibly for their safety. Sensors, cameras, and wearables that monitor them for wakefulness, heart health, and other parameters are painful and enraging to this group, who chose their job for its autonomy.

Today's drivers have the skills to step in; tomorrow's won't. Today's doctors are used to doing their own diagnostics; tomorrow's may not be. In the paper by Michael Froomkin, Ian Kerr, and Joëlle Pineau (PDF), automation may mean not only deskilling humans (doctors) but also a frozen knowledge base. Many hope that mining historical patient data will expose patterns that enable more accurate diagnostics and treatments. If the machines take over, where will the new approaches come from?

Worse, behind all that is sophisticated data manipulation for which today's internet is providing the prototype. When, as Woody Hartzog suggested, Rocco, your Alexa-equipped Roomba, rolls up to you, fakes a bum wheel, and says, "Daddy, buy me an upgrade or I'll die", will you have the heartlessness to say no?

Illustrations: Pepper and handler at We Robot 2016.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


April 14, 2018

Late, noisy, and wrong

"All sensors are terrible," Bill Smart and Cindy Grimm explained as part of a pre-conference workshop at this year's We Robot. Smart, an engineer at Oregon State with prior history here, loves to explain why robots and AI aren't as smart as people think. "Just a fancy hammer," he said the first year.

Thursday's target was broad: the reality of sensors, algorithms, and machine learning.

One of his slides read:


  • It's all just math and physics.

  • There is no intelligence.

  • It's just a computer program.

  • Sensors turn physics into numbers.

That last one is the crucial bit, and it struck me as surprising only because in all the years I've read about and glibly mentioned sensors and how many there are in our phones, they've never really been explained to me. I'm not an electrical engineering student, so like most of us, I wave around the words. Of course I know that digital means numbers, and computers do calculations with numbers, not fuzzy things like light and sound, and therefore the camera in my phone (which is a sensor) is storing values describing light levels rather than photographing light in the way that analogue film did. But I don't - or didn't until Thursday - really know what sensors do measure. For most purposes, it's OK that my understanding is...let's call it abstract. But it does make it easy to overestimate what the technology can do now and how soon it will be able to fulfil the fantasies of mad scientists.

Smart's point is that when you start talking about what AI can do - whether or not you're using my aspirational intelligence recasting of the term - you'd better have some grasp of what it really is. It means the difference between a blob on the horizon that can be safely ignored and a woman pushing a bicycle across a roadway in front of an oncoming LIDAR-equipped Uber self-driving car.

So he begins with this: "All sensors are terrible." We don't use better ones because either such a thing does not exist or because they're too expensive. They are all "noisy, late, and wrong" and "you can never measure what you want to."

What we want to measure are things like pressure, light, and movement, and because we imagine machines as analogues of ourselves, we want them to feel the pressure, see the light, and understand the movement. However, what sensors can measure is electrical current. So we are always "measuring indirectly through assumptions and physics". This is the point AI Weirdness makes too, more visually, by showing what happens when you apply a touch of surrealism to the pictures you feed through machine learning.

He described what a sensor does this way: "They send a ping of energy into the world. It interacts, and comes back." In the case of LIDAR - he used a group of humans to enact this - a laser pulse is sent out, and the time it takes to return is a number of oscillations of a crystal. This has some obvious implications: you can't measure anything shorter than one oscillation.
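
As a back-of-the-envelope sketch - mine, not Smart's - here is that counting of oscillations in code; the 100 MHz clock rate is an assumed figure, chosen to make the quantization visible:

```python
C = 299_792_458.0  # speed of light, m/s

def lidar_distance(true_distance_m: float, clock_hz: float = 100e6) -> float:
    """Distance as a time-of-flight sensor would report it."""
    round_trip_s = 2 * true_distance_m / C   # pulse goes out and comes back
    ticks = round(round_trip_s * clock_hz)   # whole crystal oscillations counted
    measured_s = ticks / clock_hz            # time, quantized to the clock
    return measured_s * C / 2

print(lidar_distance(10.0))  # ~10.49 m
print(lidar_distance(10.7))  # also ~10.49 m - same tick count
```

At that clock rate one tick is 10 nanoseconds, or roughly 1.5 meters of range: two targets closer together than that read as the same distance.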

Grimm explained that a "time of flight" sensor like that is what cameras - back to old Kodaks - use to auto-focus. Smartphones are pretty good at detecting a cluster of pixels that looks like a face and using that to focus on. But now imagine it's being used in a knee-high robot on a sidewalk to detect legs. In an art installation Smart and Grimm did, they found that it doesn't work in Portland...because of all those hipsters wearing black jeans.

So there are all sorts of these artefacts, and we will keep tripping over them because most of us don't really know what we're talking about. With image recognition, the important thing to remember is that the sensor is detecting pixel values, not things - and a consequence of that is that we don't necessarily know *what* the system has actually decided is important and we can't guarantee what it might be recognizing. So turn machine learning loose on a batch of photos of Audis, and if they all happen to be photographed at the same angle the system won't recognize an Audi photographed at a different one. Teach a self-driving car all the roads in San Francisco and it still won't know anything about driving in Portland.

That circumscription is important. Teach a machine learning system on a set of photos of Abraham Lincoln and a zebra fish, and you get a system that can't imagine it might be a cat. The computer - which, remember, is working with an array of numbers - looks at the numbers in the array and based on what it has identified as significant in previous runs makes the call based on what's closest. It's numbers in, numbers out, and we can't guarantee what it's "recognizing".
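
Here is a toy version of that "closest" call - my sketch, not the speakers' - using a nearest-centroid rule over invented stand-ins for flattened pixel arrays:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake "pixel arrays" for the two training classes.
lincolns = rng.normal(0.2, 0.05, size=(50, 64 * 64))
zebrafish = rng.normal(0.7, 0.05, size=(50, 64 * 64))

centroids = {
    "Abraham Lincoln": lincolns.mean(axis=0),
    "zebra fish": zebrafish.mean(axis=0),
}

def classify(pixels):
    # Nearest centroid wins; there is no "none of the above".
    return min(centroids, key=lambda c: np.linalg.norm(pixels - centroids[c]))

cat = rng.normal(0.4, 0.05, size=64 * 64)  # an image unlike anything it saw
print(classify(cat))  # cheerfully reports one of the two known labels
```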

A linguistic change would help make all this salient. LIDAR does not "see" the roadway in front of the car that's carrying it. Google's software does not "translate" language. Software does not "recognize" images. The machine does not think, and it has no gender.

So when Mark Zuckerberg tells Congress that AI will fix everything, consider those arrays of numbers that may interpret a clutch of pixels as Abraham Lincoln when what's there is a zebra fish...and conclude he's talking out of his ass.



Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


March 30, 2018

Conventional wisdom

One of the problems the internet was always likely to face as a global medium was the conflict over who gets to make the rules and whose rules get to matter. So far, it's been possible to kick the can down the road for Future Government to figure out while each country makes its own rules. It's clear, though, that this is not a workable long-term strategy, if only because the longer we go on without equitable ways of solving conflicts, the more entrenched the ad hoc workarounds and because-we-can approaches will become. We've been fighting the same battles for nearly 30 years now.

I didn't realize how much I longed for a change of battleground until last week's Internet Law Works-in-Progress paper workshop, when for the first time I heard an approach that sounded like it might move the conversation beyond the crypto wars, the censorship battles, and the what-did-Facebook-do-to-our-democracy anguish. The paper was presented by Asaf Lubin, a Yale JSD candidate whose background includes a fellowship at Privacy International. In it, he suggested that while each of the many cases of international legal clash has been considered separately by the courts, the reality is that together they all form a pattern.

The cases Lubin is talking about include the obvious ones, such as United States v. Microsoft, currently under consideration in the US Supreme Court, and Apple v. FBI. But they also include the prehistoric cases that created the legal environment we've lived with for the last 25 years: 1996's US v. Thomas, the first jurisdictional dispute, which pitted the community standards of California against those of Tennessee (making it a toss-up whether the US would export the First Amendment or Puritanism); 1995's Stratton Oakmont v. Prodigy, which established that online services could be held liable for the content their users posted; and 1991's Cubby v. CompuServe, which ruled that CompuServe was a distributor, not a publisher, and could not be held liable for user-posted content. The difference in those last two cases: Prodigy exercised some editorial control over postings; CompuServe did not. In the UK, notice-and-takedown rules were codified after Godfrey v. Demon Internet extended defamation law to the internet.

Both access to data - whether encrypted or not - and online content were always likely to repeatedly hit jurisdictional boundaries, and so it's proved. Google is arguing with France over whether right-to-be-forgotten requests should be deindexed worldwide or just in France or the EU. The UK is still planning to require age verification for pornography sites serving UK residents later this year, and is pondering what sort of regulation should be applied to internet platforms in the wake of the last two weeks of Facebook/Cambridge Analytica scandals.

The biggest jurisdictional case, United States v. Microsoft, may have been rendered moot in the last couple of weeks by the highly divisive Clarifying Lawful Overseas Use of Data (CLOUD) Act. Divisive because: the technology companies seem to like it, EFF and CDT argue that it's an erosion of privacy laws because it lowers the standard of review for issuing warrants, and Peter Swire and Jennifer Daskal think it will improve privacy by setting up a mechanism by which the US can review what foreign governments do with the data they're given; they also believe it will serve us all better than if the Supreme Court rules in favor of the Department of Justice (which they consider likely).

Looking at this landscape, "They're being argued in a siloed approach," Lubin said, going on to imagine the thought process of the litigants involved. "I'm only interested in speech...or I'm a Mutual Legal Assistance person and only interested in law enforcement getting data. There are no conversations across fields and no recognition that the problems are the same." In conversation at conferences, he's catalogued reasons for this. Most cases are brought against companies too small to engage in complex litigation and too fearful of antagonizing the judge. Larger companies are strategic about which cases they argue and in front of whom; they seek to avoid having "sticky precedents" issued by judges who don't understand the conflicts or the unanticipated consequences. Courts, he said, may not even be the right forums for debating these issues.

The result, he went on to say, is that these debates conflate first-order rules, such as the right balance on privacy and freedom of expression, with second-order rules, such as the right procedures to follow when there's a conflict of laws. To solve the first-order rules, we'd need something like a Geneva Convention, which Lubin thought unlikely to happen.

To reach agreement on the second-order rules, however, he proposes a Hague Convention, which he described as "private international law treaties" that could address the problem of agreeing the rules to follow when laws conflict. To me, as neither a lawyer nor a policy wonk, the idea sounded plausible and interesting: these are not debates that should be solved by either "Our lawyers are bigger and more expensive than your lawyers" or "We have bigger bombs." (Cue Tom Lehrer: "But might makes right...") I have no idea if such an idea can work or be made to happen. But it's the first constructive new suggestion I've heard anyone make for changing the conversation in a long, long time.


Illustrations: The Hague's Grote Markt (via Wikimedia); Asaf Lubin.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 2, 2018

In sync

Until Wednesday, I was not familiar with the use of "sync" to stand for a music synchronization license - that is, a license to use a piece of music in a visual setting such as a movie, video game, or commercial. The negotiations involved can be Byzantine and very, very slow, in part because the music's metadata is so often wrong or missing. In one such case, described at Music 4.5's seminar on developing new deals and business models for sync (Flash), it took ten years to get the wrong answer from a label to the apparently simple question: who owns the rights to this track on this compilation album?

The surprise: this portion of the music business is just as frustrated as activists with the state of online copyright enforcement. They don't love the Digital Millennium Copyright Act (1998) any more than we do. We worry about unfair takedowns of non-infringing material and bans on circumvention tools; they hate that the Act's Safe Harbor grants YouTube and Facebook protection from liability as long as they remove content when told it's infringing. Google's automated infringement detection software, ContentID, I heard Wednesday, enables the "value gap" the music industry has been fretting about for several years now, because the sites have no motivation to create licensing systems. There is some logic there.

However, where activists want to loosen copyright, enable fair use, and restore the public domain, they want to dump Safe Harbor, either by developing a technological bypass, by changing the law, or by getting FaceTube to devise a fairer, more transparent revenue split. "Instagram," said one, "has never paid the music industry but is infringing copyright every day."

To most of us, "online music" means subscription-based streaming services like Spotify or download services like Amazon and iTunes. For many younger people, especially Americans though, YouTube is their jukebox. Pex estimates that 84% of YouTube videos contain at least ten seconds of music. Google says ContentID matches 99.5% of those, and then they are either removed or monetized. But, Pex argues, 65% of those videos remain unclaimed and therefore provide no revenue. Worse, as streaming grows, downloads are crashing. There's a detectable attitude that if they can fix licensing on YouTube they will have cracked it for all sites hosting "creator-generated content".

It's a fair complaint that ContentID was built to protect YouTube from liability, not to enable revenues to flow to rights holders. We can also all agree that the present system means millions of small-time creators are locked out of using most commercial music. The dancing baby case took eight years to decide that the background existence of a Prince song in a 29-second home video of a toddler dancing was fair use. But sync, too, was designed for businesses negotiating with businesses. Most creators might indeed be willing to pay to legally use commercial music if licensing were quick, simple, and cheap.

There is also a question of whether today's ad revenues are sustainable; a graphic I can't find showed that the payout per view is shrinking. Bloomberg finds that, increasingly, winning YouTubers take all, with little left for the very long tail.

The twist in the tale is this. MP3 players unbundled albums into songs as separate marketable items. Many artists were frustrated by the loss of control inherent in enabling mix tapes at scale. Wednesday's discussion heralded the next step: unbundling the music itself, breaking it apart into individual beats, phrases and bars, each licensable.

One speaker suggested scenarios. The "content" you want to enjoy is 42 minutes long but your commute is only 38 minutes. You might trim some "unnecessary dialogue" and rearrange the rest so now it fits! My reaction: try saying "unnecessary dialogue" to Aaron Sorkin and let's see how that goes.

I have other doubts. I bet "rearranging" will take longer than watching the four minutes. Speeding up the player slightly achieves the same result, and you can do that *now* for free. More useful was the suggestion that hearing-impaired people could benefit from being able to tweak the mix to fade the background noise and music in a pub scene to make the actors easier to understand. But there, too, we actually already have closed captions. It's clear, however, that the scenarios may be wrong, but the unbundling probably isn't.

In this world, we won't be talking about music, but "music objects". Many will be very low-value...but the value of the total catalogue might rise. The BBC has an experiment up already: The Mermaid's Tears, an "object-based radio drama" in which you can choose to follow any one of the three characters to experience the story.

Smash these things together, and you see a very odd world coming at us. It's hard to see how fair use survives a system that aims to license "music objects" rather than "music". In 1990, Pamela Samuelson warned about copyright maximalism. That agenda does not appear to have gone away.


Illustrations: King David dancing before the Ark of the Covenant, 'Maciejowski Bible', Paris ca. 1240 (via Discarding Images).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 2, 2018

Schrödinger's citizen

One of the more intriguing panels at this year's Computers, Privacy, and Data Protection (obEgo: I moderated) began with a question from Peter Swire: Can the nationality of the target ever be a justified basis for different surveillance rules?

France, the Netherlands, Sweden, Germany, and the UK, explained Mario Oetheimer, an expert on data protection and international human rights with the European Union Agency for Fundamental Rights, do apply a lower level of safeguards for international surveillance as compared to domestic surveillance. He believes Germany is the only EU country whose surveillance legislation includes nationality criteria.

The UK's Investigatory Powers Act (2016), parts of which were struck down this week in the European Court of Justice, was an example. Oetheimer, whose agency has a report on fundamental rights in surveillance, said introducing nationality-based differences will "trickle down" into an area where safeguards are already relatively underdeveloped and hinder developing further protections.

In his draft paper, Swire favors allowing greater surveillance of non-citizens than of citizens. While some countries - he cited the US and Germany - provide greater protection from surveillance to their own citizens than to foreigners, there is little discussion about why that's justified. In the US, he traces the distinction to Watergate, when Nixon's henchmen were caught unacceptably snooping on the opposition political party. "We should have very strong protections in a democracy against surveilling the political opposition and against surveilling the free press." But granting everyone else the same protection, he said, is unsustainable politically and incorrect as a matter of law and philosophy.

This is, of course, a very American view, as the late Caspar Bowden impatiently explained to me in 2013. Elsewhere, human rights - including privacy - are meant to be universal. Still, there is a highly practical reason for governments and politicians to prefer their own citizens: foreigners can't vote them out of office. For this reason (besides being American), I struggle to believe in the durability of any rights granted to non-citizens. The difference seems to me the whole point of having citizens in the first place. At the very least, citizens have the unquestioned right to live and enter the country, which non-citizens do not have. But, as Bowden might have said, there is a difference between *fewer* rights and *no* rights. Before that conversation, I did not really understand about American exceptionalism.

Like so many other things, citizenship and nationality are multi-dimensional rather than binary. Swire argues that it's partly a matter of jurisdiction: governments have greater ability and authority to ask for information about their own citizens. Here is my reference to Schrödinger's cat: one may be a dual citizen, simultaneously both foreign and not-foreign and regarded suspiciously by all.

Joseph Cannataci disagreed, saying that nationality does not matter: "If a person is a threat, I don't care if he has three European passports...The threat assessment should reign supreme."

German privacy advocate Thorsten Wetzling outlined Germany's surveillance law, recently reformulated in response to the Snowden revelations. Germany applies three categories to data collection: domestic, domestic-foreign (or "international"), and foreign. "International" means that one end of the communication is in Germany; "foreign" means that both ends are outside the country. The new law specifically limits data collected on those outside Germany and subjects non-targeted foreign data collection to new judicial oversight.
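
That three-way taxonomy is simple enough to state as code. This is a toy encoding - mine, not the statute's text - keyed purely on where the two endpoints of a communication are; the function name is invented:

```python
def bnd_category(end_a_in_germany: bool, end_b_in_germany: bool) -> str:
    """Classify a communication under the categories Wetzling described."""
    if end_a_in_germany and end_b_in_germany:
        return "domestic"
    if end_a_in_germany or end_b_in_germany:
        return "international"  # one end of the communication is in Germany
    return "foreign"            # both ends are outside the country

print(bnd_category(True, False))   # international
print(bnd_category(False, False))  # foreign
```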

Wetzling believes we might find benefits in extending greater protection to foreigners than accrues to domestic citizens. Extending human rights protection would mean "the global practice of intelligence remains within limits", and would give a country the standing to suggest that other countries reciprocate. This had some resonance for me: I remember hearing the computer scientist George Danezis say something like this: since each of us has only a few nationalities, at any given time we can be surveilled by a couple of hundred other countries. We can have a race to the bottom...or to the top.

One of Swire's points was that one reason to allow greater surveillance of foreigners is that it's harder to conduct. Given that technology is washing away that added difficulty, Amie Stepanovich asked, shouldn't we recognize that? Like Wetzling, she suggested that privacy is a public good; the greater the number of people who have it the more we may benefit.

As abstruse as these legal points may sound, the US's refusal to grant human rights to foreigners is ultimately part of what's at stake in determining whether the US's privacy regime is strong enough for the EU-US Privacy Shield to survive its legal challenges. As the internet continues to raise jurisdictional disputes, Swire's question will take its place alongside others, such as how much location should matter when law enforcement wants access to data (Microsoft v. United States, due to be heard in the US Supreme Court on February 27) and whether countries will follow the UK's lead in claiming extraterritorial jurisdiction over data and the right to bulk-hack computers around the world.

But, said Cannataci in disputing Swire's arguments, the US Constitution says, "All men are created equal". Yes, it does. But in "men" the Founding Fathers did not include women, black people, slaves, people who didn't own property.... "They didn't mean it," I summarized. Replied Cannataci: "But they *should* have." Indeed.


Illustrations: The panel, left to right: Cannataci, Swire, Stepanovich, Grossman, Wetzling, Oetheimer.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 26, 2018

Bodies in the clouds

andrea-matwyshyn.jpgThis year's Computers, Privacy, and Data Protection conference had the theme "The Internet of Bodies". I chaired the "Bodies in the Clouds" panel, which was convened by Lucie Krahulcova of Access Now, and this is something like what I may have said to introduce it.

The notion of "cyberspace" as a separate space derives from the early days of the internet, when most people outside of universities or large science research departments had to dial up and wait while modems mated to get there. Even those who had those permanent connections were often offline in other parts of their lives. Crucially, the people you met in that virtual land were strangers, and it was easy to think there were no consequences in real life.

In 2013, New America Foundation co-founder Michael Lind called cyberspace an idea that makes you dumber the moment you learn of it and begged us to stop believing the internet is a mythical place that governments and corporations are wrongfully invading. While I disagreed, I can see that those with no memory of those early days might see it that way. Today's 30-year-olds were 19 when the iPhone arrived, 18 when Facebook became a thing, 16 when Google went public, and eight when Netscape IPO'd. They have grown up alongside iTunes, digital maps, and GPS, surrounded online by everyone they know. "Cyberspace" isn't somewhere they go; online is just an extension of their phones or laptops.

And yet, many of the laws that now govern the internet were devised with the separate space idea in mind. "Cyberspace", unsurprisingly, turned out not to be exempt from the laws governing consumer fraud, copyright, defamation, libel, drug trafficking, or finance. Many new laws passed in this period are intended to contain what appeared to legislators with little online experience to be a dangerous new threat. These laws are about to come back to bite us.

At the moment there is still *some* boundary: we are aware that map lookups, video sites, and even Siri requests need online access to answer, just as we know when we buy a device like a "smart coffee maker" or a scale that tweets our weight that it's externally connected, even if we don't fully understand the consequences. A missing online connection does not puzzle us the way the sun's disappearance would puzzle someone who had never heard of an eclipse.

Security experts had long warned that traditional manufacturers were not grasping the dangers of adding wireless internet connections to their products, and in 2016 they were proved right, when the Mirai botnet harnessed video recorders, routers, baby monitors, and CCTV cameras to deliver monster attacks on internet sites and service providers.

For the last few years, I've called this the invasion of the physical world by cyberspace. The cyber-physical construct of the Internet of Things will pose many more challenges to security, privacy, and data protection law. The systems we are beginning to build will be vastly more complex than the systems of the past, involving many more devices, many more types of devices, and many more service providers. An automated city parking system might have meters, license plate readers, a payment system, middleware gateways to link all these, and a wireless ISP. Understanding who's responsible when such systems go wrong or how to exercise our privacy rights will be difficult. The boundary we can still see is vanishing, as is our control over it.
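
To make that concrete, here is a hypothetical sketch (every name invented) of the parties a single parking event might touch; each is a separate company with its own terms, logs, and failure modes:

    # Invented chain of parties behind one automated parking event.
    # None of these providers is real; the point is the fan-out.
    parking_event_parties = [
        ("meter", "MeterCo"),
        ("license plate reader", "PlateScan Ltd"),
        ("payment system", "PayPark"),
        ("middleware gateway", "CityLink Hub"),
        ("connectivity", "MuniWireless ISP"),
    ]

    # A wrongly issued fine could originate at any hop, but the driver
    # has a direct relationship with at most one of these companies.
    for role, provider in parking_event_parties:
        print(f"{role}: {provider}")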

For example, how do we opt out of physical tracking when there are sensors everywhere? It's clear that the Cookie Directive approach to consent won't work in the physical world (though it would give a new meaning to "no-go areas").

Today's devices are already creating new opportunities to probe previously inaccessible parts of our lives. Police have asked for data from an Amazon Echo in an Arkansas murder case. In Germany, investigators used a suspect's Apple Health app while re-enacting the steps they believed he had taken, and compared the results to the data the app had collected at the time of the crime to prove his guilt.

A friend who buys and turns on an Amazon Echo is deemed to have accepted its privacy policy. Does visiting their home mean I've accepted it too? What happens to data about me that the Echo has collected if I am not a suspect? And if it controls their whole house, how do I get it to work after they've gone to bed?

At Privacy Law Scholars in 2016, Andrea Matwyshyn introduced a new idea: the Internet of Bodies, the theme of this year's CPDP. As she spotted then, the Internet of Bodies makes us dependent on this hybrid ecosystem for our bodily integrity and ability to function. At that first discussion of what I'm sure will be an important topic for many years to come, someone commented, "A pancreas has never reported to the cloud before."

A few weeks ago, a small American ISP sent a letter to warn a copyright-infringing subscriber that continuing to attract complaints would cause the ISP to throttle their bandwidth, potentially interfering with devices requiring continuous connections, such as CCTV monitoring and thermostats. The kind of conflict this suggests - copyright laws designed for "cyberspace" touching our physical ability to stay warm and alive in a cold snap - is what awaits us now.

Illustrations: Andrea Matwyshyn.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


January 19, 2018

Expressionism

Thumbnail image for discardingimages-escherbackground.jpg"Regulatory oversight is going to be inevitable," Adam Kinsley, Sky's director of policy, predicted on Tuesday. He was not alone in saying this is the internet's direction of travel, and we shouldn't feel too bad about it. "Regulation is not inherently bad," suggested Facebook's UK public policy manager, Karim Palant.

The occasion was the Westminster eForum's seminar on internet regulation (PDF). The discussion focused on the key question, posed at the outset by digital policy consultant Julian Coles: who is responsible, and for what? Free speech fundamentalists find it easy to condemn anything smacking of censorship. Yet even some of them are demanding proactive removal of some types of content.

Two government initiatives sparked this discussion. The first is the UK's Internet Safety Strategy green paper, published last October. Two aspects grabbed initial attention: a levy on social media companies and age verification for pornography sites, now assigned to the British Board of Film Classification to oversee. But there was always more to pick at, as Evelyn Douek helpfully summarized at Lawfare. Coles' question is fundamental, and 2018 may be its defining moment.

The second, noted by Graham Smith, was raised by the European Commission at the December 2017 Global Internet Forum, and aims to force technology companies to take down extremist content within one to two hours of posting. Smith's description: "...act as detective, informant, arresting officer, prosecutor, defense, judge, jury, and prison warder all at once." Open Rights Group executive director Jim Killock added later that it's unreasonable to expect technology companies to do the right thing perfectly within a set period at scale, making no mistakes.

As Coles said - and as Old Net Curmudgeons remember - the present state of the law was largely set in the mid-to-late 1990s, when the goal of fostering innovation led both the US Congress (via Section 230 of the Communications Decency Act, 1996) and the EU (via the Electronic Commerce Directive, 2000) to hold that ISPs are not liable for the content they carry.

However, those decisions also had precedents of their own. The 1991 US case Cubby v. CompuServe ended in CompuServe's favor, holding it not liable for defamatory content posted to one of its online forums. In 2000, the UK's Godfrey v. Demon Internet successfully applied libel law to Usenet postings, ultimately creating the notice-and-takedown rules we still live by today. Also crucial in shaping those rules were Scientology's 1994-1995 efforts to remove its top-level secret documents from the internet.

In the simpler landscape when these laws were drafted, the distinction between access providers and content providers was cleaner. Before then, the early online services - CompuServe, AOL, and smaller efforts such as the WELL and CIX - were hybrids, social media platforms by a different name: they provided both access and a platform for content providers, who curated user postings and chat.

Eventually, when social media were "invented" (Coles's term; more correctly, when everything migrated to the web), today's GAFA (or, in the US, FAANG) inherited that freedom from liability. GAFA/FAANG straddle that briefly sharp boundary between pipes and content like the dead body on the Quebec-Ontario boundary sign in the Canadian film Bon Cop, Bad Cop. The vertical integration that is proceeding apace - Verizon buying AOL and Yahoo!; Comcast buying NBC Universal; BT buying TV sports rights - is setting up the antitrust cases of 2030 and ensuring that the biggest companies - especially Amazon - play many roles in the internet ecosystem. They might be too big for governments to regulate on their own (see also: paying taxes), but public and advertisers' opinions are joining in.

All of this history has shaped the status quo that Kinsley seems to perceive as somewhat unfair: the same video that is regulated for TV broadcast is unregulated when streamed on Facebook. Palant noted that Facebook isn't exactly regulation-free. Contrary to popular belief, he said, many aspects of the industry, such as data and advertising, are already "heavily regulated". The present focus, however, is content, a different matter. It was Smith who explained why change is not simple: "No one is saying the internet is not subject to general law. But if [Kinsley] is suggesting TV-like regulation...where it will end up is applying to newspapers online." The Authority for Television on Demand, active from 2010 to 2015, already tested this, he said, and the Sun newspaper got it struck down. TV broadcasting's regulatory regime was the exception, Smith argued, driven by spectrum scarcity and licensing, neither of which applies to the internet.

New independent Internet Watch Foundation chair Andrew Puddephatt listed five key lessons from the IWF's accumulated 21 years of experience: removing content requires clear legal definitions; independence is essential; human analysts should review takedowns, which have to be automated for reasons of scale; outside independent audits are also necessary; companies should be transparent about their content removal processes.

If there is going to be a regulatory system, this list is a good place to start. So far, the UK's present system falls well short of it. As Killock explained, PIPCU, CTIRU, and Nominet all make censorship decisions - but transparency, accountability, oversight, and the ability to appeal are all lacking.


Illustrations: "Escher background" (from Discarding Images, Boccaccio, "Des cleres et nobles femmes" (French version of "De mulieribus claris"), France ca. 1488-1496, BnF, Français 599, fol. 89v).


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


December 1, 2017

Unstacking the deck

Thumbnail image for Alice_par_John_Tenniel_42.pngA couple of weeks ago, I was asked to talk to a workshop studying issues in decision-making in standards development organizations about why the consumer voice is important. This is what I think I may have said.

About a year ago, my home router got hacked thanks to a port deliberately left open by the manufacturer and documented (I now know) in somewhat vague terms on page 210 of a 320-page manual. The really important lesson I took from the experience was that security is a market failure: you can do everything right and still lose. The router was made by an eminently respectable manufacturer, sold by a knowledgeable expert, configured correctly, patched up to date, and yet still failed a basic security test. The underlying problem was that the manufacturer imagined that the port it left open would only ever be used by ISPs wishing to push updates to their customers and that ordinary customers would not be technically capable of opening the port when needed. The latter assumption is probably true, but the former is nonsense. No attacker says, "Oh, look, a hole! I wonder if we're allowed to use it." Consumers are defenseless against manufacturers who fail to understand this.
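
For readers who want to check their own exposure, a few lines of standard-library Python will probe whether a router answers on a given port (a minimal sketch: the address and port list are assumptions, not the actual hole in my router, and a LAN-side probe is only indicative - what matters is what's visible from the internet side):

    import socket

    ROUTER = "192.168.1.1"  # assumed address; substitute your router's
    PORTS = [23, 80, 443, 7547, 8080]  # 7547 is TR-069, a port ISPs
                                       # commonly use for remote updates

    for port in PORTS:
        try:
            with socket.create_connection((ROUTER, port), timeout=2):
                print(f"port {port}: open")
        except OSError:
            print(f"port {port}: closed or filtered")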

But they are also, as we have seen this year, defenseless against companies' changing business plans and models. In April, Google's Nest subsidiary decided to turn off devices made by Revolv, a smart home hub maker it bought in 2014. Again, this is not a question of ending support for a device that continues to function, as would have happened at any time in the past. Because the hub is controlled by an app, both the hardware and the software can be turned off when the company loses interest in the product. These are, as Arlo Gilbert wrote at Medium, devices people bought and paid for. Where does Google get the right, in Gilbert's phrasing, to "reach into your home and pull the plug"?

In August, sound system manufacturer Sonos offered its customers two choices: accept its new privacy policy, which requires customers to agree to broader and more detailed data collection, or watch your equipment decline in functionality as updates stop arriving and possibly cease to function altogether. Here, the issue appears to be that Sonos wants its speakers to integrate with voice assistants, and must therefore conform to privacy policies issued by upstream companies such as Amazon. If you do not accept, eventually you have an ex-sound system. Why can't you accept the privacy policy if and only if you want to add the voice assistant?

Finally, in November, Logitech announced it would end service and support for its Harmony Link devices in March 2018. This might have been a "yawn" moment except that "end of life" means "stop working". The company eventually promised to replace all these devices with newer Harmony Hubs, which can control a somewhat larger range of devices, but the really interesting thing is why it made the change. According to Ars Technica, Logitech did not want to renew an encryption certificate whose expiration will leave Harmony Link devices vulnerable to attack. It was, as the linked blog posting makes plain, a business decision. For consumers and the ecologically conscientious, a wasteful one.

So, three cases where consumers, having paid money for devices in good faith, are either forced to replace them or accept being extorted for their data. In a world where even the most mundane devices are reconfigurable via software and receive updates over the internet, consumers need to be protected in new ways. Standards development organizations have a role to play in that, even if it's not traditionally been their job. We have accepted "Pay-with-data" as a tradeoff for "free" online; now this is "pay-with-data" as part of devices we've paid to buy.

The irony is that the internet was supposed to empower consumers by redressing the pricing information imbalance between buyers and sellers. While that has certainly happened, the incoming hybrid cyber-physical world will upend it. We will continue to know a lot more about pricing than we used to, but connected software allows the companies that make the objects cluttering our homes to retain control of those items throughout their useful lives. In such a situation the operative power balance is "possession is nine-tenths of the law" - and possession is no longer measured by the physical location of the object but by who has access to change what it does. Increasingly, that's not us. Consumers have no ability to test their cars for regulatory failures (VW) or to know whether Uber is screwing the regulators or Uber drivers are screwing riders. This is a new imbalance of power we cannot fix by ourselves.

Worse, much of this will be invisible to us. The situations discussed here did become visible - but I only found out about the hack on my router because I am eccentric enough to run my own mail server, and the spam my router sent got my outgoing email bounced when it led an anti-spam service to blacklist my mail server. In the billion-object Internet of Things, such communications and many of their effects will be primarily machine-to-machine and hidden from human users, and the world will cease to function in odd, unpredictable ways.

Illustrations: John Tenniel's Alice, under attack by a pack of cards.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 23, 2017

Twister

Thumbnail image for werbach-final-panel-cropped.jpg"We were kids working on the new stuff," said Kevin Werbach. "Now it's 20 years later and it still feels like that."

Werbach was opening last weekend's "radically interdisciplinary" (Geoffrey Garrett) After the Digital Tornado, at which a roomful of internet policy veterans tried to figure out how to fix the internet. As Jaron Lanier showed last week, there's a lot of this where-did-we-all-go-wrong happening.

The Digital Tornado in question was a working paper Werbach wrote in 1997, when he was at the Federal Communications Commission. In it, Werbach sought to pose questions for the future, such as what the role of regulation would be around...well, around now.

Some of the paper is prescient: "The internet is dynamic precisely because it is not dominated by monopolies or governments." Parts are quaint now. Then, the US had 7,000 dial-up ISPs and AOL was the dangerous giant. It seemed reasonable to think that regulation was unnecessary because public internet access had been solved. Now, with minor exceptions, the US's four ISPs have carved up the country among themselves to such an extent that most people have only one ISP to "choose" from.

To that, Gigi Sohn, the co-founder of Public Knowledge, named the early mistake from which she'd learned: "Competition is not a given." Now, 20% of the US population still have no broadband access. Notably, this discussion was taking place days before current FCC chair Ajit Pai announced he would end the network neutrality rules adopted in 2015 under the Obama administration.

Everyone had a pet mistake.

Tim Wu, regarding decisions that made sense for small companies but are damaging now they're huge: "Maybe some of these laws should have sunsetted after ten years."

A computer science professor bemoaned the difficulty of auditing protocols for fairness now that commercial terms and conditions apply.

Another wondered if our mental image of how competition works is wrong. "Why do we think that small companies will take over and stay small?"

Yochai Benkler argued that the old way of reining in market concentration, by watching behavior, no longer works; we understood scale effects but missed network effects.

Right now, market concentration looks like Google-Apple-Microsoft-Amazon-Facebook. Rapid change has meant that each past Big Tech we feared would break the internet has typically been overrun. Yet we can't count on that. In 1997, market concentration meant AOL and, especially, desktop giant Microsoft. Brett Frischmann paused to reminisce that in 1997 AOL's then-CEO Steve Case argued that Americans didn't want broadband. By 2007 the incoming giant was Google. Yet, "Farmville was once an enormous policy concern," Christopher Yoo reminded; so was Second Life. By 2007, Microsoft looked overrun by Google, Apple, and open source; today it remains the third largest tech company. The garage kids can only shove incumbents aside if the landscape lets them in.

"Be Facebook or be eaten by Facebook", said Julia Powles, reflecting today's venture capital reality.

Wu again: "A lot of mergers have been allowed that shouldn't have been." On his list, rather than AOL and Time-Warner, cause of much 1999 panic, was Facebook and Instagram, which the Office of Fair Trading approved because Facebook didn't have cameras and Instagram didn't have advertising. Unrecognized: they were competitors in the Wu-dubbed attention economy.

Thumbnail image for Tornado-Manitoba-2007.jpgBoth Bruce Schneier, who considered a future in which everything is a computer, and Werbach, who found early-internet-familiar rhetoric hyping the blockchain, saw more gloom oncoming. Werbach noted two vectors: remediable catastrophic failures, and creeping recentralization. His examples of the DAO hack and the Parity wallet bug led him to suggest the concept of governance by design. "This time," Werbach said, adding his own entry to the what-went-wrong list, "don't ignore the potential contributions of the state."

Karen Levy's "overlooked threat" of AI and automation is a far more intimate and intrusive version of Shoshana Zuboff's "surveillance capitalism"; it is already changing the nature of work in trucking. This resonated with Helen Nissenbaum's "standing reserves": an ecologist sees a forest; a logging company sees lumber-in-waiting. Zero hours contracts are an obvious human example of this, but look how much time we spend waiting for computers to load so we can do something.

Levy reminded that surveillance has a different meaning for vulnerable groups, linking back to Deirdre Mulligan's comparison of algorithmic decision-making in healthcare and the judiciary. The first is operated cautiously with careful review by trained professionals who have closely studied its limits; the second is off-the-shelf software applied willy-nilly by untrained people who change its use and lack understanding of its design or problems. "We need to figure out how to ensure that these systems are adopted in ways that address the fact that...there are policy choices all the way down," Mulligan said. Levy, later: "One reason we accept algorithms [in the judiciary] is that we're not the ones they're doing it to."

Yet despite all this gloom - cognitive dissonance alert - everyone still believes that the internet has been and will be positively transformative. Julia Powles noted, "The tornado is where we are. The dandelion is what we're fighting for - frail, beautiful...but the deck is stacked against it." In closing, Lauren Scholz favored a return to basic ethical principles after a century of "fallen gods", including really big companies, the wisdom of crowds, and visionaries.

Sohn, too, remains optimistic. "I'm still very bullish on the internet," she said. "It enables everything important in our lives. That's why I've been fighting for 30 years to get people access to communications networks."


Illustrations: After the Digital Tornado's closing panel (left to right): Kevin Werbach, Karen Levy, Julia Powles, Lauren Scholz; tornado (Justin1569 at Wikipedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

November 17, 2017

Counterfactuals

Thumbnail image for lanier-lrm-2017.jpgOn Tuesday evening, virtual reality pioneer and musician Jaron Lanier, in London to promote his latest book, Dawn of the New Everything, suggested the internet took a wrong turn in the 1990s by rejecting the idea of combating spam by imposing a tiny - "homeopathic" - charge to send email. Think where we'd be now, he said. The mindset of paying for things would have been established early, and instead of today's "behavior modification empires" we'd have a system where people were paid for the content they produce.
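
The arithmetic behind that idea is worth spelling out (a sketch with invented numbers; Lanier never specified the charge beyond "tiny"):

    # Why a tiny per-message charge was supposed to kill spam.
    # All figures are invented for illustration.
    CHARGE = 0.001  # dollars per email - the "homeopathic" fee

    normal_user = 50            # messages per day
    spam_operation = 10_000_000

    print(f"normal user:    ${normal_user * CHARGE:.2f}/day")      # $0.05
    print(f"spam operation: ${spam_operation * CHARGE:,.0f}/day")  # $10,000

The charge is imperceptible to a person and ruinous to a bulk mailer - which is exactly why Lanier thinks its rejection mattered.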

Lanier went on to invoke the spirit of Ted Nelson, who began his earliest work on Project Xanadu in 1960, before ARPAnet, the internet, and the web. The web fosters copying. Xanadu instead gave every resource a permanent and unique address, and because linking replaced copying, nothing ever lost its context.

The problem, as Nelson's 2011 autobiography Possiplex and a 1995 Wired article made plain, is that trying to get the thing to work was a heartbreaking journey of alternating despair and hope, increasingly orthogonal to where the rest of the world was going. While efforts continue, Xanadu remains difficult to comprehend, no matter how technically visionary and conceptually advanced it was. The web wins on simplicity.

But the web also won because it was free. Tim Berners-Lee is very clear about the importance he attaches to deciding not to patent the web and charge licensing fees. Lanier, whose personal stories about internetworking go back to the 1980s, surely knows this. When the web arrived, it had competition: Gopher, Archie, WAIS. Each had its limitations in terms of user interface and reach. The web won partly because it unified all their functions and was simpler - but also because it was freer than the others.

Suppose those who wanted minuscule payments for email had won? Lanier believes today's landscape would be very different. Most of today's machine learning systems, from IBM Watson's medical diagnostician to the various quick-and-dirty translation services, rely on mining an extensive existing corpus of human-generated material. In Watson's case, it's medical research, case studies, peer review, and editing; in the case of translation services, it's the billions of side-by-side human-translated pages available on the web (though later improvements have taken a new approach). Lanier is right that the AIs built by crunching found data are parasites on generations of human-created and curated knowledge. By his logic, establishing payment early as a fundamental part of the internet would have ensured that the humans who created all that data would be paid for their contributions when machine learning systems mined them. Clarity would result: instead of the "cruel" trope that AIs are rendering humans unnecessary, it would be obvious that AI progress relies on continued human input. For that we could all be paid, rather than being made "wards of the state".

Consider a practical application. Microsoft's LinkedIn is in court opposing HiQ, a company that scrapes LinkedIn's data to offer employers services that LinkedIn might like to offer itself. The case, which was decided in HiQ's favor in August but is appeal-bound, pits user privacy (argued by EPIC) against innovation and competition (argued by EFF). Everyone speaks for the 500 million whose work histories are on LinkedIn, but no one speaks for our individual ownership of our own information.

Let's move to Lanier's alternative universe and say the charge had been applied. Spam dropped out of email early on. We developed the habit of paying for information. Publishers and the entertainment industry would have benefited much sooner, and if companies like Facebook and LinkedIn had started at all, their business models would have been based on payments to posters and charges for readers (he claims to believe that Facebook will change its business model in this direction in the coming years; it might, but if so I bet it keeps the advertising).

In that world, LinkedIn might be our broker or agent negotiating terms with HiQ on our behalf rather than in its own interests. When the web came along, Berners-Lee might have thought pay-to-click logical, and today internet search might involve deciding which paid technology to use. If, that is, people found it economic to put the information up in the first place. The key problem with Lanier's alternative universe: there were no micropayments. A friend suggests that China might be able to run this experiment now: Golden Shield has full control, and everyone uses WeChat and AliPay.

I don't believe technology has a manifest destiny, but I do believe humans love free and convenient, and that overwhelms theory. The globally spreading all-you-can-eat internet rapidly killed the existing paid information services after commercial access was allowed in 1994. I'd guess that the more likely outcome of charging for email would have been the rise of free alternatives to email - instant messaging, for example, which arose in our world partly to avoid spam. The motivation to merge spam with viruses and crack into people's accounts to send spam would have arisen earlier than it did, so security would have been an earlier disaster. As the fundamental wrong turn, I'd instead pick centralization.

Lanier noted the culminating irony: "The left built this authoritarian network. It needs to be undone."

The internet is still young. It might be possible, if we can agree on a path.


Illustrations: Jaron Lanier in conversation with Luke Robert Mason (Eva Pascoe).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


November 3, 2017

Life forms

Thumbnail image for Cephalopod_barnstar.pngWould you rather be killed by a human or a machine?

At this week's Royal Society meeting on AI and Society, Chris Reed recounted asking this question of an audience in Singapore. They all picked the human, even though they knew it was irrational, because they thought at least they'd know *why*.

A friend to whom I related this had another theory: maybe they thought there was a chance they could talk the human killer out of it, whereas the machine would be implacable. It's possible.

My own theory pins this distaste for machine killing on a different, crucial underlying factor: a sense of shared understanding. The human standing over you with the axe or driving the oncoming bus may be a professional paid to dispatch you, a serial killer, an angry ex, or mentally ill, but they all have a personal understanding of what a human life means because they all have one they know they, too, will one day lose. The meaning of removing someone else's life is thoroughly embedded in all of us. Not having that is more or less the definition of a machine, or was until Philip K. Dick and his replicants. But there is no reason to assume that every respondent had the same reason.

A commenter in the audience reported similar responses to an Accenture poll he had encountered on Twitter, which asked whether respondents would favor AI making health decisions. When he checked the voting results, 69% had said no. Here again, the death of a patient by medical mistake keeps a human doctor awake at night (if television is to be believed), while to a machine it's a statistic, no matter how heavily weighted in its inner backpropagating neural networks.

Marion-Oswald-in-template.jpgThese two anecdotes resonated because, earlier, Marion Oswald had opened her talk by asking whether interacting with AI is, like Peter Godfrey-Smith's observation of cephalopods, the closest we can come to interacting with an intelligent alien. Arguably, unless the aliens are immortal, on issues of life and death we can expect to have more shared understanding with them, as per above, than with machines.

The primary focus of Oswald's talk was actually to discuss her work studying HART, an algorithmic model used by Durham Constabulary to decide whether offenders qualified for deferred prosecution and help with their problems. The study raises all sorts of questions we're going to have to consider over the coming years about the role of police in society.

These issues were somewhat taken up later by Mireille Hildebrandt, who warned of the risks of transforming text-driven law - the messy stuff centuries of court cases have contested and interpreted - into data-driven law. Allowing that to happen, she argued, transforms law into administration. "Contestability is the heart of the rule of law," she said. "There is more to the law than predictability and expedience." A crucial part of that is being able to test the system, and here Hildebrandt was particularly gloomy: although legal systems that comb the legal corpus are currently marketed as aids for lawyers, she views it as inevitable that at some point they will become replacements. Some time after that, the skills needed to test the inner workings of these systems will have vanished from the firms that own them.

At the annual We Robot conference, a recurring theme is the hard edges of computer systems, an aspect Ellen Ullman examined closely in her 1997 book, Close to the Machine. In Bill Smart's example, the difference between 59.99 miles an hour and 60.01 miles an hour is indistinguishable, but to a computer fitted with the right sensors the difference is a speeding ticket. An aspect of this that is insufficiently discussed is that all biological beings have some level of unpredictability. Robots and AI with far greater sensing precision than is available to humans will respond to changes we can't detect, making them appear less predictable, and therefore more intelligent, than they actually are. This is a deception we will have to learn to decode.
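
Smart's example is easy to render in code, and the code form makes the hard edge visible (a minimal sketch; the 60 mph limit is his, the rest is mine):

    # The hard edge: a human observer cannot distinguish these two
    # speeds, but the computer's answer flips at the threshold.
    SPEED_LIMIT = 60.0  # mph

    def issues_ticket(speed_mph: float) -> bool:
        return speed_mph > SPEED_LIMIT

    print(issues_ticket(59.99))  # False
    print(issues_ticket(60.01))  # True

There is no code for "going about 60"; the fuzziness humans rely on has to be bolted on explicitly, if it can be expressed at all.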

Already, machines billed as tools to aid human judgement are often trusted much more than they should be. Danielle Citron's 2008 paper Technological Due Process studied this in connection with benefits scoring systems in Texas and California, and found two problems. First, humans tended to trust the machine's decisions rather than apply their own judgement, a problem Hildebrandt referred to as "judgemental atrophy". Second, computer programmers are not trained lawyers, and are therefore not good at accurately translating legal text into decision-making systems. How do you express a fuzzy but widely understood and often-used standard like the UK's "reasonable person" in computer code? You'd have to precisely define the attopoint at which "reasonable" abruptly flicks to "unreasonable".

Ultimately, Oswald came down against the "intelligent alien" idea: "These are people-made, and it's up to us to find the benefits and tackle the risks," she said. "Ignorance of mathematics is no excuse."

That determination rests on the notion that the people building AI systems and the people using them have shared values. We already know that's not true, but even so: I vote less alien than a cephalopod on everything but the fear of death.

Illustrations: Cephalopod (via Obsidian Soul); Marion Oswald.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 22, 2017

Fakeout

original-LOC-opper-newspaper.png"Fake news is not some unfortunate consequence," the writer and policy consultant Maria Farrell commented at the UK Internet Governance Forum last week. "It is the system working as it should in the attention economy."

The occasion was a panel featuring Simon Milner, Facebook's UK policy director; Carl Miller, from the Demos think tank; James Cook, Business Insider UK's technology editor; the MP and shadow minister for industrial strategy Chi Onwurah (Labour - Newcastle upon Tyne Central); and, as moderator, Nominet chair Mark Wood.

cropped-Official_portrait_of_Chi_Onwurah.jpgThey all agreed to disagree on the definition of "fake news". Cook largely saw it as a journalism problem: fact checkers and sub-editors are vanishing. Milner said Facebook has a four-pronged strategy: collaborate with others to find industry solutions, as in the Facebook Journalism Project; disrupt the economic flow - that is, target clickbait designed to take people *off* Facebook to sites full of ads (irony alert); take down fake accounts (30,000 before the French election); try to build new products that improve information diversity and educate users. Miller wants digital literacy added to the national curriculum: "We have to change the skills we teach people. Journalists used to make those decisions on our behalf, but they don't any more." Onwurah, a chartered electrical engineer who has worked for Ofcom, focused on consequences: she felt the technology giants could do more to combat the problem, and expressed intelligent concern about algorithmic "black boxes" that determine what we see.

Boil this down. Onwurah is talking technology and oversight. Milner also wants technology: solutions should be content-neutral but identify and eliminate bad behavior at the scale of 2 billion users, who don't want to read terms and conditions or be repeatedly asked for ratings. Miller - "It undermines our democracy" - wants governments to take greater responsibility: "it's a race between politics and technology". Cook wants better journalism, but, "It's terrifying, as someone in technology, to think of government seeing inside the Facebook algorithm." Because other governments will want their privilege, too; Apple is censoring its app store in order to continue selling iPhones in China.

Thumbnail image for MariaFarrellPortrait.jpgIt was Farrell's comment, though, that sparked the realization that fake news cannot be solved by thinking of it as a problem in only one of the fields of journalism, international relations, economic inequality, market forces, or technology. It is all those things and more, and we will not make any progress until we take an approach that combines all those disciplines.

Fake news is the democratization of institutional practices that have become structural over many decades. Much of today's fake news uses tactics originally developed by publishers to sell papers. Even journalists often fail to ask the right questions, sometimes because of editorial agendas, sometimes because the threat of lost access to top people inhibits what they ask.

Everyone needs the traditional journalist's mindset of asking, "What's the source?" and "What's their agenda?" before deciding on a story's truth. But there's no future in blaming the people who share these stories (with or without believing them) or calling them stupid. Today we're talking about absurdist junk designed to make people share it; tomorrow's equivalent may be crafted for greater credibility and hence be far more dangerous. Miller's concern for the future of democracy is right. It's not just that these stories are used to poison the information supply and sow division just before an election; the incessant stream of everyday crap causes people to disengage because they trust nothing.

I founded The Skeptic in 1987 to counter what the late, great Simon Hoggart called paranormal beliefs' "background noise, interfering with the truth". Of course it matters that a lie on the internet can nearly cause a shoot-out at a pizza restaurant. But we can't solve this with technology, fact-checking, or government fiat alone. Today's generation is growing up in a world where everybody - sports stars among them - cheats and then lies about it.

What we're really talking about here is where to draw the line between acceptable fakery ("spin") and unacceptable fakery. Astrology columns get a pass. Apparently so do professional PR people, as in the 1995 book Toxic Sludge Is Good for You: Lies, Damn Lies, and the Public Relations Industry, by John Stauber and Sheldon Rampton (made into a TV documentary in 2002). In mainstream discussions we don't hear that Big Tobacco's decades-long denial about its own research or Exxon Mobil's approach to climate change undermine democracy. If these are acceptable, it seems harder to condemn the Macedonian teen seeking ad revenue.

This is the same imbalance as prosecuting lone, young, often neuro-atypical computer hackers while the really pressing issues are attacks by criminals and organized gangs.

That analogy is the point: fake news and cybersecurity are sibling problems. Both are tennis, not figure skating; that is, at all times there is an adversary actively trying to frustrate you. "Fixing the users" through training is only one piece of either puzzle.

Treating cybersecurity as a purely technical problem failed. Today the field crosses computer science, philosophy, psychology, law, international relations, and economics. So does the VOX-Pol project studying online extremism. This is the approach we need for fake news.


Illustrations: "The fin de siecle newspaper proprietor", by Frederick Burr Opper, 1894 (from the Library of Congress via Wikipedia); Chi Onwurah; Maria Farrell.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


August 24, 2017

The greatest show on Earth

parthenon-final-partial.jpg
"Oh, yeah, I'm going down Broadway."
"Working. I'm going to try to get a peek at it out the window."
"I'm going up to Centennial Park. I want to be real quiet and meditative."
"No."
"Our hotel's having a viewing party on the roof."
"Just going out in my backyard."
"I've seen one befo-er. They happen all the time. If it wasn't for my grandkids, I'd just stay home."

The question, posed to random strangers around Nashville, was, "Do you have plans to see the eclipse?"

The last speaker, a woman at a bus stop, is of course right. Eclipses do happen all the time. But the Great American Eclipse of 2017 was the first total solar eclipse to hit Nashville since 1478, and it's 99 years since one cut such a long and wide swath across the US - a path 70 miles wide and 2,500 miles long, stretching from the west coast of Oregon to the east coast of South Carolina.
solar-eclipse-t-shirts-tiedye.jpgThis is also the first one with 24-hour channels to provide major-event packaging. I don't remember hearing a thing about the eclipse of February 1979, even though totality covered almost the entire giant state of Montana. The Weather Channel unearthed the ABC News report, which newscaster Frank Reynolds concluded by hoping that in 38 years, "May the shadow of the moon fall on a world at peace". (Ouch.) For hundreds of small towns, the path provided a once-in-a-lifetime bonanza of visitors. T-shirts for all! (A few places, like Carbondale, Illinois, will get a second bite in 2024.)

An estimated 1 million people descended on Nashville. Kids got a day off school. Opryland hosted three days of special events. The baseball team invited 10,000 people into the stadium for a viewing party, then kicked everyone out to readmit ticket holders for the (big? hah!) game. The many, many rooftop viewing parties included one at the famed music bar Tootsie's Orchid Lounge, which reportedly charged $500 per person (including free drinks). The even more famous Bluebird café seemed unimpressed; for Monday's open mic night you still had to reserve online at noon. (As if.) They had nonetheless sold out at 2:30pm.

900px-SolarEclipseDiamondRing-corvallisOR-2017-08-21.jpgAfterwards, everyone had seen at least some part of "it". The most frustrated folks were the 8,000 people who had assembled at the Science Center. After clear skies all through the partial phases, a big, dark cloud occluded the show right before totality. "We didn't see the diamond ring, we didn't see the corona, we didn't see Baily's Beads," the on-site Weather Channel reporter lamented.

The bus stop woman was right - but she was also wrong. She'd probably only seen a partial eclipse. I now know that hardly anyone who has experienced totality - the seconds or minutes when the moon fully covers the sun - says "They happen all the time." They say, "Where and when?"

solar-eclipse-discarding-images.jpgIn 1999, when totality passed over Cornwall on its way to northern Europe, I was surprised to hear the astronomy writer Ian Ridpath say he'd been awaiting it since childhood. But in southwest London, even at 90ish% the dimming light had a glassy, almost sepia tone, and the atavistic thought, "What if it doesn't come back?" was unavoidable. Seeing just that much created an immediate sense of direct emotional connection to our ancestors, from medieval peasants to the ancient Sumerians, and their terror at not knowing what was happening. Legends from the earliest recorded solar eclipse, in China around 2000 B.C., have the emperor ordering the astronomers executed.

Here in 2017, we know. In Centennial Park, I recruited Pete, a passing retired Ohio journalist, to watch with me because I liked his eclipse T-shirt. He noted the absurdity of newscasters who cautiously said "expected at", as if the eclipse were a murder suspect whose mention required "alleged". Totality would arrive at 13:27. Were they suggesting there was mathematical doubt about this?

A few minutes after noon a cheer went up: the first chip in the sun was visible.

pete-shirt-cropped.jpgThe changes in the light are slow and subtle at first, as is the temperature drop, later reported as 6 degrees Fahrenheit. In modern life, the first clear signs are often street lights coming on, as did those behind the Parthenon's columns. Having seen it once, I found the 90% level instantly recognizable. The final phases happen fast. We saw all the things the Science Center folks missed, despite the late-arriving distraction of four people with a small dog who set up nearby with 15 minutes to go. They lit incense, produced a large, native-looking drum, and beat it throughout totality. Did they think their activity was crucial in ensuring that the sun re-emerged at full strength? Neither moon nor sun nor annoyance intervened to prevent the sun's return to normal, on time and under budget.

The agnostics, atheists, and skeptics among us may see all this as a persuasive display of science: it happened when and where scientists predicted, with flawless accuracy. But...

"Did you see it?" Replied one last accosted stranger: "It's amazing what God can do."

Illustrations: The waiting audience by the Nashville Parthenon; solar eclipse (Thomas of Cantimpré, Liber de natura rerum, France ca. 1290; Valenciennes, Bibliothèque municipale, ms. 320, fol. 196v, via Discarding Images); eclipse T-shirts; the diamond ring effect (via Wikimedia from Tuanna2010 in Corvallis, Oregon); Nashville eclipse T-shirt.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

August 18, 2017

Cage match

RUR-robots.jpg"The robot", a friend used to call his telephone answering machine. "The robot", another friend calls his automated tea-maker. For its 2017 robot exhibition London's Science Museum elected to focus on robots that "take a human form or behave in a human-like way", in the catalogue's phrasing - The Verge has pictures. Focusing on form rather than intelligence helpfully avoids the mundane and the formless: the Roomba, search engines, and those petty, bureaucratic automatic faucets. It also eliminates pets like the Aibo, drones and other unmanned autonomous vehicles, and the factory-worker industrial robots that populated a prior Science Museum robot exhibition - 1995, I think.

Two points stick in my mind from 1995. First, the robots were large, impressive, and Japanese, and performed their jobs on a schedule; one, designed to paint automobiles, wore an artist's beret. Second, they were all behind glass. In Japan, these same robots were out in the open. Here in the UK, the authorities panicked: what if one got loose and hurt people? So the robots were confined inside big glass boxes and visitors were kept well back, as if they were giant scorpions poised to attack.

R-shadow-walker.jpgWhat the 2017 definition does embrace is movie stars: the T-800 from the Terminator movies and Maria from Fritz Lang's 1927 spawned-a-million-sf-movies classic, Metropolis. The limitations of this approach are subtle: Cynthia Breazeal, featured on video explaining her work at the MIT Media Lab, has said that her inspiration to go into robotics was falling in love with R2D2 when she was eight. Not at all humanoid, and designed for comedy over function, R2D2 nonetheless stole the show from its cranky bipedal companion. A few years ago, at We Robot 2015, R2D2 creator Tony Dyson commented that after hundreds of similar stories from robotics engineers, "No one ever fell in love with C3PO." But: not qualified for inclusion. I was also sorry to see that Shadow Robot got so little notice. Its founder, Richard Greenhill, is your classic English eccentric, pouring his mind, heart, and life into trying to build a general-purpose robot in an attic.

oldest-robot-athens-2015-smaller.jpgThe goal, however, was trying to get at the human response to robots and our millennia-old desire for artificial companions. As far back as the third century BC, the ancient Greeks, whose power sources were limited to sun, wind, water, and gravity, were trying both to automate function and copy human form. It's from their work that the medieval clockwork automata in this exhibition logically descend. There's a second tributary in the Jewish mythology surrounding the Golem, as seen in last November's exhibition at Berlin's Jewish Museum. A humanoid being formed from inanimate matter such as clay and unable to speak, in some versions the Golem was created to protect the Jewish community. Movie robots are still recognizable as descendants.

The Science Museum skipped the Golem; arguably better befitting our times, it focused on mechanical antecedents. The body as machine section strikes a reminiscent chord: as a child I had a Visible Man, an 18-inch-tall rendering of the human body and its organs inside a transparent skin; you assembled it from bags of plastic parts. I read now that the Visible Man and (later) Visible Woman toys were anatomically correct, a detail I don't recall; what I remember is that their 3D puzzle quality made them a great way to learn human anatomy.

kodomoroid-cropped.jpgMost of the completed robots on display - there are also prototypes-in-development - were designed either to be watched and admired, like the T-800 or Fritz Lang's Maria, or for a particular function, like the Baxter industrial robot. A few are both, like the Kodomoroid Japanese TV newsreader, a rendering of a young woman in a white dress befitting a first communion. This particular robot bothers me, not because it's so humanoid but because some people will tend to call it "she", a genderizing issue the exhibition touches on elsewhere. People feminize boats and fiddles, too, but there's no chance that these will be taken as models whose form actual women should aspire to emulate. As technical wizards perfect their renderings, that risk exists for all genders. Even with a white dress, folded hands, and a projection-ready expression, it's still a fancy hammer.

It's definitely a pity that more interaction isn't possible. The Telenoid, for example, is a telepresence device; sitting on a couch in a glass cage makes its qualities hard to appreciate. However, small children may notice that they can interact with the little boy-styled Zeno R25. Even through glass, when you look it in the eye it twitches, then mimics your head movements. Supposedly, it reads stories and tells jokes (it'll be here all week!).

This is where the 2017 and 1995 exhibitions merged: most of the robots were still behind glass or displayed out of reach. A rare exception was Aldebaran's emotion-recognizing Pepper, which was noticeably a little-kid magnet - close enough to their size and one whose shiny white surface they were able to touch. The biggest, most notable difference in those 20-plus years, therefore, is this: then, the robots were put behind glass to protect *us* (or at least, the Science Museum from legal action); now, they're behind glass to protect *them*. Still alien, after all these years.

The exhibition ends September 3, so you still have time.

Illustrations: Robots at the Science Museum; the earliest known humanoid robot.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 24, 2013

Forcing functions

At last Saturday's OpenTech, perennial grain-of-sand-in-the-Internet-oyster Bill Thompson, in a session on open data, asked an interesting question. In a nod to NTK's old slogan, "They stole our revolution - now we're stealing it back", he asked: how can we ensure that open data supports values of democracy, openness, transparency, and social justice? The Internet pioneers did their best to embed these values in their designs, yet the open architecture, software, and licensing they pioneered can be taken without payment by any oppressive government or large company that cares to. Is this what we want for open data, too?

Thompson writes (and, if I remember correctly, actually said, more or less):

...destruction seems like a real danger, not least because the principles on which the Internet is founded leave us open to exploitation and appropriation by those who see openness as an opportunity to take without paying - the venture capitalists, startups and big tech companies who have built their empires in the commons and argue that their right to build fences and walls is just another aspect of 'openness'.

Constraining the ability to take what's been freely developed and exploit it has certainly been attempted, most famously by Richard Stallman's efforts to use copyright law to create software licenses that would bar companies from taking free software and locking it up into proprietary software. It's part of what Creative Commons is about, too: giving people the ability to easily specify how their work may be used. Barring commercial exploitation without payment is a popular option: most people want a cut when they see others making a profit from their work.

The problem, unfortunately, is that it isn't really possible to create an open system that can *only* be used by the "good guys" in "good" ways. The "free speech, not free beer" analogy Stallman used to explain "free software" applies. You can make licensing terms that bar Microsoft from taking GNU/Linux, adding a new user interface, and claiming copyright in the whole thing. But you can't make licensing terms that bar people using Linux from using it to build wiretapping boxes for governments to install in ISPs to collect everyone's email. If you did, either the terms wouldn't hold up in a court of law or it would no longer be free software but instead proprietary software controlled by a well-meaning elite.

One of the fascinating things about the early days of the Internet is the way everyone viewed it as an unbroken field of snow they could mold into the image they wanted. What makes the Internet special is that any of those models really can apply: it's as reasonable to be the entertainment industry and see it as a platform that just needs some locks and laws to improve its effectiveness as a distribution channel as to be Bill Thompson and view it as a platform for social justice that's in danger of being subverted.

One could view the legal history of The Pirate Bay as a worked example, at least as it's shown in the documentary TPB-AFK: The Pirate Bay - Away From Keyboard, released in February and freely downloadable under a Creative Commons license from a torrent site near you (like The Pirate Bay). The documentary got the best possible publicity this week, when the movie studios issued DMCA takedown notices to a batch of sites.

I'm not sure what leg their DMCA claims could stand on, so the most likely explanation is the one TorrentFreak came up with: that the notices are collateral damage. The only remotely likely thing in the documentary to have set them off - other than simple false positives - is the four movie studio logos that appear in it.

There are many lessons to take away from the movie, most notably how much more nuanced the TPB founders' views are than they came across at the time. My favorite moment is probably when Fredrik Neij discusses the opposing counsels' inability to understand how TPB actually worked: "We tried to get organized, but we failed every single time." Instead, no boss, no contracts, no company. "We're just a couple of guys in a chat room." My other favorite is the moment when Monique Wadsted, Hollywood's lawyer on the case, explains that the notion that young people are disaffected with copyright law is a myth.

"We prefer AFK to IRL," says one of the founders, "because we think the Internet is real."

Given its impact on their business, I'm sure the entertainment industry thinks the Internet is real, too. They're just one of many groups who would like to close down the Internet so it can't be exploited by the "bad guys": security people, governments, child protection campaigners, and so on. Open data will be no different. So, sadly, my answer to Bill Thompson is no, there probably isn't a way to do what he has in mind. Closed in the name of social justice is still closed. Open systems can be exploited by both good and bad guys (for your value of "good" and "bad"); the group exploiting a closed system is always *someone's* bad guy.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted irregularly during the week at the net.wars Pinboard - or follow on Twitter.


December 14, 2012

Defending Facebook

The talks at the monthly Defcon London are often too abstruse for the "geek adjacent". Not so this week, when Chris Palow, Facebook's London-based engineering manager for site integrity, outlined the site's efforts to defend itself against attackers.

This is no small thing: the law of truly large numbers means that a tiny percentage of a billion users is still a lot of abusers. And Palow has had to scale up very quickly: when he joined five years ago, the company had 30 million users. Today, that's just a little more than a third of the site's *fake* accounts, based on the 83 million the company claimed in its last quarterly SEC filing.

As became rapidly apparent, there are fakes and there are fakes. Most of those 83 million are relatively benign: accounts for people's dogs, public/private variants, duplicate accounts created when a password is lost, and so on. The rest, about 1.5 percent - which is still 14 million - are the troublemakers, spreading spam and malicious links such as the Koobface worm. Eliminating these is important; there is little more damaging to a social network than rampant malware that leverages the social graph to put users in danger in a space they use because they believe it is safe.

This is not an entirely new problem, but none of the prior solutions are really available to Facebook. Prehistoric commercial social environments like CompuServe and AOL, because people paid to use them, could check credit cards. (Yes, the irony was that in the window between sign-up and credit card verification lay a golden opportunity for abusers to harass the rest of the Net from throwaway email accounts.) Usenet and other free services were defenseless against malicious posters, and despite volunteer community efforts most of the audience fled as a result. As a free service whose business model requires scale, Facebook can't require a credit card or heavyweight authentication, and its ad-supported business model means it can't afford to lose any of its audience, so it's damned in all directions. It's also safe to say that the online criminal underground is hugely more developed and expert now.

Fake accounts are the entry points for all sorts of attacks; besides the usual phishing attacks and botnet recruitment, the more fun exploit is using those links to vacuum up people's passwords in order to reuse them on all the other sites across the Web where those same people have used those same passwords.

So a lot of Palow's efforts are directed at making sure those accounts don't get opened in the first place. Detection is a key element; among other techniques is a lightweight captcha-style request to identify a picture.

"It's still easy for one user to have three or four accounts," he said, "but we can catch anyone registering 1 million fakes. Most attacks need scale."

For the small-scale 16-year-old in the bedroom, he joked that the most effective remedy is found in the site's social graph: their moms are on Facebook. In a more complicated case from the Philippines, in which cheap human labor was used to open 500 accounts a day to spam links selling counterfeit athletic shoes, the miscreants talked about their efforts *on* Facebook.

Another key is preventing, or finding and fixing, bugs in the code that runs the site. Among the strategies Palow listed for this, which included general improvements to coding practice such as better testing, regular reviews, and static and dynamic analysis, is befriending the community of people who find and report bugs.

Once accounts have been created, spotting the spammers involves looking for patterns that sound very much like the ones that characterize Usenet spam: are the same URLs being posted across a range of accounts, do those accounts show other signs of malware infection, are they posting excessively on a single channel, and so on.
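
Palow didn't walk through an implementation, but the shape of the heuristic is simple enough to sketch. A toy version in Python - the data, names, and threshold here are all invented for illustration, nothing to do with Facebook's real systems:

    from collections import defaultdict

    # Toy posting log of (account, url) pairs. At Facebook's scale this
    # would be a stream of billions of events, not an in-memory list.
    posts = [
        ("alice",  "http://example.com/news"),
        ("bot001", "http://spam.example/shoes"),
        ("bot002", "http://spam.example/shoes"),
        ("bot003", "http://spam.example/shoes"),
        ("bob",    "http://example.com/tennis"),
    ]

    def suspicious_urls(posts, min_accounts=3):
        """Flag URLs posted by an unusually large number of distinct accounts."""
        accounts_by_url = defaultdict(set)
        for account, url in posts:
            accounts_by_url[url].add(account)
        return [url for url, accounts in accounts_by_url.items()
                if len(accounts) >= min_accounts]

    print(suspicious_urls(posts))  # ['http://spam.example/shoes']

A production version layers on the other signals - malware indicators, per-channel posting rates - but the pattern is the same one that betrayed Usenet spam: aggregate, count, threshold.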

Other, more complex historical attacks include the Tunisian government's effort to steal passwords. Palow also didn't have many nice things to say about ad-replacement schemes such as the now-defunct Phorm.

The current hot issue is what Palow calls "toolbars" and I would call browser extensions. Many of these perform valuable functions from the user's point of view, but the price, which most users don't see until it's too late, is that they operate across all open windows, from your insecure reading of the tennis headlines to your banking session. This particular issue is beginning to be locked down by browser vendors, who are implementing content security policies, essentially the equivalent of the Android and iOS curated app stores. As this work is progressing at different rates, in some cases Facebook can leverage the browsers' varying blocking patterns to identify malware.
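
For readers who haven't met one: a content security policy is delivered as an HTTP response header in which a site declares where active content may legitimately come from. A minimal, hypothetical example (the directive syntax is standard CSP; the domain is a placeholder):

    Content-Security-Policy: script-src 'self' https://widgets.example.com; object-src 'none'

A browser that honors this refuses to run scripts loaded from anywhere else - including many of the scripts a rogue toolbar tries to inject into the page. At the time of writing the header is still settling down, with Firefox shipping it as X-Content-Security-Policy and WebKit browsers as X-WebKit-CSP, which is presumably the uneven rollout Facebook can turn into a detection signal.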

More complex responses involve partnerships with software and anti-virus vendors. There will be more of this: the latest trend is stealing tokens on Facebook (such as the iPhone Facebook app's token) to enable spamming off-site.

An audience member commented that sometimes it's more effective in the long term to let the miscreants ride for a month while you formulate a really heavy response, and then drop the anvil. Perhaps: but this is the law of truly large numbers again. When you have a billion users, a shocking number of people can be damaged during that month. Palow's life, therefore, is likely to continue to be patch, patch, patch.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series.


November 2, 2012

Survival instincts

The biggest divide in New York this week, in the wake of Hurricane Sandy, has been, as a friend pointed out, between the people who had to worry about getting to work and the people who didn't. Reuters summed this up pretty well. Slightly differently, The Atlantic had it as three New Yorks: one underwater, one dark and dry, and one close to normal. The stories I've read since by people living in "dark and dry" emerging into the light at around 40th Street bear out just how profound the difference is between the powerless and the empowered - in the electrical sense.

This is not strictly speaking about rich and poor (although the Reuters piece linked above makes the point that the city is more economically divided than it has been in some time); the Lower Manhattan area known as Tribeca, for example, is home to plenty of wealthy people - and was flooded. Instead, my friend's more profound divide is about whether you do the kind of work that requires physical presence. Freelance writers, highly paid software engineers, financial services personnel, and a load of other people can work remotely. If your main office is a magazine or a large high-technology company like, say, Google, whose New York building is at 15th and 8th, as long as you have failover systems in place so that your network and data centers keep operating, your staff can work from wherever they can find power and Internet access. Even small companies and start-ups can keep going if their systems are built on or can failover to the right technology.

One of my favorite New York retailers, J&R (they sell everything from music to computers from a series of stores in lower Manhattan, not far from last year's Occupy Wall Street site), perfectly demonstrated this digital divide. Its Web site noted yesterday (Thursday) that the shops, located in "dark and dry", are all closed - but the site is taking orders as normal.

Plumbers, doormen, shop owners, restaurateurs, and fire fighters, on the other hand, have to be physically present - and they are vital in keeping the other group functioning. So in one sense the Internet has made cities much more resilient, and in another it hasn't made a damn bit of difference.

The Internet was still very young when people began worrying about the potential for a "digital divide". Concerns surfaced early about the prospects for digital exclusion of vulnerable groups such as the elderly, the cognitively impaired, and the poor, as well as those in rural areas poorly served by the telecommunications infrastructure. And these are the groups that, in the UK, efforts at digital engagement are intended to help.

Yet the more significant difference may be not who *is* online - after all, why should anyone be forced online who doesn't want to go? - but who can *work* online. Like traveling with only carry-on luggage, it makes for a more flexible life that can be altered to suit conditions. If your physical presence is not required, today you avoided long lines and fights at gas stations, traffic jams approaching the bridges and tunnels, waits for buses, and long trudges from the last open subway station to your actual destination.

This is not the place to argue about climate change. A single storm is one data point in a pattern that is measured in timespans longer than those of individual human lives.

Nonetheless, it's interesting to note that this storm may be the catalyst the US needed to stop dragging its feet. As Business Week indicates, the status quo is bad for business, and the people making this point are the insurance companies, not scientists who can be accused of supporting the consensus in the interests of retaining their grant money (something that's been said to me recently by people who normally view a scientific consensus as worth taking seriously).

There was a brief flurry of argument this week on Dave Farber's list about whether or not the Internet was designed to survive a bomb. I thought this had been made clear by contemporary historians long ago: while the immediate impetus was to make it easy for people to share files and information, DARPA's goal was very much also to build resilient networks. And, given that New York City is a telecommunications hub, it's clear we've done pretty well with this idea, especially after the events of September 11, 2001 forced network operators to rethink their plans for coping with emergencies.

It seems clear that the next stage will be coming up with better strategies for making cities more resilient. Ultimately, the cause of climate change doesn't matter: if "freak" weather patterns are producing more and more extreme storms and natural disasters, then it's only common sense to try to plan for them: disaster recovery for municipalities rather than businesses. The world's reinsurance companies - the companies that eventually bear the brunt of the costs - are going to insist on it.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series.


October 26, 2012

Lie to me

I thought her head was going to explode.

The discussion that kicked off this week's Parliament and Internet conference revolved around cybersecurity and trust online, harmlessly at first. Then Helen Goodman (Labour - Bishop Auckland), the shadow minister for Culture, Media, and Sport, raised a question: what was Nominet doing to get rid of anonymity online? Simon McCalla, Nominet's CTO, had some answers: primarily, they're constantly trying to improve the accuracy and reliability of the Whois database, but it's only a very small criminal element that engages in false domain name registration. Like that.

A few minutes later, Andy Smith, PSTSA Security Manager at the Cabinet Office, in answer to a question about why the government was joining the Open Identity Exchange (as part of the Identity Assurance Programme), advised those assembled to protect themselves online by lying. Don't give your real name, date of birth, and other information that can be used to perpetrate identity theft.

Like I say, bang! Goodman was horrified. I was sitting near enough to feel the splat.

It's the way of now that the comment was immediately tweeted, picked up by the BBC reporter in the room, published as a story, retweeted, Slashdotted, tweeted some more, and finally boomeranged back to be recontextualized from the podium. Given a reporter with a cellphone and multiple daily newspaper editions, George Osborne's contretemps in first class would still have reached the public eye the same day 15 years ago. This bit of flashback couldn't have happened even five years ago.

For the record, I think it's clear that Smith gave good security advice, and that the headline - the greater source of concern - ought to be that Goodman, an MP apparently frequently contacted by constituents complaining about anonymous cyberbullying, doesn't quite grasp that this is a nuanced issue with multiple trade-offs. (Or, possibly, how often the cyberbully is actually someone you know.) Dates of birth, mothers' maiden names, the names of first pets...these are all things that real-life friends and old schoolmates may well know, and lying about the answers is a perfectly sensible precaution given that there is often no choice about giving the real answers for more sensitive purposes, like interacting with government, medical, and financial services. It is not illegal to fake or refuse to disclose these things, and while Facebook has a real-names policy it's enforced with so little rigor that the site has a roster of fake accounts the size of Egypt.

Although: the Earl of Erroll might be a bit busy today changing the fake birth date - April 1, 1900 - that he cheerfully told us and Radio 4 he uses throughout; one can only hope that he doesn't use his real mother's maiden name, since that, as Tom Scott pointed out later, is in Erroll's Wikipedia entry. Since my real birth date is also in *my* Wikipedia entry and who knows what I've said where, I routinely give false answers to standardized security questions. What's the alternative? Giving potentially thousands of people the answers that will unlock your bank account? On social networking sites it's not enough for you to be taciturn; your birth date may be easily outed by well-meaning friends writing on your wall. None of this is - or should be - illegal.

It turns out that it's still pretty difficult to explain to some people how the Internet works - or why. Nominet can work as hard as it likes on verifying its own Whois database, but it is powerless over the many UK citizens and businesses that choose to register under .com, .net, and other gTLDs and country codes. Making a law to enjoin British residents and companies from registering domains outside of .uk...well, how on earth would you enforce that? And then there's the whole problem of trying to check, say, registrations in Chinese characters. Computers can't read Chinese? Well, no, not really, no matter what Google Translate might lead you to believe.

Anonymity on the Net has been under fire for a long, long time. Twenty years ago, the main source of complaints was AOL, whose million-CD marketing program made it easy for anyone to get a throwaway email address for 24 hours or so, until the system locked you out for providing an invalid credit card number. Then came Hotmail, and you didn't even need that. Then, as now, there were good and bad reasons for being anonymous. For every nasty troll who uses the cloak to hide there are many whistleblowers and people in private pain who need its protection.

Smith's advice only sounds outrageous if, like Goodman, you think there's a valid comparison between Nominet's registration activity and the function of the Driver and Vehicle Licensing Agency (and if you think the domain name system is the answer to ensuring a traceable online identity). And therein lies the theme of the day: the 200-odd Parliamentarians, consultants, analysts, government, and company representatives assembled repeatedly wanted incompatible things in conflicting ways. The morning speakers wanted better security, stronger online identities, and the resources to fight cybercrime; the afternoon folks were all into education and getting kids to hack and explore so they learn to build things and understand things and maybe have jobs someday, to their own benefit and that of the rest of the country. Paul Bernal has a good summary.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


October 19, 2012

Finding the gorilla

"A really smart machine will think like an animal," predicted Temple Grandin at last weekend's Singularity Summit. To an animal, she argued, a human on a horse often looks like a very different category of object than a human walking. That seems true; and yet animals also live in a sensory-driven world entirely unlike that of machines.

A day later, Melanie Mitchell, a professor of computer science at Portland State University, argued that analogies are key to human intelligence, producing landmark insights like comparing a brain to a computer (von Neumann) or evolutionary competition to economic competition (Darwin). This is true, although that initial analogy is often insufficient and may even be entirely wrong. A really significant change in our understanding of the human brain came with research by psychologists like Elizabeth Loftus showing that where computers retain data exactly as it was (barring mechanical corruption), humans improve, embellish, forget, modify, and partially lose stored memories; our memories are malleable and unreliable in the extreme. (For a worked example, see The Good Wife, season 1, episode 6.)

Yet Mitchell is obviously right when she says that much of our humor is based on analogies. It's a staple of modern comedy, for example, for a character to respond on a subject *as if* it were another subject (chocolate as if it were sex, a pencil dropping on Earth as if it were sex, and so on). Especially incongruous analogies: when Watson asks - in the video clip she showed - for the category "Chicks dig me" it's funny because we know that as a machine a) Watson doesn't really understand what it's saying, and b) Watson is pretty much the polar opposite of the kind of thing that "chicks" are generally imagined to "dig".

"You are going to need my kind of mind on some of these Singularity projects," said Grandin, meaning visual thinkers, rather than the mathematical and verbal thinkers who "have taken over". She went on to contend that visual thinkers are better able to see details and relate them to each other. Her example: the emergency generators at Fukushima located below the level of a plaque 30 feet up on the seawall warning that flood water could rise that high. When she talks - passionately - about installing mechanical overrides in the artificial general intelligences Singularitarians hope will be built one day soonish, she seems to be channelling Peter G. Neumann, who talks often about the computer industry's penchant for repeating the security mistakes of decades past.

An interesting sideline about the date of the Singularity: Oxford's Stuart Armstrong has studied these date predictions and concluded pretty much that, in the famed words of William Goldman, no one knows anything. Based on his study of 257 predictions collected by the Singularity Institute and published on its Web site, he concluded that most theories about these predictions are wrong. The dates chosen typically do not correlate with the age or expertise of the predictor or the date of the prediction. I find this fascinating: there's something like an 80 percent consensus that the Singularity will happen in five to 100 years.

Grandin's discussion of visual thinkers made me wonder whether they would be better or worse at spotting the famed invisible gorilla than most people. Spoiler alert: if you're not familiar with this psychology test, go now and watch the clip before proceeding. You want to say better - after all, spotting visual detail is what visual thinkers excel at - but what if the demands of counting passes are more all-consuming for them than for other types of thinkers? The psychologist Daniel Kahneman, participating by video link, talked about other kinds of bias but not this one. Would visual thinkers be more or less likely to engage in the common human pastime of believing we know something based on too little data and then ignoring new data?

Today's Bayesian systems, by contrast, make a guess and then refine it as more data arrives: almost the exact opposite of the humans Kahneman describes. So many of the developments we're seeing now rely on crunching masses of data (often characterized as "big" but often not *really* all that big) to find subtle patterns that humans never spot. Linda Avey, co-founder of the personal genome profiling service 23andMe, and John Wilbanks are both trying to provide services that will allow individuals to take control of and understand their personal medical data. Avey in particular seems poised to link in somehow to the data generated by seekers in the several-year-old quantified-self movement.

This approach is so far yielding some impressive results. Peter Norvig, the director of research at Google, recounted both the company's work on recognizing cats and its work on building Google Translate. The latter's patchy quality seems more understandable when you learn that it was built by matching documents issued in multiple languages against each other and building up statistical probabilities. The former seems more like magic, although Slate points out that the computers did not necessarily pick out the same patterns humans would.
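
The document-matching idea can be caricatured in a few lines. This is a toy illustration of the statistical principle only - four invented sentence pairs and one-word "translations", nothing like Google's actual pipeline:

    from collections import Counter

    # Tiny parallel corpus of aligned English/French sentences - a stand-in
    # for the multilingual document pairs described above.
    pairs = [
        ("the cat sleeps", "le chat dort"),
        ("the dog sleeps", "le chien dort"),
        ("the cat eats",   "le chat mange"),
        ("the dog eats",   "le chien mange"),
    ]

    cooc = Counter()    # how often an (english, french) pair shares a sentence
    f_freq = Counter()  # how often each french word appears at all
    for en, fr in pairs:
        fr_words = fr.split()
        f_freq.update(fr_words)
        for e in en.split():
            for f in fr_words:
                cooc[(e, f)] += 1

    def translate(word):
        """Guess the target word whose appearances best track `word`'s."""
        scores = {f: cooc[(word, f)] / f_freq[f] for f in f_freq}
        return max(scores, key=scores.get)

    print(translate("cat"))  # 'chat' - present in every sentence where "cat" is

With millions of document pairs instead of four sentences, the same counting yields usable translation tables - and the patchiness shows up exactly where the counts are thin.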

Well, why should they? Do I pick out the patterns they're interested in? The story continues...

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 12, 2012

My identity, my self

Last week, the media were full of the story that the UK government was going to start accepting Facebook logons for authentication. This week, in several presentations at the RSA Conference, representatives of the Government Digital Service begged to differ: the list of companies that have applied to become identity providers (IDPs) will be published at the end of this month, and until then they are not confirming the presence or absence of any particular company. According to several of the spokesfolks manning the stall and giving presentations, the press just assumed that when they saw social media companies among the categories of organization that might potentially want to offer identity authentication, that meant Facebook. We won't know for another few weeks who has actually applied.

So I can mercifully skip the rant that hooking a Facebook account to the authentication system you use for government services is a horrible idea in both directions. What they're actually saying is, what if you could choose among identification services offered by the Post Office, your bank, your mobile network operator (especially for the younger generation), your ISP, and personal data store services like Mydex or small, local businesses whose owners are known to you personally? All of these sounded possible based on this week's presentations.

The key, of course, is what standards the government chooses to create for IDPs and which organizations decide they can meet those criteria and offer a service. Those are the details the devil is in: during the 1990s battles about deploying strong cryptography, the government wanted copies of everyone's cryptography keys to be held in escrow by a Trusted Third Party. At the time, the frontrunners were banks: the government certainly trusted those, and imagined that we did, too. The strength of the disquiet over that proposal took them by surprise. Then came 2008. Those discussions still matter, however; someone with a long memory raised the specter of Part I of the Electronic Communications Act 2000, modified in 2005, as relevant here.

It was this historical memory that made some of us so dubious in 2010, when the US came out with proposals rather similar to the UK's present ones, the National Strategy for Trusted Identities in Cyberspace (NSTIC). Ross Anderson saw it as a sort of horror-movie sequel. On Wednesday, however, Jeremy Grant, the senior executive advisor for identity management at the US National Institute for Standards and Technology (NIST), the agency charged with overseeing the development of NSTIC, sounded a lot more reassuring.

Between then and now came both US and UK attempts to establish some form of national ID card. In the US, "Real ID" focused on the state authorities that issue driver's licenses. In the UK, it was the national ID card and accompanying database. In both countries the proposals got howled down. In the UK especially, the combination of an escalating budget, a poor record with large government IT projects, a change of government, and a desperate need to save money killed it in 2010.

Hence the new approach in both countries. From what the GDS representatives - David Rennie (head of proposition at the Cabinet Office), Steven Dunn (lead architect of the Identity Assurance Programme; Twitter: @cuica), Mike Pegman (security architect at the Department for Work and Pensions, expected to be the first user service; Twitter: @mikepegman), and others manning the GDS stall - said, the plan is much more like the structure that privacy advocates and cryptographers have been pushing for 20 years: systems that give users choice about who they trust to authenticate them for a given role and that share no more data than necessary. The notion that this might actually happen is shocking - but welcome.

None of which means we shouldn't be asking questions. We need to understand clearly the various envisioned levels of authentication. In practice, will those asking for identity assurance ask for the minimum they need or always go for the maximum they could get? For example, a bar only needs relatively low-level assurance that you are old enough to drink; but will bars prefer to ask for full identification? What will be the costs; who pays them and under what circumstances?

Especially, we need to know the detail of the standards organizations must meet to be accepted as IDPs - in particular, what kinds of organization they exclude. The GDS as presently constituted - composed, as William Heath commented last year, of all the smart, digitally experienced people you *would* hire to reinvent government services for the digital world if you had the choice - seems to have its heart in the right place. Their proposals as outlined - conforming, as Pegman explained happily, to Kim Cameron's seven laws of identity - pay considerable homage to the idea that no one party should have all the details of any given transaction. But the surveillance-happy type of government that legislates for data retention and CCDP might also at some point think: hey, shouldn't we be requiring IDPs to retain all data (requests for authentication, and so on) so we can inspect it should we deem it necessary? We certainly want to be very careful not to build a system that could support such intimate secret surveillance - the fundamental objection all along to key escrow.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series.


October 5, 2012

The doors of probability

Mike Lynch has long been the most interesting UK technology entrepreneur. In 2000, he became Britain's first software billionaire. In 2011 he sold his company, Autonomy, to Hewlett-Packard for $10 billion. A few months ago, Hewlett-Packard let him escape back into the wild of Cambridge. We've been waiting ever since for hints of what he'll do next; on Monday, he showed up at NESTA to talk about his adventures with Wired UK editor David Rowan.

Lynch made his name and his company by understanding that the rule formulated around 1750 by the English Presbyterian minister and mathematician Thomas Bayes could be applied to getting machines to understand unstructured data. These days, Bayes is an accepted part of the field of statistics, but for a couple of centuries anyone who embraced his ideas would have been unwise to admit it. That only started to change in the 1980s, when people began to realize the value of his ideas.

"The work [Bayes] did offered a bridge between two worlds," Lynch said on Monday: the post-Renaissance world of science, and the subjective reality of our daily lives. "It leads to some very strange ideas about the world and what meaning is."

As Sharon Bertsch McGrayne explains in The Theory That Would Not Die, Bayes was offering a solution to the inverse probability problem. You have a pile of encrypted code, or a crashed airplane, or a search query: all of these are effects; your problem is to find the most likely cause. (Yes, I know: to us the search query is the cause and the page of search results is the effect; but consider it from the computer's point of view.) Bayes' idea was to start with a 50/50 random guess and refine it as more data changes the probabilities in one direction or another. When you type "turkey" into a search engine it can't distinguish between the country and the bird; when you add "recipe" you increase the probability that the right answer is instructions on how to cook one.
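
Made concrete, that update is just a multiply-and-renormalize. A minimal sketch in Python - the likelihood numbers are invented purely for illustration, not drawn from any real search engine:

    # Bayes' rule: P(cause | evidence) is proportional to
    # P(evidence | cause) * P(cause).
    priors = {"country": 0.5, "bird": 0.5}      # the 50/50 starting guess
    likelihood = {"country": 0.01, "bird": 0.4} # assumed P("recipe" | cause)

    unnormalized = {c: priors[c] * likelihood[c] for c in priors}
    total = sum(unnormalized.values())
    posteriors = {c: p / total for c, p in unnormalized.items()}

    print(posteriors)  # {'country': 0.024..., 'bird': 0.975...}

One extra word of evidence moves the machine from a coin flip to near-certainty that you want cooking instructions; repeat for every new scrap of data and you have the refinement loop Lynch describes.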

Note, however, that search engines work on structured data: tags, text content, keywords, and metadata all go into building an index they can run over to find the hits. What Lynch is talking about is the stuff that humans can understand - raw emails, instant messages, video, audio - that until now has stymied the smartest computers.

Most of us don't really like to think in probabilities. We assume every night that the sun will rise in the morning; we call a mug a mug and not "a round display of light and shadow with a hole in it" in case it's really a doughnut. We also don't go into much detail in making most decisions, no matter how much we justify them afterwards with reasoned explanations. Even decisions that are in fact probabilistic - such as those of the electronic line-calling device Hawk-Eye used in tennis and cricket - we prefer to display as though they were infallible. We could, as Cardiff professor Harry Collins argued, take the opportunity to educate people about probability: the on-screen virtual reality animation could include an estimate of the margin for error, or the probability that the system is right (much the way IBM did in displaying Watson's winning Jeopardy answers). But apparently it's more entertaining - and sparks fewer arguments from the players - to pretend there is no fuzz in the answer.

Lynch believes we are just at the beginning of the next phase of computing, in which extracting meaning from all this unstructured data will bring about profound change.

"We're into understanding analog," he said. "Fitting computers to use instead of us to them." In addition, like a lot of the papers and books on algorithms I've been reading recently, he believes we're moving away from the scientific tradition of understanding a process to get an outcome and into taking huge amounts of data about outcomes and from it extracting valid answers. In medicine, for example, that would mean changing from the doctor who examines a patient, asks questions, and tries to understand the cause of what's wrong with them in the interests of suggesting a cure. Instead, why not a black box that says, "Do these things" if the outcome means a cured patient? "Many people think it's heresy, but if the treatment makes the patient better..."

At the beginning, Lynch said, the Autonomy founders thought the company could be worth £2 to £3 million. "That was our idea of massive back then."

Now, with his old Autonomy team, he is looking to invest in new technology companies. The goal, he said, is to find new companies built on fundamental technology whose founders are hungry and strongly believe that they are right - but are still able to listen and learn. The business must scale, requiring little or no human effort to service increased sales. With that recipe he hopes to find the germs of truly large companies - not the put-in-£10-million, sell-out-at-£80-million strategy he sees as most common, but multi-billion-pound companies. The key is finding that fundamental technology, something where it's possible to pick a winner.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


September 21, 2012

This is not (just) about Google

We had previously glossed over the news, in February, that Google had overridden the default cookie-blocking privacy settings in Apple's Safari Web browser, used on both its desktop and mobile machines. For various reasons, tracking protection of this kind - Do Not Track included - is a divisive issue, pitting those who favour user control over privacy against those who ask exactly how people plan to pay for all that free content if not through advertising. But there was little disagreement about this: Google goofed badly in overriding users' clearly expressed preferences. Google promptly disabled the code, but the public damage was done - and probably made worse by the company's initial response.

In August, the US Federal Trade Commission fined Google $22.5 million for that little escapade. Pocket change, you might say, and compared to Google's $43.6 billion in 2011 revenues you'd be right. As the LSE's Edgar Whitley pointed out on Monday, a sufficiently large company can also view such a fine strategically: paying might be cheaper than fixing the problem. I'm less sure: fines have a way of going up a lot if national regulators believe a company is deliberately and repeatedly flouting their authority. And to any of the humans reviewing the fine - neither Page nor Brin grew up particularly wealthy, and I doubt Google pays its lawyers more than six figures - I'd bet $22.5 million still seems pretty much like real money.

On Monday, Simon Davies, the founder and former director of Privacy International, convened a meeting at the LSE to discuss this incident and its eventual impact. This was when it became clear that whatever you think about Google in particular, or online behavioral advertising in general, the questions it raises will apply widely to the increasing numbers of highly complex computer systems in all sectors. How does an organization manage complex code? What systems need to be in place to ensure that code does what it's supposed to do, no less - and no more? How do we make these systems accountable? And to whom?

The story in brief: Stanford PhD student Jonathan Mayer studies the intersection of technology and privacy, not by writing thoughtful papers studying the law but empirically, by studying what companies do and how they do it and to how many millions of people.

"This space can inherently be measured," he said on Monday. "There are wide-open policy questions that can be significantly informed by empirical measurements." So, for example, he'll look at things like what opt-out cookies actually do (not much of benefit to users, sadly), what kinds of tracking mechanisms are actually in use and by whom, and how information is being shared between various parties. As part of this, Mayer got interested in identifying the companies placing cookies in Safari; the research methodology involved buying ads that included codes enabling him to measure the cookies in place. It was this work that uncovered Google's bypassage of Safari's Do Not Track flag, which has been enabled by default since 2004. Mayer found cookies from four companies, two of which he puts down to copied and pasted circumvention code and two of which - Google and Vibrant - he were deliberate. He believes that the likely purpose of the bypass was to enable social synchronizing features (such as Google+'s "+1" button); fixing one bit of coded policy broke another.

This wasn't much consolation to Whitley, however: where are the quality controls? "It's scary when they don't really tell you that's exactly what they have chosen to do as explicitly corporate policy. Or you have a bunch of uncontrolled programmers running around in a large corporation providing software for millions of users. That's also scary."

And this is where, for me, the issue at hand jumped from the parochial to the global. In the early days of the personal computer or of the Internet, it didn't matter so much if there were software bugs and insecurities, because everything based on them was new and understood to be experimental enough that there were always backup systems. Now we're in the computing equivalent of the intermediate period in a pilot's career, which is said to be the more dangerous time: that between having flown enough to think you know it all, and having flown enough to know you never will. (John F. Kennedy, Jr, was in that window when he crashed.)

Programmers are rarely brought into these kinds of discussions, yet are the people at the coalface who must transpose human language laws, regulations, and policies into the logical precision of computer code. As Danielle Citron explains in a long and important 2007 paper, Technological Due Process, that process inevitably generates many errors. Her paper focuses primarily on several large, automated benefits systems (two of them built by EDS) where the consequences of the errors may be denying the most needy and vulnerable members of society the benefits the law intends them to receive.

As the LSE's Chrisanthi Avgerou said, these issues apply across the board, in major corporations like Google, but also in government, financial services, and so on. "It's extremely important to be able to understand how they make these decisions." Just saying, "Trust us" - especially in an industry full of as many software holes as we've seen in the last 30 years - really isn't enough.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


July 14, 2012

The ninth circle of HOPE

Why do technologies fail? And what do we mean by failure?

These questions arise in the first couple of hours of HOPE 9, this year's edition of the hacker conference run biennially by 2600, the hacker quarterly.

Technology failure has a particular meaning in the UK, where large government projects have traditionally wasted vast amounts of public money and time. Many failures are more subtle. To take a very simple example: this morning, the elevators failed. It was not a design flaw or loss of functionality: the technology worked perfectly as intended. It was not a usability flaw: what could be simpler than pushing a button? It was not even an accessibility or availability flaw: there were plenty of elevators. What it was, in fact, was a social - or perhaps a contextual - flaw. This group of people who break down complex systems to their finest components to understand them and make them jump through hoops simply failed to notice or read the sign that gave the hours of operation, even though it was written in big letters and placed at eye level, just above the call button. This was, after all, well-understood technology that needed no study. And so they stood around in groups, waiting until someone came, pointed out the sign, and chased them away. RTFM, indeed.

But this is what humans do: we make assumptions based on our existing knowledge. To the person with a hammer, everything looks like a nail. To the person with a cup and nowhere to put it, the unfamiliar CD drive looks like a cup holder. To the kids discovering the Hole in the Wall project, a 2000 experiment with installing a connected computer in an Indian slum, the familiar wait-and-wait-some-more hourglass was a drum. Though that last is only a failure if you think it's important that the kids know it's an hourglass; they understood perfectly well the thing that mattered, which is that it was a sign the thing in the wall was doing something and they had to wait.

We also pursue our own interests, sometimes at the expense of what actually matters in a situation. Far Kron, speaking on the last four years of community fabrication, noted that the Global Village Construction Set, which is intended to include a full set of the machines necessary to build a civilization, includes nothing to aid more mundane things like fetching fresh water and washing clothes, which are overall a bigger drain on human time. I am tempted to suggest that perhaps the project needs to recruit some more women (who around the world tend to do most of the water fetching and clothes washing), but it may simply be that small, daily chores are things you worry about after you have your village. (Though this is the inverse of how human settlements have historically worked.)

A more intriguing example, cited by Chris Anderson, a former organizer with New York's IndyMedia, in the early panel on Technology to Change Society that inspired this piece, is Twitter. How is one of the most important social networks and messaging platforms in the world a failure?

"If you define success in technical terms you might only *be* successful in technical terms," he said. Twitter, he explained grew out of a number of prior open-source projects the founders were working. "Indymedia saw technology as being in service to goals, but lacks the social goals those projects started with."

Gus Andrews, producer of The Media Show, a YouTube series on digital media literacy, focused on the hidden assumptions creators make. The makers of One Laptop Per Child, for example, believed that open source software was vital to the project: being able to fix the software was a crucial benefit for the recipients.

In 2000, Lawrence Lessig argued that "code is law", and that technological design controls how it can be used. Andrews took a different view: "To believe that things are ineluctably coded into technology is to deny free will." Pointing at Everett Rogers' 1995 book, The Diffusion of Innovations, she said, "There are things we know about how technology enacts social change, and one of the things we know is that it's not the technology."

Not the technology? You might think that if anyone were going to be technology-obsessed it would be the folks at a hacker conference. And certainly the public areas are filled with people fiddling with radio frequencies, teaching others to solder, and showing off their latest 3D printers and their creations (this year's vogue: printing in brightly colored Lego plastic). But the roots of the hacker movement in general and of 2600 in particular are as much social and educational as they are technological.

Eric Corley, who has styled himself "Emmanuel Goldstein", edits the magazine, and does a weekly radio show for WBAI-FM in New York. At a London hacker conference in 1995, he summed up this ethos for me (and The Independent) by talking about hacking as a form of consumer advocacy. His ideas about keeping the Internet open and free, and about ferreting out information corporations would rather keep hidden, were niche - and to many people scary - then, but mainstream now.

HOPE continues through Sunday.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 1, 2012

The pet rock manifesto

I understand why government doesn't listen to security experts on topics where their advice conflicts with the policies it likes. For example: the Communications Capabilities Development Programme, where experts like Susan Landau, Bruce Schneier, and Ross Anderson have all argued persuasively that a hole is a hole and creating a vulnerability to enable law enforcement surveillance is creating a vulnerability that can be exploited by...well, anyone who can come up with a way to use it.

All of that is of a piece with recent UK and US governments' approach to scientific advice in general, as laid out in The Geek Manifesto, the distillation of Mark Henderson's years of frustration serving as science correspondent at The Times (he's now head of communications for the Wellcome Trust). Policy-based evidence instead of evidence-based policy, science cherry-picked to support whatever case a minister has decided to make, the role of well-financed industry lobbyists - it's all there in that book, along with case studies of the consequences.

What I don't understand is why government rejects experts' advice when there's no loss of face involved, and where the only effect on policy would be to make it better, more relevant, and more accurately targeted at the problem it's trying to solve. Especially *this* government, which in other areas has come such a long way.

Yet this is my impression from Wednesday's Westminster eForum on the UK's Cybersecurity strategy (PDF). Much was said - for example, by James Quinault, the director of the Office of Cybersecurity and Information Assurance - about information and intelligence sharing and about working collaboratively to mitigate the undeniably large cybersecurity threat (even if it's not quite as large as BAE Systems Detica's seemingly-pulled-out-of-the-air £27 billion would suggest; Detica's technical director, Henry Harrison, didn't exactly defend that number, but said no one's come up with a better estimate for the £17 billion that report attributed to cyberespionage).

It was John Colley, the managing director EMEA for (ISC)2, who said it: in a meeting he attended late last year with, among others, the MP James Brokenshire, Minister for Crime and Security at the Home Office, shortly before the publication of the UK's four-year cybersecurity strategy (PDF), he asked who the document's formulators had talked to among practitioners, "the professionals involved at the coal face". The answer: well, none. GCHQ wrote a lot of it (no surprise, given the frequent, admittedly valid, references to its expertise and capabilities), and some of the major vendors were consulted. But the actual coal face guys? No influence. "It's worrying and distressing," Colley concluded.

Well, it is. As was Quinault's response when I caught him to ask whether he saw any conflict between the government's policies on CCDP and surveillance back doors built into communications equipment and its goal of making Britain "one of the most secure places in the world to do business". That response was, more or less precisely: No.

I'm not saying the objectives are bad; but besides the issues raised when the document was published, others were highlighted Wednesday. Colley, for example, noted that for information sharing to work it needs two characteristics: it has to go both ways, and it has to take place inside a network of trust; GCHQ doesn't usually share much. In addition, it's more effective, according to both Colley and Stephen Wolthusen, a reader in mathematics at Royal Holloway's Information Security Group, to share successes rather than problems - which means that you need to be able to phone the person who's had your problem to get details. And really, still so much is down to human factors and very basic things, like changing the default passwords on Internet-facing devices. This is the stuff the coalface guys see every day.

Recently, I interviewed nearly a dozen experts of varying backgrounds about the future of infosecurity; the piece is due to run in Infosecurity Magazine sometime around now. What seemed clear from that exercise is that in the long run we would all be a lot more secure a lot more cheaply if we planned ahead based on what we have learned over the past 50 years. For example: before rolling out wireless smart meters all over the UK, don't implement remote disconnection. Don't link to the Internet legacy systems such as SCADA that were never designed with remote access in mind and whose security until now has depended on securing physical access. Don't plant medical devices in people's chests without studying the security risks. Stop, in other words, making the same mistakes over and over again.

The big, upcoming issue, Steve Bellovin writes in Privacy and Cybersecurity: the Next 100 Years (PDF), a multi-expert document drafted for the IEEE, is burgeoning complexity. Soon, we will be surrounded by sensors, self-driving cars, and the 2012 version of pet rocks. Bellovin's summation, "In 20 years, *everything* will be connected...The security implications of this are frightening." And, "There are two predictions we can be quite certain about: there will still be dishonest people, and our software will still have some bugs." Sounds like a place to start, to me.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


May 25, 2012

Camera obscura

There was a smoke machine running in the corner when I arrived at today's Digital Shoreditch, an afternoon considering digital identity, part of a much larger, multi-week festival. Briefly, I wondered if the organizers were making a point about privacy. Apparently not; they shut it off when the talks started.

The range of speakers served as a useful reminder that the debates we have in what I think of as the Computers, Freedom, and Privacy sector are rather narrowly framed around what we can practically build into software and services to protect privacy (and why so few people seem to care). We wrangle over what people post on Facebook (and what they shouldn't), or how much Google (or the NHS) knows about us and shares with other organizations.

But we don't get into matters of what kinds of lies we tell to protect our public image. Lindsey Clay, the managing director of Thinkbox, the marketing body for UK commercial TV, who kicked off an array of people talking about brands and marketing (though some of them in good causes), did a good, if unconscious, job of showing what privacy activists are up against: the entire mainstream of business is going the other way.

Sounding like Dr Gregory House, Clay explained that people lie in focus groups, showing a slide comparing actual TV viewer data from Sky to what those people said about what they watched. They claim to fast-forward; really, they watch ads and think about them. They claim to time-shift almost everything; really, they watch live. They claim to watch very little TV; really, they need to sign up for the SPOGO program Richard Pearey explained a little while later. (A tsk-tsk to Pearey: Tim Berners-Lee is a fine and eminent scientist, but he did not invent the Internet. He invented the *Web*.) For me, Clay is confusing "identity" with "image". My image claims to read widely instead of watching TV shows; my identity buys DVDs from Amazon.

Of course I find Clay's view of the Net dismaying - "TV provides the content for us to broadcast on our public identity channels," she said. This is very much the view of the world the Open Rights Group campaigns to up-end: consumers are creators, too, and surely we (consumers) have a lot more to talk about than just what was on TV last night.

Tony Fish, author of My Digital Footprint, following up shortly afterwards, presented a much more cogent view and some sound practical advice. Instead of trying to unravel the enduring conundrum of trust, identity, and privacy - which he claims dates back to before Aristotle - start by working out your own personal attitude to how you'd like your data treated.

I had a plan to talk about something similar, but Fish summed up the problem of digital identity rather nicely. No one model of privacy fits all people or all cases. The models and expectations we have take various forms - which he displayed as a nice set of Venn diagrams. Underlying that is the real model, in which we have no rights. Today, privacy is a setting and trust is the challenger. The gap between our expectations and reality is the creepiness factor.

Combine that with reading a book of William Gibson's non-fiction, and you get the reflection that the future we're living in is not at all like the one we - for some value of "we" that begins with those guys who did the actual building instead of just writing commentary about it - thought we were building 20 years ago. At the time, we imagined that the future of digital identity would look something like mathematics, where the widespread use of crypto meant that authentication would proceed by a series of discrete transactions tailored to each role we wanted to play. A library subscriber would disclose different data from a driver stopped by a policeman, who would show a different set to the border guard checking passports. We - or more precisely, Phil Zimmermann and Carl Ellison - imagined a Web of trust, a peer-to-peer world in which we could all authenticate the people we know to each other.
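
That role-tailored disclosure is easy to picture as data. A toy sketch - the attribute names and policies are invented for illustration:

    # The full record stays with the user; each verifier sees only its slice.
    identity = {
        "name": "J. Example",
        "date_of_birth": "1970-01-01",
        "library_number": "L-12345",
        "drivers_license": "D-55555",
        "passport_number": "P-67890",
    }

    # Each role's policy lists the minimum attributes it actually needs.
    disclosure_policies = {
        "library": ["library_number"],
        "traffic_stop": ["name", "drivers_license"],
        "border": ["name", "passport_number"],
    }

    def disclose(identity, role):
        """Reveal only the attributes the role's policy allows."""
        return {k: identity[k] for k in disclosure_policies[role]}

    print(disclose(identity, "library"))  # {'library_number': 'L-12345'}

A real system would add cryptographic signatures so each slice could be verified - the web-of-trust part - without a central gatekeeper holding the whole record.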

Instead, partly because all the privacy stuff is so hard to use, even though it didn't have to be, we have a world where at any one time there are a handful of gatekeepers who are fighting for control of consumers and their computers in whatever the current paradigm is. In 1992, it was the desktop: Microsoft, Lotus, and Borland. In 1997, it was portals: AOL, Yahoo!, and Microsoft. In 2002, it was search: Google, Microsoft, and, well, probably still Yahoo!. Today, it's social media and the cloud: Google, Apple, and Facebook. In 2017, it will be - I don't know, something in the mobile world, presumably.

Around the time I began to sound like an anti-Facebook obsessive, an audience questioner made the smartest comment of the day: "In ten years Facebook may not exist." That's true. But most likely someone will have the data, probably the third-party brokers behind the scenes. In the fantasy future of 1992, we were our own brokers. If William Heath succeeds with personal data stores, maybe we still can be.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


April 24, 2012

A really fancy hammer with a gun

Is a robot more like a hammer, a monkey, or the Harley-Davidson on which he rode into town? Or try this one: what if the police program your really cute, funny robot butler (Tony Danza? Scarlett Johansson?) to ask you a question whose answer will incriminate you (and which it then relays). Is that a violation of the Fourth Amendment (protection against search and seizure) or the Fifth Amendment (you cannot be required to incriminate yourself)? Is it more like flipping a drug dealer or tampering with property? Forget science fiction, philosophy, and your inner biological supremacist; this is the sort of legal question that will be defined in the coming decade.

Making a start on this was the goal of last weekend's We Robot conference at the University of Miami Law School, organized by respected cyberlaw thinker Michael Froomkin. Robots are set to be a transformative technology, he argued to open proceedings, and cyberlaw began too late. Perhaps robotlaw is still a green enough field that we can get it right from the beginning. Engineers! Lawyers! Cross the streams!

What's the difference between a robot and a disembodied artificial intelligence? William Smart (Washington University, St Louis) summed it up nicely: "My iPad can't stab me in my bed." No: and as intimate as you may become with your iPad, you're unlikely to feel the anthropomorphic betrayal you would if the knife were brandished by that robot butler above, which runs your life while behaving impeccably like your best friend. Smart sounds unsusceptible. "They're always going to be tools," he said. "Even if they are sophisticated and autonomous, they are always going to be toasters. I'm wary of thinking in any terms other than a really, really fancy hammer."

Traditionally, we think of machines as predictable because they respond the same way to the same input, time after time. But Smart, working with Neil Richards (Washington University, St Louis), points out that sensors are sensitive to distinctions analog humans can't make. A half-degree difference in temperature or a tiny change in lighting is a different condition to a robot. To us, their behaviour will just look capricious, fostering that anthropomorphic response and wrongly attributing to robots the moral agency necessary for guilt under the law: the "Android Fallacy".
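
A toy example (mine, not Smart and Richards') makes the point: the rule below is perfectly deterministic, but a human who can't perceive the half-degree difference driving it sees only caprice.

    # The robot's rule: open the window above exactly 21.5 degrees C.
    def react(temperature_c):
        return "opens window" if temperature_c > 21.5 else "does nothing"

    print(react(21.49))  # does nothing
    print(react(21.51))  # opens window - the same room, to human senses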

Smart and I may be outliers. The recent Big Bang Theory episode in which the can't-talk-to-women Rajesh, entranced with Siri, dates his iPhone is hilarious because in Raj's confusion we recognize our own ability to have "relationships" with almost anything by projecting human capacities such as cognition, intent, and emotions. You could call it a design flaw (if humans had a designer), and a powerful one: people send real wedding presents to TV characters, name Liquid Robotics' Wave Gliders, and characterize sending a six-legged landmine-defusing robot that's lost a leg or two back out to continue work as "cruel" (Kate Darling, MIT Media Lab).

What if our rampant affection for these really fancy hammers leads us to want to give them rights? Darling asked. Or, asked Sinziana Gutiu (University of Ottawa), will sex robots like Roxxxy teach us wrong expectations of humans? (When the discussion briefly compared sex robots to pets, a Twitterer quipped, "If robots are pets is sex with them bestiality?")

Few are likely to fall in love with the avatars in the automated immigration kiosks proposed at the University of Arizona (Kristen Thomasen, University of Ottawa), with two screens, one bearing a robointerrogator and the other flashing images and measuring responses. Automated law enforcement, already with us in nascent form, raises a different set of issues (Lisa Shay). Historically, enforcement has never been perfect; laws only have to be "good enough" to achieve their objective, whether that's slowing traffic or preventing murder. These systems pose the same problem as electronic voting: how do we audit their decisions? In military applications, disclosure may tip off the enemy, as Woodrow Hartzog (Samford University) noted. Yet here - and especially in medicine, where liability will be a huge issue - our traditional legal structures decide whom to punish by retracing the reasoning that led to the eventual decision. And even today's systems are already too complex for that.

When Hartzog asks if anyone really knows how Google or a smartphone tracks us, it reminds me of a recent conversation with Ross Anderson, the Cambridge University security engineer. In 50 years, he said, we have gone from a world whose machines could all be understood by a bright ten-year-old with access to a good library to a world with far greater access to information but full of machines whose inner workings are beyond a single person's understanding. And so: what does due process look like when only seven people understand algorithms that have consequences for the fates of millions of people? Bad enough to have the equivalent of a portable airport scanner looking for guns in New York City; what about house arrest because your butler caught you admiring Timothy Olyphant's gun on Justified?

"We got privacy wrong the last 15 years." Froomkin exclaimed, putting that together. "Without a strong 'home as a fortress right' we risk a privacy future with an interrogator-avatar-kiosk from hell in every home."

The problem with robots isn't robots. The problem is us. As usual, Pogo had it right.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


March 30, 2012

The ghost of cash

"It's not enough to speak well of digital money," Geronimo Emili said on Wednesday. "You must also speak negatively of cash." Emili has a pretty legitimate gripe. In his home country, Italy, 30 percent of the economy is black and the gap between the amount of tax the government collects and the amount it's actually owed is €180 billion. Ouch.

This set off a bit of inverted nationalist competition between him and the Greek lawyer Maria Giannakaki, there to explain a draft Greek law mandating direct payment of VAT from merchants' tills to eliminate fraud: which country is worse? Emili is sure it's Italy.

"We invented banks," he said. "But we love cash." Italy's cash habit costs the country €10 billion a year - and 40 percent of Europe's bank robberies.

This exchange took place at this year's Digital Money Forum, an annual event that pulls together people interested in everything from the latest mobile technology to the history of Anglo-Saxon coinage. Their common interest: what makes money work? If you, like most of this group, want to see physical cash eliminated, this is the key question.

Why Anglo-Saxon coinage? Rory Naismith explained that the 8th century began the shift from valuing coins merely for their metal content to assigning them a premium for their official status. It was the beginning of the abstraction of money: coins, paper, the elimination of the gold standard, numbers in cyberspace. Now, people like Emili and this event's convenor, David Birch, argue it's time to accept money's fully abstract nature and admit the truth: it's a collective hallucination, a "promise of a promise".

These are not just the ravings of hungry technology vendors: Birch, Emili, and others argue that the costs of cash fall disproportionately on the world's poor, and that cash is the key vector for crime and tax evasion. Our impressions of the costs are distorted because the costs of electronic payments, credit cards, and mobile wallets are transparent, while cash is free at the point of use.

When I say to Birch that eliminating cash also means eliminating the ability to transact anonymously, he says, "That's a different conversation." But it isn't, if eliminating crime and tax evasion are your drivers. In the event's two days, the only digital system discussed that offered anonymity was Bitcoin, and it seems doomed to a niche market, for whatever reason. (I think it's too complicated; Dutch financial historian Simon Lelieveldt says it will fail because it has no central bank.)

I pause to be annoyed by the claim that cash is filthy and spreads disease. This is Microsoft-level FUD, and not worthy of smart people claiming to want to benefit the poor and eliminate crime. In fact, I got riled enough to offer to lick any currency (or coins; I'm not proud) presented. I performed as promised on a fiver and a Danish note. And you know, they *kept* that money?

In 1680, says Birch, "Pre-industrial money was failing to serve an industrial revolution." Now, he is convinced, "We are in the early part of the post-industrial revolution, and we're shoehorning industrial money in to fit it. It can't last." This is pretty much what John Perry Barlow said about copyright in 1993, and he was certainly right.

But is Birch right? What kind of medium is cash? Is it a medium of exchange, like newspapers, trading stored value instead of information, or is it a format, like video tape? If it's the former, why shouldn't cash survive, even if only in a niche market? Media rarely die altogether - but formats come and go with such speed that even the more extreme predictions at this event - such as that of Sandra Alzetta, who said her company expects half its transactions to be mobile by 2020 - seem quite modest. Her company, by the way, is Visa International.

I'd say cash is a medium of exchange, and today's coins and notes are its format. Past formats have included shells, feathers, gold coins, and goats; what about a format for tomorrow that is printed or minted on demand, at ATMs? I ask the owner of the grocery shop around the corner if his life would be better if cash were eliminated, and he shrugs no. "I'd still have to go out and get the stuff."

What's needed are low-cost alternatives that fit their cultural contexts. Lydia Howland, whose organization IDEO works to create human-centered solutions to poverty, finds in parts of Britain the same needs that exist in countries like Kenya, where M-Pesa is succeeding in bringing banking and remote payments to people who have never had access to financial services before.

"Poor people are concerned about privacy," she said on Wednesday. "But they have so much anonymity in their lives that they pay a premium for every financial service." Also, because they do so much offline, there is little understanding of how they work or live. "We need to create a society where a much bigger base has a voice."

During a break, I try to sketch the characteristics of a perfect payment mechanism: convenient; transparent to the user; universally accepted; universally accessible and usable; resistant to tracking, theft, counterfeiting, and malware; and hard to steal on a large scale. We aren't there yet.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 23, 2012

The year of the future

If there's one thing everyone seemed to agree on yesterday at Nominet's annual Internet policy conference, it's that this year, 2012, is a crucial one in the development of the Internet.

The discussion had two purposes. One is to feed into Nominet's policy-making as the body in charge of .uk, in which capacity it's currently grappling with questions such as how to respond to law enforcement demands to disappear domains. The other, which is the kind of exercise net.wars particularly enjoys and that was pioneered at the Computers, Freedom, and Privacy conference (next one spring 2013, in Washington, DC), is to peer into the future and try to prepare for it.

Vint Cerf, now Google's Chief Internet Evangelist, outlined some of that future, saying that this year, 2012, will see more dramatic changes to the Internet than anything since 1983. He had a list:

- The deployment of better authentication in the form of DNSSec;

- New certification regimes to limit damage in the event of more cases like 2011's DigiNotar hack;

- Internationalized domain names;

- The expansion of new generic top-level domains;

- The switch to IPv6 Internet addressing, which happens on June 6;

- Smart grids;

- The Internet of things: cars, light bulbs, surfboards (!), and anything else that can be turned into a sensor by implanting an RFID chip.

Cerf paused to throw in an update on his long-running project, the interplanetary Internet, which he has been thinking about since 1998 (TXT).

"It's like living in a science fiction novel," he said yesterday as he explained about overcoming intense network lag by using high-density laser pulses. The really cool bit: repurposing space craft whose scientific missions have been completed to become part of the interplanetary backbone. Not space junk: network nodes-in-waiting.

The contrast to Ed Vaizey, the minister for culture, communications and the creative industries at the Department of Culture, Media, and Sport, couldn't have been more marked. He summed up the Internet's governance problem as the "three Ps": pornography, privacy, and piracy. It's nice rhetorical alliteration, but desperately narrow. Vaizey's characterization of 2012 as a critical year rests on the need to consider the UK's platform for the upcoming Internet Governance Forum leading to 2014's World Information Technology Forum. When Vaizey talks about regulating with a "light touch", does he mean the same things we do?

I usually place the beginning of the who-governs-the-Internet argument at 1997, the first time the engineers met rebellion when they made a technical decision (revamping the domain name system). Until then, if the pioneers had an enemy it was governments, memorably warned off by John Perry Barlow's 1996 Declaration of the Independence of Cyberspace. After 1997, it was no longer possible to ignore the new classes of stakeholders: commercial interests and consumers.

I'm old enough as a Netizen - I've been online for more than 20 years - to find it hard to believe that the Internet Governance Forum and its offshoots do much to change the course of the Internet's development: while they're talking, Google's self-drive cars rack up 200,000 miles on San Francisco's busy streets with just one accident (the car was rear-ended; not their fault) and Facebook sucks in 800 million users (if it were a country, it would be the world's third most populous nation).

But someone has to take on the job. It would be morally wrong for governments, banks, and retailers to push us all to transact with them online if they cannot promise some level of service and security for at least those parts of the Internet that they control. And let's face it: most people expect their governments to step in if they're defrauded and criminal activity is taking place, offline or on - which is why I thought Barlow's declaration absurd at the time.

Richard Allan, director of public policy for Facebook EMEA - or should we call him Lord Facebook? - had a third reason why 2012 is a critical year: at the heart of the Internet Governance Forum, he said, is the question of how to handle the mismatch between global Internet services and the cultural and regulatory expectations that nations and individuals bring with them as they travel in cyberspace. In Allan's analogy, the Internet is a collection of off-shore islands like Iceland's Surtsey, which has been left untouched to develop its own ecosystem.

Should there be international standards imposed on such sites so that all users know what to expect? Such a scheme would overcome the Balkanization problem that erupts when sites present a different face to each nation's users and the censorship problem of blocking sites considered inappropriate in a given country. But if that's the way it goes, will nations be content to aggregate the most open standards or insist on the most closed, lowest-common-denominator ones?

I'm not sure this is a choice that can be made in any single year - they were asking this same question at CFP in 1994 - but if this is truly the year in which it's made, then yes, 2012 is a critical year in the development of the Internet.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 16, 2012

The end of the beginning

The coming months could see significant boosts to freedom of expression in the UK. Last night, the Libel Reform Campaign launched its report on alternatives to libel litigation at an event filled with hope that the Defamation Bill will form part of the Queen's speech in May. A day or two earlier, Consumer Focus hosted an event at the House of Commons to discuss responses to the consultation on copyright following the Hargreaves Review, which are due March 21. Dare we hope that a year or two from now the twin chilling towers of libel law and copyright might be a little shorter?

It's actually a good sign, said the former judge Sir Stephen Sedley last night, that the draft defamation bill doesn't contain everything reform campaigners want: all bills change considerably in the process of Parliamentary scrutiny and passage. There are some other favorable signs: the defamation bill is not locked to any particular party. Instead, there's something of a consensus that libel law needs to be reformed for the 21st century - after all, the multiple publication rule that causes Internet users so much trouble was created by the 1849 court case Duke of Brunswick v Harmer, in which the Duke of Brunswick managed, 17 years after the fact, to get around the limitation period on the basis that his manservant, sent from Paris to London, was able to buy copies of the magazine he believed had defamed him. These new purchases, he argued successfully, constituted a new publication of the libel. Well, you know the Internet: nothing ever really completely dies, and so that law, applied today, means liability in perpetuity. Ain't new technology grand?

The same is, of course, true in spades of copyright law, even though it's been updated much more recently; the Copyright, Designs, and Patents Act only dates to 1988 (and was then a revision of laws as recent as 1956). At the Consumer Focus event, Saskia Walzel argued that it's appropriate to expect to reform copyright law every ten to 15 years, but that the law should be based on principles, not technologies. The clauses that allow consumers to record TV programs on video recorders, for example, did not have to be updated for PVRs.

The two have something else in common: both are being brought into disrepute by the Internet because both were formulated in a time when publishers were relatively few in number and relatively powerful and needed to be kept in check. Libel law was intended to curb their power to damage the reputations of individuals with little ability to fight back. Copyright law kept them from stealing artists' and creators' work - and each other's.

Sedley's comment last night about libel reform could, with a little adaptation, apply equally well to copyright: "The law has to apply to both the wealthy bully and the small individual needing redress from a large media organization." Sedley went on to argue that it is in the procedures that the playing field can be leveled; hence the recommendation for options to speed up dispute resolutions and lower costs.

Of course, publishers are not what they were. Even as recently as 1988 the landscape of rightsholders was much more diverse. Many more independent record labels jostled for market share with somewhat larger ones; scores of independent book publishers and bookshops were thriving; and photographers, probably the creators being damaged most in the present situation, still relied for their livelihood on the services of a large ecology of small agencies that understood them and cared about their work. Compare that to now, when cross-media ownership is the order of the day, and we may soon be down to just two giant music companies.

It is for this reason that I have long argued (as Walzel also said on Tuesday) that if you really want to help artists and other creators, they will be better served by improving contract law so they can't be bullied into unfair terms than by tightening and aggressively enforcing copyright law.

Libel law can't be so easily mitigated, but in both cases we can greatly improve matters by allowing exceptions that serve the public interest. In the case of libel law, that means scientific criticism: if someone claims abilities that are contrary to our best understanding of science, critique on that basis should be allowed to proceed. Similarly, there is clearly no economic loss to rightsholders from allowing exceptions for parody, disabled access, and archiving.

It was Lord McNally, Minister of State at the Ministry of Justice, who called this moment in the work on libel law reform the end of the beginning, reminding those present that now is the time to use whatever influence campaigners have with Parliamentarians to get through the changes that are needed. He probably wouldn't think of it this way, but his comment reminded me of the 1970s and 1980s tennis champion Chris Evert, who commented that many (lesser) players focused on reaching the finals of tournaments and forgot, once there, that there was a step further to go to win the title.

So enjoy that celebratory drink - and then get back to work!

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


March 9, 2012

Private parts

In 1995, when the EU Data Protection Directive was passed, Facebook founder and CEO Mark Zuckerberg was 11 years old. Google was three years away from incorporation. Amazon.com was a year old and losing money fast enough to convince many onlookers that it would never be profitable; the first online banner ads were only months old. It was the year eBay and Yahoo! were founded and Netscape went public. This is how long ago it was: CompuServe was a major player in online services, AOL was just setting up its international services, and both of them were still funded by per-minute usage fees.

In other words: even when it was published there were no Internet companies whose business models depended on exploiting user data. During the years it was being drafted, only posers and rich people owned mobile phones, selling fax machines was a good business, and women were still wearing leggings the *first* time. It's impressive that the basic principles formulated then have held up so well. Practice, however, has been another matter.

The discussions that led to the publication in January of a package of reforms to the data protection rules began in 2008. Discussions among data protection commissioners, Peter Hustinx, the European Data Protection Supervisor, said at Thursday's Westminster eForum on data protection and electronic privacy, produced a consensus that changes were needed, including making controllers more accountable, increasing "privacy by design", and making data protection a top-level issue for corporate governance.

These aren't necessarily the issues that first spring to mind for privacy advocates, particularly in the UK, where many have complained that the Information Commissioner's Office has failed. (It was, for example, out of step with the rest of the world with respect to Google's Street View.) Privacy International has a long history of complaints about the ICO's operation. But even the EU hasn't performed as well as citizens might hope under the present regime: PI also exposed the transfer of SWIFT financial data to the US, while Edward Hasbrouck has consistently and publicly opposed the transfer of passenger name record data from the EU to the US.

Hustinx has published a comprehensive opinion on the reform package. The details of both the package itself and the opinion require study. But among the main points are an effort to implement a single regime, the right to erasure (aka the right to be forgotten), a requirement of breach notification within 24 hours of discovery, and the strengthening of the data protection authorities along with making them more accountable.

Of course, everyone has a complaint. The UK's deputy information commissioner, David Smith, complained that the package is too prescriptive of details and focuses on paperwork rather than privacy risk. Lord McNally, Minister of State at the Ministry of Justice, complained that the proposed fines of up to 2 percent of global corporate income are disproportionate and that 24 hours is too little time. Hustinx outlined his main difficulties: that the package has gaps, most notably surrounding the transfer of telephone data to law enforcement; that fines should be discretionary and proportionate rather than compulsory; and that there remain difficulties in dealing with national and EU laws.

We used to talk about the way the Internet enabled the US to export the First Amendment. You could, similarly, see the data protection laws as the EU's effort to export privacy rules; a key element is the prohibition on transferring data to countries without similar regimes - which is why the SWIFT and PNR cases were so problematic. In 1999, for a piece that's now behind Scientific American's paywall, PI's Simon Davies predicted that US companies might find themselves unable to trade in Europe because of data flows. Big questions, therefore, revolve around the binding corporate rules, which allow companies to transfer data to third countries without equivalent data protection as long as the data stays within their corporate boundaries.

The arguments over data protection law have a lot in common with the arguments over copyright. In both cases, the goal is to find a balance of power between competing interests that keeps individuals from being squashed. Also like copyright, data protection policy is such a dry and esoteric subject that it's hard to get non-specialists engaged with it. Hard, but not impossible - though copyright, unlike privacy, has never had a George Orwell to make the dangers up close and personal. Copyright law began, Lawrence Lessig argued in (I think it was) Free Culture, as a way to curb the power of publishers (although by now it has ended up greatly empowering them). Similarly, while most of us may think of data protection law as protecting us against the abuse of personal data, a voice from the floor argued yesterday that the law was originally drafted to enable free data transfers within the single market.

There is another similarity. Rightsholders and government policymakers often talk as though the population-at-large are consumers, not creators in their own right. Similarly, yesterday, Mydex's David Alexander had this objection to make: "We seem to keep forgetting that humans are not just subjects, but participants in the management of their own personal data...Why can't we be participants?"


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


March 2, 2012

Drive by wire

The day in 1978 when I first turned on my CB radio, I discovered that all that time the people in the cars around me had been having conversations I knew nothing about. Suddenly my car seemed like a pre-Annie Sullivan Helen Keller.

Judging by yesterday's seminar on self-driving cars, something similar is about to happen, but on a much larger scale. Automate driving and then make each vehicle part of the Internet of Things and suddenly the world of motoring is up-ended.

The clearest example came from Jeroen Ploeg, who is part of a Dutch national project on Cooperative Adaptive Cruise Control. Like everyone here, Ploeg is grappling with issues that recur across all the world's densely populated zones: congestion, pollution, and safety. How can you increase capacity without building more roads (expensive), while decreasing pollution (expensive, unpleasant, and unhealthy) and increasing safety (deaths from road accidents have decreased in the UK for the last few years, but are still nearly 2,000 a year)? Decreasing the space between cars isn't safe for humans, who also lack the precision necessary to keep a tightly packed line of cars moving evenly. What Ploeg explains, and then demonstrates on a ride in a modified Prius through the Nottingham lunchtime streets, is that cars given the ability to communicate can collaborate to keep a precise distance that solves all three problems. When he turns on the cooperative bit, so that our car talks to its fellow in front of us, the advance warnings significantly smooth our acceleration and braking.
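
The cooperative part is easy to caricature in code. In a toy controller like the one below (my sketch, with invented gains, not Ploeg's actual controller), each car sets its acceleration from its spacing error and relative speed, plus a feedforward term from the acceleration the car ahead broadcasts over the radio link - the term that lets the platoon react before the gap visibly changes, and the one you lose without vehicle-to-vehicle communication.

    def cacc_accel(gap, speed, lead_speed, lead_accel,
                   headway=0.5, k_gap=0.2, k_speed=0.7, k_ff=1.0):
        """Acceleration command (m/s^2) for a following vehicle."""
        desired_gap = 2.0 + headway * speed   # 2 m margin at standstill
        spacing_error = gap - desired_gap     # feedback on position
        closing_speed = lead_speed - speed    # feedback on speed
        # Feedforward from the leader's broadcast acceleration: the
        # cooperative ingredient that permits such short headways.
        return k_gap * spacing_error + k_speed * closing_speed + k_ff * lead_accel

    # Leader brakes at 2 m/s^2; the follower responds immediately,
    # before the 12 m gap has measurably closed.
    print(cacc_accel(gap=12.0, speed=25.0, lead_speed=25.0, lead_accel=-2.0))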

"It has a big potential to increase throughput," he says, noting that packing safely closer together can cut down trucks' fuel requirements by up to 10 percent from the reduction in headwinds.

But other than that, "There isn't a business case for it," he says sadly. No: because we don't buy cars collaboratively, we buy them individually according to personal values like top speed, acceleration, fuel efficiency, comfort, sporty redness, or fantasy.

To robot vehicle researchers, the question isn't if self-driving cars will take over - the various necessary bits of technology are too close to ready - but when and how people will accept the inevitable. There are some obvious problems. Human factors, for one. As cars become more skilled - already, they help humans park, keep in lanes, and keep a consistent speed - humans forget the techniques they've learned. Gradually, says Natasha Merat, co-director of the Institute for Transport Studies at the University of Leeds, they stop paying attention. In critical situations, her research shows, they react more slowly; in urban situations, more automation means they're more likely to watch DVDs unless or until they hear an alarm sound. (Curiously, her research shows that on motorways they continue to pay more attention; speed scares, apparently.) So partial automation may be more dangerous than full automation, despite seeming like a good first step.

The more fascinating thing is what happens when vehicles start to communicate. Paul Newman, head of the Mobile Robotics Unit at Oxford, proposes that your vehicle should learn your routes; one day, he imagines, a little light comes on indicating that it's ready to handle the drive itself. Newman wants to reclaim his time ("It's ridiculous to think that we're condemned to a future of congestion, accidents, and time-wasting"), but since GPS is too limited to guide an automated car - it doesn't work well inside cities, and it's not fine-grained enough for parking lots - there's talk of guide boxes. Newman would rather take cues from the existing infrastructure the way humans do - and give vehicles the ability to communicate and share information: maps, pictures, and sensor data. "I don't need a funky French bubble car. I want today's car with cameras and a 3G connection."

It's later, over lunch, that I realize what he's really proposing. Say all of Britain's roads are traversed once an hour by some vehicle or other. If each picks up infrastructure, geographical, and map data and shares it...you have the vehicle equivalent of Wikipedia to compete with Google's Street View.

Two topics were largely skipped at this event, both critical: fuel and security. John Miles, from Arup, argued that it's a misconception that a large percentage of today's road traffic could be moved to rail. But is it safe to assume we'll find enough fuel to run all those extra vehicles, either? Traffic in the UK has increased by 85 percent since 1980, and another 25 percent increase is expected in just the next 20 years.

But security is the crucial one, because it must be built into V2V from the beginning. Otherwise, we're living the apocryphal old joke about cars crashing unpredictably, like Windows.

It's easy to resist this particular future even without wondering whether people will accept statistics showing robot cars are safer if a child is killed by one: I don't even like cars that bossily remind me to wear a seatbelt. But, as several people said yesterday, I am the wrong age. The "iPod generation" don't identify cars so closely with independence, and they don't like looking up from their phones. The 30-year-old of 2032 who knows how to back into a tight parking space may be as rare as a 30-year-old today who can multiply three-digit numbers in his head. Me, I'll wave from the train.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


February 3, 2012

Beyond the soup kitchen

"The whole idea of what a homeless service is, is a soup kitchen," one of the representatives for The Connection at St Martin-in-the-Fields said yesterday. But does it have to be?

It was in the middle of "Teacamp", a monthly series of meetings that sport the same mix of geeks, government, and do-gooders as the annual UK Govcamp we covered a couple of weeks back. Meetings like this seem to be going on all the time all over the place, trying to figure out ways to use technology to help people. Hardly anyone has any budget, yet that seems not to matter: the optimism is contagious. This week's Teacamp also featured Westminster in Touch, an effort to support local residents and charities; the organization runs a biannual IT Support Forum to brainstorm (the next is March 28).

I have to admit: when I first read about Martha Lane Fox's Digital Inclusion initiative my worst rebellious instincts were triggered: why should anyone be bullied into going online if they don't want to go there? Maybe at least some of those 9 million people in Britain who have never used the Internet would like to be left in peace to read books and listen to - rather than use - the wireless.

But the "digital divide" predicted even in the earliest days of the Net is real: those 9 million are those in the most vulnerable sectors of society. According to research published on the RaceOnline site, the percentage of people who have never used the Net correlates closely with income. This isn't really much of a surprise, although you would expect to see a slight tick upwards again at the very top economic levels, where not so long ago people were too grand, too successful, and too set in their ways to feel the need to go online. But they have proxies: their assistants can answer their email and do their Web shopping.

When Internet access was tied to computers, the homeless in particular were at an extreme disadvantage. You can't keep a desktop computer if you have nowhere - or only a very tiny, insecure space - to put it or power it, and you can't afford broadband or a landline. A laptop presents only slightly fewer problems. Even assuming you can find free wifi to use somewhere, how do you keep the laptop from being stolen or damaged? Where and how do you keep it charged? And so The Connection, like libraries and other places, runs a day center with a computing area and resources to help, including computer training.

But even that, they said, hasn't been reaching the most excluded, the under-25s that The Connection sees. When you think about it, it's logical, but I had to be reminded to think about it. Having missed out on - or been failed by - school education, this group doesn't see the Net as the opportunity the rest of us imagine it to be for them.

"They have no idea of creating anything to help their involvement."

So rather than being "digital natives", their position might be comparable to people who have grown up without language, or perhaps to autistic children whose intelligence and ability to learn have been disrupted by their brain wiring and development so much that the gap between them and their normally wired peers keeps increasing. Today's elderly who lack the motivation, the cognitive functioning, or the physical ability to go online will be catered to, even if only by proxy, until they die out. But imagine being 20 today and having no digital life beyond the completely passive experience of watching a few clips on YouTube or glancing at a Facebook page and thinking they have nothing to do with you. You will go through your entire life at a progressively greater disadvantage. Just as we assume that today's 80-year-olds grew up with movies, radio, and postal mail, when *you* are 80 (if the planet hasn't run out of energy and water and been forced to turn off all the computers by then), society will assume in devising systems to help you that you grew up with television, email, and ecommerce. Whatever is put in place to help you navigate that complex future will be completely outside your grasp.

So The Connection is helping them to do some simple things: upload interviews about their lives, annotate YouTube clips, create comic strips - anything to break this passive lack of interest. Beyond that, there's a big opportunity in smartphones, which don't need charging so often, are easier to protect, and can take advantage of free wifi just as a laptop can. The Connection is working on things like an SMS service that goes out twice a day and provides weather reports, maps of food runs, and information about free things to do. Should you be technically skilled and willing, they're looking for geeky types to help them put these ideas together and automate them. There are still issues around getting people phones, of course - and around the street value of a phone - but once you have a phone where you can be contacted by friends, family, and agencies, it's a whole different life. As it is again if you can be convinced that the Net belongs to you, too, not just all those other people.
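
For the curious geeks, the service they describe is small: compose a digest twice a day and push it through an SMS gateway. In the sketch below, compose_digest and send_sms are hypothetical stand-ins for whatever data sources and gateway an implementer actually has; the phone numbers come from the UK's reserved fictional range.

    import time
    from datetime import datetime

    SUBSCRIBERS = ["+447700900001", "+447700900002"]  # example numbers

    def compose_digest():
        # Hypothetical sources: a weather feed, a food-run map, an events list.
        return ("Weather: rain after 4pm. Food run: Strand, 7pm. "
                "Free today: gallery talk, 2pm.")

    def send_sms(number, text):
        # Placeholder: substitute a call to a real SMS gateway here.
        print(f"to {number}: {text}")

    def run(send_hours=(8, 17)):
        while True:
            now = datetime.now()
            if now.hour in send_hours and now.minute == 0:
                for number in SUBSCRIBERS:
                    send_sms(number, compose_digest())
            time.sleep(60)  # check once a minute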


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


January 21, 2012

Camping out

"Why hasn't the marvelous happened yet?" The speaker - at one of today's "unconference" sessions at this year's UK Govcamp - was complaining that with 13,000-odd data sets up on his organization's site there ought to be, you know, results.

At first glance, GovCamp seems peculiarly British: an incongruous mish-mash of government folks, coders, and activists, all brought together by the idea that technology makes it possible to remake government to serve us better. But the Web tells me that events like this are happening in various locations around Europe. James Hendler, who likes to collect government data sets from around the world (700,000 and counting now!), tells me that events like this are happening all over the US, too - except that there, an event this size - a couple of hundred people - covers just New York City.

That's both good and bad: a local area in the US can find many more people to throw at more discrete problems - but on the other hand the federal level is almost impossible to connect with. And, as Hendler points out, the state charters mean that there are conversations the US federal government simply cannot have with its smaller, local counterparts. In the UK, if central government wants a local authority to do something, it can just issue an order.

This year's GovCamp is a two-day affair. Today was an "unConference": dozens of sessions organized by participants to talk about...stuff. Tomorrow will be hands-on, doing things in the limited time available. By the end of the day, the Twitter feed was filling up with eagerness to get on with things.

A veteran camper - I'm not sure how to count how many there have been - tells me that everyone leaves the event full of energy, convinced that they can change the world on Monday. By later next week, they'll have come down from this exhilarated high to find they're working with the same people and the same attitudes. Wonders do not happen overnight.

Along those lines, Mike Bracken, the guy who launched the Guardian's open data platform, now at the Cabinet Office, acknowledges this when he thanks the crowd for the ten years of persistence and pain that created his job. The user, his colleague Mark O'Neill said recently, is at the center of everything they're working on. Are we, yet, past proving the concept?

"What should we do first?" someone I couldn't identify (never knowing who's speaking is a pitfall of unConferences) asked in the same session as the marvel-seeker. One offered answer was one any open-source programmer would recognize: ask yourself, in your daily life, what do you want to fix? The problem you want to solve - or the story you want to tell - determines the priorities and what gets published. That's if you're inside government; if you're outside, based on last summer's experience following the Osmosoft teams during Young Rewired State, often the limiting factor is what data is available and in what form.

With luck and perseverance, this should be a temporary situation. As time goes on, and open data gets built into everything, publishing it should become a natural part of everything government does. But getting there means eliminating a whole tranche of traditional culture and overcoming a lot of fear. If I open this data and others can review my decisions will I get fired? If I open this data and something goes wrong will it be my fault?

In a session on creative councils, I heard the suggestion that, in the interests of getting rid of gatekeepers who obstruct change, organizational structures should be transformed into networks with alternate routes to getting things done, until the hierarchy is no longer needed. It sounds like a malcontent's dream for getting the desired technological change past a recalcitrant manager, but it's the kind of solution that solves one problem by breaking many other things. In such a set-up, who is accountable to taxpayers? Isn't some form of hierarchy inevitable, given that someone has to do the hiring and firing?

It was in a session on engagement that it became apparent that, as much as this event seems to be focused on technological fixes, the real goal is far broader. The discussion veered into consultations and how to build persistent networks of people engaged with particular topics.

"Work on a good democratic experience," advised the session's leader. Make the process more transparent, make people feel part of the process even if they don't get what they want, create the connection that makes for a truly representative democracy. In her view, what goes wrong with the consultation process now - where, for example, advocates of copyright reform find themselves writing the same ignored advice over and over again in response to the same questions - is that it's trying to compensate for the poor connections to their representatives that most people have. Building those persistent networks and relationships is only a partial answer.

"You can't activate the networks and not at the same time change how you make decisions," she said. "Without that parallel change you'll wind up disappointing people."

Marvels tomorrow, we hope.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

December 16, 2011

Location, location, location

In the late 1970s, I used to drive across the United States several times a year (I was a full-time folksinger), and although these were long, long days at the wheel, there were certain perks. One was the feeling that the entire country was my backyard. The other was the sense that no one in the world knew exactly where I was. It was a few days off from the pressure of other people.

I've written before that privacy is not sleeping alone under a tree but being able to do ordinary things without fear. Being alone on an interstate crossing Oklahoma wasn't to hide some nefarious activity (like learning the words to "There Ain't No Instant Replay in the Football Game of Life"). Turn off the radio and, aside from an occasional billboard, the world was quiet.

Of course, that was also a world in which making a phone call was a damned difficult thing to do, which is why professional drivers all had CB radios. Now, everyone has mobile phones, and although your nearest and dearest may not know where you are, your phone company most certainly does, and to a very fine degree of "granularity".

I imagine normal human denial is broad enough to encompass pretending you're in an unknown location while still receiving text messages. Which is why this year's A Fine Balance focused on location privacy.

The travel privacy campaigner Edward Hasbrouck has often noted that travel data is particularly sensitive and revealing in a way few realize. Travel data indicate your religion (special meals), medical problems, and life style habits affecting your health (choosing a smoking room in a hotel). Travel data also shows who your friends are, and how close: who do you travel with? Who do you share a hotel room with, and how often?

Location data is travel data on a steady drip of steroids. As Richard Hollis, who serves on the ISACA Government and Regulatory Advocacy Subcommittee, pointed out, location data is in fact travel data - except that instead of being detailed logging of exceptional events it's ubiquitous logging of everything you do. Soon, he said, we will not be able to opt out - and instead of travel data being a small, sequestered, unusually revealing part of our lives, all our lives will be travel data.

Location data can reveal the entire pattern of your life. Do you visit a church every Monday evening that has an AA meeting going on in the basement? Were you visiting the offices of your employer's main competitor when you were supposed to have a doctor's appointment?

Research supports this view. Some of the earliest work I'm aware of is that of Alberto Escudero-Pascual. A month-long experiment tracking the mobile phones in his department enabled him to diagram all the intra-departmental personal relations. In a 2002 paper, he suggests how to anonymize location information (PDF). The problem: no business wants anonymization. As Hollis and others said, businesses want location data. Improved personalization depends on context, and location provides a lot of that.
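
For flavor, here is what the simplest kind of location anonymization looks like - a generic cloaking illustration, not Escudero-Pascual's actual scheme: coarsen coordinates and timestamps so that any single report could have come from anyone in the same cell during the same hour.

    from datetime import datetime

    def cloak(lat, lon, when, cell_deg=0.01):
        """Coarsen a location report; 0.01 degrees is roughly a 1 km cell."""
        coarse_lat = round(lat / cell_deg) * cell_deg
        coarse_lon = round(lon / cell_deg) * cell_deg
        bucket = when.replace(minute=0, second=0, microsecond=0)  # hourly
        return coarse_lat, coarse_lon, bucket

    print(cloak(51.5074, -0.1278, datetime(2011, 12, 16, 14, 37)))
    # roughly (51.51, -0.13, 2011-12-16 14:00)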

Patrick Walshe, the director of privacy for the GSM Association, compared the way people care about privacy to the way they care about their health: they opt for comfort and convenience and hope for the best. They - we - don't make changes until things go wrong. This explains why privacy considerations so often fail and privacy advocates despair: guarding your privacy is like eating your vegetables, and who except a cranky person plans their meals that way?

The result is likely to be the world outlined by Dave Coplin, Microsoft UK's director of search, advertising, and online, who argued that privacy today is at the turning point that the Melissa virus represented for security 11 years ago when it first hit.

Calling it "the new battleground," he said, "This is what happens when everything is connected." Similarly, Blaine Price, a senior lecturer in computing at the Open University, had this cheering thought: as humans become part of the Internet of Things, data leakage will become almost impossible to avoid.

Network externalities mean that each new person using a network increases its value for all the other users of that network. What about privacy externalities? I haven't heard the phrase before, although I see it's not new (PDF). But I mean something different than those papers do: the fact that we talk about privacy as an individual choice when instead it's a collaborative effort. A single person who says, "I don't care about my privacy" can override the pro-privacy decisions of dozens of their friends, family, and contacts. "I'm having dinner with @wendyg," someone blasts, and their open attitude to geolocation reveals mine.
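
The arithmetic is unforgiving, as a quick back-of-the-envelope calculation shows (my illustration, with invented numbers): if each of your k regular contacts independently broadcasts location with probability p, your chance of being exposed on any given outing is 1 - (1 - p)^k.

    # Even at a modest p = 0.1, exposure climbs fast with circle size.
    p = 0.1
    for k in (5, 20, 50):
        print(k, round(1 - (1 - p) ** k, 2))
    # 5 -> 0.41, 20 -> 0.88, 50 -> 0.99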

In his research on tracking, Price has found that the more closely connected the tracker and the tracked, the less control they have over such decisions. I may worry that turning on a privacy block will upset my closest friend; I don't obsess at night, "Will the phone company think I'm mad at it?"

So: you want to know where I am right now? Pay no attention to the geolocated Twitterer who last night claimed to be sitting in her living room with "wendyg". That wasn't me.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

December 2, 2011

Debating the robocalypse

"This House fears the rise of artificial intelligence."

This was the motion up for debate at Trinity College Dublin's Philosophical Society (Twitter: @phil327) last night (December 1, 2011). It was a difficult one, because I don't think any of the speakers - neither the four students, Ricky McCormack, Michael Coleman, Cat O'Shea, and Brian O'Beirne, nor the invited guests, Eamonn Healy, Fred Cummins, and Abraham Campbell - honestly fear AI all that much. Either we don't really believe a future populated by superhumanly intelligent killer robots is all that likely, or, like Ken Jennings, we welcome our new computer overlords.

But the point of this type of debate is not to believe what you are saying - I learned later that in the upper levels of the game you are assigned a topic and a position and given only 15 minutes to marshal your thoughts - but to argue your assigned side so passionately, persuasively, and coherently that you win the votes of the assembled listeners, even if later that night, while raiding the icebox, they think, "Well, hang on..." This is where politicians and the Dáil/House of Commons debating style come from. As a participatory sport it was utterly new to me, and it explains a *lot* about the derailment of political common sense by the rise of public relations and lobbying.

Obviously I don't actually oppose research into AI. I'm all for better tools, although I vituperatively loathe tools that try to game me. As much fun as it is to speculate about whether superhuman intelligences will deserve human rights, I tend to believe that AI will always be a tool. It was notable that almost every speaker assumed that AI would be embodied in a more-or-less humanoid robot. Far more likely, it seems to me, that if AI emerges it will be first in some giant, boxy system (that humans can unplug) and even if Moore's Law shrinks that box it will be much longer before AI and robotics converge into a humanoid form factor.

Lacking conviction on the likelihood of all this, and hence of its dangers, I had to find an angle, which eventually boiled down to Walt Kelly and We have met the enemy and he is us. In this, I discovered, I am not alone: a 2007 ThinkArtificial poll found that more than half of respondents feared what people would do with AI: the people who program it, own it, and deploy it.

If we look at the history of automation to date, a lot of it has been used to make (human) workers as interchangeable as possible. I am old enough to remember, for example, being able to walk down to the local phone company in my home town of Ithaca, NY, and talk in person to a customer service representative I had met multiple times before about my piddling residential account. Give everyone the same customer relationship database and workers become interchangeable parts. We gain some convenience - if Ms Jones is unavailable anyone else can help us - but we pay in lost relationships. The company loses customer loyalty, but gains (it hopes) consistent implementation of its rules and the economic leverage of no longer depending on any particular set of workers.

I might also have mentioned automated trading systems, which are making the markets swing much more wildly much more often. Later, Abraham Campbell, a computer scientist working in augmented reality at University College Dublin, said as much as 25 percent of trading is now done by bots. So, cool: Wall Street has become like one of those old IRC channels where you met a cute girl named Eliza...

Campbell had a second example: Siri, which will tell you where to hide a dead body but not where you might get an abortion. Google's removal of torrent sites from its autosuggestion/Instant feature didn't seem to me egregious censorship, partly because there are other search engines and partly (short-sightedly) because I hate Instant so much already. But as we become increasingly dependent on mediators to help us navigate our overcrowded world, the agendas and competence of the people programming them are vital to know. These will be transparent only as long as there are alternatives.

Simultaneously, back in England, in work that would have made Jessica Mitford proud, Privacy International's Eric King and Emma Draper were publishing material that rather better proves the point. Big Brother Inc lays out the dozens of technology companies from democratic Western countries that sell surveillance technologies to repressive regimes. King and Draper did what Mitford did for the funeral business in the late 1960s (and other muckrakers have done since): investigate what these companies' marketing departments tell prospective customers.

I doubt businesses will ever, without coercion, behave like humans with consciences; it's why they should not be legally construed as people. During last night's debate, the prospective robots were compared to women and "other races", who were also denied the vote. Yes, and they didn't get it without a lot of struggle. In the "Robocalypse" (O'Beirne), the robots had better be prepared either to a) fight to meltdown for their rights or b) protect their energy sources and wait patiently for the human race to exterminate itself.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 25, 2011

Paul Revere's printing press

There is nothing more frustrating than watching smart, experienced people reinvent known principles. Yesterday's Westminster Forum on cybersecurity was one such occasion. I don't blame them, or not exactly: it's just maddening that we have made so little progress, while the threats keep escalating. And it is from gatherings like this one that government policy is made.

Rephrasing Bill Clinton's campaign slogan, "It's the people, stupid," said Philip Virgo, chairman of the security panel of the IT Livery Company, to kick off the day, a sentiment echoed repeatedly by nearly every other speaker. Yes, it's the people - who trust when they shouldn't, who attach personal devices to corporate networks, who disclose passwords when they shouldn't, who are targeted by today's Facebook-friending social engineers. So how many experts on people were on the program? None. Psychologists? No. Nor any usability experts or people whose jobs revolve around communication, either. (Or women, but I'm prepared to regard that as a separate issue.)

Smart, experienced guys, sure, who did a great job of outlining problems and a few possible solutions. Somewhere toward the end of the proceedings, someone allowed in passing that yes, it's not a good idea to require people to use passwords that are too complex to remember easily. This is the state of their art? It's 12 years since Angela Sasse and Anne Adams covered this territory in Users Are Not the Enemy. Sasse has gone on to help found the field of security economics, which seeks to quantify the cost of poorly designed security - not just in data breaches and DoS attacks but in the lost productivity of frustrated, overburdened users. Sasse argues that the problem isn't so much the people as user-hostile systems and technology.

"As user-friendly as a cornered rat," Virgo says he wrote of security software back in 1983. Anyone who's looked at configuring a firewall lately knows things haven't changed that much. In a world of increasingly mass-market software and devices, security software has remained resolutely elitist: confusing error messages, difficult configuration, obscure technology. How many users know what to do when their browser says a Web site certificate is invalid? Or how to answer anti-virus software that asks whether you want to authorise HIPS/RegMod-007?

"The current approach is not working," said William Beer, director of information security and cybersecurity for PriceWaterhouseCoopers. "There is too much focus on technology, and not enough focus from business and government leaders." How about academics and consumers, too?

There is no doubt, though, that the threats are escalating. Twenty years ago, the biggest worry was that a teenaged kid would write a virus that spread fast and furious in the hope of getting on the evening news. Today, an organized criminal underground uses personal information to target a small group of users inside RSA, leveraging that into a threat to major systems worldwide. (Trend Micro CTO Andy Dancer said the attack began in the real world with a single user befriended at their church. I can't find verification, however.)

The big issue, said Martin Smith, CEO of The Security Company, is that "There's no money in getting the culture right." What's to sell if there's no technical fix? Like when your plane is held to ransom by the pilot, or when all it takes to publish 250,000 US diplomatic cables is one alienated, low-ranked person with a DVD burner and a picture of Lady Gaga? There's a parallel here to pharmaceuticals: one reason we have few weapons to combat rampaging drug resistance is that for decades developing new antibiotics was not seen as a profitable path.

Granted, you don't, as Dancer said afterwards, want to frame security as an issue of "fixing the people" (but we already know better than that). Nor is it fair to ban company employees from social media lest some attacker pick it up and use it to create a false sense of trust. Banning the latest new medium, said former GCHQ head John Bassett, is just the instinctive reaction in a disturbance; in 1775 Boston the "problem" was Paul Revere's printing press stirring up trouble.

Nor do I, personally, want to live in a trust-free world. I'm happy to assume the server next to me is compromised, but "Trust no one" is a lousy way to live.

Since perfect security is not possible, Dancer advised, organizations should plan for the worst. Good advice. When did I first hear it? Twenty years ago and most months since, by Peter Neumann in his RISKS Forum. It is depressing and frustrating that we are still having this conversation as if it were new - and that we will have it all over again over the next decade as smart meters roll out to 26 million British households by 2020, opening up the electrical grid to attacks that are already being predicted and studied.

Neumann - and Dancer - is right. There is no perfect security because it's in no one's interest to create it. Plan for the worst.

As Gene Spafford put it in 1989: "The only truly secure system is one that is powered off, cast in a block of concrete, and sealed in a lead-lined room protected by armed guards - and even then I have my doubts."

For everything else, there's a stolen Mastercard.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 18, 2011

The write stuff

The tenth anniversary of the first net.wars column slid by quietly on November 2. This column wasn't born of 9/11 - net.wars-the-book was published in 1998 - but it did grow out of anger over the way the grief and shock over 9/11 was being hijacked to justify policies that were unacceptable in calmer times. Ever since, the column has covered the various border wars between cyberspace and real life, with occasional digressions. This week's column is a digression. I feel I've earned it.

A few weeks ago I had this conversation with a friend:

wg: My friend's son is a writer on The Daily Show.
Friend, puzzled: Jon Stewart needs writers? I thought he did his own jokes.

For the record, Stewart has 12 to 14 staff writers. For a simple reason: comedy is hard, and even the vaudeville-honed joke machine that was Morey Amsterdam would struggle to devise two hours of original material every week.

Which is how we arrive at the enduring mystery of the sitcom. When the form works - and people may disagree about exactly when that is - it is TV's most profitable money machine, says the veteran sitcom writer and showrunner Ken Levine. Sitcom writing requires not only a substantial joke machine but the ability to create an underlying storyline scaffold of recognizably human reality. And you must do all that under pressure, besieged by conflicting notes from the commissioning network and studio, and conforming to constraints as complex and specific as those of a sonnet: budgets, timing, and your actors' abilities. It takes a village. Or, since today most US sitcoms are written by a roomful of writers working together, a "gang-banging" village.

It is this experience that Levine decided, five years ago, to emulate. The ability to thrive in that environment is an essential skill, but beginning writers work alone until they are thrown in at the deep end on their first job. He calls his packed weekend event The Sitcom Room, and, having spent last weekend taking part in the fifth of the series, I can say the description is accurate. After a few hours of introduction about the inner workings of writers' rooms, scripts, and comedy in general, four teams of five people watch a group of actors perform a Levine-written scene with some obvious and some not-so-obvious things wrong with it. Each team then goes off to fix the scene in its designated room, which comes appropriately equipped with junk food, sodas, and a whiteboard. You have 12 hours (more if you're willing to make your own copies). Go.

After five seminars and 20 teams, Levine says every rewritten script has been different, a reminder that sitcom writing is a treasure hunt where the object of the search is unknown. Levine kindly describes each result as "magical"; attendees were more critical of other groups' efforts. (I liked ours best, although the ending still needed some work.)

I felt lucky: my group were all professionals used to meeting deadlines and working to specification, and all displayed a remarkable lack of ego in pitching and listening to ideas. We packed up around 1am, feeling that any changes we made after that point were unlikely to be improvements. On the other hand, if the point was to experience a writers' room, we failed utterly: both Levine and Sunday panelist Jane Espenson (see her new Web series, Husbands) talked about the brutally competitive environment of many of the real-life versions. Others were less blessed by chemistry: one team wrangled until 3am before agreeing on a strategy, then spent the rest of the night writing their script and getting their copies made. Glassy-eyed, on Sunday they disagreed when asked individually about what went wrong: publicly, their appointed "showrunner" blamed himself for not leading effectively. I imagine them indelibly bonded by their shared suffering.

What happens at this event is catalysis. "You will learn a lot about yourselves," Levine said on that first morning. How do you respond when your best ideas are not good enough to be accepted? How do you take to the discipline of delivering jokes and breaking stories on deadline? How do you function under pressure as part of a team creative effort? Less personally, can you watch a performance and see, instead of the actors' skills, the successes and flaws in your script? Can you stay calm when the "studio executive" (played by Levine's business partner, Dan O'Day) produces a laundry list of complaints and winds up with, "Except for a couple of things I wouldn't change anything"? And, not in the syllabus, can you help Dan play practical jokes on Ken? By the end of the weekend, everyone is on a giddy adrenaline high, exacerbated in our case by the gigantic anime convention happening all around us at the same hotel. (Yes. The human-sized fluffy yellow chick getting on the elevator is real. You're not hallucinating from lack of sleep. Check.)

I found Levine's blog earlier this year after he got into a crossfire with the former sitcom star Roseanne Barr over Charlie Sheen's meltdown. His blog reminds me of William Goldman's books on screenwriting: the same combination of entertainment and education. I think of Goldman's advice every day in everything I write. Now, I will think of Levine's, too.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 11, 2011

The sentiment of crowds

Context is king.

Say to a human, "I'll meet you at the place near the thing where we went that time," and they'll show up at the right place. That's from the 1987 movie Broadcast News: Aaron (Albert Brooks) says it; cut to Jane (Holly Hunter), awaiting him at a table.

But what if Jane were a computer and what she wanted to know from Aaron's statement was not where to meet but how Aaron felt about it? This is the challenge facing sentiment analysis.

At Wednesday's Sentiment Analysis Symposium, the key question of context came up over and over again as the biggest challenge to the industry of people who claim that they can turn Tweets, blog postings, news stories, and other mass data sources into intelligence.

So context: Jane can parse "the place", "the thing", and "that time" because she has expert knowledge of her past with Aaron. It's an extreme example, but all human writing makes assumptions about the knowledge and understanding of the reader. Humans even use those assumptions to implement privacy in a public setting: Stephen Fry could retweet Aaron's words and still only Jane would find the cafe. If Jane is a large organization seeking to understand what people are saying about it and Aaron is 6 million people posting on Twitter, Tom can use sentiment analyzer tools to give a numerical answer. And numbers always inspire confidence...
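
To make that "numerical answer" concrete, here is a deliberately tiny sketch of the simplest kind of sentiment scorer - a toy word-counting approach of my own, not any vendor's actual product, which would add statistical models, negation handling, and vastly larger lexicons:

    # Toy lexicon-based sentiment scorer: counts opinion words and
    # averages their signs. Real tools are far more elaborate; this
    # only shows the shape of the computation.
    POSITIVE = {"love", "great", "happy", "excellent", "good"}
    NEGATIVE = {"hate", "awful", "sad", "terrible", "bad"}

    def score(text: str) -> float:
        """Return a sentiment score in [-1, 1] for one message."""
        words = [w.strip(".,!?").lower() for w in text.split()]
        hits = [(w in POSITIVE) - (w in NEGATIVE) for w in words]
        signed = [h for h in hits if h != 0]
        if not signed:
            return 0.0  # no opinion words found: neutral by default
        return sum(signed) / len(signed)

    stream = [
        "I love this airline, great lounge!",
        "Terrible delay, awful service.",
        "Flight BA117 departs at 18:05.",
    ]
    for tweet in stream:
        print(f"{score(tweet):+.2f}  {tweet}")

Note what such a scorer cannot do: Aaron's "the place near the thing where we went that time" contains no opinion words at all and scores a flat zero, which is the context problem in miniature.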

My first encounter with sentiment analysis was this summer during Young Rewired State, when a team wanted to create a mood map of the UK comparing geolocated tweets to indices of multiple deprivation. This third annual symposium shows that this is a rapidly engorging industry, part PR, part image consultancy, and part artificial intelligence research project.

I was drawn to it out of curiosity, but also because it all sounds slightly sinister. What do sentiment analyzers understand when I say an airline lounge at Heathrow Terminal 4 "brings out my inner Sheldon"? What is at stake is not precise meaning - humans argue over the exact meaning of even the greatest communicators - but extracting good-enough meaning from high-volume data streams written by millions of not-monkeys.

What could possibly go wrong? This was one of the day's most interesting questions, posed by the consultant Meta Brown to representatives of the Red Cross, the polling organization Harris Interactive, and PayPal. Failure to consider the data sources and the industry you're in, said the Red Cross's Banafsheh Ghassemi. Her example was the period just after Hurricane Irene, when sentiment analysis of social media would have read the mood as negative. "It took everyday disaster language as negative," she said. In addition, because the Red Cross's constituency is primarily older, social media are less indicative than emails and call center records. For many organizations, she added, social media tend to skew negative.

Earlier this year, Harris Interactive's Carol Haney, who has had to kill projects when they failed to produce sufficiently accurate results for the client, told a conference, "Sentiment analysis is the snake oil of 2011." Now, she said, "I believe it's still true to some extent. The customer has a commercial need for a dial pointing at a number - but that's not really what's being delivered. Over time you can see trends and significant change in sentiment, and when that happens I feel we're returning value to a customer because it's not something they received before and it's directionally accurate and giving information." But very small changes over short time scales are an unreliable basis for making decisions.

"The difficulty in social media analytics is you need a good idea of the questions you're asking to get good results," says Shlomo Argamon, whose research work seems to raise more questions than answers. Look at companies that claim to measure influence. "What is influence? How do you know you're measuring that or to what it correlates in the real world?" he asks. Even the notion that you can classify texts into positive and negative is a "huge simplifying assumption".

Argamon has been working on technology to discern from written text the gender and age - and perhaps other characteristics - of the author, a joint effort with his former PhD student Ken Bloom. When he says this, I immediately want to test him with obscure texts.

Is this stuff more or less creepy than online behavioral advertising? Han-Sheong Lai explained that PayPal uses sentiment analysis to try to glean the exact level of frustration of the company's biggest clients when they threaten to close their accounts. How serious are they? How much effort should the company put into dissuading them? Meanwhile, Verint's job is to analyze those "This call may be recorded" calls. Verint's tools turn speech to text, and create color voiceprint maps showing the emotional high points. Click and hear the anger.

"Technology alone is not the solution," said Philip Resnik, summing up the state of the art. But, "It supports human insight in ways that were not previously possible." His talk made me ask: if humans obfuscate their data - for example, by turning off geolocation - will this industry respond by finding ways to put it all back again so the data will be more useful?

"It will be an arms race," he agrees. "Like spam."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 30, 2011

Trust exercise

When do we need our identity to be authenticated? Who should provide the service? Whom do we trust? And, to make it sustainable, what is the business model?

These questions have been debated ever since the early 1990s, when the Internet and the technology needed to enable the widespread use of strong cryptography arrived more or less simultaneously. Answering them is a genuinely hard problem (or it wouldn't be taking so long).

A key principle that emerged from the crypto-dominated discussions of the mid-1990s is that authentication mechanisms should be role-based and limited by "need to know"; information would be selectively unlocked and in the user's control. The policeman stopping my car at night needs to check my blood alcohol level and the validity of my driver's license, car registration, and insurance - but does not need to know where I live unless I'm in violation of one of those rules. Cryptography, properly deployed, can be used to protect my information, authenticate the policeman, and then authenticate the violation result that unlocks more data.
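
As a concrete illustration of that selective unlocking, here is a minimal sketch using hash commitments - my own simplification for the sake of the example, not any deployed licensing scheme, and far cruder than the attribute-based credential systems the crypto literature describes:

    # Selective disclosure via hash commitments: the holder reveals one
    # attribute (plus its salt) and only opaque commitments for the rest,
    # so the verifier can check the disclosed field against a root the
    # issuer signed, without learning the other fields.
    import hashlib, os

    def commit(name: str, value: str, salt: bytes) -> bytes:
        return hashlib.sha256(salt + f"{name}={value}".encode()).digest()

    # Issuer: commit to every attribute and sign the root (signature elided).
    attrs = {"license_valid": "yes", "home_address": "12 Elm St", "dob": "1961-03-02"}
    salts = {k: os.urandom(16) for k in attrs}
    commitments = {k: commit(k, v, salts[k]) for k, v in attrs.items()}
    root = hashlib.sha256(b"".join(commitments[k] for k in sorted(commitments))).digest()

    # Holder: disclose only what the policeman needs to know.
    name, value, salt = "license_valid", attrs["license_valid"], salts["license_valid"]

    # Verifier: recompute the disclosed commitment, combine it with the
    # opaque ones, and check the result against the signed root.
    check = dict(commitments)
    check[name] = commit(name, value, salt)
    root2 = hashlib.sha256(b"".join(check[k] for k in sorted(check))).digest()
    print("disclosed:", name, "=", value, "| verifies:", root2 == root)

The home address stays hidden unless a violation unlocks it - which is the point.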

Today's stored-value cards - London's Oyster travel card, or Starbucks' payment/wifi cards - when used anonymously do capture some of what the crypto folks had in mind. But the crypto folks also imagined that anonymous digital cash or identification systems could be supported by selling standalone products people installed. This turned out to be wholly wrong: many tried, all failed. Which leads to today, where banks, telcos, and technology companies are all trying to figure out who can win the pool by becoming the gatekeeper - our proxy. We want convenience, security, and privacy, probably in that order; they want security and market acceptance, also probably in that order.

The assumption is we'll need that proxy because large institutions - banks, governments, companies - are still hung up on identity. So although the question should be whom do we - consumers and citizens - trust, the question that ultimately matters is whom do *they* trust? We know they don't trust *us*. So will it be mobile phones, those handy devices in everyone's pockets that are online all the time? Banks? Technology companies? Google has launched Google Wallet, and Facebook has grand aspirations for its single sign-on.

This was exactly the question Barclaycard's Tom Gregory asked at this week's Centre for the Study of Financial Innovation round-table discussion (PDF). It was, of course, a trick, but he got the answer he wanted: out of banks, technology companies, and mobile network operators, most people picked banks. Immediate flashback.

The government representatives who attended Privacy International's 1997 Scrambling for Safety meeting assumed that people trusted banks and that therefore they should be the Trusted Third Parties providing key escrow. Brilliant! It was instantly clear that the people who attended those meetings didn't trust their banks as much as all that.

One key issue is that, as Simon Deane-Johns writes in his blog posting about the same event, "identity" is not a single, static thing; it is dynamic and shifts constantly as we add to the collection of behaviors and data representing it.

As long as we equate "identity" with "a person's name" we're in the same kind of trouble the travel security agencies are when they try to predict who will become a terrorist on a particular flight. Like the browser fingerprint, we are more uniquely identifiable by the collection of our behaviors than we are by our names, as detectives who search for missing persons know. The target changes his name, his jobs, his home, and his wife - but if his obsession is chasing after trout he's still got a fishing license. Even if a link between a Starbucks card and its holder's real-world name is never formed, the more data the card's use feeds into the system, the more clearly recognizable as an individual its holder becomes. The exact tag really doesn't matter in terms of understanding his established identity.

What I like about Deane-Johns' idea -

"the solution has to involve the capability to generate a unique and momentary proof of identity by reference to a broad array of data generated by our own activity, on the fly, which is then useless and can be safely discarded"

is two things. First, it has potential as a way to make impersonation and identity fraud much harder. Second, implicit in it is the possibility of two-way authentication, something we've clearly needed for years. Every large organization still behaves as though its identity is beyond question whereas we - consumers, citizens, employees - need to be thoroughly checked. Any identity infrastructure that is going to be robust in the future must be built on the understanding that with today's technology anyone and anything can be impersonated.
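
To make the idea more concrete, here is a rough sketch of one way such a momentary proof might work. Everything in it - the HMAC construction, the five-minute window, the invented activity records - is my own illustrative assumption, not Deane-Johns' design:

    # A one-time proof of identity derived from a window of recent
    # activity data that both parties already hold; it expires quickly
    # and is useless once the activity stream moves on.
    import hashlib, hmac, time

    def momentary_proof(activity: list, window: int, secret: bytes) -> bytes:
        """Hash the last `window` activity records into a short-lived token."""
        recent = "|".join(activity[-window:])
        epoch = int(time.time()) // 300  # also expire the token every 5 minutes
        return hmac.new(secret, f"{epoch}:{recent}".encode(), hashlib.sha256).digest()

    # Both the user and the institution can compute the token from the
    # activity stream they share (e.g. recent transactions):
    shared_activity = ["card@coffee_shop", "oyster@victoria", "atm@highbury"]
    shared_secret = b"per-relationship key"

    token_user = momentary_proof(shared_activity, 3, shared_secret)
    token_bank = momentary_proof(shared_activity, 3, shared_secret)
    print("proof accepted:", hmac.compare_digest(token_user, token_bank))
    # After use the token is worthless: the next transaction changes it.

Because each side can equally demand a token from the other, the same construction hints at the two-way authentication mentioned above.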

As an aside, it was remarkable how many people at this week's meeting were more concerned about having their Gmail accounts hacked than their bank accounts. My reasoning is that the stakes are higher: I'd rather lose my email reputation than my house. Their reasoning is that the banking industry is more responsive to customer problems than technology companies. That truly represents a shift from 1997, when technology companies were smaller and more responsive.

More to come on these discussions...


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 23, 2011

Your grandmother's phone

In my early 20s I had a friend who was an expert at driving cars with...let's call them quirks. If he had to turn the steering wheel 15 degrees to the right to keep the car going straight while peering between smears left by the windshield wipers and pressing just the exact right amount on the brake pedal, no problem. This is the beauty of humans: we are adaptable. That characteristic has made us the dominant species on the planet, since we can adapt to changes of habitat, food sources, climate (within reason), and cohorts. We also adapt to our tools, which is why technology designers get away with flaws like the iPhone's "death grip". We don't like it - but we can deal with it.

At least, we can deal with it when we know what's going on. At this week's Senior Market Mobile, the image that stuck in everyone's mind came early in the day, when Cambridge researchers Ian Hosking and Mike Bradley played a video clip of a 78-year-old woman trying to figure out how to get past an iPad's locked screen. Was it her fault that it seemed logical to her to hold it in one hand while jabbing at it in frustration? As Donald Norman wrote 20 years ago, for an interface to be intuitive it has to match the user's mental model of how it works.

That 78-year-old's difficulties, when compared with the glowing story of the 100-year-old who bonded instantly with her iPad, make another point: age is only one aspect of a person's existence - and one whose relevance they may reject. If you're having trouble reading small type, remembering the menu layout, pushing the buttons, or hearing a phone call, what matters isn't that you're old but that you have vision impairment, cognitive difficulties, less dextrous fingers, or hearing loss. You don't have to be old to have any of those things - and not all old people have them.

For those reasons, the design decisions intended to aid seniors - who, my God, are defined as anyone over 55! - aid many other people too. All of these points were made with clarity by Mark Beasley, whose company specializes in marketing to seniors - you know, people who, unlike predominantly 30-something designers and marketers, don't think they're old and who resent being lumped together with a load of others with very different needs on the basis of age. And who think it's not uncool to be over 50. (How ironic, considering that when the Baby Boomers were 18 they minted the slogan, "Never trust anyone over 30.")

Besides physical attributes and capabilities, the cultural background of a target audience matters more than their age per se. We who learned to type on manual typewriters bash keyboards a lot harder than those who grew up with computers. Those who grew up with the phone grudgingly sited in the hallway, using it only for the briefest of conversations, are less likely to be geared toward settling in for a long, loud, intimate conversation on a public street.

Last year at this event, Mobile Industry Review editor Ewan McLeod lambasted the industry because even the iPhone did not effectively serve his parents' greatest need: an easy way to receive and enjoy pictures of their grandkids. This year, Stuart Arnott showed off a partial answer, Mindings, a free app for Android tablets that turns them into smart display frames. You can send them pictures or text messages or, in Arnott's example, a reminder to take medication that, when acknowledged by a touch, goes on to display the picture or message the owner really wants to see.

Another project in progress, Threedom, is an attempt to create an Android design with only three buttons, using big icons and type to provide all the same functionality very simply.

The problem with all of this - which Arnott seems to have grasped with Mindings - is that so many of these discussions focus on the mobile phone as a device in isolation. But that's not really learning the lesson of the iPod/iPhone/iPad, which is that what matters is the ecology surrounding the device. It is true that a proportion of today's elderly do not use computers or understand why they suddenly need a mobile phone. But tomorrow's elderly will be radically different. Depending on class and profession, people who are 60 now are likely to have spent many years of their working lives using computers and mobile phones. When they reach 86, what will dictate their choice of phone will be only partly whatever impairments age may bring. A much bigger issue is going to be the legacy and other systems that the phone has to work with: implantable electronic medical devices, smart electrical meters, ancient software in use because it's familiar (and has too much data locked inside it), maybe even that smart house they keep telling us we're going to have one of these days. Those phones are going to have to do a lot more than just make it easy to call your son.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 2, 2011

White rabbits

I feel like I am watching magicians holding black top hats. They do...you're not sure what...to a mess of hexagonal output on the projection screen so comprehensible words appear...and people laugh. And then some command line screens flash in and out before your eyes and something absurd and out-of-place appears, like the Windows calculator, and everyone applauds. I am at 44con, a less-crazed London offshoot of the Defcon-style mix of security and hacking. Although, this being Britain, they're pushing the sponsored beer.

In this way we move through exploits: iOS, Windows Phone 7, and SAP, whose protocols are pulled apart by Sensepost's Ian de Villiers. And after that Trusteer Rapport, which seems to be favored by banks and other financial services, and disliked by everyone else. All these talks leave a slightly bruised feeling, not so much like you'd do better to eschew all electronics and move to a hut on a deserted beach without a phone as that even if you did that you'd be vulnerable to other people's decisions. While exploring the inner workings of USB flash drives (PDF), for example, Phil Polstra noted in passing that the Windows Registry logs every single time you insert one. I knew my computer tracked me, but I didn't quite realize the full extent.
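
For the curious (and Windows-equipped), the USBSTOR registry key is a real, well-documented forensic artifact; a few lines of Python are enough to list the drives your machine remembers. This is only a sketch - value layouts vary by Windows version, and the timing information lives in the keys' last-write timestamps rather than in the names printed here:

    # Windows-only: list the USB storage devices the Registry remembers.
    import winreg

    USBSTOR = r"SYSTEM\CurrentControlSet\Enum\USBSTOR"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR) as root:
        for i in range(winreg.QueryInfoKey(root)[0]):   # device-class subkeys
            device = winreg.EnumKey(root, i)            # e.g. "Disk&Ven_SanDisk&..."
            with winreg.OpenKey(root, device) as dev:
                for j in range(winreg.QueryInfoKey(dev)[0]):
                    # each subkey is a unique instance ID (often the serial number)
                    print(device, "->", winreg.EnumKey(dev, j))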

The bit of magic that most clearly makes this point is Maltego. This demonstration displays neither hexagonal code nor the Windows calculator, but rolls everything privacy advocates have warned about for years into one juicy tool that all the journalists present immediately start begging for. (This is not a phone hacking joke; this stuff could save acres of investigative time.) It's a form of search that turns a person or event into a colorful display of whirling dots (hits) that resolve into clusters. Its keeper, Roelof Temmingh, uses a mix of domain names, IP addresses, and geolocation to discover the Web sites White House users like to visit and tweets from the NSA parking lot. Version 4 - the first version of the software dates to 2007 - moves into real-time data mining.

Later, I ask a lawyer with a full, licensed copy to show me an ego search. We lack the time to finish, but our slower pace and diminished slickness make it plain that this software takes time and study to learn to drive. This is partly comforting: it means that the only people who can use it to do the full spy caper are professionals, rather than amateurs. Of course, those are the people who will also have - or be able to command - access to private databases that are closed to the rest of us, such as the utility companies' electronic customer records, which, when plugged in, can link cyberworld and real-world identities. "A one-click stalking machine," Temmingh calls it.

As if your mobile phone - camera, microphone, geolocation, email, and Web browsing history - weren't enough. One attendee tells me seriously that he would indeed go to jail for two years rather than give up his phone's password, even if compelled under the Regulation of Investigatory Powers Act. Even if your parents are sick and need you to take care of them? I ask. He seems to feel I'm unfairly moving the bar.

Earlier the current mantra that every Web site should offer secure HTTP came under fire. IOActive's Vincent Berg showed off how to figure out which grid tile of Google Maps and which Wikipedia pages someone has been looking at despite the connection's being carried over SSL. The basis of this is our old friend traffic analysis. It's not a great investigative tool because, as Berg himself points out, there would be many false positives, but side-channel leaks in Web pages are still a coming challenge (PDF). SSL has its well-documented problems, but "At some point the industry will get it right." We can but hope.
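
The underlying trick is easy to caricature: SSL encrypts content but not size, so an observer who has pre-fetched candidate pages can often match an encrypted response's length against that catalog. The pages and byte counts below are invented for illustration:

    # Toy traffic-analysis sketch: guess which page an encrypted
    # response was, by matching its observed size against a catalog.
    candidate_sizes = {            # observer's pre-built catalog: page -> bytes
        "wiki/Tor_(network)": 48213,
        "wiki/Knitting": 97601,
        "wiki/Dissident": 51876,
    }

    def guess_page(observed_len: int, tolerance: int = 200) -> list:
        """Return candidate pages within `tolerance` bytes of the observation."""
        return [page for page, size in candidate_sizes.items()
                if abs(size - observed_len) <= tolerance]

    # The eavesdropper sees only that *some* ~51.7KB page came back:
    print(guess_page(51790))       # -> ['wiki/Dissident']

As the catalog grows, so do the false positives Berg warned about - but padding responses to defeat the attack costs bandwidth, which is one reason the leak persists.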

It was left to Alex Conran, whose TV program The Real Hustle starts its tenth season on BBC Three on Monday, to wind things up by reminding us that the most enduring hacks are the human ones. Conran says that after perpetrating more than 500 scams on an unsuspecting public (and debunking them afterwards), he has concluded that just as Western music relies on endless permutations of the same seven notes, scams rely on variations on the same five elements. They will sound familiar to anyone who's read The Skeptic over the last 24 years.

The five: misdirection, social compliance, the love of a special deal, time pressure, social proof (or reinforcement). "Con men are the hackers of human nature", Conran said, but noted that part of the point of his show is that if you educate people about the risks, they will take the necessary steps to protect themselves. He then dispensed this piece of advice: if you want to control the world, buy a hi-vis jacket. They're cheap, and when you're wearing one, apparently anyone you meet will do anything you tell them without question. No magic necessary.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

August 12, 2011

"Phony concerns about human rights"

Why can't you both condemn violent rioting and looting *and* care about civil liberties?

One comment of David Cameron's yesterday in the Commons hit a nerve: that "phony" (or "phoney", if you're British) human rights concerns would not get in the way of publishing CCTV images in the interests of bringing the looters and rioters to justice. Here's why it bothers me: even the most radical pro-privacy campaigner is not suggesting that using these images in this way is wrong. But in saying it, Cameron placed human rights on the side of lawlessness. One can oppose the privacy invasiveness of embedding crowdsourced facial recognition into Facebook and still support the use of the same techniques by law enforcement to identify criminals.

It may seem picky to focus on one phrase in a long speech in a crisis, but this kind of thinking is endemic - and, when it's coupled with bad things happening and a need for politicians to respond quickly and decisively, dangerous. Cameron shortly followed it with the suggestion that it might be appropriate to shut down access to social media sites when they are being used to plan "violence, disorder and criminality".

Consider the logic there: given the size of the population, there are probably people right now planning crimes over pints of beer in pubs, over the phone, and sitting in top-level corporate boardrooms. Fellow ORG advisory council member Kevin Marks blogs a neat comparison by Douglas Adams to cups of tea. But no, let's focus on social media.

Louise Mensch, MP and novelist, was impressive during the phone hacking hearings aside from her big gaffe about Piers Morgan. But she's made another mistake here in suggesting that taking Twitter and/or Facebook down for an hour during an emergency is about like shutting down a road or a railway station.

First of all, shutting down the tube in the affected areas has costs: innocent bystanders were left with no means to escape their violent surroundings. (This is the same thinking that wanted to shut down the tube on New Year's Eve 1999 to keep people out of central London.)

But more important, the comparison is wrong. Shutting down social networks is the modern equivalent of shutting down radio, TV, and telephones, not transport. The comparison suggests that Mensch is someone who uses social media for self-promotion rather than, like many of us, as a real-time news source and connector to friends and family. This is someone for whom social media are a late add-on to an already-structured life; in 1992 an Internet outage was regarded as a non-issue, too. The ability to use social media in an emergency surely takes pressure off the telephone network by helping people reassure friends and family, avoid trouble areas, find ways home, and so on. Are there rumors and misinformation? Sure. That's why journalists check stuff out before publishing it (we hope). But those are vastly overshadowed by the amount of useful and timely updates.

Is barring access even possible? As Ben Rooney writes in the Wall Street Journal Europe, it's hard enough to ground one teenager these days, let alone a countryful. But let's say they decide to try. What approaches can they take?

One: The 95 percent approach. Shut down access to the biggest social media sites and hope that the crimes aren't being planned on the ones you haven't touched. Like the network that the Guardian found was really used - Blackberry messaging.

Two: The Minority Report approach. Develop natural language processing and artificial intelligence technology to the point where it can interact on the social networks, spot prospective troublemakers, and turn them in before they commit crimes.

Three: The passive approach. Revive all the net.wars of the past two decades. Reinstate the real-world policing. One of the most important drawbacks to relying on mass surveillance technologies is that they encourage a reactive, almost passive, style of law enforcement. Knowing that the police can catch the crooks later is no comfort when your shop is being smashed up. It's a curious, schizophrenic mindset politicians have: blame social ills on new technology while imagining that other new technology can solve them.

The riots have ended - at least for now - but we will have to live for a long time with the decisions we make about what comes next. Let's not be hasty. Think of the PATRIOT Act, which will be ten years old soon.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 8, 2011

The grey hour

There is a fundamental conundrum that goes like this. Users want free information services on the Web. Advertisers will support those services if users will pay in personal data rather than money. Are privacy advocates spoiling a happy agreement or expressing a widely held concern that just hasn't found expression yet? Is it paternalistic and patronizing to say that the man on the Clapham omnibus doesn't understand the value of what he's giving up? Is it an expression of faith in human nature to say that on the contrary, people on the street are smart, and should be trusted to make informed choices in an area where even the experts aren't sure what the choices mean? Or does allowing advertisers free rein mean the Internet will become a highly distorted, discriminatory, immersive space where the most valuable people get the best offers in everything from health to politics?

None of those questions are straw men. The middle two are the extreme end of the industry point of view as presented at the Online Behavioral Advertising Workshop sponsored by the University of Edinburgh this week. That extreme shouldn't be ignored; Kimon Zorbas from the Internet Advertising Bureau, who voiced those views, also genuinely believes that regulating behavioral advertising is a threat to European industry. Can you prove him wrong? If you're a politician intent on reelection, hear that pitch, and can't document harm, do you dare to risk it?

At the other extreme end are the views of Jeff Chester, from the Center for Digital Democracy, who laid out his view of the future both here and at CFP a few weeks ago. If you read the reports the advertising industry produces for its prospective customers, they're full of neuroscience and eyeball tracking. Eventually, these practices will lead, he argues, to a highly discriminatory society: the most "valuable" people will get the best offers - not just in free tickets to sporting events but in the best access to financial and health services. Online advertising contributed to the subprime loan crisis and the obesity crisis, he said. You want harm?

It's hard to assess the reality of Chester's argument. I trust his research, which combs the documents in which advertising companies make their pitch to prospective customers. What isn't clear is whether the neuroscience these companies claim actually works. Certainly, one participant here says real neuroscientists heap scorn on the whole idea - and I am old enough to remember the mythology surrounding subliminal advertising.

Accordingly, the discussion here seems to me less of a single spectrum and more like a triangle, with the defenders of online behavioral advertising at one point, Chester and his neuroscience at another, and perhaps Judith Rauhofer, the workshop's organizer, at a third, with a lot of messy confusion in the middle. Upcoming laws, such as the revision of the EU ePrivacy Directive and various other regulatory efforts, will have to create some consensual order out of this triangular chaos.

The fourth episode of Joss Whedon's TV series Dollhouse, "The Gray Hour", had that week's characters enclosed inside a vault with an hour - the time it takes for the security system to reboot - to accomplish their mission of theft. Is this online behavioral advertising's grey hour? Their opportunity to get ahead before we realize what's going on?

A persistent issue is definitely technology design.

One of Rauhofer's main points is that the latest mantra is, "This data exists, it would be silly not to take advantage of it." This is her answer to one of those middle points, that we should not be regulating collection but simply the use of data. This view makes sense to me: no one can abuse data that has not been collected. What does a privacy policy mean when the company that is actually collecting the data and compiling profiles is completely hidden?

One help would be teaching computer science students ethics and responsible data practices. The science fiction writer Charlie Stross noted the other day that the average age of entrepreneurs in the US is roughly ten years younger than in the EU. The reason: health insurance. Isn't it possible that starting up at a more mature age leads to a different approach to the social impact of what you're selling?

No one approach will solve this problem within the time we have to solve it. On the technology side, defaults matter. The "software choice architect", in researcher Chris Soghoian's phrase, is rarely the software developer, more usually the legal or marketing department. The three biggest browser manufacturers most funded by advertising not-so-mysteriously have the least privacy-friendly default settings. Advertising is becoming an arms race: first cookies, then Flash cookies, now online behavioral advertising, browser fingerprinting, geolocation, comprehensive profiling.

The law also matters. Peter Hustinx, lecturing last night, believes existing principles are right; they just need stronger enforcement and better application.

Consumer education would help - but for that to be effective we need far greater transparency from all these - largely American - companies.

What harm can you show has happened? Zorbas challenged. Rauhofer's reply: you do not have to prove harm when your house is bugged and constantly wiretapped. "That it's happening is the harm."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 17, 2011

If you build it...

Lawrence Lessig once famously wrote that "Code is law". Today, at the last day of this year's Computers, Freedom, and Privacy, Ross Anderson's talk about the risks of centralized databases suggested a corollary: Architecture is policy. (A great line and all mine, so I thought, until reminded that only last year CFP had an EFF-hosted panel called exactly that.)

You may *say* that you value patient (for example) privacy. And you may believe that your role-based access rules will be sufficient to protect a centralized database of personal health information (for example), but do the math. The NHS's central database, Anderson said, includes data on 50 million people that is accessible by 800,000 people - about the same number as had access to the diplomatic cables that wound up being published by Wikileaks. And we all saw how well that worked. (Perhaps the Wikileaks Unit could be pressed into service as a measure of security risk.)

So if you want privacy-protective systems, you want the person vendors build for - "the man with the checkbook" - to be someone who understands what policies will actually be implemented by your architecture and who will be around the table at the top level of government, where policy is being drafted. When the man with the checkbook is a doctor, you get a very different, much more functional, much more privacy-protective system. When governments recruit and listen to a CIO you do not get a giant centralized, administratively convenient Wikileaks Unit.

How big is the threat?

Assessing that depends a lot, said Bruce Schneier, on whether you accept the rhetoric of cyberwar (Americans, he noted, are only willing to use the word "war" when there are no actual bodies involved). If we are at war, we are a population to be subdued; if we are in peacetime we are citizens to protect. The more the rhetoric around cyberwar takes over the headlines, the harder it will be to get privacy protection accepted as an important value. So many other debates all unfold differently depending whether we are rhetorically at war or at peace: attribution and anonymity; the Internet kill switch; built-in and pervasive wiretapping. The decisions we make to defend ourselves in wartime are the same ones that make us more vulnerable in peacetime.

"Privacy is a luxury in wartime."

Instead, "This" - Stuxnet, attacks on Sony and Citibank, state-tolerated (if not state-sponsored) hacking - "is what cyberspace looks like in peacetime." He might have, but didn't, say, "This is the new normal." But if on the Internet in 1995 no one knew you were a dog; on the Internet in 2011 no one knows whether your cyberattack was launched by a government-sponsored military operation or a couple of guys in a Senegalese cybercafé.

Why Senegalese? Because earlier, Mouhamadou Lo, a legal advisor from the Computing Agency of Senegal, had explained that cybercrime affects everyone. "Every street has two or three cybercafés," he said. "People stay there morning to evening and send spam around the world." And every day in his own country there are one or two victims. "It shows that cybercrime is worldwide."

And not only crime. The picture of a young Senegalese woman, posted on Facebook, appeared in the press in connection with the Strauss-Kahn affair because it seemed to correspond to a description given of the woman in the case. She did nothing wrong, but there are still consequences back home.

Somehow I doubt the solution to any of this will be found in the trend the ACLU's Jay Stanley and others highlighted towards robot policing. Forget black helicopters and CCTV; what about infrared cameras that capture private moments in the dark and helicopters the size of hummingbirds that "hover and stare"? The mayor of Ogden, Utah wants blimps over his city, and, as Vernon M Keenan, director of the Georgia Bureau of Investigation, put it, "Law enforcement does not do a good job of looking at new technologies through the prism of civil liberties."

Imagine, said the ACLU's Jay Stanley: "The chilling prospect of 100 percent enforcement."

Final conference thoughts, in no particular order:

- This is the first year of CFP (and I've been going since 1994) where Europe and the UK are well ahead on considering a number of issues. One was geotracking (Europe has always been ahead in mobile phones); but also electronic health care records and how to manage liability for online content. "Learn from our mistakes!" pleaded one Dutch speaker (re health records).

- #followfriday: @sfmnemonic; @privacywonk; @ehasbrouck; @CenDemTech; @openrightsgroup; @privacyint; @epic; @cfp11.

- The market in secondary use of health care data is now $2 billion (PricewaterhouseCoopers via Latanya Sweeney).

- Index on Censorship has a more thorough write-up of Bruce Schneier's talk.

- Today was IBM's 100th birthday.

- This year's chairs, Lillie Coney (EPIC) and Jules Polonetsky, did an exceptional job of finding a truly diverse range of speakers. A rarity at technology-related conferences.

- Join the weekly Twitter #privchat, Tuesdays at noon Eastern US time, hosted by the Center for Democracy and Technology.

- Have a good year, everybody! See you at CFP 2012 (and here every Friday until then).

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 16, 2011

The democracy divide (CFP2011 Day 2)

Good news: the Transportation Security Administration audited itself and found it was doing pretty well. At least, so said Kimberly Walton, special counsellor to the administrator for the TSA.

It's always tough when you're the raw meat served up to the Computers, Freedom, and Privacy crowd, and Walton was appropriately complimented for her courage in appearing. But still: we learned little that was new, other than that the TSA wants to move to a system of identifying people who need to be scrutinized more closely.

"Like CAPPS-II?" asked the ACLU's Daniel Mach. "It was a terrible idea."

No. It's different. Exactly how, Walton couldn't say. Yet.

Americans spent the latter portion of last year protesting the TSA's policies - but little has happened. Why? It's arguable that a lot has to do with a lot of those protests being online complaints rather than massed ranks of rebellious passengers at airport terminals. And a lot has to do with the fact that FOIA requests and lawsuits move slowly. The ACLU, said Ginger McCall, has been unable to get any answers from the TSA except by lawsuit.

Apparently it's easier to topple a government.

"Instead of the reign of terror, the reign of terrified," said Deborah Hurley.(CFP2001 chair) during the panel considering the question of social media's role in the upheavals in Egypt and Tunisia. Those on the ground - Jillian York, Nasser Weddady, Mona Eltawy - say instead that social media enabled little pockets of protest, sometimes as small as just one individual, to find each other and coalesce like the pooling blobs reforming into the liquid metal man in Terminator 2. But what appeared to be sudden reversals of rulers' fortunes to outsiders who weren't paying attention were instead the culmination of years of small rebellions.

The biggest contributor may have been video, providing non-repudiable evidence of human rights abuses. When Tunisia's President Zine al-Abidine Ben Ali blocked video sharing sites, Tunisians turned to Facebook.

"Facebook has a lot of problems with freedom of expression," said York, "but it became the platform of choice because it was accessible, and Tunisia never managed to block it for more than a couple of weeks because when they did there were street protests."

Technology may or may not be neutral, but its context never is. In the US for many years, Section 230 of the Communications Decency Act has granted somewhat greater protection to online speech than to that in traditional media. The EU long ago settled these questions by creating the framework of notice-and-takedown rules and generally refusing to award online speech any special treatment. (You may like to check out EDRI's response to the ecommerce directive (PDF).)

Paul Levy, a lawyer with Public Citizen and organizer of the S230 discussion, didn't like the sound of this. It would be, he argued, too easy for the unhappily criticized to contact site owners and threaten to sue: the heckler's veto can trump any technology, neutral or not.

What, Hurley asked Google's policy director, Bob Boorstin, to close the day, would be the one thing he would do to improve individuals' right to self-determination? Give them more secure mobile devices, he replied. "The future is all about what you hold in your hand." Across town, a little earlier, Senators Franken and Blumenthal introduced the Location Privacy Protection Act 2011.

Certainly, mobile devices - especially Speak to Tweet - gave Africa's dissidents a direct way to get their messages out. But at the same time, the tools used by dictators to censor and suppress Internet speech are those created by (almost entirely) US companies.

Said Weddady in some frustration, "Weapons are highly regulated. If you're trading in fighter jets there are very stringent frames of regulations that prevent these things from falling into the wrong hands. What is there for the Internet? Not much." Worse, he said, no one seems to be putting political will behind enforcing the rules that do exist. In the West we argue about filtering as a philosophical issue. Elsewhere, he said, it's life or death. "What am I worth if my ideas remain locked in my head?"

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 15, 2011

Public private lives

A bookshop assistant followed me home the other day, wrote down my street address, took a photograph of my house. Ever since, every morning I find an advertising banner draped over my car windshield that I have to remove before I can drive to work.

That is, of course, a fantasy scenario. But it's an attempt to describe what some of today's Web site practices would look like if transferred into the physical world. That shops do not follow you home is why the analogy between Web tracking and walking on a public street or going into a shop doesn't work. It was raised by Jim Harper, the director of information policy studies at the Cato Institute, on the first day of ACM Computers, Freedom, and Privacy, at his panel on the US's Do Not Track legislation. Casual observers on the street are not watching you in a systematic way; you can visit a shop anonymously, and, depending on its size and the number of staff, you may or may not be recognized the next time you visit.

This is not how the Web works. Web sites can fingerprint your browser by the ecology of add-ins that are peculiar to you and use technologies such as cookies and Flash cookies to track you across the Web and serve up behaviorally targeted ads. The key element - and why this is different from, say, using Gmail, which also analyzes content to post contextual ads - is that all of this is invisible to the consumer. As Harlan Yu, a PhD student in computer science at Princeton, said, advertisers and consumers are in an arms race. How wrong is this?
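
A bare-bones sketch of the fingerprinting idea, with invented attribute values (real fingerprinters, as the EFF's Panopticlick experiment showed, harvest many more signals - fonts, canvas behavior, timezone - and measure how much identifying entropy each contributes):

    # Browser fingerprinting in miniature: hash together attributes the
    # browser volunteers with every request. No cookie required - the
    # same combination yields the same ID on the next visit.
    import hashlib

    def fingerprint(attributes: dict) -> str:
        """Derive a stable ID from a combination of per-browser attributes."""
        blob = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
        return hashlib.sha256(blob.encode()).hexdigest()[:16]

    visitor = {
        "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/7.0",
        "screen": "1440x900x24",
        "plugins": "Flash 10.3;QuickTime;Java 1.6",
        "accept_language": "en-GB,en;q=0.8",
    }
    print(fingerprint(visitor))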

Clearly, enough consumers find behavioral targeting creepy enough that there is a small but real ecology of ad-blocking technologies - the balking consumer side of the arms race - including everything from Flashblock and Adblock for Mozilla to the do-not-track setting in the latest version of Internet Explorer. (Though there are more reasons to turn off ads than privacy concerns: I block them because anything moving or blinking on a page I'm trying to read is unbearably distracting.)

Harper addressed his warring panellists by asking the legislation's opponents, "Why do you think the Internet should be allowed to prey on the entrails of the hapless consumer?" And of the legislation's sympathizers, "What did the Internet ever do to you that you want to drown it in the bathtub?"

Much of the ensuing, very lively discussion centered on the issue of trade-offs, something that's been discussed here many times: if users all opt out of receiving ads, what will fund free content? Nah, said Ed Felten, on leave from Princeton for a stint at the FTC, what's at stake is behaviorally targeted ads, not *all* ads.

The good news is that although it's the older generation who are most concerned about issues like behavioral targeting, teens have their own privacy concerns. My own belief for years has been that gloomy prognostications that teens do not care about privacy are all wrong. Teens certainly do value their privacy; it's just that their threat model is their parents. To a large extent Danah Boyd provided evidence for this view. Teens, she said, faced with the constant surveillance of well-meaning but intrusive teachers and parents, develop all sorts of strategies to live their private lives in public. One teen deactivates her Facebook profile every morning and reactivates it to use at night, when she knows her parents won't be looking. Another works hard to separate his friends list into groups so he can talk to each in the manner they expect. A third practices a sort of steganography, hiding her meaning in plain sight by encoding it in cultural references she knows her friends will understand but her mother will misinterpret.

Meantime, the FTC is gearing up to come down hard on mobile privacy. Commissioner Edith Ramirez of course favors consumer education, but she noted that the FTC will be taking a hard line with the handful of large companies who act as gatekeepers to the mobile world. Google, which violated Gmail users' privacy by integrating the social networking facility Buzz without first asking consent, will have to submit to privacy audits for the next 20 years. Twitter, whose private messaging was broken into by hackers, will be audited for the next ten years - twice as long as the company has been in existence.

"No company wants to be the subject of an FTC enforcement action," she said. "What happens next is largely in industry's hands." Engineers and developers, she said, should provide voluntary, workable solutions.

Europeans like to think the EU manages privacy somewhat better, but one of the key lessons to emerge from the first panel of the day, a compare-and-contrast discussion of data-sharing between the EU and the US, was that there's greater parity than you might think. What matters, said Edward Hasbrouck, is not data protection but how the use of data affects fundamental rights - to fly or transfer money.

In that discussion, the Department of Homeland Security representative, Mary Ellen Callahan, argued that the US is much more protective of privacy than a simple comparison of data protection laws might suggest. (There is a slew of pieces of US privacy legislation in progress.) The US operates fewer wiretaps by a factor of thousands, she argued, and is far more transparent.

Ah, yes, said Frank Schmiedel, answering questions to supplement the videotaped appearance of European Commission vice-president Viviane Reding, but if the US is going to persist in its demand that the EU transfer passenger name record, financial, and other data, one of these days, Alice, one of these days...the EU may come knocking, expecting reciprocity. Won't that be fun?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 14, 2011

Untrusted systems

Why does no one trust patients?

On the TV series House, the eponymous sort-of-hero has a simple answer: "Everybody lies." Because he believes this, and because no one appears able to stop him, he sends his minions to search his patients' homes hoping they will find clues to the obscure ailments he's trying to diagnose.

Today's Health Privacy Summit in Washington, DC, the zeroth day of this year's Computers, Freedom, and Privacy conference, pulled together, in the best Computers, Freedom, and Privacy tradition, speakers from all aspects of health care privacy. Yet many of them agreed on one thing: health data is complex, decisions about health data are complex, and it's demanding too much of patients to expect them to be able to navigate these complex waters. And this is in the US, where to a much larger extent than in Europe the patient is the customer. In the UK, by contrast, the customer is really the GP and the patient has far less direct control. (Just try looking up a specialist in the phone book.)

The reality is, however, as several speakers pointed out, that doctors are not going to surrender control of their data either. Both physicians and patients have an interest in medical records. Patients need to know about their care; doctors need records both for patient care and for billing and administrative purposes. But beyond these two parties are many other interests who would like access to the intimate information doctors and patients originate: insurers, researchers, marketers, governments, epidemiologists. Yet no one really trusts patients to agree to hand over their data; if they did, these decisions would be a lot simpler. But if patients can't trust their doctor's confidentiality, they will avoid seeking health care until they're in a crisis. In some situations - say, cancer - that can end their lives much sooner than is necessary.

The loss of trust, said lawyer Jim Pyles, could bring on an insurance crisis: the potential cost of electronic privacy breaches is effectively unlimited, while insurers' capacity to cover those breaches is not. "If you cannot get insurance for these systems you cannot use them."

If this all (except for the insurance concerns) sounds familiar to UK folk, it's not surprising. As Ross Anderson pointed out, greatly to the Americans' surprise, the UK is way ahead on this particular debate. Nationalized medicine meant that discussions began in the UK as long ago as 1992.

One of Anderson's repeated points is that the notion of the electronic patient record has little to do with the day-to-day reality of patient care. Clinicians, particularly in emergency situations, want to look at the patient. As you want them to do: they might have the wrong record, but you know they haven't got the wrong patient.

"The record is not the patient," said Westley Clarke, and he was so right that this statement was repeated by several subsequent speakers.

One thing that apparently hasn't helped much is the Health Insurance Portability and Accountability Act, which one of the breakout sessions considered scrapping. Is HIPAA a failure or, as long-time Canadian privacy activist Stephanie Perrin would prefer it, a first step? The distinction is important: if HIPAA is seen as an expensive failure it might be scrapped and not replaced. First steps can be succeeded by further, better steps.

Perhaps the first of those should be another of Perrin's suggestions: a map of where your data goes, much as Barbara Garson's book Money Makes the World Go Around? followed her bank deposit as it was loaned out across the world. Most of us would like to believe that what we tell our doctors remains cosily tucked away in their files. These days, not so much.

For more detail see Andy Oram's blog.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

May 27, 2011

Mixed media

In a fight between technology and the law, who wins? This question has been debated since Net immemorial. Techies often seem to be sure that law can't win against practical action. And often this has been true: the release of PGP defeated the International Traffic in Arms Regulations that banned the export of strong cryptography; TOR lets people all over the world bypass local Net censorship rules; and, in the UK, over the last few weeks Twitter has been causing superinjunctions to collapse.

On the other hand, technology by itself is often not enough. The final defeat of the ITAR had at least as much to do with the expansion of ecommerce and the consequent need for secured connections as it did with PGP. TOR is a fine project, but it is not a mainstream technology. And Twitter is a commercial company that can be compelled to disclose what information it has about its users (though granted, this may be minimal) or close down accounts.

Last week, two events took complementary approaches to this question. The first, Big Tent UK, hosted by Google, Privacy International, and Index on Censorship, featured panels and discussions loosely focused on how law can control technology. The second, OpenTech, loosely focused on how technology can change our understanding of the world, if not up-end the law itself. At the latter event, projects like Lisa Evans' effort to understand government spending relied on government-published data, while others, such as OpenStreetMap and OpenCorporates, sought to create open-source alternatives to existing proprietary services.

There's no question that doing things - or, in my case, egging on people who are doing things - is more fun than purely intellectual debate. I particularly liked the open-source hardware projects presented at OpenTech, some of which are, as presenter Paul Downey said, trying to disrupt a closed market. See, for example, Riversimple's effort to offer an open-source design for a hydrogen-powered car. Downey whipped through perhaps a dozen projects, all based on the notion that if something can be represented by lines on a PowerPoint slide you can send it to a laser cutter.

But here again I suspect the law will interfere at some point. Not only will open-source cars have to obey safety regulations, but all hardware designs will come up against the same intellectual property issues that have been dogging the Net from all directions. We've noted before Simon Bradshaw's work showing that copyright as applied to three-dimensional objects will be even more of a rat's nest than it has been when applied to "simple" things like books, music, and movies.

At Big Tent UK, copyright was given a rest for once in favor of discussions of privacy, the limits of free speech, and revolution. As is so often the case with this type of discussion, it wasn't long before someone - British TV producer Peter Bazalgette - invoked George Orwell. Bizarrely, he aimed "Orwellian" at Privacy International executive director Simon Davies, who a minute before had proposed that the solution to at least some of the world's ongoing privacy woes would be for regulators internationally to collaborate on doing their jobs. Oddly, in an audience full of leading digital rights activists and entrepreneurs, no one admitted to representing the Information Commissioner's office.

Yet given these policy discussions as his prelude, the MP Jeremy Hunt (Con-South West Surrey), the secretary of state for Culture, Olympics, Media, and Sport, focused instead on technical progress. We need two things for the future, he said: speed and mobility. Here he cited Bazalgette's great-great-grandfather's contribution to building the sewer system as a helpful model for today. Tasked with deciding the size of pipes to specify for London's then-new sewer system, Joseph Bazalgette doubled the size of pipe necessary to serve the area of London with the biggest demand; we still use those same pipes. We should, said Hunt, build bandwidth in the same foresighted way.

The modern-day Bazalgette, instead, wants the right to be forgotten: people, he said, should have the right to delete any information that they voluntarily surrender. Much like Justine Roberts, the founder of Mumsnet, who participated in the free speech panel, he seemed not to understand the consequences of what he was asking for. Roberts complained that the "slightly hysterical response" to any suggestion of moderating free speech in the interests of child safety inhibits real discussion; the right to delete is not easily implemented when people are embedded in a three-dimensional web of information.

The Big Tent panels on revolution and conflict would have fit either event, including Wael Ghonim, who ran a Facebook page that fomented pro-democracy demonstrations in Egypt, and representatives of PAX and Unitar, projects that use the postings of "citizen journalists" and public image streams respectively to provide early warnings of developing conflict.

In the end, we need both technology and law, a viewpoint best encapsulated by Index on Censorship chief executive John Kampfner, who said he was worried by claims that the Internet is a force for good. "The Internet is a medium, a tool," he said. "You can choose to use it for moral good or moral ill."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 18, 2011

Block party

When last seen in net.wars, the Internet Watch Foundation was going through the most embarrassing moment of its relatively short life: the time it blocked a Wikipedia page. It survived, of course, and on Tuesday this week it handed out copies of its latest annual report (PDF) and its strategic plan for the years 2011 to 2014 (PDF) in the Strangers Dining Room at the House of Commons.

The event was, more or less, the IWF's birthday party: in August it will be 15 years since the first outline of the IWF got its suspicious, even hostile, first presentation in 1996. It was an uneasy compromise between an industry accused of facilitating child abuse, law enforcement threatening technically inept action, and politicians anxious to be seen to be doing something, all heightened by some of the worst mainstream media reporting I've ever seen.

Suspicious or not, the IWF has achieved traction. It has kept government out of the direct censorship business and politicians and law enforcement reasonably satisfied. Without - as was pointed out - cost to the taxpayer, since the IWF is funded from a mix of grants, donations, and ISPs' subscription fees.

And to be fair, it has been arguably successful at doing what it set out to do, which is to disrupt the online distribution of illegal pornographic images of children within the UK. The IWF has reported for some years now that the percentage of such images hosted within the UK is near zero. On Tuesday, it said the time it takes to get foreign-hosted content taken down has halved. Its forward plan includes more of the same, plus pushing more into international work by promoting the use of its URL list abroad and developing partnerships.

Over at The Register Jane Fae Ozimek has done a good job of tallying up the numbers the IWF reported, and also of following up on remarks made by Culture Minister Ed Vaizey and Home Office Minister James Brokenshire that suggested the IWF or its methods might be expanded to cover other categories of material. So I won't rehash either topic here.

Instead, what struck me is the IWF's report that a significant percentage of its work now concerns sexual abuse images and videos that are commercially distributed. This news offered a brief glance into a shadowy world that none of us can legally study, since under UK law (and the laws of many other countries) it's illegal to access such material. If this is a correct assessment, it certainly follows the same pattern as the world of malware writing, which has progressed from the giggling, maladjusted teenager writing a bit of disruptive code in his bedroom to a highly organized, criminal, upside-down image of the commercial software world (complete, I'm told by experts from companies like Symantec and Sophos, with product trials, customer support, and update patches). Similarly, our, or at least my, image was always of like-minded amateurs exchanging copies of the things they managed to pick up rather like twisted stamp collectors.

The IWF report says it has identified 715 such commercial sources, 321 of which were active in 2010. At least 47.7 percent of the commercially branded material is produced by the top ten, and the most prolific of these brands used 862 URLs. The IWF has attempted to analyze these brands, and believes that they are operated in clusters by criminals. To quote the report:

Each of the webpages or websites is a gateway to hundreds or even thousands of individual images or videos of children being sexually abused, supported by layers of payment mechanisms, content stores, membership systems, and advertising frames. Payment systems may include pre-pay cards, credit cards, "virtual money" or e-payment systems, and may be carried out across secure webpages, text, or email.

This is not what people predicted when they warned at the original meeting that blocking access to content would drive it underground into locations that were harder to police. I don't recall anyone saying: it will be like Prohibition and create a new Mafia. How big a problem this is and how it relates to events like yesterday's shutdown of boylovers.net remains to be seen. But there's logic to it: anything that's scarce attracts a high price and anything high-priced and illegal attracts dedicated criminals. So we have to ask: would our children be safer if the IWF were less successful?

The IWF will, I think, always be a compromise. Civil libertarians will always be rightly suspicious of any organization that has the authority and power to shut down access to content, online or off. Still, the IWF's ten-person board now includes, alongside the representatives of ISPs, top content sites, and academics, a consumer representative, and seems to be less dominated by repressive law enforcement interests. There's an independent audit in the offing, and while the IWF publishes no details of its block list for researchers to examine, it advocates transparency in the form of a splash screen that tells users that a site is blocked and why. They learned a lot from the Wikipedia incident, the IWF's departing head, Peter Robbins, said in conversation.

My summary: the organization will know it has its balance exactly right when everyone on all sides has something to complain about.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

February 11, 2011

Question, explore, discover...action!

Here's a thing I bet you don't know: when 350 people simultaneously dump a small vialful of sugar pills (also known as 31C homeopathic belladonna) into their mouths and bite down, it makes a helluva CRUNCH.

In this case, the noise was heard around the world, even in Antarctica. (How cool is that?)

It was a great stunt, but made a real point: homeopathic "remedies" rely on the notion that you can dilute a substance until there is nothing left of it and the stuff you dilute it with - sugar, water - will somehow "remember" the contact and relay the substance's effect. Which means that by the lights of anything we know about chemistry they have no effect beyond that of a placebo. Why, especially in this time of economic crisis, are we funding it on the National Health Service? Because, the (last) government said (PDF), efficacy is only one of many criteria, and...people like it. Equality of access to sugar pills, dontcha know.

The CRUNCH was at 10:23 on Sunday morning, the time (and the campaign name) chosen from Avogadro's number, which marks the point of dilution past which no molecule of the original substance remains in the solution. The bottle says belladonna; the reality is sugar pills.
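To make the arithmetic concrete, here's a back-of-the-envelope sketch (my own illustration, in Python, assuming you start from a full mole of belladonna tincture):

    AVOGADRO = 6.022e23   # molecules in one mole of any substance

    # Each homeopathic "C" step is a 1:100 dilution.
    for c in (6, 12, 30):
        dilution = 100 ** c
        remaining = AVOGADRO / dilution   # expected molecules left
        print(f"{c}C: diluted 1e{2 * c}-fold, ~{remaining:.2g} molecules remain")

Past 12C - a dilution factor of 10^24, just beyond Avogadro's 6.022x10^23 - the expected number of surviving molecules drops below one; at 30C it is around 10^-37. Hence the campaign's name.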

Why are people so willing to believe? A lot of the patterns of what Bruce Hood called "supernatural thinking" are visible in the very young children whose development he studies.

"Children are not blank slates," he said, echoing my first thought when I heard Richard Dawkins talk about children's indoctrination with religion. "Children believe things they think are plausible. That's the case for all of us." This is the downside of being human: "They already have misconceptions by the time they're 12 months old." Even a very young human brain is optimized for seeing patterns, particularly patterns that look like faces. By the time children are three or four, they're thinking about ghosts and spirits. By the time they're four or five, they already have the notion of mind/body dualism and essential energies.

The upshot, he said, is that as adults try to organize the world in their minds, even extremely rational people will find that under the right circumstances the misconceptions they had as very young children will emerge. "We don't throw bad ideas away." Stress, illness, and aging all can compromise reason.

One of Hood's examples involved a test in which people were asked to stab pictures of loved ones in the eyes. They know they're pictures; they know it won't hurt them...and yet they resist doing it. Even the most experienced, hardened skeptic can react like this: I suggested to James Randi once that he should mount a mass voodoo demonstration by asking skeptics around the world who had Randi dolls to take the three voodoo pins and simultaneously stab them in the heart. He got a very uncomfortable look on his face.

So: granted that supernatural (or magical) thinking is endemic, what do you do?

Well, for one thing, said Eugenie Scott, a former university professor and executive director of the National Center for Science Education, you bear in mind that, "What matters is what people hear, not what you say." Ultimately, she added, "You are trying to persuade people, so you have to think how to communicate."

It's true: you're not going to get very far making people feel that you think they're stupid. What skeptics can do, suggested Hayley Stevens as part of the ghost-hunting panel, is to suggest alternative explanations. There is no question that people have powerful experiences they can't explain; skepticism is not about denying the subjective reality of those experiences but about trying to understand what might have caused them.

For Stevens, the more helpful approach is to help people think about the experience rationally. Rather than just saying a particular report must be sleep paralysis, she suggested, explain what it is, explore how it might be affecting the person, and offer them different resources for understanding it. "Never say this is the answer; say this is what we think it could be," she said. Often, keeping a "ghost diary" can provide valuable clues or help a person work out a likely cause for themselves.

Although: that might move people on from thinking magically, but it doesn't necessarily draw them to science or that stuff many people seem to find scarier than ghosts, mathematics. For that, you want Colin Wright, who juggles, then explores how juggling works (and how to write it down) by using mathematics, and then uses mathematics to predict where there might be tricks jugglers are missing. The result goes something like this. With a lot more fun.
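The mathematics in question is siteswap, the juggling notation Wright helped develop. Here's a minimal sketch (mine, in Python) of its two core results - a pattern is juggleable only if no two throws land on the same beat, and the number of balls needed is the average of the throws:

    def is_valid_siteswap(pattern):
        # Juggleable iff every throw lands on a distinct beat (mod length).
        n = len(pattern)
        landings = {(beat + throw) % n for beat, throw in enumerate(pattern)}
        return len(landings) == n

    def balls_needed(pattern):
        # The average theorem: balls = mean of the throw heights.
        return sum(pattern) / len(pattern)

    print(is_valid_siteswap([5, 3, 1]), balls_needed([5, 3, 1]))  # True 3.0
    print(is_valid_siteswap([5, 4, 3]))  # False: all three throws collide

Enumerate patterns, filter with a check like this, and you have a machine for finding tricks no juggler has named yet.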

But going back to the big CRUNCH. As Steven Novella, who spoke about neurology, wrote afterwards, it was a stunt, not a scientific experiment. Even so, it made a serious point: you can down a randomly purchased bunch of these things without harm because they have no effect whatsoever. As the late journalist John Diamond wrote, there's no such thing as alternative medicine; there is just medicine that works and medicine that doesn't. CRUNCH.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

January 21, 2011

Fogged

The Reform Club, I read on its Web site, was founded as a counterweight to the Carlton Club, where conservatives liked to meet and plot away from public scrutiny. To most of us, it's the club where Phileas Fogg made and won his bet that he could travel around the world in 80 days, no small feat in 1872.

On Wednesday, the club played host to a load of people who don't usually talk to each other much because they come at issues of privacy from such different angles. Cityforum, the event's organizer, pulled together representatives from many parts of civil society, government security, and corporate and government researchers.

The key question: what trade-offs are people willing to make between security and privacy? Or between security and civil liberties? Or is "trade-off" the right paradigm? It was good to hear multiple people saying that the "zero-sum" attitude is losing ground to "proportionate". That is, the debate is moving on from viewing privacy and civil liberties as things we must trade away if we want to be secure to weighing the size of the threat against the size of the intrusion. It's clear to all, for example, that one thing that's disproportionate is local councils' usage of the anti-terrorism aspects of the Regulation of Investigatory Powers Act to check whether householders are putting out their garbage for collection on the wrong day.

It was when the topic of the social value of privacy was raised that it occurred to me that probably the closest model to what people really want lay in the magnificent building all around us. The gentleman's club offered a social network restricted to "the right kind of people" - that is, people enough like you that they would welcome you as a fellow member and treat you as you would wish to be treated. Within the confines of the club, a member like Fogg, who spent all day every day there, would have had, I imagine, little privacy from the other members or, especially, from the club staff, whose job it was to know what his favorite drink was and where and when he liked it served. But the club afforded members considerable protection from the outside world. Pause to imagine what Facebook would be like if the interface required each would-be addition to your friends list to be proposed and seconded and incomers could be black-balled by the people already on your list.

This sort of web of trust is the structure the cryptography software PGP relies on for authentication: when you generate your public key, you are supposed to have it signed by as many people as you can. Whenever someone wants to verify the key, they can look through the list of signers for someone they themselves know and can trust. The big question with such a structure is how you make managing it scale to a large population. Things are a lot easier when it's just a small, relatively homogeneous group you have to deal with. And, I suppose, when you have staff to support the entire enterprise.
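A toy model of the idea (mine, in Python; real PGP uses actual cryptographic signatures, key IDs, and graded trust levels):

    # Keys I have personally verified and therefore trust.
    trusted = {"alice", "bob"}

    # Each key carries the set of keys that have signed it.
    signatures = {
        "carol": {"alice", "dave"},
        "dave": {"mallory"},
    }

    def verify(key):
        # Accept a key I trust directly, or one signed by a key I trust.
        return key in trusted or bool(signatures.get(key, set()) & trusted)

    print(verify("carol"))  # True: alice, whom I trust, signed carol's key
    print(verify("dave"))   # False: nobody I trust vouches for dave

The club, in effect, precomputes this graph for its members; the rest of us have to build it one signature at a time.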

We talk a lot about the risks of posting too much information to things like Facebook, but that may not be its biggest issue. Just as traffic data can be more revealing than the content of messages, complex social linkages make it impossible to anonymize databases: who your friends are may be more revealing than your interactions with them. As governments and corporations talk more and more about making "anonymized" data available for research use, this will be an increasingly large issue. An example: a little-known incident in 2005, when the database of a month's worth of UK telephone calls was exported to the US with individuals' phone numbers hashed to "anonymize" them. An interesting technological fix comes from Microsoft in the notion of differential privacy, a system for protecting databases both against current re-identification and attacks with external data in the future. The catch, if it is one, is that you must assign to your database a sort of query budget in advance - and when it's used up you must burn the database because it can no longer be protected.
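A minimal sketch of that query-budget idea (my own illustration, in Python; not Microsoft's actual system):

    import random

    class PrivateDatabase:
        def __init__(self, records, total_epsilon=1.0):
            self.records = records
            self.budget = total_epsilon   # privacy budget, fixed in advance

        def noisy_count(self, predicate, epsilon=0.25):
            if epsilon > self.budget:
                raise RuntimeError("Budget exhausted - burn the database")
            self.budget -= epsilon   # every answer spends some budget
            true_count = sum(1 for r in self.records if predicate(r))
            # Laplace noise of scale 1/epsilon masks any single record.
            noise = random.expovariate(epsilon) - random.expovariate(epsilon)
            return true_count + noise

    db = PrivateDatabase([{"age": 34}, {"age": 61}, {"age": 47}])
    print(db.noisy_count(lambda r: r["age"] > 40))   # roughly 2, plus noise

Each answer is blurred just enough that no individual's presence can be inferred; once the budget is spent, further answers can no longer be protected - hence the burning.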

We do know one helpful thing: what price club members are willing to pay for the services their club provides. Public opinion polls are a crude tool for measuring what privacy intrusions people will actually put up with in their daily lives. A study by Rand Europe released late last year attempted to examine such things by framing them in economic terms. The good news is they found that you'd have to pay people £19 to get them to agree to provide a DNA sample to include in their passport. The weird news is that people would pay £7 to include their fingerprints. You have to ask: what pitch could Rand possibly have made that would make this seem worth even one penny to anyone?

Hm. Fingerprints in my passport or a walk across a beautiful, mosaic floor to a fine meal in a room with Corinthian columns, 25-foot walls of books, and a staff member who politely fails to notice that I have not quite conformed to the dress code? I know which is worth paying for if you can afford it.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

January 7, 2011

Scanning the TSA

There are, Bruce Schneier said yesterday at the Electronic Privacy Information Center mini-conference on the TSA (video should be up soon), four reasons why airport security deserves special attention, even though it directly affects a minority of the population. First: planes are a favorite terrorist target. Second: they have unique failure characteristics - that is, the plane crashes and everybody dies. Third: airlines are national symbols. Fourth: planes fly to countries where terrorists are.

There's a fifth he didn't mention but that Georgetown lawyer Pablo Molina and We Won't Fly founder James Babb did: TSAism is spreading. Random bag searches on the DC Metro and the New York subways. The TSA talking about expanding its reach to shopping malls and hotels. And something I found truly offensive: giant LED signs posted along the Maryland highways announcing that if you see anything suspicious you should call the (toll-free) number below. Do I feel safer now? No, and not just because at least one of the incendiary devices sent to Maryland state offices yesterday apparently contained a note complaining about those very signs.

Without the sign, if you saw someone heaving stones at the cars you'd call the police. With it, you peer nervously at the truck in front of you. Does that driver look trustworthy? This is, Schneier said, counter-productive because what people report under that sort of instruction is "different, not suspicious".

But the bigger flaw is cover-your-ass backward thinking. If someone tries to bomb a plane with explosives in a printer cartridge, missing a later attempt using the exact same method will get you roasted for your stupidity. And so we have a ban on flying with printer cartridges over 500g and, during December, restrictions on postal mail, something probably few people in the US even knew about.

Jim Harper, a policy scholar with the Cato Institute and a member of the Department of Homeland Security's Data Privacy and Integrity Advisory Committee, outlined even more TSA expansion. There are efforts to create mobile lie detectors that measure physiological factors like eye movements and blood pressure.

Technology, Lillie Coney observed, has become "like butter - few things are not improved if you add it."

If you're someone charged with blocking terrorist attacks you can see the appeal: no one wants to be the failure who lets a bomb onto a plane. Far, far better if it's the technology that fails. And so expensive scanners roll through the nation's airports despite the expert assessment - on this occasion, from Schneier and Ed Luttwak, a senior associate with the Center for Strategic and International Studies - that the scanners are ineffective, invasive, and dangerous. As Luttwak said, the machines pull people's attention, eyes, and brains away from the most essential part of security: watching and understanding the passengers' behavior.

"[The machine] occupies center stage, inevitably," he said, "and becomes the focus of an activity - not aviation security, but the operation of a scanner."

Equally offensive in a democracy, many speakers argued, is the TSA's secrecy and lack of accountability. Even Meera Shankar, the Indian ambassador, could not get much of a response to her complaint from the TSA, Luttwak said. "God even answered Job." The agency sent no representative to this meeting, which included Congressmen, security experts, policy scholars, lawyers, and activists.

"It's the violation of the entire basis of human rights," said the Stanford and Oxford lawyer Chip Pitts around the time that the 112th Congress was opening up with a bipartisan reading of the US Constitution. "If you are treated like cattle, you lose the ability to be an autonomous agent."

As Libertarian National Committee executive director Wes Benedict said, "When libertarians and Ralph Nader agree that a program is bad, it's time for our government to listen up."

So then, what are the alternatives to spending - so far, in the history of the Department of Homeland Security, since 2001 - $360 billion, not including the lost productivity and opportunity costs to the US's 100 million flyers?

Well, first of all, stop being weenies. The number of speakers who reminded us that the US was founded by risk-takers was remarkable. More people, Schneier noted, are killed in cars every month than died on 9/11. Nothing, Ralph Nader said, is spent on the 58,000 Americans who die in workplace accidents every year or the many thousands more who are killed by pollution or medical malpractice.

"We need a comprehensive valuation of how to deploy resources in a rational manner that will be effective, minimally invasive, efficient, and obey the Constitution and federal law," Nader said

So: dogs are better at detecting explosives than scanners. Intelligent profiling can whittle down the mass of suspects to a more manageable group than "everyone" in a giant game of airport werewolf. Instead, at the moment we have magical thinking, always protecting ourselves from the last attack.

"We're constantly preparing for the rematch," said Lillie Coney. "There is no rematch, only tomorrow and the next day." She was talking as much about Katrina and New Orleans as 9/11: there will always, she said, be some disaster, and the best help in those situations is going to come from individuals and the people around them. Be prepared: life is risky.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

December 10, 2010

Payback

A new word came my way while I was reviewing the many complaints about the Transportation Security Administration and its new scanner toys and pat-down procedures: "Chertoffed". It's how "security theater" (Bruce Schneier's term) has transformed the US since 2001.

The description isn't entirely fair to Chertoff, who was only the *second* head of the Bush II-created Department of Homeland Security and has now been replaced: he served from 2005-2009. But since he's the guy who began the scanner push and also numbers scanner manufacturers among the clients of his consultancy company, The Chertoff Group - it's not really unfair either.

What do you do after defining the travel experience of a generation? A little over a month ago, Chertoff showed up at London's RSA Data Security conference to talk about what he thought needed to happen in order to secure cyberspace. We need, he said, a doctrine to lay out the rules of the road for dealing with cyber attacks and espionage - the sort of thing that only governments can negotiate. The analogy he chose was to the doctrine that governed nuclear armament, which he said (at the press Q&A) "gave us a very stable, secure environment over the next several decades."

In cyberspace, he argued, such a thing would be valuable because it makes clear to a prospective attacker what the consequences will be. "The greatest stress on security is when you have uncertainty - the attacker doesn't know what the consequences will be and misjudges the risk." The kinds of things he wants a doctrine to include are therefore things like defining what is a proportionate response: if your country is on the receiving end of an attack from another country that's taking out the electrical power to hospitals and air traffic control systems with lives at risk, do you have the right to launch a response to take out the platform they're operating from? Is there a right of self-defence of networks?

"I generally take the view that there ought to be a strong obligation on countries, subject to limitations of practicality and legal restrictions, to police the platforms in their own domains," he said.

Now, there are all sorts of reasons many techies are against government involvement - or interference - in the Internet. First and foremost is time: the World Summit on the Information Society and its successor, the Internet Governance Forum, have taken years to do...no one's quite sure what, while the Internet's technology has gone on racing ahead creating new challenges. But second is a general distrust, especially among activists and civil libertarians. Chertoff even admitted that.

"There's a capability issue," he said, "and a question about whether governments put in that position will move from protecting us from worms and viruses to protecting us from dangerous ideas."

This was, of course, somewhat before everyone suddenly had an opinion about Wikileaks. But what has occurred since makes that distrust entirely reasonable: give powerful people a way to control the Net and they will attempt to use it. And the Net, as in John Gilmore's famous aphorism, "interprets censorship as damage and routes around it". Or, more correctly, the people do.

What is incredibly depressing about all this is watching the situation escalate into the kind of behavior that governments have quite reasonably wanted to outlaw and that will give ammunition to those who oppose allowing the Net to remain an open medium in which anyone can publish. The more Wikileaks defenders organize efforts like this week's distributed denial-of-service attacks, the more Wikileaks and its aftermath will become the justification for passing all kinds of restrictive laws that groups like the Electronic Frontier Foundation and the Open Rights Group have been fighting against all along.

Wikileaks itself is staying neutral on the subject, according to the statement on its (Swiss) Web site: Wikileaks spokesman Kristinn Hrafnsson said: "We neither condemn nor applaud these attacks. We believe they are a reflection of public opinion on the actions of the targets."

Well, that's true up to a point. It would be more correct to say that public opinion is highly polarized, and that the attacks are a reflection of the opinion of a relatively small section of the public: people who are at the angriest end of the spectrum and have enough technical expertise to download and install software to make their machines part of a botnet - and not enough sense to realize that this is a risky, even dangerous, thing to do. Boycotting Amazon.com during its busiest time of year to express your disapproval of its having booted Wikileaks off its servers would be an entirely reasonable protest. Vandalism is not. (In fact the announced attack on Amazon's servers seems not to have succeeded, though others have.)

I have written about the Net and what I like to call the border wars between cyberspace and real life for nearly 20 years. Partly because it's fascinating, partly because when something is new you have a real chance to influence its development, and partly because I love the Net and want it to fulfill its promise as a democratic medium. I do not want to have to look back in another 20 years and say it's been "Chertoffed". Governments are already mad about the utterly defensible publication of the cables; do we have to give them the bullets to shoot us with, too?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 29, 2010

Wanted: less Sir Humphrey, more shark


Seventeen MPs showed up for Thursday's Backbenchers' Committee debate on privacy and the Internet, requested by Robert Halfon (Con-Harlow). They tell me this is a sell-out crowd. The upshot: Google and every other Internet company may come to rue the day that Google sent its Street View cars around Britain. It crossed a line.

That line is this: "Either your home is your castle or it's not," said Halfon, talking about Street View and email he had received from a vastly upset woman in Cornwall whose home had been captured and posted on the Web. It's easy for Americans to forget how deep the "An Englishman's home is his castle" thing goes.

Halfon's central question: are we sleepwalking into a privatized surveillance society, and can we stop it? "If no one has any right to privacy, we will live in a Big Brother society run by private companies." Street View, he said, "is brilliant - but they did it without permission." Of equal importance to Halfon is the curious incident of the silent Information Commissioner (unlike, apparently, his equivalents everywhere else in the world) and Google's sniffed wi-fi data. The recent announcement that the sniffed data includes contents of email messages, secure Web pages, and passwords has prompted the ICO to take another look.

The response of the ICO, Halfon said, "has been more like Sir Humphrey than a shark with teeth, which is what it should be."

Google is only one offender; Julian Huppert (LibDem-Cambridge) listed some of the other troubles, including this week's release of Firesheep, a Firefox add-on designed to demonstrate Facebook's security failings. Several speakers raised the issue of the secret BT/Phorm trials. A key issue: while half the UK's population choose to be Facebook users (!), and many more voluntarily use Google daily, no one chose to be included in Street View; we did not ask to be its customers.
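The failing Firesheep exploits is simple: sites that send session cookies over unencrypted HTTP, where anyone on the same open wifi can capture and replay them. A minimal sketch of how to check a site for the problem (mine, in Python; example.com is a placeholder, not a claim about any real site):

    import requests

    r = requests.get("http://example.com/")
    for cookie in r.cookies:
        # A cookie without the Secure flag also travels over plain HTTP,
        # where a sniffer like Firesheep can copy it and hijack the session.
        if not cookie.secure:
            print("sniffable session cookie:", cookie.name)

The fix is equally simple in principle - serve everything over HTTPS and mark session cookies Secure - which is what made Firesheep such effective public shaming.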

So Halfon wants two things. He wants an independent commission of inquiry convened that would include MPs with "expertise in civil liberties, the Internet, and commerce" to suggest a new legal framework that would provide a means of redress, perhaps through an Internet bill of rights. What he envisions is something that polices the behavior of Internet companies the way the British Medical Association and the Law Society provide voluntary self-regulation for their fields. In cases of infringement, fines, perhaps.

In the ensuing discussion many other issues were raised. Huppert mentioned "chilling" (Labour) government surveillance, and hoped that portions of the Digital Economy Act might be repealed. Huppert has also been asking Parliamentary Questions about the is-it-still-dead? Interception Modernization Programme; he is still checking on the careful language of the replies. (Asked about it this week, the Home Office told me it can't speculate in advance about details that will be provided "in due course"; that what is envisioned is a "program of work on our communications abilities"; that it will be communications service providers, probably as defined in RIPA Section 2(1), storing data, not a government database; and that the legislation to safeguard against misuse will probably, but not certainly, be a statutory instrument.)

David Davis (Con-Haltemprice and Howden) wasn't too happy even with the notion of decentralized data held by CSPs, saying these would become a "target for fraudsters, hackers and terrorists". Damian Hinds (Con-East Hampshire) dissected Google's business model (including £5.5 million of taxpayers' money the UK government spent on pay-per-click advertising in 2009).

Perhaps the most significant thing about this debate is the huge rise in the level of knowledge. Many took pains to say how much they value the Internet and love Google's services. This group know - and care - about the Internet because they use it, unlike 1995, when an MP was about as likely to read his own email as he was to shoot his own dog.

Not that I agreed with all of them. Don Foster (LibDem-Bath) and Mike Weatherley (Con-Hove) were exercised about illegal file-sharing (Foster and Huppert agreed to disagree about the DEA, and Damian Collins (Con-Folkestone and Hythe) complained that Google makes money from free access to unauthorized copies). Nadine Dorries (Con-Mid Bedfordshire) wanted regulation to protect young people against suicide sites.

But still. Until recently, Parliament's definition of privacy was celebrities' need for protection from intrusive journalists. This discussion of the privacy of individuals is an extraordinary change. Pressure groups like PI, Open Rights Group, and No2ID helped, but there's also a groundswell of constituents' complaints. Mark Lancaster (Con-Milton Keynes North) noted that a women's refuge at a secret location could not get Google to respond to its request for removal and that the town of Broughton formed a human chain to block the Street View car. Even the attending opposition MP, Ian Lucas (Lab-Wrexham), favored the commission idea, though he still had hopes for self-regulation.

As for next steps, Ed Vaizey (Con-Wantage and Didcot), the Minister for Communication, Culture, and the Creative Industries, said he planned to convene a meeting with Google and other Internet companies. People should have a means of redress and somewhere to turn for mediation. For Halfon that's still not enough. People should have a choice in the first place.

To be continued...

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 1, 2010

Duty of care

"Anyone who realizes how important the Web is," Tim Berners-Lee said on Tuesday, "has a duty of care." He was wrapping up a two-day discussion meeting at the Royal Society. The subject: Web science.

What is Web science? Even after two days, it's difficult to grasp, in part because defining it is a work in progress. Here are some of the disciplines that contributed: mathematics, philosophy, sociology, network science, and law, plus a bunch of much more directly Webby things that don't fit easily into categories. Which of course is the point: Web science has to cover much more than just the physical underpinnings of computers and network wires. Computer science or network science can use the principles of mathematics and physics to develop better and faster machines and study architectures and connections. But the Web doesn't exist without the people putting content and applications on it, and so Web science must be as much about human behaviour as about physics.

"If we are to anticipate how the Web will develop, we will require insight into our own nature," Nigel Shadbolt, one of the event's convenors, said on Monday. Co-convenor Wendy Hall has said, similarly, "What creates the Web is us who put things on it, and that's not natural or engineered.". Neither natural (biological systems) or engineered (planned build-out like the telecommunications networks), but something new. If we can understand it better, we can not only protect it better, but guide it better toward the most productive outcomes, just as farmers don't haphazardly interbreed species of corn but use their understanding to select for desirable traits.

The simplest contributions to understand, therefore, came (ironically) from the mathematicians. Particularly intriguing was the former chief scientist Robert May, whose analysis of which nodes to remove to make a network non-functional applied equally to the Web, epidemiology, and banking risk.

This is all happening despite the recent Wired cover claiming the "Web is dead". Dead? Facebook is a Web site; Skype, the app store, IM clients, Twitter, and the New York Times all reach users first via the Web even if they use their iPhones for subsequent visits (and how exactly did they buy those iPhones, hey?). Saying it's dead is almost exactly the old joke about how no one goes to a particular restaurant any more because it's too crowded.

People who think the Web is dead have stopped seeing it. But the point of Web science is that for 20 years we've been turning what started as an academic playground into a critical infrastructure, and for government, finance, education, and social interaction to all depend on the Web it must have solid underpinnings. And it has to keep scaling - in a presentation on the state of deployment of IPv6 in China, Jianping Wu noted that Internet penetration in China is expected to jump from 30 percent to 70 percent in the next ten to 20 years. That means adding 400-900 million users. The Chinese will have to design, manage, and operate the largest infrastructure in the world - and finance it.

But that's the straightforward kind of scaling. IBMer Philip Tetlow, author of The Web's Awake (a kind of Web version of the Gaia hypothesis), pointed out that all the links in the world are a finite set; all the eyeballs in the world looking at them are a finite set...but all the contexts surrounding them...well, it's probably finite but it's not calculable (despite Pierre Levy's rather fanciful construct that seemed to suggest it might be possible to assign a URI to every human thought). At that level, Tetlow believes some of the neat mathematical tools, like Jennifer Chayes' graph theory, will break down.

"We're the equivalent of precision engineers," he said, when what's needed are the equivalent of town planners and urban developers. "And we can't build these things out of watches."

We may not be able to build them at all, at least not immediately. Helen Margetts outlined the constraints on the development of egovernment in times of austerity. "Web science needs to map, understand, and develop government just as for other social phenomena, and export back to mainstream," she said.

Other speakers highlighted gaps between popular mythology and reality. MIT's David Carter noted that, "The Web is often associated with the national and international but not the local - but the Web is really good at fostering local initiatives - that's something for Web science to ponder." Noshir Contractor, similarly, called out The Economist over the "death of distance": "More and more research shows we use the Web to have connections with proximate people."

Other topics will be far more familiar to net.wars readers: Jonathan Zittrain explored the ways the Web can be broken by copyright law, increasing corporate control (there was a lovely moment when he morphed the iPhone's screen into the old CompuServe main menu), the loss of uniformity so that the content a URL points to changes by geographic location. These and others are emerging points of failure.

We'll leave it to an unidentified audience question to sum up the state of Web science: "Nobody knows what it is. But we are doing it."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

August 13, 2010

Pirate flags

Wednesday's Future Human - The Piracy Panacea event missed out on a few topics, among them network neutrality, an issue I think underlies many net.wars debates: content control, privacy, security. The Google-Verizon proposals sparked much online discussion this week. I can only reiterate my belief that net neutrality should be seen as an anti-trust issue. A basic principle of anti-trust law (Standard Oil, the movie studios) is that content owners should not be allowed to own the means of distribution, and I think this readily applies to cable companies that own TV stations and telephone companies that are carriers for other people's voice services.

But the Future Human event was extraordinary enough without that. Imagine: more than 150 people squished into a hot, noisy pub, all passionately interested in...copyright! It's only a few years ago that entire intellectual property law school classes would fit inside a broom cupboard. The event's key question: does today's "piracy" point the way to future innovation?

The basis of that notion seemed to be that historically pirates have forced large imperial powers to change and weren't just criminals. The event's light-speed introduction whizzed through functionally democratic pirate communities and pirate radio, and a potted history of authorship from Shakespeare and Newton to Lady Gaga. There followed mock trials of a series of escalating copyright infringements in which it became clear that the audience was polarized and more or less evenly divided.

There followed our panel: me, theoretically representing the Open Rights Group; Graham Linehan, creator of Father Ted and The IT Crowd; Jamie King, writer and director of Steal This Film; and economist Thierry Rayna. Challenged, of course, by arguers from the audience, one of whom declined to give her affiliation on the grounds that she'd get lynched (I doubt this). Partway through the panel someone complained on Twitter that we weren't answering the question the event had promised to tackle: how can the creative industries build on file-sharing and social networks to create the business models of the future?

It seems worth trying to answer that now.

First, though, I think it's important to point out that I don't think there's much that's innovative about downloading a TV show or MP3. The people engaged in downloading unauthorized copies of mainstream video/audio, I think, are not doing anything particularly brave. The people on the front lines are the ones running search engines and services. These people are indeed innovators, and some of them are doing it at substantial personal risk. And they cannot, in general, get legal licenses from rights holders, a situation the rights holders could easily change. Napster, which kicked the copyright wars into high gear and made digital downloads a mainstream distribution method, was ten years ago. Yet rights holders are still trying to implement artificial scarcity (to replace real scarcity) and artificial geography (to replace real geography). The death of distance, as Economist writer Frances Cairncross called it in 1997, changes everything, and trying to pretend it doesn't is absurd. The download market has been created by everyone *but* the record companies, who should have benefited most.

Social networks - including the much-demonized P2P networks - provide the greatest mechanism for word of mouth in the history of human culture. And, as we all know, word of mouth is the most successful marketing available, at least for entertainment.

It also seems obvious that P2P and social networks are a way for companies to gauge the audience better before investing huge sums. It was obvious from day one, for example, that despite early low official ratings and mixed reviews, Gossip Girl was a hit. Why? Because tens of thousands of people were downloading it the instant it came online after broadcast. Shouldn't production company accountants be all over this? Use these things as a testbed instead of having the fall pilots guessed on by a handful of the geniuses who commissioned Cavemen and the US version of Coupling and cancelled Better Off Ted. They could have a lot clearer picture of what kind of audience a show might find and how quickly.

Trying to kill P2P and other technologies just makes them respawn like the Hydra. The death of Napster (central server) begat Gnutella and eDonkey (central indexes), lawsuits against whose software developers begat the even more decentralized BitTorrent. When millions and tens of millions of people are flocking to a new technology rights holders should be there, too.

The real threat is always going to be artists taking their business into their own hands. For every Lady Gaga there are thousands of artists who, given some basic help, can turn their work into the kind of living wage that allows them to pursue their art full-time and professionally. I would think there is a real business in providing these artists with services - folksingers, who've never had this kind of help, have produced their own recordings for decades, and having done it myself I can tell you it's not easy. This was the impulse behind the foundation of CDBaby, and now of Jamie King's VoDo. In the long run, things like this are the real game-changers.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 23, 2010

Information Commissioner, where is thy sting?

Does anyone really know what their computers are doing? Lauren Weinstein asked recently in a different context.

I certainly don't. Mostly, I know what they're not doing, and then only when it inconveniences me. Don't most of us have an elaborate set of workarounds for things that are just broken enough not to work but not so broken that we have to fix them?

But companies - particularly companies who have made their fortunes by being clever with technology - are supposed to do better than that. And so we come to the outbreak of legal actions against Google for collecting wifi data - not only wireless network names (SSIDs) and information identifying individual computer devices (MAC addresses) while it was out photographing every house for StreetView, but also payload data. The company says this sniffing was accidental. Privacy International's Simon Davies says that no engineer he's spoken to buys this: either the company collected it deliberately or the company's internal management systems are completely broken.
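To see how different those two kinds of collection are, here's a minimal sketch of a passive wifi scanner (my own illustration, in Python with scapy, assuming a wireless card in monitor mode named wlan0mon):

    from scapy.all import sniff, Dot11, Dot11Beacon, Dot11Elt

    def handle(pkt):
        if pkt.haslayer(Dot11Beacon):
            # Metadata: access points broadcast this to everyone nearby.
            ssid = pkt[Dot11Elt].info.decode(errors="replace")
            print("network:", ssid, "MAC:", pkt[Dot11].addr2)
        elif pkt.haslayer(Dot11) and pkt[Dot11].type == 2:
            # Type 2 frames carry data. Saving these captures payload -
            # on an unencrypted network, emails, pages, and passwords.
            pass   # a mapping scanner has no reason to keep them

    sniff(iface="wlan0mon", prn=handle, store=False)

Recording beacon frames is all that mapping wifi networks requires; recording the data frames as well is the part Google says was accidental.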

This was the topic of Tuesday's Big Brother Watch event. We actually had a Googler, Sarah Hunter, head of UK public policy, on the premises taking notes (as far as I could discern she did not have a camera mounted on her head, which seems like a missed opportunity), but the court actions in progress against the company meant that she was under strict orders from legal not to say anything much.

You can't really blame her. The list of government authorities investigating Google over the wifi data now includes: 38 US states and the District of Columbia, led by Connecticut; Germany; France; and Australia. Britain? Not so much.

"I find it amazing that Google did it without permission and seemed to get away with it without anyone causing a fuss," said Rob Halfon MP, who took time between votes on Tuesday to deliver a call to action. "There has to be a limit to what these companies do," he said, calling Street View "a privatized version of Big Brother." Halfon has tabled an early day motion on surveillance and the Internet.

There are two separate issues here. The first is Street View itself, which many countries have been unhappy about.

I was sympathetic when Google first launched Street View in the US and ran into privacy issues. It was, I thought and think, an innocently geeky kind of mistake to make: a "Look! This is so COOL!" kind of moment. In the flush of excitement, I reasoned, it was probably easy to lose sight of the fact that people might object to having their living room windows peered into in a drive-by shoot and the resulting images posted online. Who would stop to ask the opinions of that object of typical geek contempt, the inept, confused user, "my mother"?

By the time Street View arrived in Europe, however, there was no excuse. That the product has sparked public anger with every launch, along with other controversial actions (think Google Books), suggests that the company's standard MO is that of the teenager who deliberately avoids asking her parents' permission because she knows it will be denied.

It is, I think, reasonable to argue, as Google does, that the company is taking pictures of public areas, something that is not illegal in the US although it has various restrictions in other places. The keys, I think, are first of all the scale of the operation, and second the public display part of the equation, an element that is restricted in some European countries. As Halfon said, "Only big companies have the financial muscle to do this kind of mapping."

The second issue, the wifi data, is much more clear-cut. It seems unquestionable that accidental or not - and in fact we would not know the company had sniffed this data if it hadn't told us itself - laws have been broken in a number of countries. In the UK, it seems likely that the action was illegal under the Regulation of Investigatory Powers Act (2000), and that the Computer Misuse Act would apply. Google's founders and CEO - Sergey Brin, Larry Page, and Eric Schmidt - seem to take the view that it's no harm, no foul.

But that's not the point, which is why Privacy International, having been told the Information Commissioner was not interested in investigating, went to the Metropolitan Police.

"There has to be a point where Google is brought to account because of its systemic failure," he said. "If all the criminal investigation does is to sensitise Google, then internally there may be some evolution."

The key, however, for the UK, is the unwillingness of the Information Commissioner to get involved. First, the ICO declined to restrict Street View. Then it refused to investigate the wifi issue and wanted the data destroyed, an action PI argued would mean destroying the evidence needed for a forensic investigation.

It was this failure that Davies and Alex Deane, director of Big Brother Watch, picked on.

"I find it peculiar that the British ICO was so reluctant to investigate Google when so many other ICOs were willing," Deane said. "The ICO was asleep on the job."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 9, 2010

The big button caper

There's a moment early in the second season of the TV series Mad Men when one of the Sterling Cooper advertising executives looks out the window and notices, in a tone of amazement, that young people are everywhere. What he was seeing was, of course, the effect of the baby boom. The world really *was* full of young people.

"I never noticed it," I said to a friend the next day.

"Well, of course not," he said. "You were one of them."

Something like this will happen to today's children - they're going to wake up one day and think the world is awash in old people. This is a fairly obvious consequence of the demographic bulge of the Baby Boomers, which author Ken Dychtwald has compared to "a pig going through a python".

You would think that mobile phone manufacturers and network operators would be all over this: carrying a mobile phone is an obvious safety measure for an older, perhaps infirm or cognitively confused person. But apparently the concept is more difficult to grasp than you'd expect, and so Simon Rockman, the founder and former publisher of What Mobile and now working for the GSM Association, convened a senior mobile market conference on Tuesday.

Rockman's pitch is that the senior market is a business opportunity: unlike other market sectors it's not saturated, and older users are less likely to be expensive data users and are more loyal. The margins are better, he argues, even if average revenue per user is low.

The question is, how do you appeal to this market? To a large extent, seniors are pretty much like everyone else: they want gadgets that are attractive, even cool. They don't want the phone equivalent of support stockings. Still, many older people do have difficulties with today's ultra-tiny buttons, icons, and screens, iffy sound quality, and complex menu structures. Don't we all?

It took Ewan MacLeod, the editor of Mobile Industry Review, to point out the obvious. What is the killer app for most seniors in any device? Grandchildren, pictures of. MacLeod has a four-week-old son and a mother whose desire to see pictures apparently could only be fully satisfied by a 24-hour video feed. Industry inadequacy means that MacLeod is finding it necessary to write his own app to make sending and receiving pictures sufficiently simple and intuitive. This market, he pointed out, isn't even price-sensitive. Tell his mother she'll need to spend £60 on a device so she can see daily pictures of her grandkids, and she'll say, "OK." Tell her it will cost £500, and she'll say..."OK."

I bet you're thinking, "But the iPhone!" And to some extent you're right: the iPhone is sleek, sexy, modern, and appealing; it has a zoom function to enlarge its display fonts, and it is relatively easy to use. And so MacLeod got all the grandparents onto iPhones. But he's having to write his own app to easily organize and display the photos the phones receive: the available options are "Rubbish!"

But even the iPhone has problems (even if you're not left-handed). Ian Hosking, a senior research associate at the Cambridge Engineering Design Centre, demonstrated his visual impairment simulation software overlaid on its display so the effects were easy to see. Lack of contrast means the iPhone's white-on-black type disappears into unreadability with only a small amount of vision loss. Enlarging the font only changes the text in some fields. And that zoom feature, ah, yes, wonderful - except that enabling it requires you to double-tap and then navigate with three fingers. "So the visual has improved, but the dexterity is terrible."

Oops.
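Hosking's overlay is easy to approximate in spirit. Here's a toy sketch using the Pillow imaging library (the screenshot filename is hypothetical, and his actual tool is far more sophisticated): blur a screenshot to mimic lost acuity, flatten the contrast, and see whether the text survives.

```python
# Toy visual-impairment simulation (a sketch, not Hosking's software).
# Requires the Pillow library; "iphone_screen.png" is a hypothetical file.
from PIL import Image, ImageEnhance, ImageFilter

def simulate_low_vision(path_in, path_out, blur_radius=3.0, contrast=0.4):
    img = Image.open(path_in)
    img = img.filter(ImageFilter.GaussianBlur(radius=blur_radius))  # acuity loss
    img = ImageEnhance.Contrast(img).enhance(contrast)  # <1.0 flattens contrast
    img.save(path_out)

simulate_low_vision("iphone_screen.png", "iphone_screen_low_vision.png")
```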

In all this you may have noticed something: good design is good design, and a phone that accommodates older people will most likely be more usable for everyone else too. These are principles that have not changed since Donald Norman formulated them in his classic 1988 book The Design of Everyday Things. To be sure, there is some progress. Evelyne Pupeter-Fellner, co-founder of Emporia, for example, pointed out the elements of her company's designs that are quietly targeted at seniors: the emergency call system that automatically dials, in turn, a list of selected family members or friends until one answers; the ringing mechanism that lights up the button to press to answer; the radio you can insert the phone into that will turn itself down and answer the phone when it rings; the design that lets you attach the phone to a walker - or a bicycle; the single-function buttons. Doro's phones won similar praise.

And yet it could all be so different - if we would only learn from Japan, where nearly 86 percent of seniors have mobile phones - and use data services on them - according to Kei Shimada, founder of Infinita.

But in all the "beyond big buttons" discussion and David Doherty's proposition that health applications will be the second killer app, one omission niggled: the aging population is predominantly female, and the older the cohort the more that is true.

Who are least represented among technology designers and developers?

Older women.

I'd call that a pretty clear mismatch. Somewhere in the gap between those who design and those who consume lies your problem.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 18, 2010

Things I learned at this year's CFP

- There is a bill in front of Congress to outlaw the sale of anonymous prepaid SIMs. The goal seems to be some kind of fraud and crime prevention. But, as Ed Hasbrouck points out, the principal people likely to be affected are foreign tourists and the Web sites that sell prepaid SIMs to them.

- Robots are getting near enough in researchers' minds for them to be spending significant amounts of time considering the legal and ethical consequences in real life - not in Asimov's fictional world, where you could program in three safety laws and your job was done. Ryan Calo points us at the work of Stanford student Victoria Groom on human-robot interaction. Her dissertation research, not yet on the site, discovered that humans allocate responsibility for success and failure proportionately to how anthropomorphic the robot is.

- More than 24 percent of tweets - and rising sharply - are sent by automated accounts, according to Miranda Mowbray at HP Labs. Her survey found all sorts of strange bots: things that constantly update the time, send stock quotes, tell jokes, the tea bot that retweets every mention of tea...

- Google's Kent Walker, the 1997 CFP chair, believes that censorship is as big a threat to democracy as terrorism, and says that open architectures and free expression are good for democracy - and coincidentally also good for Google's business.

- Microsoft's chief privacy strategist, Peter Cullen, says companies must lead in privacy to lead in cloud computing. Not coincidentally, others at the conference note that US companies are losing cloud computing business to Europeans because EU law prohibits the export of personal data to the US, where data protection is insufficient.

- It is in fact possible to provide wireless that works at a technical conference. And good food!

- The Facebook Effect is changing other companies' attitudes toward user privacy. Lauren Gelman, who helps new companies with privacy issues, noted that because start-ups all see Facebook's success and want to be the next 400 million-user environment, there was a strong temptation to emulate Facebook's behavior. Now, with the angry cries mounting from consumers, she has to spend less effort convincing them of the pushback companies will get if they change their policies and defy their users' expectations. Even so, it's important to ensure that start-ups include privacy in their budgets so that it doesn't become an afterthought. In this respect, she makes me realize, privacy in 2010 is at the stage usability was at in the early 1990s.

- All new program launches come through the office of the director of Yahoo!'s business and human rights program, Ebele Okobi-Harris. "It's very easy for the press to focus on China and particular countries - for example, Australia last year, with national filtering," she said, "but for us as a company it's important to have a structure around this because it's not specific to any one region." It is, she added later, a "global problem".

- We should continue to be very worried about the database state because the ID cards repeal act continues the trend toward data sharing among government departments and agencies, according to Christina Zaba from No2ID.

- Information brokers and aggregators, operating behind the scenes, are amassing incredible amounts of detail about Americans, and it can require a great deal of work to remove one's information from these systems. The main customers are private investigators, debt collectors, media, law firms, and law enforcement. The Privacy Rights Clearinghouse sees many disturbing cases, as Beth Givens outlined, as does Pam Dixon's World Privacy Forum.

- I always knew - or thought I knew - that the word "robot" was coined not by Asimov but by Karel Capek for his play R.U.R. ("Rossum's Universal Robots"; coincidentally, I also knew that playing a robot in it was Michael Caine's first acting job). But Twitterers tell me this isn't quite right. The word derives from the Czech "robota", "compulsory work for a feudal landlord" - and it was actually coined by Capek's older brother, Josef.

- There will be new privacy threats emerging from automated vehicles, other robots, and voicemail transcription services, sooner rather than later.

- Studying the inner workings of an organization like the International Civil Aviation Organization is truly difficult because the time scales - ten years to get from technical proposal to mandated standard, which is when the public becomes aware of it - are a profound mismatch for the attention spans of the media and of those who fund NGOs. Anyone who feels like funding an observer to represent civil society at ICAO should get in touch with Edward Hasbrouck.

- A lot of our cybersecurity problems could be solved by better technology.

- Lillie Coney has a great description of deceptive voting practices designed to disenfranchise the opposition: "It's game theory run amok!"

- We should not confuse insecure networks (as in vulnerable computers and flawed software) with unsecured networks (as in open wi-fi).

- Next year's conference chairs are EPIC's Lillie Coney and Jules Polonetsky. It will be in Washington, DC, probably the second or third week in June. Be there!

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

May 7, 2010

Wish list

It's 2am on election night, so of course no one can think about anything except the returns. Reported so far: 57 of 650 seats. Swing from Labour to Conservative: 4 percent.

The worst news of the night so far is that people have been turned away from polling stations because the queues couldn't be processed fast enough to get everyone through before the official closing time of 10pm. Creative poll workers locked the unvoted inside the station and let them vote. Uncreative ones sent them home, or tried to - I'm glad to see there were angry protests and, in some cases, sit-ins. Incredibly, some people couldn't vote because their stations ran out of ballot papers. In one area, hundreds of postal ballots are missing. It's an incredible shambles considering Britain's centuries of experience of running elections. Do not seize on this mess as an excuse to bring in electronic voting, something almost every IT security expert warns is a very bad idea. Print some more ballot papers, designate more polling stations, move election day to Saturday.

Reported: 69. Swing: 3.8 percent: Both Conservatives and LibDems have said they will scrap the ID card. Whether they'll follow through remains to be seen. My sense from interviews with Conservative spokespeople for articles in the last year is that they want to scrap large IT projects in favor of smaller, more manageable ones undertaken in partnership with private companies. That should spell death for the gigantic National Identity Register database and profound change for the future of NHS IT; with luck, smaller systems will give individuals more control. It does raise the question of data being handed over to private companies in, most likely, other countries. The way LibDem peers suddenly switched sides on the Digital Economy Act last month dinged our image of the LibDems as the most sensible of all the parties on net.wars issues. Whoever gets in: yes, please, scrap the National Identity Register and stick to small, locally grown IT projects that serve their users. That means us, not the Whitehall civil service.

Reported: 82. Swing: 3.6 percent: Repeal the Digital Economy Act and take time out for a rethink and public debate. The copyright industries are not going to collapse without three-strikes and disconnection notices. Does the UK really want laws that France has rejected?

Reported: 104. Swing: 4.1 percent: Coincidentally, today I received a letter "inviting" me to join a study on mobile phones and brain cancer; I would be required to answer periodic surveys about my phone use. The explanatory leaflet notes: "Imperial College will review your health directly through routine medical and other health-related records" using my NHS number, name, address, and date of birth - for the next 20 to 30 years. Excuse me? Why not ask me to report relevant health issues, and request more detailed access only if I report something relevant? This Labour government has fostered this attitude of We Will Have It All. I'd participate in the study if I could choose what health information I give; I'm not handing over an untrammeled right of access. New government: please cease to regard our health data as yours to hand over "for research purposes" to whomever you feel like. And do not insult our intelligence by claiming that anonymizing the data protects our privacy; such data can often be very easily reidentified.
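For anyone who doubts that last claim, here is a toy demonstration - invented data and hypothetical field names, not any real study - of the standard linkage attack: join "anonymized" records to a public register on shared quasi-identifiers. Latanya Sweeney famously estimated that ZIP code, sex, and date of birth alone uniquely identify about 87 percent of Americans.

```python
# Toy reidentification by linkage (all data and field names invented).
# "Anonymized" health records still carry quasi-identifiers that can be
# joined against a public register, such as an electoral roll.

anonymized_health = [
    {"dob": "1953-04-02", "sex": "F", "postcode": "TW11", "diagnosis": "asthma"},
    {"dob": "1961-09-17", "sex": "M", "postcode": "SW1A", "diagnosis": "diabetes"},
]

public_register = [  # carries names alongside the same quasi-identifiers
    {"name": "A. Example", "dob": "1953-04-02", "sex": "F", "postcode": "TW11"},
    {"name": "B. Sample", "dob": "1961-09-17", "sex": "M", "postcode": "SW1A"},
]

def reidentify(health, register):
    """Link records on the quasi-identifiers (dob, sex, postcode)."""
    index = {(p["dob"], p["sex"], p["postcode"]): p["name"] for p in register}
    for record in health:
        key = (record["dob"], record["sex"], record["postcode"])
        if key in index:
            yield index[key], record["diagnosis"]

for name, diagnosis in reidentify(anonymized_health, public_register):
    print(f"{name} -> {diagnosis}")  # the "anonymous" data now has names
```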

Reported: 120. Swing: 3.9 percent: Reform libel law. Create a public interest defense for scientific criticism, streamline the process, and lower costs for defendants. Re-allocate the burden of proof to the plaintiff. Stop hearing cases with little or no connection to the UK.

Reported: 149. Swing: 4.3 percent: While you're reforming legal matters, require small claims court to hear cases in which photographers (and other freelances) pursue publishers who have infringed their copyright. Photographers say these courts typically kick such "specialist" cases up to higher levels, making it impracticably expensive to get paid.

Reported: 231. Swing: 4.8 percent: Any government that's been in power as long as Labour currently has is going to seem tired and in need of new ideas. But none of the complaints above - the massive growth in surveillance, the lack of regard for personal privacy, the sheer cluelessness about IT - knocked Labour down. Even lying about the war didn't do it. It was, as Clinton's campaign posted on its office walls, the economy. Stupid.

Reported: 327. Swing: 5 percent: Scrap ContactPoint, the (expensive, complicated) giant database intended to track children through their school days to adulthood - and, by the time they get there, most likely beyond. Expert reports the government itself commissioned and paid for advised against taking the risk of data breaches. While you're at it, modernize data protection and drop data retention.

Reported: 626. Swing: 5.3 percent: A hung Parliament (as opposed to hanging chad). Good. For the last 36 years Britain has been ruled by an uninterrupted elective dictatorship. It is about time the parties were forced to work together again. Is anyone seriously in doubt that the problems the country has are bigger than any one party's interests? Bring on proportional representation. Like they have in Scotland.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 16, 2010

Data-mining the data miners

The case of murdered Colombian student Anna Maria Chávez Niño, presented at this week's Privacy Open Space, encompasses both extremes of the privacy conundrum posed by a world in which 400 million people post intimate details about themselves and their friends onto a single, corporately owned platform. The gist: Chávez met her murderers on Facebook; her brother tracked them down, also on Facebook.

Speaking via video link to Cédric Laurant, a Brussels-based independent privacy consultant, Juan Camilo Chávez noted that his sister might well have made the same mistake - inviting dangerous strangers into her home - by other means. But without Facebook he might not have been able to identify the killers. Criminals, it turns out, are just as clueless about what they post online as anyone else. Armed with the CCTV images, Chávez trawled Facebook for similar photos. He found the murderers selling off his sister's jacket and guitar. As they say, busted.

This week's PrivacyOS was the fourth in a series of EU-sponsored conferences to collaborate on solutions to that persistent, growing, and increasingly complex problem: how to protect privacy in a digital world. This week's focused on the cloud.

"I don't agree that privacy is disappearing as a social value," said Ian Brown, one of the event's organizers, disputing Mark privacy-is-no-longer-a-social-norm Zuckerberg's claim. The world's social values don't disappear, he added, just because some California teenagers don't care about them.

Do we protect users through regulation? Require subject releases for YouTube or Qik? Require all browsers to ship with cookies turned off? As Lilian Edwards observed, the latter would simply make many users think the Internet is broken. My notion: require social networks to add a field to photo uploads in which users must enter an expiration date, after which the photo is deleted.
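A minimal sketch of that notion (names and details hypothetical; this illustrates the idea, not anyone's product): make the expiration date a required field on upload and purge anything past it on a schedule.

```python
# Sketch of mandatory photo expiry (hypothetical names throughout).
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List, Optional

@dataclass
class PhotoUpload:
    owner: str
    blob_id: str
    expires_on: date  # required - no "keep forever" default

class PhotoStore:
    def __init__(self) -> None:
        self._photos: List[PhotoUpload] = []

    def upload(self, owner: str, blob_id: str, expires_on: date) -> None:
        if expires_on <= date.today():
            raise ValueError("expiry date must be in the future")
        self._photos.append(PhotoUpload(owner, blob_id, expires_on))

    def purge_expired(self, today: Optional[date] = None) -> int:
        """Meant to run daily; returns the number of photos deleted."""
        today = today or date.today()
        kept = [p for p in self._photos if p.expires_on > today]
        deleted = len(self._photos) - len(kept)
        self._photos = kept
        return deleted

store = PhotoStore()
store.upload("wendyg", "photo-123", date.today() + timedelta(days=365))
print(store.purge_expired())  # 0 today; 1 once the year is up
```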

But, "This is meant to be a free world," Humberto Morán, managing director of Friendly Technologies, protested. Free as in speech, free as in beer, or free as in the bargain we make with our data so we can use Facebook or Google? We have no control over those privacy policy contracts.

"Nothing is for free," observed NEC's Amardeo Sarma. "You pay for it, but you don't know how you pay for it." The key issue.

What frequent flyers know is that they can get free flights once in a while in return for their data. What even the brightest, most diligent, and most paranoid expert cannot tell them is what the consequences of that trade will be 20 years from now, though the Privacy Value Networks project is attempting to quantify this. It's hard: any photographer will tell you that a picture's value is usually highest when it's new, but sometimes suddenly skyrockets decades later when its subject shoots unexpectedly to prominence. Similarly, the value of data, said David Houghton, changes with time and context.

It would be more accurate to say that it is difficult for users to understand the trade-offs they're making, and that there are no incentives for government or commerce to make it easy. And, as the recent "You have 0 Friends" episode of South Park neatly captures, the choice for users is often not between being careful and being careless but between being a hermit and participating in modern life.

Better tools ought to be a partial solution. And yet: the market for privacy-enhancing technologies is littered with market failures. Even the W3C's own Platform for Privacy Preferences (P3P), for example, is not deployed in the current generation of browsers - and when it was provided in Internet Explorer users didn't take advantage of it. The projects outlined at PrivacyOS - PICOS and PrimeLife - are frustratingly slow to move from concept to prototype. The ideas seem right: provide a way to limit disclosures and authenticate identity so as to minimize data trails. But, Lilian Edwards asked: is partial consent or partial disclosure really possible? It's not clear that it is, partly because your friends are also now posting information about you. The idea of a decentralized social network, workshopped at one session, is interesting, but might be as likely to expand the problem as modulate it.

And, as it has throughout the 25 years since the first online communities were founded, the problem keeps growing exponentially in size and complexity. The next frontier, said Thomas Roessler: the sensor Web that incorporates location data and input from all sorts of devices throughout our lives. What does it mean to design a privacy-friendly bathroom scale that tweets your current and goal weights? What happens when the data it sends gets mashed up with the site you use to monitor the calories you consume and burn and your online health account? Did you really understand when you gave your initial consent to the site what kind of data it would hold and what the secondary uses might be?

So privacy is hard: to define, to value, to implement. As Seda Gürses, studying how to incorporate privacy into social networks, said, privacy is a process, not an event. "You can't do x and say, Now I have protected privacy."


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. This blog eats non-spam comments for reasons surpassing understanding.

March 19, 2010

Digital exclusion: the bill

The workings of British politics are about as clear to foreigners as cricket, and unlike the US there's no user manual. (Although we can recommend Anthony Trollope's Palliser novels and the TV series Yes, Minister as good sources of enlightenment on the subject.) But what it all boils down to in the case of the Digital Economy Bill is that the rights of an entire nation of Internet users are about to get squeezed between a rock and an election unless something dramatic happens.

The deal is this: the bill has completed all the stages in the House of Lords, and is awaiting its second reading in the House of Commons. Best guesses are that this will happen on or about March 29 or 30. Everyone expects the election to be called around April 8, at which point Parliament disbands and everyone goes home to spend three weeks intensively disrupting the lives of their constituency's voters just as they're sitting down to dinner. Just before Parliament dissolves there's a mad dash to wind up whatever unfinished business remains, universally known as the "wash-up". The Digital Economy Bill is one of those pieces of unfinished business. The fun part: anyone who's actually standing for election is of course in a hurry to get home and start canvassing. So the people actually in the chamber during the wash-up, while the front benches hastily agree to pass things through on the nod, are likely to be retiring MPs and others with no urgent election business.

"What we need," I was told last night, "is a huge, angry crowd." The Open Rights Group is trying to organize exactly that for this Wednesday, March 24.

The bill would enshrine three strikes and disconnection into law. Since the Lords' involvement, it also provides for Web censorship. It arguably up-ends at least 15 years of government policy promoting the Internet as an engine of economic growth, all to benefit one single economic sector. How would the disconnected vote, pay taxes, or engage in community politics? What happened to digital inclusion? More haste, less sense.

Last night's occasion was the 20th anniversary of Privacy International (Twitter: @privacyint), where most people were polite to speakers David Blunkett and Nick Clegg. Blunkett, who was such a front-runner for a second Lifetime Menace Big Brother Award that PI renamed the award after him, was an awfully good sport when razzed; you could tell that having his personal life hauled through the tabloid press in some detail has changed many of his views about privacy. Though the conversion is not quite complete: he's willing to dump the ID card, but only because it makes so much more sense just to make passports mandatory for everyone over 16.

But Blunkett's nearly deranged passion for the ID card was at least his own. The Digital Economy Bill, on the other hand, seems to be the result of expert lobbying by the entertainment industry, most especially the British Phonographic Industry. There's a new bit of it out this week in the form of the Building a Digital Economy report, which threatens the loss of 250,000 jobs in the UK alone (1.2 million in the EU, enough to scare any politician right before an election). Techdirt has a nice debunking summary.

A perennial problem, of course, is that bills are notoriously difficult to read. Anyone who's tried knows these days they're largely made up of amendments to previous bills, and therefore cannot be read on their own; and while they can be marked up in hypertext for intelligent Internet perusal this is not a service Parliament provides. You would almost think they don't really want us to read these things.

Speaking at the PI event, Clegg deplored the database state that has been built up over the last ten to 15 years, the resulting change in the relationship between citizen and state, and especially the omission summed up in his remark that "no one ever asked people to vote on giant databases." Such a profound infrastructure change, he argued, should have been a matter for public debate and consideration - and wasn't. Even Blunkett, who attributed some of his change in views to his involvement in the movie Erasing David (opening on UK cinema screens April 29), while still mostly defending the DNA database, said that "We have to operate in a democratic framework and not believe we can do whatever we want."

And here we are again with the Digital Economy Bill. There is plenty of back and forth among industry representatives. ISPs estimate the cost of the DEB's Web censorship provisions at up to £500 million. The BPI disagrees. But where is the public discussion?

But the kind of thoughtful debate that's needed cannot take place in the present circumstances with everyone gunning their car engines hoping for a quick getaway. So if you think the DEB is just about Internet freedoms, think again; the way it's been handled is an abrogation of much older, much broader freedoms. Are you angry yet?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 12, 2010

The cost of money

Everyone except James Allan scrabbled in the bag Joe DiVanna brought with him to the Digital Money Forum (my share: a well-rubbed 1908 copper penny). To be fair, Allan had already left by then. But even if he hadn't he'd have disdained the bag. I offered him my pocketful of medium-sized change and he looked as disgusted as if it were a handkerchief full of snot. That's what living without cash for two years will do to you.

Listen, buddy, like the great George Carlin said, your immune system needs practice.

People in developed countries talk a good game about doing away with cash in favor of credit cards, debit cards, and Oyster cards, but the reality, as Michael Salmony pointed out, is that 80 percent of payments in Europe are...cash. Cash seems free to consumers (where cards carry visible charges), but it costs European banks €84 billion a year. Less visibly, banks also benefit (when the shadow economy hoards high-value notes, it's an interest-free loan), and governments profit from seigniorage (when people buy but never spend coins).

"Any survey about payment methods," Salmony said Wednesday, "reveals that in all categories cash is the preferred payment method." You can buy a carrot or a car; it costs you nothing directly; it's anonymous, fast, and efficient. "If you talk directly to supermarkets, they all agree that cash is brilliant - they have sorting machines, counting machines...It's optimized so well, much better than cards."

The "unbanked", of course, such as the London migrants Kavita Datta studies, have no other options. Talk about the digital divide, this is the digital money divide: the cashless society excludes people who can't show passports, can't prove their address, or are too poor to have anything to bank with.

"You can get a job without a visa, but not without a bank account," one migrant worker told her. Electronic payments, ain't they grand?

But go to Africa, Asia, or South America, and everything turns upside down. There, too, cash is king - but there, unlike here with banks and ATMs on every corner and a fully functioning system of credit cards and other substitutes, cash is a terrible burden. Of the 2.6 billion people living on less than $2 a day, said Ignacio Mas, fewer than 10 percent have access to formal financial services. Poor people do save, he said, but their lack of good options means they save in bad ways.

They may not have banks, but most do have mobile phones, and therefore digital money means no long multi-bus rides to pay bills. It means being able to send money home at low cost. It means savings that can't easily be stolen. In Ghana, 80 percent of the population have no access to financial services - but 80 percent are covered by MTN, which is partnering with the banks to fill the gap. In Pakistan, Tameer Microfinance Bank partnered with Telenor to launch Easypaisa, which did 150,000 transactions in its first month and expects a million by December. One million people produce milk in Pakistan; Nestle pays them all, painfully, by check every month. The opportunity in these countries to leapfrog traditional banking and head straight into digital payments is staggering - and our banks won't even care. The average account balance of Kenya's M-Pesa customers is...$3.

When we're not destroying our financial system, we have more choices. If we're going to replace cash, what do we replace it with and what do we need? Really smart people to figure out how to do it right - like Isaac Newton, said Thomas Levenson. (Really. Who knew Isaac Newton had a whole other life chasing counterfeiters?) Law and partnership protocols and banks to become service providers for peer-to-peer finance, said Chris Cook. "An iTunes moment," said Andrew Curry. The democratization of money, suggested conference organizer David Birch.

"If money is electronic and cashless, what difference does it make what currency we use?" Why not...kilowatt hours? You're always going to need to heat your house. Global warming doesn't mean never having to say you're cold.

Personally, I always thought that if our society completely collapsed, it would be an excellent idea to have a stash of cigarettes, chocolate, booze, and toilet paper. But these guys seemed more interested in the notion of Facebook units. Well, why not? A currency can be anything. Second Life has Linden dollars, and people sell virtual game world gold for real money on eBay.

I'd say for the same reason that most people still walk around with notes in their wallet and coins in their pocket: we need to take our increasing abstraction step by step. Many digital cash schemes have failed, despite excellent technology, because they asked people to put "real" money into strange units with no social meaning and no stored trust. Birch is right: storing value in an Oyster card is no different from storing value in Beenz. But if you say that money is now so abstract that it's a collective hallucination, then the corroborative details that give artistic verisimilitude to an otherwise bald and unconvincing currency really matter.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 5, 2010

The surveillance chronicles

There is a touching moment at the end of the new documentary Erasing David, which had an early screening last night for some privacy specialists. In it, Katie, the wife of the film's protagonist, filmmaker David Bond, muses on the contrast between the England she grew up in and the "ugly" one being built around her. Of course, many people become nostalgic for a kinder past when they reach a certain age, but Katie Bond is probably barely 30, and what she is talking about is the engorging Database State (PDF).

Anyone watching this week's House of Lords debate on the Digital Economy Bill probably knows how she feels. (The Open Rights Group has advice on appropriate responses.)

At the beginning, however, Katie's biggest concern is that her husband is proposing to "disappear" for a month leaving her alone with their toddler daughter and her late-stage pregnancy.

"You haven't asked," she points out firmly. "You're leaving me with all the child care." Plus, what if the baby comes? They agree in that case he'd better un-disappear pretty quickly.

And so David heads out on the road with a Blackberry, a rucksack, and an increasingly paranoid state of mind. Is he safe being video-recorded interviewing privacy advocates in Brussels? Did "they" plant a bug in his gear? Is someone about to pounce while he's sleeping under a desolate Welsh tree?

There are real trackers: Cerberus detectives Duncan Mee and Cameron Gowlett, who took up the challenge to find him given only his (rather common) name. They try an array of approaches, both high- and low-tech. Having found the Brussels video online, they head to St Pancras to check out arriving Eurostar trains. They set up a Web site to show where they think he is and send the URL to his Blackberry to see if they can trace him when he clicks on the link.
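That last trick is simpler than it sounds. Here's a toy version (hypothetical port and page; real investigators would add a geolocation lookup): whoever clicks a tracking link volunteers at least an IP address, which hints at location, and a User-Agent, which hints at device.

```python
# Toy tracking-link server (a sketch; port and page are hypothetical).
# Whoever clicks the link reveals their IP address and User-Agent.
from http.server import BaseHTTPRequestHandler, HTTPServer

class TrackingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Log the visit: the IP hints at location, the User-Agent at device.
        print(f"hit: ip={self.client_address[0]} "
              f"ua={self.headers.get('User-Agent')} path={self.path}")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>We think we know where you are.</body></html>")

HTTPServer(("", 8080), TrackingHandler).serve_forever()
```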

In the post-screening discussion, Mee added some new detail. When they found out, for example, that David was deleting his Facebook page (which he announced on the site and of which they'd already made a copy), they set up a dummy "secret replacement" and attempted to friend his entire list of friends. About a third of Bond's friends accepted the invitation. The detectives took up several party invitations thinking he might show.

"The Stasi would have had to have a roomful of informants," said Mee. Instead, Facebook let them penetrate Bond's social circle quickly on a tiny budget. Even so, and despite all that information out on the Internet, much of the detectives' work was far more social engineering than database manipulation, although there was plenty of that, too. David himself finds the material they compile frighteningly comprehensive.

In between pieces of the chase, the filmmakers include interviews with an impressive array of surveillance victims, politicians (David Blunkett, David Davis), and privacy advocates including No2ID's Phil Booth and Action on Rights for Children's Terri Dowty. (Surprisingly, no one from Privacy International, I gather because of scheduling issues.)

One section deals with the corruption of databases, the kind of thing that can make innocent people unemployable or, in the case of Operation Ore, destroy lives such as that of Simon Bunce. As Bunce explains in the movie, 98.2 percent of the Operation Ore credit card transactions were fraudulent.

Perhaps the most you-have-got-to-be-kidding moment is when former minister David Blunkett says that collecting all this information is "explosive" and that "Government needs to be much more careful" and not just assume that the public will assent. Where was all this people-must-agree stuff when he was relentlessly championing the ID card? Did he - my god! - learn something from having his private life exposed in the press?

As part of his preparations, Bond investigates: what exactly do all these organizations know about him? He sends out more than 80 subject access requests to government agencies, private companies, and so on. Amazon.com sends him a pile of paper the size of a phone book. Transport for London tells him that even though his car is exempt his movements in and out of the charging zone are still recorded and kept. This is a very English moment: after bashing his head on his desk in frustration over the length of his wait on hold, when a woman eventually starts to say, "Sorry for keeping you..." he replies, "No problem".

Some of these companies know things about him he doesn't or has forgotten: the time he "seemed angry" on the phone to a customer service representative. "What was I angry about on November 21, 2006?" he wonders.

But probably the most interesting journey, after all, is Katie's. She starts with some exasperation: her husband won't sign this required form giving the very good nursery they've found the right to do anything it wants with their daughter's data. "She has no data," she pleads.

But she will have. And in the Britain she's growing up in, that could be dangerous. Because privacy isn't isolation and it isn't not being found. Privacy means being able to eat sand without fear.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


February 26, 2010

The community delusion

The court clerk - if that's the right term - seemed slightly baffled by the number of people who showed up for Tuesday's hearing in Simon Singh v. British Chiropractic Association. There was much rearrangement, as the principals asked permission to move forward a row to make an extra row of public seating and then someone magically produced eight or ten folding chairs to line up along the side. Standing was not allowed. (I'm not sure why, but I guess something to do with keeping order and control.)

It was impossible to listen to the arguments without feeling a part of history. Someday - ten, 50, 150 years from now - a different group of litigants will be sitting in that same court room or one very like it in the same building and will cite "our" case, just as counsel cited precedents such as Reynolds and Branson v Bower. If Singh's books don't survive, his legal case will, as may the effects of the campaign to reform libel law (sign the petition!) it has inspired and the Culture, Media, and Sport report (Scribd) that was published on Wednesday. And the sheer stature of the three judges listening to the appeal - Lord Chief Justice Lord Judge (to Americans: I am not making this up!), Master of the Rolls Lord Neuberger, and Lord Justice Sedley - ensures it will be taken seriously.

There are plenty of write-ups of what happened in court and better-informed analyses than I can muster to explain what it means. The gist, however: it's too soon to tell which pieces of law will be the crucial bits on which the judges base their decision. They certainly seemed to me to be sympathetic to the arguments made by Singh's counsel, Adrienne Page QC, and much less so to those of the BCA's counsel, Heather Rogers QC. But the case will not be decided on the basis of sympathy; it will be decided on the basis of legal analysis. "You can't read judges," David Allen Green (aka jackofkent) said to me over lunch. So we wait.

But the interesting thing about the case is that this may be the first important British legal case to be socially networked: here is a libel case featuring no pop stars or movie idols, and yet they had to turn some 20 or 30 people away from the courtroom. Do judges read Twitter?

Beginning with Howard Rheingold's 1993 book The Virtual Community, it was clear that the Net's defining characteristic as a medium is its enablement of many-to-many communication. Television, publishing, and radio are all one-to-many (if you can consider a broadcaster/publisher a single gatekeeper voice). Telephones and letters are one-to-one, by and large. By 1997, business minds, most notably John Hagel III and Arthur Armstrong in net.gain, had begun saying that the networked future of businesses would require them to build communities around themselves. I doubt that Singh thinks of his libel case in that light, but today's social networks (which are a reworking of earlier systems such as Usenet and online conferencing systems) are enabling him to do just that. The leverage he's gained from that support is what is really behind both the challenge to English libel law and the increasing demand for chiropractors generally to provide better evidence or shut up.

Given the value everyone else, from businesses to cause organizations to individual writers and artists, places on building an energetic, dedicated, and active fan base, it's surprising to see Richard Dawkins, whose supporters have apparently spent thousands of unpaid hours curating his forums for him, toss away what by all accounts was an extraordinarily successful community supporting his ideas and his work. The more so because apparently Dawkins has managed to attract that community without ever noticing what it meant to the participants. He also apparently has failed to notice that some people on the Net, some of the time, are just the teeniest bit rude and abusive to each other. He must lead a very sheltered life, and, of course, never have moderated his own forums.

What anyone who builds, attracts, or aspires to such a community has to understand from the outset is that if you are successful your users will believe they own it. In some cases, they will be right. It sounds - without my having spent a lot of time poring over Dawkins' forums - as though in this case the users, or at least the moderators, had every right to feel they owned the place, because they did all the (unpaid) work. This situation is as old as the Net - in the days of per-minute connection charges, CompuServe's most successful (and economically rewarding to their owners) forums were built on the backs of volunteers who traded their time for free access. And it's always tough when users rediscover the fact that in each individual virtual community, unlike real-world ones, there is always a god who can pull the plug without notice.

Fortunately for the causes of libel law reform and requiring better evidence, Singh's support base is not a single community; instead, it's a group of communities who share the same goals. And, thankfully, those goals are bigger than all of us.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. I would love to hear (net.wars@skeptic.demon.co.uk) from someone who could help me figure out why this blog vapes all non-spam comments without posting them.

February 12, 2010

Light year

This year's general election is going to be the first in Britain in which blogging is a factor, someone said on Monday night at the event organized by the Westminster Skeptics on the subject of political blogging: does it make any difference? I had to stop and think: really? Things like the Daily Kos have been part of the American political scene for so long now - Kos was founded in 2002 - that they've been through two national elections already.

But there it was: "2005 was my big break," said Paul Staines, who blogs as Guido Fawkes. "I was the only one covering it. 2010 is going to be much tougher." To stand out, he went on to say, you're going to need a good story. That's what they used to tell journalists.

Due to the wonders of the Net, you can experience the debate for yourself. The other participants were Sunny Hundal (Liberal Conspiracy), Mick Fealty (Slugger O'Toole), Jonathan Isaby (Conservative Home), and the Observer journalist Nick Cohen, there to act as the token nay-sayer. (I won't use skeptic, because although the popular press like to see a "skeptic" as someone who's just there to throw brickbats, I use the term rather differently: skepticism is inquiry and skeptics ask questions and examine evidence.)

All four political bloggers have a precise idea of what they're trying to do and whom they're writing for. Jonathan Isaby, who claims to be the first British journalist to leave a full-time newspaper job (at the Telegraph) for new media, said he's read almost universally among Conservative candidates. Paul Staines aims Guido Fawkes at "the Westminster bubble". Mick Fealty uses Slugger O'Toole to address a "differentiated audience" that is too small for TV, radio, and newspapers. Finally, Sunny Hundal uses Liberal Conspiracy to try to "get the left wing to become a more coherent force".

Cohen's basic platform, despite the bloggers' various successes, was a defense of newspapers. Blogging, he said, is not replacing the essential core of journalism: investigation and reporting. He's right, up to a point. But some bloggers do exactly that. Westminster Skeptics convenor David Allen Green, then standing approximately eight inches away, is one example. But it's probably true that for every blogger with sufficient curiosity and commitment to pick up a phone or bang on someone's door there are a couple of hundred more who write postings by draping a couple of hundred words of opinion around a link to a story that appeared in the mainstream media.

Of course, as Cohen didn't say, plenty of journalists, through lack of funding, lack of time, or lack of training, find themselves writing news stories by draping a couple of hundred words of rewritten press release around the PR-provided quotes - and soul-destroying work it is, too. My answer to Cohen, therefore, is that commercial publishers have contributed to their own problems, and that one reason blogs have become such an entrenched medium is that they cover things no newspaper will allow you to write about in any detail. And it's hard to argue with Cohen's claim that almost any blogger finding a really big story will do the sensible thing and sell it to a newspaper.

If you can. Arguably the biggest political story of 2009 was MPs' expenses. That material was released because of the relentless efforts of Heather Brooke, who seized on the 2005 entry into force of the UK's Freedom of Information Act as a golden opportunity. It took her nearly five years to force the disclosure of MPs' expenses - and when she finally succeeded, the Telegraph wrote its own stories after poring over the disclosed details.

The fact is that political blogging has been with us for far longer than one five-year general election cycle. It's just that most of it does not take the same form as the "inside politics" blogs of the US or the traditional Parliamentary sketches in the British newspapers. The push for libel reform began with Jack of Kent (David Allen Green); the push to get the public more engaged with their MPs began with MySociety's Fax Your MP. It was clear as long ago as 2006 that MPs were expert users of They Work For You: it's how they keep tabs on each other. MySociety's sites are not blogs - but they are the source material without which political blogging would be much harder work.

I don't find it encouraging to hear Isaby predict that in the upcoming election (expected in May) blogging "will keep candidates on their toes" because "gaffes will be more quickly reported". Isn't this the problem with US elections - that everyone gets hung up on calumnies, such as the claim that Al Gore said he'd invented the Internet? Serious issues fall by the wayside, and good candidates can be severely damaged by biased reporting that happens to feed an eminently quotable sarcastic joke. Still: anything for a little light into the smoke-filled back rooms where British politics is still made. Even with smoking now banned, it's murky back there.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

January 29, 2010

Game night

Why can't computer games get any serious love? The maverick Labour MP Tom Watson convened a meeting this week to ask just that. (Watson is also pushing for the creation of an advocacy group, Gamers' Voice (Facebook).) Judging from the dates, the meeting was not convened in response to claims that playing computer games causes rickets.

Pause to go, "Huh?"

We all know what causes rickets in the UK. Winter at these crazy high latitudes causes rickets in the UK. Given the amount of atmosphere and cloud it has to get through in the darker months, sunlight can't muster enough oomph to make Vitamin D on the skins of the pasty, blue-white people they mostly have here. The real point of the clinical review paper that kicked off this round of media nonsense, as Watson rants, is that half of all UK adults are deficient in Vitamin D in the winter and spring. Well, duh. Wearing sunscreen has made it worse. So do clothes. And this: to my vast astonishment, on arrival here I discovered they don't put Vitamin D in the milk. But, hey, let's blame computer games!

And yet: games are taking over. In December, Chart-Track market research found that the UK games industry is now larger than its film industry. Yesterday's game-playing kids are today's game-playing parents. One day we'll all be gamers on this bus. Criminals pay more for stolen World of Warcraft accounts than for credit card accounts (according to Richard Bartle), and the real-money market for virtual game world props is worth billions (PDF). But the industry gets no government support. Hence Watson's meeting.

At this point, I must admit that net.wars, too, has been deficient: I hardly ever cover games. As a freelance, I can't afford to be hooked on them, so I don't play them, so I don't know enough to write about them. In the early-to-mid 1990s I did sink hours into Hitchhiker's Guide to the Galaxy, Minesweeper, Commander Keen, Lemmings, Wolfenstein 3D, Doom, Doom 2, and some of Duke Nukem. At some point, I decided it was a bad road. When I waste time unproductively I need to feel that I'm about to do something useful. I switched the mouse to the left hand, mostly for ergonomic reasons, and my slightly lower competence with it was sufficient to deter further exploration. The other factor: Quake made it obvious that I'd reached my theoretical limit.

I know games are different now. I've watched a 20-something friend play World of Warcraft and Grand Theft Auto; I've even traded deaths with him in one of those multiplayer games where your real-life best friends are your mortal enemies. Watching him play The Sims as a recalcitrant teenager (is there any other kind?) was the most fun. It seemed like Cosmic Justice to see him shriek in frustration at the computer because the adults in his co-op household were *refusing to wash the dishes*. Ha!

For people who have jobs, games are a (sometimes shameful) hobby; for people who are self-employed they are a dangerous menace. Games are amateur sports without the fresh air. And they are today's demon medium, replacing TV, comic books (my parents believed these rotted the brain), and printed multi-volume novels. All of that contributes to why games get relatively little coverage outside of specialist titles and writers such as Aleks Krotoski and are studied by rare academics like Douglas Thomas and Richard Bartle.

Except: it's arguable that the structure of games and the kind of thinking they require - logical, problem-solving, exploratory, experimental - does in fact inspire a kind of mental fitness that is a useful background skill for our computer-dominated world. There are, as Tom Chatfield, one of the evening's three panelists and an editor at Prospect, says in his new book Fun, Inc, many valuable things people can and do learn from games. (I once watched an inveterate game-playing teen extract himself from the maze at Hampton Court in 15 seconds flat.)

And in fact, that's the thought with which the seminal game-cum-virtual-world was started: in writing MUD, Bartle wanted to give people the means to explore their identities by creating different ones.

It's also fun. And an escape from drab reality. And a challenge. And active, rather than passive, entertainment. The critic Sam Leith (who has compared World of Warcraft to Chartres Cathedral) pointed out that the violent shoot-'em-up games that get the media attention are a small, stereotyped sector of the market that deliberately insert shocking violence recursively to get media attention and increase sales. Limiting the conversation to one stereotypical theme is the problem, not games themselves.

Philip Oliver, founder and CEO of the large UK independent games developer Blitz Games, listed some cases in point: in their first 12 weeks on release, his company sold 500,000 copies of its The Biggest Loser TV tie-in and 3.8 million copies of its Burger King advertising game. And what about that wildly successful Wii Fit?

If you say, "That's different", there is the problem.

Still, if game players are all going to be stereotyped as violent players shooting things...I'm not sure who pointed out that the Houses of Parliament are a fabulous gothic castle in which to set a shoot-'em-up, but it's a great idea. Now, that would really be government support!

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here or send email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 6, 2009

Wigging

The received wisdom in tennis has always been that drugs are a non-issue. There is, the argument goes, no drug that can supply the particular mix of talents and skills that are needed to win you tennis matches. In her 1985 book, Passing Shots on Tour, Pam Shriver noted another reason for the women, courtesy of former player JoAnne Russell: they're too cheap to buy their own drugs.

The situation with respect to recreational drugs has been a little less shrouded in mystery. The 1970s top ten player and 1977 Australian Open winner Vitas Gerulaitis, for example, admitted to cocaine use, and in his 1985 autobiography, I Never Played the Game, the veteran US sports commentator Howard Cosell speculated that it was unlikely that none of tennis's dozens of young, rich, successful, jet-setting players had at least dabbled in such things. Other revelations have surfaced from time to time, most notoriously Jennifer Capriati's 1993 marijuana bust. Now, Andre Agassi has admitted to using crystal meth in 1997, the year his ranking plunged to a low of 141.

As advertisements for drug use go, this is a pretty good one for the ill-effects: one of the most talented players in the history of the game couldn't even keep himself in the top 100 while using.

Still, Agassi's admission - and still more, the ATP's acceptance of the lies he told to avoid exposure and a three-month suspension - has set off a predictable firestorm between the self-righteous and the forgiving. McEnroe's admission in his 2002 autobiography that he had (unknowingly, he said) taken steroids during his playing career caused much less outcry.

It has long been my belief that players should not be tested, certainly not disqualified, for recreational drug use. Agassi's case seems to suggest otherwise, as the ATP's notification of his failed test frightened him into rehabilitating himself, his game, and his life, turning him from an underachiever to a tennis great. But if the tours are going to behave as rescuers in this way they should also direct their energies to finding ways to lower the injury rate, a much more visibly widespread and career-damaging problem.

In any event, it was always clear that in today's corporate sports exposing drug use on the part of tennis's top stars would benefit no one. Neither tours nor tournament promoters nor sponsors can tolerate scandal concerning their top box office draws. Even competitors do not benefit as much as you might think if a top star is taken out. Yes, their opportunities to rise in the rankings or win a particular tournament may be enhanced. But the star players like Agassi and McEnroe pull in the money and fans that enable everyone else to make a living.

It certainly seems as though today things would be handled differently. Take, for example, the case of the young, up-and-coming Belgian player Yanina Wickmayer, a semifinalist at the recent US Open, who has just been suspended for a year, potentially permanently wrecking her career, for failing to notify the drug testing authorities of her daily whereabouts (reportedly her appeal will rest on being unable to log onto the WADA Web site for two weeks). The whereabouts rule was the subject of much criticism by the players when it was introduced at the beginning of the year. They thought of the difficulties of leaving town hastily after losses; they thought of the logistical problems of sudden schedule changes. No one mentioned Internet failures, but it's an oh-so-credible explanation.

A lot of things have changed since 1997 to satisfy critics. The tours are no longer responsible for their own drug testing, removing both the obvious conflict of interest (good) and the best source of help for the players (bad). The retired Spanish player Sergi Bruguera, who lost to Agassi in the 1996 Olympic final in Atlanta, is complaining that Agassi should now be stripped of his gold medal. His logic is unclear given the reported dates, but it's easy to understand the betrayal a player would feel on learning that another got special protection. WADA has said both that it would like the case investigated and that now, past the eight-year statute of limitations, nothing can be done to punish Agassi.

But the people who should be most upset are those innocent athletes who are wrongfully accused. WADA's preferred zero-tolerance view seems to be that, contrary to the presumption of innocence in a democratic society, there is no such thing as an innocent explanation. Yet there have certainly been cases of contaminated supplements, of medically necessary ingestion, and of confusion over which substances should be on the banned list (PDF).

Agassi's telling the truth about himself was certainly not a bad thing for him or his publishers; it is not even a bad thing for the game, since rational policy-making depends on the availability of factual evidence. But it will still make it harder for any athlete who is actually innocent to be believed, no matter what the exculpating evidence. As unintended consequences go, that's a real shame.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 24, 2009

Security for the rest of us


Many governments, faced with the question of how to improve national security, would do the obvious thing: round up the usual suspects. These would be, of course, the experts - that is, the security services and law enforcement. This exercise would be a lot like asking the record companies and film studios to advise on how to improve copyright: what you'd get is more of the same.

This is why it was so interesting to discover that the US National Academies of Sciences was convening a workshop to consult on what research topics to consider funding, and began by appointing a committee that included privacy advocates and usability experts: folks like Microsoft researcher Butler Lampson; Susan Landau, co-author of books on privacy and wiretapping; and Donald Norman, author of the classic book The Design of Everyday Things. Choosing these people suggests that we might be approaching a watershed like that of the late 1990s, when the UK and the US governments were both forced to understand that encryption was not just for the military any more. The peace-time uses of cryptography - securing Internet transactions and protecting mobile phone calls from casual eavesdropping - are much broader than its war-time use securing military communications.

Similarly, security is now everyone's problem, both individually and collectively. The vulnerability of each individual computer is a negative network externality, as NYU economist Nicholas Economides pointed out. But, as many asked, how do you get people to understand remote risks? How do you make the case for added inconvenience? Each company we deal with assumes that we can afford the time to "just click to unsubscribe" or remember just one more password, without understanding the growing aggregate burden on us. Norman commented that door locks are a trade-off, too: we accept a little bit of inconvenience in return for improved security. But locks don't scale; they're acceptable only as long as we have a small number of them to manage.

In his 2006 book, Revolutionary Wealth, Alvin Toffler comments that most of us, without realizing it, have a hidden third, increasingly onerous job: "prosumer". Companies, he explained, are increasingly saving money by having us do their work for them. We retrieve and print out our own bills, burn our own CDs, provide unpaid technical support for ourselves and our families. One of Lorrie Cranor's students did the math to calculate the cost in lost time and opportunity if everyone in the US read, once a year, the privacy policy of each Web site they visit in a typical month. Privacy policies mostly require college-level reading skills; figure 244 hours per year per person, $3,544 each...$781 billion nationally. Weren't computers supposed to free us of that kind of drudgery? As everything moves online, aren't we looking at a full-time job just managing our personal security?
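
Out of curiosity, the assumptions buried in those figures can be reverse-engineered. A quick back-of-the-envelope sketch in Python; the implied hourly value of time and the implied population are my inferences from the column's numbers, not figures from the study itself:

    hours_per_person = 244    # hours/year spent reading privacy policies
    cost_per_person = 3544    # dollars/year, as quoted above
    national_cost = 781e9     # dollars/year, as quoted above

    implied_hourly_value = cost_per_person / hours_per_person  # ~$14.52/hour
    implied_population = national_cost / cost_per_person       # ~220 million people

    print(f"implied value of time: ${implied_hourly_value:.2f}/hour")
    print(f"implied online population: {implied_population / 1e6:.0f} million")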

That, in fact, is one characteristic that many implementations of security share with welfare offices - and that is becoming pervasive: an utter lack of respect for the least renewable resource, people's time. There's a simple reason for that: the users of most security systems are deemed to be the people who impose them, not the people - us - who have to run the gauntlet.

There might be a useful comparison to information overload, a topic that got a lot of attention about ten years back. When I wrote about that for ComputerActive in 1999, I discovered that everyone I knew had a particular strategy for coping with "technostress" (the editor's term). One dealt with it by never seeking out information and never phoning anyone. His sister refused to have an answering machine. One simply went to bed every day at 9pm to escape. Some refused to use mobile phones, others to have computers at home.

But back then, you could make that choice. How much longer will we be able to draw boundaries around ourselves by, for example, refusing to use online banking, file tax returns online, or participate in social networks? How much security will we be able to opt out of in future? How much do security issues add to technostress?

We've been wandering in this particular wilderness a long time. Angela Sasse, whose 1999 paper Users Are Not the Enemy talked about the problems with passwords at British Telecom, said frankly, "I'm very frustrated, because I feel nothing has changed. Users still feel security is just an obstacle there to annoy them."

In practice, the workshop was like the TV game show Jeopardy, where the answers come first and the contestants must supply the questions: the point was to generate research questions that will go into a report, which will be reviewed and redrafted before its eventual release. Hopefully, eventually, it will all lead to a series of requests for proposals and some really good research. It is a glimmer of hope.

Unless, that is, the gloominess of the beginning presentations wins out. If you listened to Lampson, Cranor, and Economides, you got the distinct impression that the best thing that could happen for security is that we rip out the Internet (built to be open, not secure), trash all the computers (all of whose operating systems were designed in the pre-Internet era), and start over from scratch. Or, like the old joke about the driver who's lost and asking for directions, "Well, I wouldn't start from here".

So, here's my question: how can we make security scale so that the burden stays manageable?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

June 4, 2009

Computers, Freedom, and Privacy 2009 - Day Four

The challenge posed by many of today's panelists: activism transfer. How do you get people communicating via Twitter, Facebook, and other social networks to take to the streets? Because that's where the real impact is.

How little things have changed since 1994, my first year at CFP, when Simon Davies dressed up as the Pope, read from the Book of Unix, and told everyone that if they wanted governments to listen they needed to stop sending around email petitions and organize at the grass roots level. In India, explained Gaurav Mishra, this meant getting people to vote instead of complaining that the system was corrupt and staying home.

Use online tools to build offline institutions, he concluded. "Real social change will not happen online."

But today's China panel - probably the best of all this year's offerings - made the point that although we have tended to assume that the Internet will bring democracy and light wherever it penetrates, China shows that the Internet can also be used to spread propaganda. You'd think this would have been obvious, but policy has tended to assume otherwise.

Said Rebecca MacKinnon, who is writing a book about China and the Internet, "It's true that China has shown that authoritarianism can do a lot better in the internet age than a lot of people ever expected."

China has implemented several different elements of control: many overseas sites and services are blocked (so many blogging sites are down "for maintenance" on this 20th anniversary of Tiananmen Square that there's a joke about China Maintenance Day). There is some change, but it's a slow evolution: "The Internet may be liberalizing people to some extent, but on the other hand, we're not going to see any kind of regime change." The liquid metal man in Terminator 2 only becomes a threat when the little blobs of metal flow together; you can let little local pockets of increasing liberalization occur as long as they never join together to become national.

In a later panel on taking Tweets to the street, Ralf Bendrath recounted organizing a 75,000-person demonstration against surveillance and in favor of privacy in Germany, starting with little more than a wiki. But, he noted, individual liberals are not the only voices that will be able to use these tools.

"We celebrate Obama's use of these tools because we believe in his ideology," said Mishra, going on to point out that in India a right-wing party that wants to restrict women's movements is at the forefront of using Twitter, Facebook, and blogging. "As much as I hate to say this, very soon we will find enthusiasm for these tools being tempered by realism that anybody can use them." The tools by themselves do not give us more power.

"Use online tools to build offline institutions," said Bendrath. "Real social change will not happen online."

Over and out. Anyone with ideas for next year should submit them now at www.cfp2010.org. Have a good year, folks!

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series. Readers are welcome to post here, follow on Twitter or email netwars@skeptic.demon.co.uk (but please turn off HTML).

Computers, Freedom, and Privacy 2009 - Day Three

"Do you feel guilty about killing newspapers?" Saul Hansell asked Craig Newmark yesterday. The founder of Craig's List, widely credited with stealing newspapers' classified ads, offered the mildly presented answer that it would be more correct to say that Craig's List, Amazon, and eBay took the newspapers' audience by offering them a more friendly and convenient marketplace.

At some point in the early 1900s, Charlotte-Anne Lucas explained today, newspapers changed from charging for content to charging for audiences, leading them to select content based on its mass appeal. Exactly - though she didn't say so - like AOL in the mid-1990s, when it switched from making its money from connect time, which favored all sorts of niche content, to making its money from advertising, which required mass eyeballs.

One advantage bloggers have, noted Marcy Wheeler, is that they don't have to frame every story as a controversy that can be resolved in 700 words (how like a sitcom).

My other favorite quote of the day, from a panel on whether government secrecy makes any sense in the post-Internet world: "Secrecy makes people stupid." The speaker, Steve Aftergood, a senior research analyst with the Federation of American Scientists, went on to note that the US spends $10 billion a year on keeping secrets - that is, protecting classified information. He didn't draw the obvious conclusion...

The panel, which included a former undercover agent (Mike German, now with the ACLU), a former director of the US Information Security Oversight Office (Bill Leonard), and a former chief information policy officer from the NSA (Mike Levin), is worth listening to in full. Satirists could have fun with Aftergood's later note, that while you can find out that the 2008 intelligence budget was $47.7 billion, and the 2007 budget was $43.5 billion, the 2006 number is classified - as is the budget from 50 years ago. Aftergood tried to find out the number from the 1940s and was refused; appeal was denied, second appeal was denied, and a lawsuit to force disclosure was unsuccessful. He's not sure how this figure could damage national security; I say with these numbers he could go on Letterman.

Still, it's a fair point to say that secrets are harder to keep than they've ever been, not least because the intelligence community is adopting the same kinds of tools the rest of us use, albeit versions closed to public access. Perhaps we can get away from the sort of thing John le Carré wrote about at the end of one of his books, in which an agent died for a fact that would be published in a Russian newspaper the following week. The good news is there's to be a review of all these procedures, a "unique opportunity", the panel called it, to effect real change.

We finished today with a selection of ultra-short presentations. Lock your credit record with a ten-digit code, said Jeremy Duffy, and celebrate Sam Warren, Brandeis's less famous partner, said Paul Rosenzweig. The highlight for me, though: meeting Veni Markowski (http://www.veni.com), whom I've read about for years as Bulgaria's cyberspace king. He's going to work now for the government to coordinate international action on cybersecurity. Good stuff.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk (but please turn off HTML).

June 3, 2009

Computers, Freedom, and Privacy 2009 - Day Two

One hundred and thirty-three days into the Obama Administration. He still has a lot of fans - one conference attendee was wearing silver Obama logo earrings yesterday, and CNet writer Declan McCullagh was pleased that a FOIA request that had kept him waiting for over a year was answered within a few weeks of the inauguration - but privacy advocates are beginning to carp that his record on privacy seems unlikely to be any improvement on his immediate predecessor's. Kicking off the day's first session, Susan Crawford talked some good principles, but a basic one - answering public questions - was off-limits.

McCullagh also noted that Obama has yet to fulfill his promise to post non-emergency legislation for public comment for five days before signing it.

Meanwhile, however, said the ACLU's Caroline Fredrickson, the US's Real ID effort, which threatened to unify state-issued driver's licenses into a single national ID card-equivalent, has halted under the pressure of the refusal of many individual states to participate. Why? Unworkable, costly, and invasive. Sounds like Britain's ID card, though the UK government still persists, lacking state governments to stand in its way.

"A mistake in the database can render you an unperson," she noted.

There was another good line on this: "Information asymmetry is how repressive regimes operate." The Internet's power to flatten information hierarchies all by itself might be why Nicole Wong wakes up every morning and checks her Blackberry to find out which country Google is blocked in today. As deputy general counsel for Google, she has the job not only of tracking that sort of thing but of trying to remove the blockages by negotiating with national governments. The New York Times recently described Wong as the person with the most influence over the exercise of free speech in the world.

Wong was part of my panel on Internet censorship, where we argued about censorship in the US, the UK, and Australia and debated whether John Gilmore's oft-quoted aphorism is still correct. "The Internet perceives censorship as damage, and routes around it," Gilmore thinks he probably said sometime in 1990 or thereabouts. Is that still true, given the computing power to do deep packet inspection? Very possibly not. Derek Bambauer had a neat list of the stages of Internet censorship. Version 1.0: it can't be done. Version 2.0: the bad guys do it. Version 3.0: everyone does it. Australia is on round two of let's-filter-the-Internet, and it is the world's pilot on this. The danger, Wong commented, is that we may get tied up in arguing whether it's OK to filter specific types of content; the existence of a filter in a country like Australia legitimizes filtering for the more repressive countries coming online that she has to negotiate with.

Perhaps the most surprising bit of the day was the appearance on the same panel of Bruce Schneier and Stewart Baker without acrimony. Valerie Caproni, the FBI's general counsel, also on that panel, was a little frostier, particularly when travel data privacy expert Edward Hasbrouck attacked her and the US government's apparent belief that foreigners do not have the same human rights as US citizens. Both Schneier and Baker fired off a few good lines. Schneier pointed out that as technology amplifies each individual's personal power, the harm that ten armed men can do to society keeps getting bigger. At what point, he asked, is that noise bigger than society?

Baker, who's made a sort of career of insulting the CFP crowd, more or less agreed: there is an illusion that the continued working of Moore's Law is always going to be beneficial to society. That aside, Baker was slightly miffed. After winning the Big Brother award for Worst Public Official in 2007, he said, Privacy International had yet to deliver his award. Via Twitter PI promised to deliver it. Eventually. When he least expects it.

More tomorrow.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk (but please turn off HTML).

June 2, 2009

Computers, Freedom, and Privacy 2009 - Day One


"Did you check that with your ethics committee?"

The speaker, who was feeling the strain of being a newcomer to privacy issues among a very tough, highly activist crowd, became a little shakier than she already was.

"I didn't need to," she said, or something very like it. "It's not interacting with humans, just computers."

We spend a lot of time talking about where the line might be between human intelligence and artificial intelligence, but the important question may not be the usual one. Not "What does it mean to be human?" but "How far down the layers of abstraction does human interaction persist?" If I send you email intended to deceive, clearly I'm interacting with a human. If I set up a Facebook account and use it to get you to friend me - by first friending one of your less careful friends - and never communicate directly with you, the line gets a little more attenuated. Someone who has thought more about computers than about people might get confused.

This sort of question is going to come up a lot as we get better at datamining, the subject of an all-day tutorial on the first day of CFP (you'll find a lot of streams and papers on the conference Web site, if you'd like to investigate further), and you can pick up notes-in-progress on the conference real-time Twitter feed. (I missed out on the annual civil liberties in cyberspace tutorial, and others on health data privacy and behavioral advertising.)

The important point, as speakers like Khaled El Emam, a research chair at the University of Ottawa, and Bradley Malin made clear, is that it's actually very difficult to anonymize data, no matter how much governments would like to persuade us otherwise. Pharmaceutical companies want medical data for research; governments want to give it to them in return for (they hope) lowered medical costs.

But what is identifiable data? Do you include data that can be reidentified when matched against a different dataset? The typical threat model assumes that an attacker will try once and give up. But in one case, Canadian media matched anonymized prescription data for an acne drug against published obituaries and managed to find four matching families. Media are persistent: they will call each family until they find the right one.

When we talk about anonymized data, therefore, we have to ask many more questions than we do now. What are the chances of unique records? What are the chances of unique records in the databases this database may be matched to? That determines how easy it is to find a particular individual's record. With just a name, full date of birth, and postal codes for the last year, 98 percent of 11 years of patient data covering 4 million people in Montreal was uniquely identifiable.
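
To make "uniquely identifiable" concrete, here is a minimal Python sketch of the kind of uniqueness check being described. The field names and toy records are hypothetical, and real studies use larger quasi-identifier sets and far more careful risk metrics:

    from collections import Counter

    def uniqueness_rate(records, quasi_ids):
        """Fraction of records whose quasi-identifier combination is unique."""
        combos = [tuple(r[f] for f in quasi_ids) for r in records]
        counts = Counter(combos)
        return sum(1 for c in combos if counts[c] == 1) / len(records)

    patients = [  # toy stand-ins for the Montreal-style records discussed above
        {"dob": "1970-03-01", "postcode": "H2X 1Y4", "rx": "isotretinoin"},
        {"dob": "1970-03-01", "postcode": "H2X 1Y4", "rx": "isotretinoin"},
        {"dob": "1985-11-23", "postcode": "H3Z 2B5", "rx": "isotretinoin"},
    ]
    print(uniqueness_rate(patients, ["dob", "postcode"]))  # 0.33: one of three is unique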

People have of course been working on this problem because patient data is incredibly valuable for research to improve public health.

The problem, as Malin noted, is that "People have been proposing methodologies for ten-plus years, and there's not much in the way of technology transfer."

El Emam had an explanation: "A lot of stuff is unusable." Really anonymizing the data using tools such as generalization, perturbation, or multi-party computation is currently not a practical option: it leaves you with a dataset you can't analyze using standard research tools. Ouch.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or reply by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

May 8, 2009

Automated systems all the way down

Are users getting better or worse?

At what? you might ask. Naturally: at being thorns in the side of IT security people. Users see security as damage, and route around it.

You didn't need to look any further than this week's security workshop, where this question was asked, to see this principle in action. The hotel-supplied wireless was heavily filtered: Web and email access only, no VPNs, "undesirable" sites blocked. Over lunch, the conversation: how to set up VPNs using port 443 to get around this kind of thing. The perfect balanced sample: everyone's a BOFH *and* a hostile user. Kind of like Jacqui Smith, who has announced plans to largely circumvent the European Court of Human Rights' ruling that Britain has to remove the DNA of innocent people from the database. Apparently, this government perceives European law as damage.

But the question about users was asked seriously. The workshop gathered security folks from all over to brainstorm and compare notes: what are the emerging security threats? What should we be worrying about? And, most important, what should people be researching?

Three working groups - smart environments, malware and fraud, and critical systems - came up with three different lists, mostly populated with familiar stuff - but the familiar stuff keeps going and getting worse. According to Symantec's latest annual report, spam, for example, was up 162 percent in 2008 over 2007, with a total of 349.6 billion messages sent - simply a staggering waste of resources. What has changed is targeting; new attacks are short-lived, small-distribution affairs - much harder to shut down.

Less familiar to me was the "patch window" problem, which basically goes like this: it takes 24 hours for 80 percent of Windows users to get a new patch from Windows Update. An attacker who downloads the patch as soon as it's available can quickly - within minutes - reverse-engineer it to find out what bug(s) it's fixing. That leaves the attacker most of a day in which to exploit the bug against the unpatched majority. Last year, Carnegie Mellon's David Brumley and others found a way to automate this process (PDF). An ironic corollary: the more bug-free the program, the easier a patch window attack becomes. Various solutions were discussed, none of them entirely satisfactory; the most likely was to roll out the patch locked and distribute a key only after the download cycle is complete.
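
That locked-rollout idea is easy to sketch. Here's a minimal illustration in Python using the third-party cryptography package's Fernet recipe; the vendor/client split and the function names are my own assumptions, not anyone's actual update mechanism:

    from cryptography.fernet import Fernet

    def build_locked_patch(patch_bytes: bytes):
        """Vendor side: encrypt the patch and hold the key back."""
        key = Fernet.generate_key()
        return Fernet(key).encrypt(patch_bytes), key

    def unlock_patch(locked_patch: bytes, key: bytes) -> bytes:
        """Client side: runs only once the vendor finally publishes the key."""
        return Fernet(key).decrypt(locked_patch)

    locked, key = build_locked_patch(b"binary diff goes here")
    # ...distribute `locked` for ~24 hours, then release `key` to everyone at once,
    # so attackers get nothing to reverse-engineer during the download window...
    assert unlock_patch(locked, key) == b"binary diff goes here"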

But back to the trouble with users: systems are getting more and more complex. A core router now has 5,000 lines of code; an edge router 11,000. Someone has to read and understand all those lines. And that's just one piece. "Today's networks are now so complex we don't understand them any more," said Cisco's Michael Behringer. Critical infrastructures need to be more like the iPhone: a complex system that nonetheless just about anyone can operate.

As opposed, I guess, to being like what most people have now: systems that are a mish-mash of strategies for getting around things that don't work. But I do see his point. Once you could debug even a large network by reading the entire configuration. Pause to remember the early days of Demon Internet, when the technical support staff would debug your connection by directly editing the code of the dial-up software we were all using, KA9Q. If you'd taken *those* humans out of the system, no one could have gotten online.

It's my considered view that while you can blame users for some things - the one in 12.5 million spam recipients who, Christian Kreibich said, actually buys the advertised pharma products springs to mind - blaming them in general is a lot like the old saw about how "only a poor workman blames his tools". It's more than 20 years since Donald Norman pointed out in The Design of Everyday Things that user error is often a result of poor system design. Yet a depressing percentage of the security folks complaining about system complexity don't even know his name, and the failure to understand human factors is security's single biggest failure.

Joseph Bonneau made this point in a roundabout way by considering Facebook, which, he said, really is reinventing the Web - not just in the rounded-corners sense, but in the sense of inventing its own protocols for things for which standards already exist. Plus - and more important for the user question - it's training users to do things that security people would rather they didn't, like click on emailed links without checking the URLs. "Social networks," he said, "are repeating all the Web's security problems - phishing, spam, 419 scams, identity theft, malware, cross-site scripting, click fraud, stalking...privacy is the elephant in the room." Worse, "They really don't yet have a business model, which makes dealing with security difficult."

It's a typical scenario in computing, where each new generation reinvents every wheel. And that's the trouble with automating everything, too. Have these people never used voice menus?

Get rid of the humans and replace them with automated systems that operate perfectly, great. But won't humans have to write the automated systems? No, automated systems will do that. And who will program those? Computers. And who...

Never mind.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to follow (and reply) on Twitter, post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

April 24, 2009

The way we were

Two people in the audience said they were actually at Woodstock.

The math: Champaign-Urbana's Virginia Theater seats 1,600 ("I saw all the Star Wars movies in this theater," said the guy behind me). Audience skews somewhat to Baby Boom and older. Mostly white. Half a million people at Woodstock. Hard to know, but the guy sitting next to me and I agreed: two *feels* right.

This week is Roger Ebert's Film Festival, a small, personal event likely to remain so because of its location: his Illinois home town. A nice, Midwestern town, chiefly known for the university whence came Mosaic. People outside the US may not know Ebert's work as well as those inside it: a Pulitzer Prize-winning print critic, he and fellow Chicago newspaper critic Gene Siskel invented TV movie criticism. The festival is a personal love letter to movie fans, to his home town, and to the movies he picks because he feels they deserve to be more widely known and/or appreciated.

This is what it's like: the second day the parents of one of the featured directors casually pull me to lunch in the student union cafeteria. "I used to sit at this table when I was a student here," said the wife. She pointed across the cafeteria. "Roger Ebert used to sit at that table over there." Her husband pointed in a third direction and added, "And that table over there is where we met."

People come because they love movies - and also love seeing them in a fine theater with perfect sound and projection filled with the ultimate in appreciative audiences. Watching Woodstock last night, people so much forgot that they weren't at a live concert that they applauded each act in turn. And when Country Joe yelled, "What does it spell?" they yelled back "FUCK" at increasingly high volume. (I will remind you that this is America's heartland; these are supposed to be the people whose sensibilities are too delicate for Janet Jackson's nipple. Hah.)

The next morning, at a panel about the tribulations of movie distribution in these troubled times, I found I was back at work. Woodstock director Michael Wadleigh - who's heavy into saving the planet now - told a quaint story about the film's release. His contract gave him final cut. Warner Brothers saw his finished length - four hours - and was ready to ignore the contract and cut it down to one hour 50 minutes. Received wisdom: successful movies aren't longer than that. Received wisdom: rock and roll documentaries are not successful movies anyway. Received wisdom: we have more lawyers than you. Nyaaah. Come and sue us. This attitude toward artists seems familiar, somehow.

So Wadleigh and his producers stole back his film, just like in S.O.B. The producer then called the studios and convinced them that Wadleigh was deranged enough to actually set fire to himself and all the footage if the studio didn't release the film exactly as he'd cut it. Studio relents (that probably wouldn't happen now either). Film is released at nearly four hours. Still the biggest-grossing documentary in history. Now remastered, cleaned up, sound digitized, etc. for a new DVD. That was, like flower power, then.

Cut to Nina Paley, sitting a few directors down the panel from Wadleigh. Paley, like most of the others here - Guy Maddin (My Winnipeg), Karen Gehres (Begging Naked), Carl Deal and Tia Lessin (Trouble the Water) - can't find distribution. Unlike Lessin, who reacted with some umbrage to the notion of giving stuff away, Paley decided that rather than sign away effectively all rights to her movie for five or ten years, she would turn it over to her audience to distribute for her. Yes, she put all the movie's files on the Internet for free under a share-alike Creative Commons license. Go ye and download. I'll wait.

And what happened? People downloaded! People shared! People started inviting her to speak! People started demanding to buy DVDs. She started making money.

Wait. What?

Boggle, MPAA, boggle.

That doesn't mean to say that movie distribution isn't in trouble: it is. Wadleigh and the Warner Brothers publicity person, Ronnee Sass, next to him, may have a mutual admiration society, but even films that have won top prizes at Cannes and Sundance are having trouble getting seen. Art theaters are shutting down and the small distributors that service them are going out of business.

"Why?" I was asked over lunch. A dozen reasons. People have more entertainment options. Corporate-owned studios would rather gamble on blockbusters. Theaters got unpleasant - carved-up, badly angled, out-of-focus screening rooms with sticky floors and too-loud, distorted sound. To people who were watching movies on small TV screena with commercial disruptions, home theaters look like an improvement - you can talk to your friends, eat what you want, pick your own movies, and pause whenever you like. More, in fact, like reading a novel or listening to music than going to a movie in the old sense, when you didn't - couldn't - yawn halfway through the magic and say, "I'll finish it tomorrow.".

What people have forgotten is the way a theater filled with audience response changes the experience. Would Woodstock have been the same if everyone had stayed home and watched it on TV?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to follow on Twitter, post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 13, 2009

Threat model

It's not about Phorm, it's about snooping. At Wednesday morning's Parliamentary roundtable, "The Internet Threat", the four unhappy representatives I counted from Phorm had a hard time with this. Weren't we there to trash them and not let them reply? What do you mean the conversation isn't all about them?

We were in a committee room many medieval steps up inside the House of Lords. The gathering was convened by Baroness Miller of Chilthorne Domer with the idea of helping Parliamentarians understand the issues raised not only by Phorm but also by the Interception Modernisation Programme, Google, Microsoft, and in fact any outfit that wants to collect huge amounts of our data for purposes that won't be entirely clear until later.

Most of the coverage of this event has focused on the comments of Sir Tim Berners-Lee, the indefatigable creator of the 20-year-old Web (not the Internet, folks!), who said categorically, "I came here to defend the integrity of the Internet as a medium." Using the Internet, he said, "is a fundamental human act, like the act of writing. You have to be able to do it without interference and/or snooping." People use the Internet when they're in crisis; even just a list of URLs you've visited is very revealing of sensitive information.

Other distinguished speakers included Professor Wendy Hall, Nicholas Bohm representing the Foundation for Information Policy Research, the Cambridge security research group's Richard Clayton, the Open Rights Group's new executive director, Jim Killock, and the vastly experienced networking and protocol consultant Robb Topolski.

The key moment, for me, was when one of the MPs the event was intended to educate asked this: "Why now?" Why, in other words, is deep packet inspection suddenly a problem?

The quick answer, as Topolski and Clayton explained, is "Moore's Law." It was not, until a couple-three years ago, possible to make a computer fast enough to sit in the middle of an Internet connection and not only sniff the packets but examine their contents before passing them on. Now it is. Plus, said Clayton, "Storage."

But for Kent Ertugrul, Phorm's managing director, it was all about Phorm. The company had tried to get on the panel and been rejected. His company's technology was being misrepresented. Its system makes it impossible for browsing habits to be tracked back to individuals. Tim Berners-Lee, of all people, if he understood their system, would appreciate the elegance of what they've actually done.

Berners-Lee was calm, but firm. "I have not at all criticized behavioral advertising," he pointed out. "What I'm saying is a mistake is snooping on the Internet."

Right on.

The Internet, Berners-Lee and Topolski explained, was built according to the single concept that all the processing happens at the ends, and that the middle is just a carrier medium. That design decision has had a number of consequences, most of them good. For example, it's why someone can create the new application of the week and deploy it without getting permission. It's why VOIP traffic flows across the lines of the telephone companies whose revenues it's eating. It is what network neutrality is all about.

Susan Kramer, saying she was "the most untechie person" (and who happens to be my MP), asked if anyone could provide some idea of what lawmakers can actually do. The public, she said, is "frightened about the ability to lose privacy through these mechanisms they don't understand".

Bohm offered the analogy of water fluoridation: it's controversial because we don't expect water flowing into our house to have been tampered with. In any event, he suggested that if the law needs to be made clearer it is in the area of laying down the purposes for which filtering, management, and interference can be done. It should, he said, be "strictly limited to what amounts to matters of the electronic equivalent of public health, and nothing else."

Fluoridation of water is a good analogy for another reason: authorities are transparent about it. You can, if you take the trouble, find out what is in your local water supply. But one of the difficulties with a black-box-in-the-middle is that even if we think we know what it does today - because, say, we trust Richard Clayton's report on how Phorm works (PDF) - there's no guarantee of how the system will change in the future. Just as, although today's government may have only good intentions in installing a black box in every ISP that collects all traffic data, the government of ten years hence may use the system in entirely different ways for which today's trusting administration never planned. Which is why it's not about Phorm and isn't even about behavioural advertising; Phorm was only a single messenger in a bigger problem.

So the point is this: do we want black boxes whose settings we don't know and whose workings we don't understand sitting at the heart of our ISPs' networks examining our traffic? This was the threat Baroness Miller had in mind - a threat *to* the Internet, not the threat *of* the Internet beloved of the more scaremongering members of the press. Answers on a postcard...


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML)

February 27, 2009

Modern liberty

Tomorrow brings a series of events around Britain called the Modern Liberty Convention. Practically everyone I know (and a lot of people I don't) is on the speakers' list at one site or another. A Canadian friend emailed enviously about this: the Brits have it right! she said.

Well, not entirely. The reason you need an event like the Modern Liberty Convention is that you have a problem. Or, rather, that you've lost a load of freedoms you thought you had. This is, of course, the problem with an unwritten constitution: it's fine to govern by gentlemen's agreement as long as everyone concerned is a gentleman - that is, as long as they share a consistent set of values and can imagine that the laws they're creating will apply to them just as much as to everyone else they affect.

That this hasn't been the case for some time is thoroughly documented by the convention's researchers, the University College London Student Human Rights Programme, in What We've Lost (PDF), an inventory of 25 Acts of Parliament and 50 individual measures that in the few short years of this century have acid-washed liberties Britons had taken for granted in the 800 years since Magna Carta. The list is pretty astonishing.

My contribution is to form, on behalf of the Open Rights Group, part of a panel called Business gets personal - can privacy have a future?

The answer, I think, is "maybe" and "sometimes". Businesses invade our privacy for all sorts of different reasons with varying amounts of power over us, so there isn't going to be just one answer. Constitutions don't necessarily help with this, largely because the threat companies pose is so recent. Even the written US constitution can't help us much; there was no such thing as a multinational corporation with an economy bigger than a government's back in the 18th century.

Amazon and eBay retain our user histories in ways that benefit us as well as them. It's helpful to be able to look over past Amazon purchases to make sure we don't give someone the same gift twice; Amazon uses our purchase history to recommend new things we might like. On eBay, your history is your reputation; it's what enables trading with strangers with some confidence. We get less - small discounts, preferential seating - in return for the privacy we give away when we sign up for loyalty cards or frequent flyer programs. But in these cases we have choices: we can buy books and groceries with cash from local shops; we can either not fly or vary the airline. As privacy advocates have said for some years, in these situations we tend to sell our privacy very cheaply.

We have little choice about using other types of businesses, such as banks and telephone companies - and there is no market pressure on them to adopt privacy-protecting policies. The nature of their businesses ensures that they have access to particularly intimate information about us. More than that, government mandates such as the anti-terrorism and data retention laws require them to retain that information and make it available. We can't get a better privacy regime by changing banks (unless the new bank is off-shore somewhere) or by switching from BT to Vodafone. Just last week, the US announced proposals to require not only ISPs (as in this country) but anyone operating a Wi-Fi hotspot to retain access logs for two years. The only way those businesses can be forced to change is by changing the law.

The most interesting are the social media: not only social networks like Facebook and Twitter but Web boards. These businesses provide the infrastructure for people to invade their own privacy to an extent that a business would probably never dare ask them to. Users do have some power in relation to these businesses because using these systems really is discretionary. Facebook, when it announced unilateral new terms and conditions last week, became only the latest in a long series of online services to discover the speed with which users can revolt. Facebook's response - to try to create a Bill of Rights and ensure the democratic participation of its users in decisions it makes about the site - is interesting. The company has a serious and deep-rooted conflict: if its users don't trust it they won't stay; but the only potential money-making asset it has is its users and their data.

The big mystery is Google. We aren't locked into using it by lack of competitors or government regulation, and we understand its business model perfectly well - collect mountains of data on all of us. And yet we're seduced by that slick interface and those helpful results.

We can't rely on government to control these companies, not least because they'd love to have access to all this data, too. If we want privacy in future, we need to start by making better choices where we can, including in our politics.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

January 2, 2009

No rest for 2009

It's been a quiet week, as you'd expect. But 2009 is likely to be a big year in terms of digital rights.

Both the US and the UK are looking to track non-citizens more closely. The UK has begun issuing foreigners with biometric ID cards. The US, which began collecting fingerprints from visiting tourists two years ago, says it wants to do the same with green card holders. In other words, you can live in the US for decades, you can pay taxes, you can contribute to the US economy - but you're still not really one of us when you come home.

The ACLU's Barry Steinhardt has pointed out, however, that the original US-VISIT system actually isn't finished: there's supposed to be an exit portion that has yet to be built. The biometric system is therefore like a Roach Motel: people check in but they never leave.

That segues perfectly into the expansion of No2ID's "database state". The UK is proceeding with its plan for a giant shed to store all UK telecommunications traffic data. Building the data shed is a lot like saying that since we're having trouble finding a few needles in a bunch of haystacks, the answer is to build a much bigger haystack.

Children in the UK can also look forward to ContactPoint (budget £22.4 million) going live at the end of January, only the first of several such databases. The Conservatives have apparently pledged to scrap ContactPoint in favor of a less expensive system that would track only children deemed to be at risk. If the Conservatives don't get their chance to scrap it - probably even if they do - the current generation may be the last that doesn't grow up taking for granted that its every move is being tracked. Get 'em young, as the Catholic church used to say, and they're yours for life.

The other half of that is, of course, the National Identity Register. Little has been heard of the ID card in recent months, although the Home Office says 1,000 people have actually requested one. Since the cards have begun rolling out to foreigners, it's probably best to keep an eye on them.

On January 19, look for the EU to vote on copyright term extension in sound recordings. They have now: 50 years. They want: 95 years. The problem: all the independent reviewers agree it's a bad idea economically. Why does this proposal keep dogging us? Especially given that even the UK government accepts that recording contracts mean little of the royalties will go to the musicians the law is supposedly trying to help, why is the European Parliament even considering it? Write your MEP. Meanwhile, the economic downturn reaches Cliff Richard; his earliest recordings begin entering the public domain...oh, look - yesterday, January 1, 2009.

Those interested in defending file-sharing technology, the public domain, or any other public interest in intellectual property will find themselves on the receiving end of a pack of new laws and initiatives out to get them.

The RIAA recently announced it would cease suing its customers in the US. It plans instead to "work with ISPs". Anyone who's been around the UK and France in recent months should smell the three-strikes policy that the Open Rights Group has been fighting against. ORG's going to find it a tougher battle now that the government is considering a stick-and-carrot approach: make ISPs liable for their users' copyright infringement, but give them a slice of the action for legal downloads. One has to hope that even the most cash-strapped ISPs have more sense.

Last year's scare over the US's bald statement that customs authorities have the right to search and impound computers and other electronic equipment carried by travellers across national borders will probably be followed by lengthy protest over new rules known as the Anti-Counterfeiting Trade Agreement (ACTA), being negotiated by the US, the EU, Japan, and other countries. We don't know as much as we'd like about what the proposals actually are, though some information escaped last June. Negotiations are expected to continue in 2009.

The EU has said that it has no plans to search individual travellers, which is a relief; in fact, in most cases it would be impossible for a border guard to tell whether files on a computer were copyright violations. Nonetheless, it seems likely that this and other laws will make criminals of most of us; almost everyone who owns an MP3 player has music on it that technically infringes the copyright laws (particularly in the UK, where there is as yet no exemption for personal copying).

Meanwhile, Australia's new $44 million "great firewall" is going ahead despite known flaws in the technology. Nearer home, British Culture Secretary Andy Burnham would like to rate the Web, lest it frighten the children.

It's going to be a long year. But on the bright side, if you want to make some suggestions for the incoming Obama administration, head over to Change.org and add your voice to those assembling under "technology policy".

Happy new year!

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

December 26, 2008

Apologies not accepted

It's Christmas, time of peace, goodwill, and all that jazz. So my contribution: please stop apologizing. Yes, this means you. All of you.

You, whose company policies are badly drafted and annoying but are not your fault. Instead of apologizing in a maddeningly neutral tone of voice, I'd rather you said yes, the policy is insane, yes, it drives everyone crazy, but no, there's nothing I can do about it because I'm not allowed to depart from this script here on this computer that says to tell you I apologize.

You, who are staffing the airplane that's late. We know it's late. We know it's late because we've been in the plane circling Philadelphia waiting to land for the last 20 minutes, and now we've just flown away and landed at Atlantic City. No one wants to go to Atlantic City on a flight from London to Philadelphia, not even the most intrepid gamblers. But you should not be apologizing. The people who should be apologizing are the beanheads at US Airways' Phoenix headquarters, who have gambled with their passengers' time and patience and decided that saving money by not carrying enough fuel across the ocean to allow for holding if necessary is the more important goal. In 2008, I got caught this way twice on the London-Philadelphia route. The first time, we diverted to Boston and were four hours late. The second time, Atlantic City - that saved us a half hour. The staff shouldn't be apologizing. You should be saying, "We're getting screwed, too."

You, in the anti-fraud department at the credit card company. The problem is the algorithms behind the way the computer is programmed. I know - and you know - that it's not your fault that the system keeps kicking out my card every time I try to make a transaction. Of course, it's not my fault either, which is why it would be nice if once in a while your company wrote to me and indicated that it understood that its computers are badly programmed and that the intransigence of its anti-fraud detection is costing it customer goodwill. After all, what good is an emergency credit card if you can't use it in an emergency because putting through a transaction from a foreign country without warning will cause your card to be suspended?

It shouldn't be your job to apologize; you'd be giving better customer service by sympathizing, passing on the complaint, and helping customers figure out how to get the company to improve a bad situation. Telling us to call first before putting through a charge probably is just adding fuel to the ire fire. Being unable to give any indication of what might constitute a high-risk transaction versus one the system would accept doesn't help either. Security by obscurity is bad enough; it's worse when it's so obscure to a system's users that they can't begin to tell when they're taking a risk and when they're not. Pushing me on to the sales department to confirm my replacement card so they can try to sell me card protection insurance is a further insult.

If you're going to apologize for something, what you should be apologizing for is acting all surprised and hurt when you call me up and demand my security information and I say, "You've got to be kidding me. How do I know who you are?" Given the troubles with phishing scams, I'd have thought you'd be pleased any customer has the nous to refuse to disclose such information. What the credit card companies need to do is put together a two-way handshaking authentication scheme so that we take turns disclosing bits of information we know about each other. But don't apologize! Change something! Fix something! Or if you can't, just be really, really efficient about getting the business of the call done as quickly as possible.
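
For the record, the building blocks for such a scheme are entirely ordinary. Here's a toy Python sketch of taking turns proving knowledge of a shared secret; the secret and the flow are invented for illustration, and a real deployment would use a vetted protocol rather than anything homebrewed:

    import hashlib
    import hmac
    import secrets

    SHARED_SECRET = b"established when the account was opened"  # hypothetical

    def prove(challenge: bytes) -> bytes:
        """Answer a challenge without ever reciting the secret itself."""
        return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()

    # Step 1: the customer challenges the caller first; a real bank can
    # answer, a phisher can't.
    my_challenge = secrets.token_bytes(16)
    callers_proof = prove(my_challenge)  # computed on the bank's side
    assert hmac.compare_digest(callers_proof, prove(my_challenge))

    # Step 2: only now does the customer answer the bank's challenge in turn.
    banks_challenge = secrets.token_bytes(16)
    my_proof = prove(banks_challenge)
    assert hmac.compare_digest(my_proof, prove(banks_challenge))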

A friend of mine once commented that he didn't like apologies because "People only apologize because they want you to like them."

It makes sense. Look who's not apologizing: Bernie Madoff, a victim of the credit crunch. Yes, because you see, the downturn is exposing malfeasance that remained hidden in more prosperous times because you could keep getting new money to hide the absence of the old. Madoff's $50 billion steal would have eventually been exposed anyway, but I bet he wishes he could have timed things so he vanished to a country with no extradition first.

And look who else is not apologizing? Yes, Dubya, this means you. In the eight years he's been in office, the Bush administration has supported torture, pursued an unpopular and dangerous war, squandered much of the world's goodwill towards our country, rolled back freedom of information, vastly expanded surveillance at the expense of civil liberties, and played the policy laundering game with the EU at our expense. He won't apologize for any of it, of course; instead he'll probably spend the next ten years building a presidential library designed to prove he did everything right.

See? The guys who do the damage don't care if we like them. The people who are apologizing? All the wrong people.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

December 5, 2008

Saving seeds

The 17 judges of the European Court of Human Rights ruled unanimously yesterday that the UK's DNA database, which contains more than 3 million DNA samples, violates Article 8 of the European Convention on Human Rights. The key factor: retaining, indefinitely, the DNA samples of people who have committed no crime.

It's not a complete win for objectors to the database, since the ruling doesn't say the database shouldn't exist, merely that DNA samples should be removed once their owners have been acquitted in court or the charges have been dropped. England, the court said, should copy Scotland, which operates such a policy.

The UK comes in for particular censure, in the form of the note that "any State claiming a pioneer role in the development of new technologies bears special responsibility for striking the right balance..." In other words, before you decide to be the first on your block to use a new technology and show the rest of the world how it's done, you should think about the consequences.

Because it's true: this is the kind of technology that makes surveillance and control-happy governments the envy of other governments. For example: lacking clues to lead them to a serial killer, the Los Angeles Police Department wants to copy Britain and use California's DNA database to search for genetic profiles similar enough to belong to a close relative. The French DNA database, FNAEG, was proposed in 1996, created in 1998 for sex offenders, implemented in 2001, and broadened to other criminal offenses after 9/11 and again in 2003: a perfect example of function creep. But the French DNA database is a fiftieth the size of the UK's, and Austria's, the next on the list, is even smaller.

There are some wonderful statistics about the UK database. DNA samples from more than 4 million people are included on it. Probably 850,000 of them are innocent of any crime. Some 40,000 are children between the ages of 10 and 17. The government (according to the Telegraph) has spent £182 million on it between April 1995 and March 2004. And there have been suggestions that it's too small. When privacy and human rights campaigners pointed out that people of color are disproportionately represented in the database, one of England's most experienced appeals court judges, Lord Justice Sedley, argued that every UK resident and visitor should be included on it. Yes, that's definitely the way to bring the tourists in: demand a DNA sample. Just look how they're flocking to the US to give fingerprints, and how many more flooded in when they upped the number to ten earlier this year. (And how little we're getting for it: in the first two years of the program, fingerprinting 44 million visitors netted 1,000 people with criminal or immigration violations.)

At last week's A Fine Balance conference on privacy-enhancing technologies, there was a lot of discussion of the key technique of data minimization. That is the principle that you should not collect or share more data than is actually needed to do the job. Someone checking whether you have the right to drive, for example, doesn't need to know who you are or where you live; someone checking you have the right to borrow books from the local library needs to know where you live and who you are but not your age or your health records; someone checking you're the right age to enter a bar doesn't need to care if your driver's license has expired.
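
A minimal sketch of the principle in code may help (the record, names, and checks here are all invented for illustration): each verifier gets only the single yes/no answer it needs, never the whole record.

    from datetime import date

    # The data subject's full record stays with the subject (or a trusted
    # credential issuer); the verifiers below never see it directly.
    RECORD = {
        "name": "A. Example",
        "date_of_birth": date(1990, 5, 1),
        "address": "1 Sample Street",
        "licence_expiry": date(2012, 1, 1),
    }

    def may_drive(record, today):
        # The roadside check needs "licence still valid" - not name or address.
        return record["licence_expiry"] >= today

    def old_enough(record, today, minimum_age=18):
        # The door check needs "over 18" - an expired licence is irrelevant.
        dob = record["date_of_birth"]
        age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
        return age >= minimum_age

    today = date(2008, 12, 5)
    print(may_drive(RECORD, today))   # True or False; nothing else disclosed
    print(old_enough(RECORD, today))  # likewise

Real credential systems enforce this with cryptographic proofs rather than a trusted lookup, but the data flow is the point: the verifier learns one bit, not a biography.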

This is an idea that's been around a long time - I think I heard my first presentation on it in about 1994 - but whose progress towards a usable product has been agonizingly slow. IBM's PRIME project, which Jan Camenisch presented, and Microsoft's purchase of Credentica (which wasn't shown at the conference) suggest that the mainstream technology products may finally be getting there. If only we can convince politicians that these principles are a necessary adjunct to storing all the data they're collecting.

What makes the DNA database more than just a high-tech fingerprint database is that over time the DNA stored in it will become increasingly revealing of intimate secrets. As Ray Kurzweil kept saying at the Singularity Summit, Moore's Law is hitting DNA sequencing right now; the cost is accordingly plummeting by factors of ten. When the database was set up, it was fair to characterize DNA as a high-tech version of fingerprints or iris scans. Five - or 15, or 25, we can't be sure - years from now, we will have learned far more about interpreting genetic sequences. The coded, unreadable messages we're storing now will be cleartext one day, and anyone allowed to consult the database will be privy to far more intimate information about our bodies, ourselves than we think we're giving them now.

Unfortunately, the people in charge of these things typically think it's not going to affect them. If the "little people" have no privacy, well, so what? It's only when the powers they've granted are turned on them that they begin to get it. If a conservative is a liberal who's been mugged, and a liberal is a conservative whose daughter has needed an abortion, and a civil liberties advocate is a politician who's been arrested...maybe we need to arrest more of them.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 21, 2008

The art of the impossible

So the question of last weekend very quickly became: how do you tell plausible fantasy from wild possibility? It's a good conversation starter.

One friend had a simple assessment: "They are all nuts," he said, after glancing over the weekend's program. The problem is that 150 years ago anyone predicting today's airline economy class would also have sounded nuts.

Last weekend's (un)conference was called Convergence, but the description tried to convey the sense of danger of crossing the streams. The four elements that were supposed to converge: computing, biotech, cognitive technology, and nanotechnology. Or, as the four-colored conference buttons and T-shirts had it, biotech, infotech, cognotech, and nanotech.

Unconferences seem to be the current trend. I'm guessing, based on very little knowledge, that it was started by Tim O'Reilly's FOO camps or possibly the long-running invitation-only Hackers conference. The basic principle is: collect a bunch of smart, interesting, knowledgeable people and they'll construct their own program. After all, isn't the best part of all conferences the hallway chats and networking, rather than the talks? Having been to one now (yes, a very small sample), I think in most cases I'm going to prefer the organized variety: there's a lot to be said for a program committee that reviews the proposals.

The day before, the Center for Responsible Nanotechnology ran a much smaller seminar on Global Catastrophic Risks. It made a nice counterweight: the weekend was all about wild visions of the future; the seminar was all about the likelihood of our being wiped out by biological agents, astronomical catastrophe, or, most likely, our own stupidity. Favorite quote of the day, from Anders Sandberg: "Very smart people make very stupid mistakes, and they do it with surprising regularity." Sandberg learned this, he said, at Oxford, where he is a philosopher at the Future of Humanity Institute.

Ralph Merkle, co-inventor of public key cryptography, now working on diamond mechanosynthesis, said to start with physics textbooks, most notably the evergreen classic by Halliday and Resnick. You can see his point: if whatever-it-is violates the laws of physics it's not going to happen. That at least separates the kinds of ideas flying around at Convergence and the Singularity Summit from most paranormal claims: people promoting dowsing, astrology, ghosts, or ESP seem to be about as interested in the laws of physics as creationists are in the fossil record.

A sidelight: after years of The Skeptic, I'm tempted to dismiss as fantasy anything where the proponents tell you that it's just your fear that's preventing you from believing their claims. I've had this a lot - ghosts, alien spacecraft, alien abductions, apparently these things are happening all over the place and I'm just too phobic to admit it. Unfortunately, the behavior of adherents to a belief just isn't evidence that it's wrong.

Similarly, an idea isn't wrong just because its requirements are annoying. Do I want to believe that my continued good health depends on emulating Ray Kurzweil and taking 250 pills a day and a load of injections weekly? Certainly not. But I can't prove it's not helping him. I can, however, joke that it's like those caloric restriction diets - doing it makes your life *seem* longer.

Merkle's other criterion: "Is it internally consistent?" This one's harder to assess, particularly if you aren't a scientific expert yourself.

But there is the technique of playing the man instead of the ball. Merkle, for example, is a cryonicist and is currently working on diamond mechanosynthesis. Put more simply, he's busy designing the tools that will be needed to build things atom by atom when - if - molecular manufacturing becomes a reality. If that sounds nutty, well, Merkle has earned the right to steam ahead unworried because his ideas about cryptography, which have become part of the technology we use every day to protect ecommerce transactions, were widely dismissed at first.

Analyzing language is also open to the scientifically less well-educated: do the proponents of the theory use a lot of non-standard terms that sound impressive but on inspection don't seem to mean anything? It helps if they can spell, but that's not a reliable indicator - snake oil salesmen can be very professional, and some well-educated excellent scientists can't spell worth a damn.

The Risks seminar threw out a useful criterion for assessing scenarios: would it make a good movie? If your threat to civilization can be easily imagined as a line delivered by Bruce Willis, it's probably unlikely. It's not a scientifically defensible principle, of course, but it has a lot to recommend it. In human history, what's killed the most people while we're worrying about dramatic events like climate change and colliding asteroids? Wars and pandemics.

So, where does that leave us? Waiting for deliverables, of course. Even if a goal sounds ludicrous, working towards it may still produce useful results. Aubrey de Grey's ideas about "curing aging" by developing techniques for directly repairing damage (SENS, for Strategies for Engineered Negligible Senescence) seem a case in point. And life extension is the best hope for all of these crazy ideas. Because, let's face it: if it doesn't happen in our lifetime, it was impossible.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 14, 2008

The USB stick in the men's room

How can we compete with free?

This is the question the entertainment industry has been asking ever since the first MP3 was uploaded. We are supposed to feel sorry for them, pass laws to protect their business model, and arrest the wicked "pirates" who "steal" their work and...well, I suppose "fence" would be the right word for getting it out to others.

Many of us have argued many times that the numbers rightsholders - the software industry, the entertainment industry - come up with to estimate the direct cost of piracy to their bottom lines are questionable, if not greatly exaggerated. Not all free downloads would have been sales; some customers would not have paid for the work if they couldn't first sample it for free. Agonizingly slowly, the entertainment industry is beginning to behave in the ways we've argued for all along. Digital rights management is vanishing from downloaded music; MGM is putting its movies on YouTube; and TV networks are posting their shows online. Legal streaming and downloading are coming along, and while the torrenting population keeps growing, the legal population will grow faster and eventually outstrip it.

But all these pieces of the acrimonious copyright wars are merely about distribution. The more profound copyright wars are just starting, and these are between free content and paid content.

In the free content category: Blogs. Advertorial, including infomercials. Services - Web, print, or otherwise - that are automatically generated from existing content such as news wires and other sites. User-generated sites like Flickr and YouTube.

In the paid content category: all the traditional media.

Clearly some people do manage to compete with free: bottled water, Windows, and iTunes all are successful despite the existence of tap water, Linux, and BitTorrent. Others are struggling: Craigslist is killing the classified advertising in many US newspapers, including the New York Times and its subsidiary, the Boston Globe; Flickr is making life hard for photographers; copy-and-paste blogs are hammering newspapers (again).

Free by itself isn't exactly the problem. Take, for example, Flickr and photographers. No matter how good their best photos are, few Flickr posters have what professionals have: the ability to produce, to order, without fail, exactly the photographs required by the client. For a live event where time and reliability are of the essence, you need a professional.

But the rest of the time... Flickr would be no threat if it hosted only a few hundred images. What's killing photographers is the law of truly large numbers: given hundreds of millions of images the chances that someone will be able to find a free one that is good enough go up. Volume is the killer.
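
The arithmetic behind that "law" is worth a line. Assume - a made-up figure purely for illustration - that any single amateur photo has a one-in-100,000 chance of fitting a given brief; the chance of finding at least one usable shot among n images is 1 - (1 - p)^n:

    p = 1e-5  # invented odds that any one photo fits the brief
    for n in (1_000, 100_000, 100_000_000):
        print(f"{n:>11,} images -> {1 - (1 - p) ** n:.0%} chance of a usable one")
    # a thousand images: ~1%; a hundred thousand: ~63%;
    # a hundred million: effectively certain

Whatever the true per-photo odds, the shape of the curve is the same: at Flickr's scale, "someone, somewhere, posted one good enough for free" approaches a sure thing.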

Similarly, the problem for newspapers isn't that any of the millions of blogs out there can do what they do. It's the aggregate impact of all those expert blogs on single topics, coupled with the loss of advertising revenues from copy-and-pasters mashed up with the quaintly long lead times necessary for print.

Still, there were hints at last week's American Film Institute Digifest that music and film companies might be beginning to find an answer. If the first day was all about cross-media promotion, the second was all about using multiple media to make movies and music into the kernel of a broader experience - the kind you can't copy by downloading for free.

Christopher Sandberg, for example, talked about the "participation drama" The Company P built around The Truth About Marika, the story of a young woman searching for a missing friend. Based on a true story, the TV drama formed merely the center of a five-week reality role-playing game that included conspiracy Web sites, staged TV "debates", and real-world and in-game clues.

"It's not about new media. It's the level of engagement," he said. "The audience can get as close as they want to the core story."

In a second example, the band Nine Inch Nails' Trent Reznor kicked off the launch of his Year Zero CD by planting a USB stick bearing the first release of one of the CD's tracks on top of a urinal in a men's room at one of their concerts. A complex alternative reality game later, the most active fans in the community were taken on a bus to a secret show. Three million fans played the game. Plus, the CD itself was cool: heated up, the top changed color and displayed a secret message.

The key question, asked by someone in the audience: did the effort mean the band sold more CDs?

"All projects have specific goals and objectives," said Susan Bonds, head of 42 Entertainment, which ran the project, "and sometimes they're tied to sales." In this case, because the music industry's album sales are dropping and Nine Inch Nails has a particularly technology-savvy fan base, the goal was more "building the people who will show up at your shows and consume your albums and be your audience on the Web and figuring out how to connect to them."

The tiny folk scene has long known that audiences like the perceived added value of buying CDs direct from the musicians. That doesn't scale to millions, because there's only so much artist to go around. But the arts have always been about selling special experiences first and foremost. Participatory media will reach their own scaling problems - how many alternative reality games does anyone have time for? - but at last they've made a start on finding a positive response to the ease with which digital media can be copied.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 7, 2008

Reality TV

The Xerox machine in the second season of Mad Men has its own Twitter account, as do many of the show's human characters. Other TV characters have MySpace pages and Facebook groups, and of course they're all, legally or illegally, on YouTube.

Here at the American Film Institute's Digifest in Hollywood - really Hollywood, with the stars on the sidewalks and movie theatres everywhere - the talk is all of "cross-platform". This event allows the AFI's Digital Content Lab to show off some of the projects it's fostered over the last year, and the audience is full of filmmakers, writers, executives, and owners of technology companies, all trying to figure out digital television.

One of the more timely projects is a remix of the venerable PBS Newshour with Jim Lehrer. A sort of combination of Snopes, Wikipedia, and any number of online comment sites, The Fact Project aims to enable collaboration between the show's journalists and the public. Anyone can post a claim or a bit of rhetoric and bring in supporting or refuting evidence; the show's journalistic staff weigh in at the end with a Truthometer rating, and the discussion is closed. Part of the point, said the project's head, Lee Banville, is to expose to the public the many small but nasty claims that are made in obscure but strategic places - flyers left on cars in supermarket parking lots, or radio spots that air maybe twice on a tiny local station.

The DCL's counterpart in Australia showed off some other examples. Areo, for example, takes TV sets and footage and turns them into game settings. More interesting is the First Australians project, which in the six-year process of filming a TV documentary series created more than 200 edited mini-documentaries telling each interviewee's story. Or the TV movie Scorched, which even before release created a prequel and sequel by giving a fictional character her own Web site and YouTube channel. The premise of the film itself was simple but arresting. It was based on one fact, that at one point Sydney had no more than 50 weeks of water left, and one what-if - what if there were bush fires? The project eventually included a number of other sites, including a fake government department.

"We go to islands that are already populated," said the director, "and pull them into our world."

HBO's Digital Lab group, on the other hand, has a simpler goal: to find an audience in the digital world it can experiment on. Last month, it launched a Web-only series called Hooking Up. Made for almost no money (and it looks it), the show is a comedy series about the relationship attempts of college kids. To help draw larger audiences, the show cast existing Web and YouTube celebrities such as LonelyGirl15, KevJumba, and sxePhil. The show has pulled in 46,000 subscribers on YouTube.

Finally, a group from ABC is experimenting with ways to draw people to the network's site via what it calls "viewing parties" so people can chat with each other while watching, "live" (so to speak), hit shows like Grey's Anatomy. The interface the ABC party group showed off was interesting. They wanted, they said, to come up with something "as slick as the iPhone and as easy to use as AIM". They eventually came up with a three-dimensional spatial concept in which messages appear in bubbles that age by shrinking in size. Net old-timers might ask churlishly what's so inadequate about the interface of IRC or other types of chat rooms where messages appear as scrolling text, but from ABC's point of view the show is the centrepiece.

At least it will give people watching shows online something to do during the ads. If you're coming from a US connection, the ABC site lets you watch full episodes of many current shows; the site incorporates limited advertising. Perhaps in recognition that people will simply vanish into another browser window, the ads end with a button to click to continue watching the show and the video remains on pause until you click it.

The point of all these initiatives is simple and the same: to return TV to something people must watch in real-time as it's broadcast. Or, if you like, to figure out how to lure today's 20- and 30-somethings into watching television; Newshour's TV audience is predominantly 50- and 60-somethings.

ABC's viewing party idea is an attempt - as the team openly said - to recreate what the network calls "appointment TV". I've argued here before that as people have more and more choices about when and where to watch their favourite scripted show, sports and breaking news will increasingly rule television because they are the only two things that people overwhelmingly want to see in real time. If you're supported by advertising, that matters, but success will depend on people's willingness to stick with their efforts once the novelty is gone. The question to answer isn't so much whether you can compete with free (cue picture of a bottle of water) but whether you can compete with freedom (cue picture of evil file-sharer watching with his friends whenever he wants).


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 31, 2008

Machine dreams

Just how smart are humans anyway? Last week's Singularity Summit spent a lot of time talking about the exact point at which computer processing power would match that of the human brain, but that's only the first step. There's the software to make the hardware do stuff, and then there's the whole question of consciousness. At that point, you've strayed from computer science into philosophy and you might as well be arguing about angels on the heads of pins. Of course everyone hopes they'll be alive to see these questions settled, but in the meantime all we have is speculation and the snide observation that it's typical that a roomful of smart people would think that all problems can be solved by more intelligence.

So I've been trying to come up with benchmarks for what constitutes artificial intelligence, and the first thing I think is that the Turing test is probably too limited. In it, a judge has to determine which of two typing correspondents is the machine and which the human. That's fine as far as it goes, but one of the consistent threads that run through all this is a noticeable disdain for human bodies.

While our brain power is largely centralized, it still seems to me likely that both its grey matter and the rest of our bodies are an important part of the substrate. How we move through space, how our bodies react and feed our brains is part and parcel of how our minds work, however much we may wish to transcend biology. The fact that we can watch films of bonobos and chimpanzees and recognise our own behaviour in their interactions should show us that we're a lot closer to most animal species than we think - and a lot further from most machines.

For that sort of reason, the Turing test seems limited. A computer passes that test if, when paired against a human, the judge can't tell which is which. At the moment, it seems clear the winner is going to be spambots - some spam messages are already devised cleverly enough to fool even Net-savvy individuals into opening them. But they're hardly smart - they're just programmed that way. And a lot depends on the capability of the judge - some people even find Eliza convincing, though it's incredibly easy to send it off-course into responses that are clearly those of a machine. Find a judge who wants to believe and you're into the sort of game that self-styled psychics like to play.
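
For anyone who has never poked at it, Eliza's trick is a short list of pattern-and-template rules plus a catch-all. A toy version - a few invented rules, nothing like Weizenbaum's full script - shows how quickly it runs out of road:

    import re

    # A handful of Eliza-style rules (invented here; the real script had many more).
    RULES = [
        (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
        (re.compile(r"\bmy (mother|father)\b", re.I), "Tell me more about your {0}."),
        (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    ]

    def respond(line):
        for pattern, template in RULES:
            match = pattern.search(line)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # the catch-all that gives the game away

    print(respond("I am sure my phone is haunted"))   # mechanical echo; sounds attentive
    print(respond("Define 'haunted' operationally"))  # no rule matches: "Please go on."

Anything outside the rule list collapses into the same vague deflection, which is exactly the off-course behavior a skeptical judge goes looking for.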

Nor can we judge a superhuman intelligence by the intractable problems it solves. One of the more evangelistic speakers last weekend talked about being able to instantly create tall buildings via nanotechnology. (I was, I'm afraid, irresistibly reminded of that Bugs Bunny cartoon where Marvin pours water on beans to produce instant Martians to get rid of Bugs.) This is clearly just silly: you're talking about building a gigantic structure out of molecules. I don't care how many billions of nanobots you have; the sheer scale means it's going to take time. And, as Kevin Kelly has written, no matter how smart a machine is, figuring out how to cure cancer or roll back aging won't be immediate either, because you can't really speed up the necessary experiments. Biology takes time.

Instead, one indicator might be variability of response; that is, that feeding several machines the same input - or giving the same machine the same input at different times - produces different, equally valid interpretations. If, for example, you give a 10th grade class Jane Austen's Pride and Prejudice to read and report on, different students might with equal legitimacy describe it as a historical account of the economic forces affecting 18th century women, a love story, the template for romantic comedy, or even the story of the plain sister in a large family whose talents were consistently overlooked until her sisters got married.

In The Singularity Is Near, Ray Kurzweil laments that each human must read a text separately and that knowledge can't be quickly transferred from one to another the way a speech recognition program can be loaded into a new machine in seconds - but that's the point. Our strength is that our intelligences are all different, and we aren't empty vessels into which information is poured but stews in which new information causes varying chemical reactions.

You might argue that search engines can already do this, in that you don't get the same list of hits if you type the same keywords into Google versus Yahoo! versus Ask.com, and if you come back tomorrow you may get a different response from any one of them. That's true. It isn't the kind of input I had in mind, but fair enough.

The other benchmark that's occurred to me so far is that machines will be getting really smart when they get bored.

ZDNet UK editor Rupert Goodwins has a variant on this from when he worked at Sinclair Research. "If it went out one evening, drank too much, said the next morning, 'never again' and repeated the exercise immediately. Truly human." But see? There again: a definition of human intelligence that requires a body.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

September 26, 2008

Wimsey's whimsy

One of the things about living in a foreign country is this: every so often the actual England I live in collides unexpectedly with the fictional England I grew up with. Fictional England had small, friendly villages with murders in them. It had lowering, thick fogs and grim, fantastical crimes solvable by observation and thought. It had mathematical puzzles before breakfast in a chess game. The England I live in has Sir Arthur Conan Doyle's vehement support for spiritualism, traffic jams, overcrowding, and four million people who read The Sun.

This week, at the GikIII Workshop, in a break between Internet futures, I wandered out onto a quadrangle of grass so brilliantly and perfectly green that it could have been an animated background in a virtual world. Overlooking it were beautiful, stolid, very old buildings. It had a sign: Balliol College. I was standing on the quad where, "One never failed to find Wimsey of Balliol planted in the center of the quad and laying down the law with exquisite insolence to somebody." I know now that many real people came out of Balliol (three kings, three British prime ministers, Aldous Huxley, Robertson Davies, Richard Dawkins, and Graham Greene) and that those old buildings date to 1263. Impressive. But much more startling to be standing in a place I first read about at 12 in a Dorothy Sayers novel. It's as if I spent my teenaged years fighting alongside Angel avatars and then met David Boreanaz.

Organised jointly by Ian Brown at the Oxford Internet Institute and the University of Edinburgh's Script-ed folks, GikIII (pronounced "geeky") is a small, quirky gathering that studies serious issues by approaching them with a screw loose. For example: could we control intelligent agents with the legal structure the Ancient Romans used for slaves (Andrew Katz)? How sentient is a robot sex toy? Should it be legal to marry one? And if my sexbot rapes someone, are we talking lawsuit, deactivation, or prison sentence (Fernando Barrio)? Are RoadRunner cartoons all patent applications for devices thought up by Wile E. Coyote (Caroline Wilson)? Why is The Hound of the Baskervilles a metaphor for cloud computing (Miranda Mowbray)?

It's one of the characteristics of modern life that although questions like these sound as practically irrelevant as "how many angels, infinitely large, can fit on the head of a pin, infinitely small?", which may (or may not) have been debated here seven and a half centuries ago, they matter. Understanding the issues they raise matters in trying to prepare for the net.wars of the future.

In fact, Sherlock Holmes's pursuit of the beast is metaphorical; Mowbray was pointing out the miasma of legal issues for cloud computing. So far, two very different legal directions seem likely as models: the increasingly restrictive EULAs common to the software industry, and the service-level agreements common to network outsourcing. What happens if the cloud computing company you buy from doesn't pay its subcontractors and your data gets locked up in a legal battle between them? The terms and conditions in effect for Salesforce.com warn that the service has 30 days to hand back your data if you terminate, a long time in business. Mowbray suggests that the most likely outcome is EULAs for the masses and SLAs at greater expense for those willing to pay for them.

On social networks, of course, there are only EULAs, and the question is whether interoperability is a good thing or not. If the data people put on social networks ("shouldn't there be a separate disability category for stupid people?" someone asked) can be easily transferred from service to service, won't that make malicious gossip even more global and permanent? A lot of the issues Judith Rauhofer raised in discussing the impact of global gossip are not new to Facebook: we have a generation of 35-year-olds coping with the globally searchable history of their youthful indiscretions on Usenet. (And WELL users saw the newly appointed CEO of a large tech company delete every posting he made in his younger, more drug-addled 1980s.) The most likely solution to that particular problem is time. People arrested as protesters and marijuana smokers in the 1960s can be bank presidents now; in a few years the work force will be full of people with Facebook/MySpace/Bebo misdeeds and no one will care except as something to laugh at late at night in the pub.

But what Lilian Edwards wants to know is this: if we have or can gradually create the technology to make "every ad a wanted ad" - well, why not? Should we stop it? Online marketing is at £2.5 billion a year according to Ofcom, and a quarter of the UK's children spend 22 hours a week playing computer games, where there is no regulation of industry ads and where Web 2.0 is funded entirely by advertising. When TV and the Internet roll together, when in-game is in-TV and your social network merges with megamedia, and MTV is fully immersive, every detail can be personalized product placement. If I grew up five years from now, my fictional Balliol might feature Angel driving across the quad in a Nissan Prairie past a billboard advertising airline tickets.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

July 4, 2008

The new normal

The (only) good thing about a war is you can tell when it's over.

The problem with the "War on Terror" is that terrorism is always with us, as Liberty's director, Shami Chakrabarti, said yesterday at the Homeland and Border Security 08 conference. "I do think the threat is very serious. But I don't think it can be addressed by a war." Because, "We, the people, will not be able to verify a discernible end."

The idea that "we are at war" has justified so much post-9/11 legislation, from the ID card (in the UK) and Real ID (US) to the continued expansion of police powers.

How long can you live in a state of emergency before emergency becomes the new normal? If there is no end, when do you withdraw the latitude wartime gives a government?

Several of yesterday's speakers talked about preserving "our way of life" while countering the threat with better security. But "our way of life" is a moving target.

For example, Baroness Pauline Neville-Jones, the shadow security minister, talked about the importance of controlling the UK's borders. "Perimeter security is absolutely basic." Her example: you can't go into a building without having your identity checked. But it's not so long ago - within the 18 years I've been living in London - that you could do exactly that, even sometimes in central London. In New York, of course, until 9/11, everything was wide open; these days midtown Manhattan makes you wait in front of barriers while you're photographed, checked, and treated with great suspicion if the person you're visiting doesn't answer the phone.

Only seven years ago, flying did not involve two hours of standing in line. From January, tourists will have to register three days before flying to the US for pre-screening.

It's not clear how much would change with a Conservative government. "There is a very great deal by this government we would continue," said Neville-Jones. But, she said, besides tackling threats, whether motivated (terrorists) or not (floods, earthquakes), "we are also at any given moment in the game of deciding what kind of society we want to have and what values we want to preserve." She wants "sustainable security, predicated on protecting people's freedom and ensuring they have more, not less, control over their lives." And, she said, "While we need protective mechanisms, the surveillance society is not the route down which we should go. It is absolutely fundamental that security and freedom lie together as an objective."

To be sure, Neville-Jones took issue with some of the present government's plans - the Conservatives would not, she said, go ahead with the National Identity Register, and they favour "a more coherent and wide-ranging border security force". The latter would mean bringing together many currently disparate agencies to create a single border strategy. The Conservatives also favour establishing a small "homeland command for the armed forces" within the UK because, "The qualities of the military and the resources they can bring to complex situations are important and useful." At the moment, she said, "We have to make do with whoever happens to be in the country."

OK. So take the four core elements of the national security strategy according to Admiral Lord Alan West, a Parliamentary under-secretary of state at the Home Office: pursue, protect, prepare, and prevent. "Prevent" is the one that all this is about. If we are in wartime, and we know that any measure that's brought in is only temporary, our tolerance for measures that violate the normal principles of democracy is higher.

Are the Olympics wartime? Security is already in the planning stages, although, as Tarique Ghaffur pointed out, the Games are one of several big events in 2012. And some events like sailing and Olympic football will be outside London, as will 600 training camps. Add in the torch relay, and it's national security.

And in that case, we should be watching very closely what gets brought in for the Olympics, because alongside the physical infrastructure that the Games always leave behind - the stadia and transport - may be a security infrastructure that we wouldn't necessarily have chosen for daily life.

As if the proposals in front of us aren't bad enough. Take, for example, the clause of the counterterrorism bill (due for its second reading in the Lords next week) that would allow the authorities to detain suspects for up to 42 days without charge. Chakrabarti lamented the debate over this, which has turned into big media politics.

"The big frustration," she said, "is that alternatives created by sensible, proportionate means of early intervention are being ignored." Instead, she suggested, make the data legally collected by surveillance and interception admissible in fair criminal trials. Charge people with precursor terror offenses so they are properly remanded in custody and continue the investigation for the more serious plot. "That is a way of complying with ancient principles that you should know what you are accused of before being banged up, but it gives the police the time and powers they need."

Not being at war gives us the time to think. We should take it.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

May 30, 2008

Ten

It's easy to found an organization; it's hard to keep one alive even for as long as ten years. This week, the Foundation for Information Policy Research celebrated its tenth birthday. Ten years is a long time in Internet terms, and even longer when you're trying to get government to pay attention to expertise in a subject as difficult as technology policy.

My notes from the launch contain this quote from FIPR's first director, Caspar Bowden, which shows you just how difficult FIPR's role was going to be: "An educational charity has a responsibility to speak the truth, whether it's pleasant or unpleasant." FIPR was intended to avoid the narrow product focus of corporate laboratory research and retain the traditional freedoms of an academic lab.

My notes also show the following list of topics FIPR intended to research: the regulation of electronic commerce; consumer protection; data protection and privacy; copyright; law enforcement; evidence and archiving; electronic interaction between government, businesses, and individuals; the risks of computer and communications systems; and the extent to which information technologies discriminate against the less advantaged in society. Its first concern was intended to be researching the underpinnings of electronic commerce, including the then recent directive launched for public consultation by the European Commission.

In fact, the biggest issue of FIPR's early years was the crypto wars leading up to and culminating in the passage of the Regulation of Investigatory Powers Act (2000). It's safe to say that RIPA would have been a lot worse without the time and energy Bowden spent listening to Parliamentary debates, decoding consultation papers, and explaining what it all meant to journalists, politicians, civil servants, and anyone else who would listen.

Not that RIPA is a fountain of democratic behavior even as things are. In the last couple of weeks we've seen the perfect example of the kind of creeping functionalism that FIPR and Privacy International warned about at the time: the Poole council using the access rules in RIPA to spy on families to determine whether or not they really lived in the right catchment area for the schools their children attend.

That use of the RIPA rules, Bowden said at FIPR's half-day anniversary conference last Wednesday, sets a precedent for accessing traffic data for much lower level purposes than the government originally claimed it was collecting the data for. He went on to call the recent suggestion that the government may be considering a giant database, updated in real time, of the nation's communications data "a truly Orwellian nightmare of data mining, all in one place."

Ross Anderson, FIPR's founding and current chair and a well-known security engineer at Cambridge, noted that the same risks adhere to the NHS database. A clinic that owns its own data will tell police asking for the names of all its patients under 16 to go away. "If," said Anderson, "it had all been in the NHS database and they'd gone in to see the manager of BT, would he have been told to go and jump in the river? The mistake engineers make too much is to think only technology matters."

That point was part of a larger one that Anderson made: that hopes that the giant databases under construction will collapse under their own weight are forlorn. Think of developing Hulk-Hogan databases and the algorithms for mining them as an arms race, just like spam and anti-spam. The same principle that holds that today's cryptography, no matter how strong, will eventually be routinely crackable means that today's overload of data will eventually, long after we can remember anything we actually said or did ourselves, be manageable.

The most interesting question is: what of the next ten years? Nigel Hickson, now with the Department of Business, Enterprise, and Regulatory Reform, gave some hints. On the European and international agenda, he listed the returning dominance of the large telephone companies on the excuse that they need to invest in fiber. We will be hearing about quality of service and network neutrality. Watch Brussels on spectrum rights. Watch for large debates on the liability of ISPs. Digital signatures, another battle of the late 1990s, are also back on the agenda, with draft EU proposals to mandate them for the public sector and other services. RFID, the "Internet for things" and the ubiquitous Internet will spark a new round of privacy arguments.

Most fundamentally, said Anderson, we need to think about what it means to live in a world that is ever more connected through evolving socio-technological systems. Government can help when markets fail; though governments themselves seem to fail most notoriously with large projects.

FIPR started by getting engineers, later engineers and economists, to talk through problems. "The next growth point may be engineers and psychologists," he said. "We have to progressively involve more and more people from more and more backgrounds and discussions."

Probably few people feel that their single vote in any given election really makes a difference. Groups like FIPR, PI, No2ID, and ARCH remind us that even a small number of people can have a significant effect. Happy birthday.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).


May 2, 2008

Bet and sue

Most net.wars are not new. Today's debates about free speech and censorship, copyright and control, nationality and disappearing borders were all presaged by the same discussions in the 1980s even as the Internet protocols were being invented. The rare exception: online gambling. Certainly, there were debates about whether states should regulate gambling, but a quick Usenet search does not seem to throw up any discussions about the impact the Internet was going to have on this particular pastime. Just sex, drugs, and rock 'n' roll.

The story started in March, when the French Tennis Federation (FFT - Fédération Française de Tennis) filed suit in Belgium against Betfair, Bwin, and Ladbrokes to prevent them from accepting bets on matches played at the upcoming French Open tennis championships, which start on May 25. The FFT's arguments are rather peculiar: that online betting stains the French Open's reputation; that only the FFT has the right to exploit the French Open; that the online betting companies are parasites using the French Open to make money; and that online betting corrupts the sport. Bwin countersued for slander.

On Tuesday of this week, the Liège court ruled comprehensively against the FFT and awarded the betting companies costs.

The FFT will still, of course, control the things it can: fans will be banned from using laptops and mobile phones in the stands. The convergence of wireless telephony, smart phones, and online sites means that in the second or two between the end of a point and the electronic scoreboard updating, there's a tiny window in which people could bet on a sure thing. Why this slightly improbable scenario concerns the FFT isn't clear; that's a problem for the betting companies. What should concern the FFT is ensuring a lack of corruption within the sport. That means the players and their entourages.

The latter issue has been a touchy subject in the tennis world ever since last August, when Russian player Nikolay Davydenko, currently fourth in the world rankings, retired in the third and final set of a match in Poland against 87th ranked Marin Vassallo Arguello, citing a foot injury. Davydenko was accused of match-fixing; the investigation still drags on. In the resulting publicity, several other players admitted being approached to fix matches. As part of subsequent rule-tightening by the Association of Tennis Professionals, the governing body of men's professional tennis, three Italian players were suspended briefly late last year for betting on other players' matches.

Probably the most surprising thing is that tennis, along with soccer and horse racing, is actually among the most popular sports for betting. A minority sport like tennis? Yet according to USA Today, the 2007 Paris Masters event saw $750 million to $1.5 billion in bets. I can only assume that the inverted pyramid of matches every week involving individual players fits well with what bettors like to do.

Fixing matches seems even more unlikely. The best payouts come from correctly picking upsets, the bigger the better. But top players are highly unlikely to throw matches to order. Most of them play a relatively modest number of events (Davydenko is admittedly the exception) and need all the match wins and points from those events to sustain their rankings. Plus, they're just too damn rich.

In 2007, Roger Federer, the ultra-dominant number one player since the end of 2003, earned upwards of $10 million in prize money alone; Davydenko picked up over $2 million (and has already won another $1 million in 2008). All of the top 12 earned over $1 million. Add in endorsements, and even after you subtract agents' fees, tax, and travel costs for self and entourage, you're still looking at wealthy guys. They might tank matches at events where they're being paid appearance fees (which are legal on the men's tour at all but the top 14 events), but proving they've done so is exceptionally difficult. Fixing matches, which could cost them in lost endorsements on top of the tour's own sanctions, surely can't be worth it.

There are several ironies about the FFT's action. First of all (something most of the journalists covering this story don't mention, probably because they don't spend a lot of time watching tennis on TV), Bwin has been an important advertiser sponsoring tennis on Eurosport. It's absolutely typical of the counter-productive and intricately incestuous politics that characterize the tennis world that one part of the sport would sue someone who pays money into another part of the sport.

Second of all, as Betfair and Bwin pointed out, all three of these companies are highly regulated European licensed operations. Ruling them out of action would mean shifting online betting to less well regulated offshore companies. They also pointed out the absurdity of the parasites claim: how could they accept bets on an event without using its name? Betfair in particular documented its careful agreements with tennis's many governing bodies.

Third of all, the only reason match-fixing is an issue in the tennis world right now is that Betfair spotted some unusual betting patterns during that Polish Davydenko match, cancelled all the bets, and went public with the news. Without that, Davydenko would have avoided the fight over his family's phone records. Come to think of it, making the issue public probably explains the FFT's behavior: it's revenge.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

April 4, 2008

Million-dollar baby


The first time I saw James Randi he was hauling a load of fresh chicken guts out of a guy's stomach.

Of course, in my eagerness to make it sound like a good story I've jazzed that up a bit. The chicken guts were real and the guy's stomach was real (he was an innocent audience member who'd been recruited for the purpose of demonstration), but the pull-outage was clever sleight-of-hand. The year was 1982 and the occasion was a lecture demonstration at Cornell University. The point was demonstrating how "psychic surgeons" achieve their effects.

The next time I'll see James Randi is on April 19, when he's giving a talk at Conway Hall, in London. I don't think chicken guts will be involved, though a number of other prominent skeptics will also be speaking and you just never know.

It was Randi's ability to demonstrate plausible explanations for the apparently inexplicable that blew me away on that particular day. A lot of people like to claim that skeptics are closed-minded, but in fact it seems to me that the key to skepticism is tolerance of uncertainty and patience. A skeptic sitting in an empty house and hearing inexplicable creaking thinks, "I wonder what that is." A believer thinks, "Must be a ghost." Randi never claimed to be able to explain everything, but he went a long way toward showing me that things that friends thought must be inexplicable might still have natural explanations if you had the patience to wait to find out what they were and the right kind of mind to look for them. A lie goes round the world while the truth is still putting its boots on; it takes seconds to claim something's paranormal but years of research to find out the truth.

One of the sad things about science these days is that so many disciplines require so much expensive equipment and funding that it's hard for an amateur to make much of a contribution. There are, to be sure, exceptions: some friends on Crete were successful in finding the nests of griffon vultures and did a lot of work keeping count, and anyone can look for fossils and hope to fill in a gap in the record. But few can afford their own radio telescope, particle collider, or climate modelling supercomputer. Randi showed that amateurs with a particular bent - a knowledge of stage magic and deception - were more effective at assessing paranormal claims than many scientists.

None of this would qualify Randi as a subject for net.wars except that recently he's been the subject of Usenet spam. Most people who do not participate in Usenet are under the impression that all newsgroups drowned under email levels of spam long ago. But in fact until the last month, when the Chinese apparently discovered Usenet, spam levels have been negligible for quite a few years now. Once Web boards, blogs, and social networks got going Usenet became even more of a minority pastime than it was in its heyday. Spamming Usenet doesn't cost much, but why bother when the audience is relatively tiny?

But people who want to boast that they've bested James Randi apparently want to lump themselves in with ads for cheap knockoffs of Nike shoes, Breitling watches, and Prada handbags. And so a version of this message began popping up randomly. It is, of course, all over the Net by now, and there's not a lot anyone can do other than debunk it and hope someone notices.

To deal with the most trivial bit, the bit that asks if James Randi is "even a real name": it's not the name Randi was born with, although it's a modification of his first and middle names. But he's been using it consistently for something over 50 years, and it is his legal name. So it's real enough for all intents and purposes.

The million-dollar challenge was a relative newcomer that had its origins in a similar $10,000 challenge that Randi had going for more than 30 years. The increased money made the challenge a much juicier story, of course. But as this rational game theoryish analysis of the challenge makes clear, the challenge was only ever likely to attract the deluded. As I understand it, the mailbag got ridiculous in both size and content. There's plenty of evidence for that; the apparent basis of the claim that Randi was beaten is impenetrable. It is true, though, that until the beginning of this year the challenge rules stated that the prize would continue to be offered until it was awarded, including after Randi's death. Now, it ends March 6, 2010. (Get your claim in now!)

The end of the challenge is the end of an era for skeptics. For years, if any paranormal claimant was particularly insistent that he could dowse for oil or read minds we could say, "If you're so psychic, why ain't you taking Randi's challenge?" Now, my god - we're going to have to think of new stuff to say.

Meantime, come watch Randi in person and find out about the kinds of tests he's been doing all these years.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 28, 2008

Leaving Las Vegas

Las Vegas shouldn't exist. Who drops a sprawling display of electric lights with huge fountains and luxury hotels into the best desert scenery on the planet during an energy crisis? Indoors, it's Britain in mid-winter; outdoors you're standing in a giant exhaust fan. The out-of-proportion scale means that everything is four times as far away as you think, including the jackpot you're not going to win at one of its casinos. It's a great place to visit if you enjoy wallowing in self-righteous disapproval.

This all makes it the stuff of song, story, and legend and explains why Jeff Jonas's presentation at etech was packed.

The way Jonas tells it in his blog and at his presentation, he got into the gaming industry by driving through Las Vegas in 1989 idly wondering what was going on behind the scenes at the casinos. A year later he got the tiny beginnings of an answer when he picked up a used couch he'd found in the newspaper classified ads (boy, that dates it, doesn't it?) and found that its former owner played blackjack "for a living". Jonas began consulting to the gaming industry in 1991, helping to open Treasure Island, Bellagio, and Wynn.

"Possibly half the casinos in the world use technology we created," he said at etech.

Gaming revenues are now less than half of total revenues, he said, and despite the apparent financial win they might represent, problem gamblers are in fact bad for business. The goal is for people to have fun. And because of that, he said, a place like the Bellagio is "optimized for consumer experience over interference. They don't want to spend money on surveillance."

Jonas began with a slide listing some common ideas about how Las Vegas works, culled from movies like Ocean's 11 and the TV show Las Vegas. Does the Bellagio have a vault? (No.) Do casinos perform background checks on guests based on public records? (No.) Is there a gaming industry watch list you can put yourself on but not take yourself off? (Yes, for people who know they have a gambling addiction.) Do casinos deliberately hire ex-felons? (Yes, to rehabilitate them.) Do they really send private jets for high rollers? (Cue story.)

There was, he said, a casino high roller who had won some $18 million. A win like that is going to show up in a casino's quarterly earnings. So, yes, they sent a private jet to his town and parked a limo in front of his house for the weekend. If you've got the bug, we're here for you, that kind of thing. He took the bait, and lost $22 million.

Do they help you create cover stories? (Yes.) "What happens in Vegas stays in Vegas" is an important part of ensuring that people can have fun that does not come back to bite them when they go home. The casinos' problem is with identity, not disguises, because they are required by anti-money laundering rules to report it any time someone crosses the $10,000 threshold for cash transactions. So if you play at several different tables, then go upstairs and change disguises, and come back and play some more, they have to be able to track you through all that. ID, therefore, is extremely important. Disguises are welcome; fake ID is not.
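
A sketch of what that tracking problem reduces to (the names, figures, and flagging logic here are invented for illustration; the real currency-transaction reporting rules aggregate per person per day): the running total hangs off the verified identity, not the face at the table.

    from collections import defaultdict

    CTR_THRESHOLD = 10_000  # the cash reporting threshold, in dollars

    cash_in = defaultdict(int)  # verified identity -> running cash total
    flagged = set()

    def record_buy_in(player_id, amount):
        # player_id comes from checked ID, not from whatever the player
        # happens to look like at this table.
        cash_in[player_id] += amount
        if cash_in[player_id] > CTR_THRESHOLD and player_id not in flagged:
            flagged.add(player_id)
            print(f"file report for {player_id}: ${cash_in[player_id]:,}")

    # Same person, three tables, two disguises - one identity, one report.
    for table_amount in (4_000, 4_000, 3_500):
        record_buy_in("guest-1138", table_amount)

However elaborate the costume changes, if the ID resolves to the same person the totals merge, which is why fake ID, not the disguise, is what gets you thrown out.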

Do they use facial recognition to monitor the doors to spot cheaters on arrival? (Well...)

Of course technology-that-is-indistinguishable-from-magic-because-it-actually-is-magic appears on every crime-solving TV show these days. You know, the stuff where Our Heroes start with a fuzzy CCTV image and they punch in on a tiny piece of it and blow it up. And then someone says, "Can you enhance that?" and someone else says, "Oh, yes, we have new software," and a second later a line goes down the picture filling in detail. And a second after that you can read the brand on the face of a wrist watch (Numb3rs) or the manufacturer's coding on a couple of pills (Las Vegas). Or they have a perfect matching system that can take a partial fingerprint lifted off a strand of hair or something and bang! the database can find not only the person's identity but their current home address and phone number (Bones). And who can ever forget the first episode of 24, when Jack Bauer, alarmed at the disappearance of his daughter, tosses his phone number to an underling and barks, "Find me all the Internet passwords associated with this phone number."

And yet...a surprising number of what ought to be the technically best-educated audience on the planet thought facial recognition was in operation to catch cheaters. Folks, it doesn't work in airports, either.

Which brings us to the most interesting thing Jonas said: he now works for IBM (which bought his company) on privacy and civil liberties issues, including work on software to help the US government spot terrorists without invading privacy. It's an interesting concept, partly because security at airports and other locations is now so invasive, but also because if Las Vegas can find a way to deploy surveillance such that only the egregious problems are caught and everyone else just has a good time...why can't governments?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 21, 2008

Copywrongs

This is a shortened version of a talk I gave at Musicians, Fans, and Copyright at the LSE on Wednesday, March 19, 2008.

Most discussions about copyright with respect to music do not include musicians. The notable exception is the record companies' trophy musicians who appear at government hearings. Because these tend to be the most famous and well-rewarded musicians they can find, their primary contribution to the debate seems to be to make politicians think, "We love you, we can't bear that you should starve, the record company must be right." It's a long time since I made a living playing, so I can't pretend to represent working musicians. But I can make a few observations. Folk musicians in particular stand at the nexus of all the copyright arguments: they are contemporary artists and songwriters, but they mine their material from the public domain.

Every musician, at every level of the business, has been ripped off (PDF), usually when they can least afford it. The result is that they tend to be deeply suspicious of any attempt to limit their rights. The music business has such a long history of signing the powerless - young, inexperienced musicians, the black blues musicians of the Mississippi Delta, and many others - to exploitive contracts that it's hard to understand why they're still allowed to get away with it. Surely it ought to be possible to limit what rights and terms the industry can dictate to the inexperienced and desperate with stars in their eyes?

Steve Gillette, author with Tom Campbell of the popular 1966 song "Darcy Farrow", says that when Ian & Sylvia wanted to record the song, they were told to hire someone to collect royalties on their behalf. That person did little to collect royalties for many years. Gillette and Campbell eventually won a court judgement with a standard six-month waiting period - during which time John Denver recorded the song and put it on his best-selling album, Rocky Mountain High, giving the publisher a motive to fight back. They were finally able to wrest back control of the song in about 1990.

In book publishing it is commonplace for the rights to revert to authors if and when the publisher decides to withdraw their work from sale. There is no comparable practice in the music business. And so, people I know on the folk scene whose work has gone out of commercial release find themselves in the situation where their fans want to buy their music but they can't sell it. As one musician said, "I didn't work all those years to have my music stuck in a vault."

Pete Coe, a traditional performer and songwriter, tells me that the common scenario is that a young musician signs a recording contract early on, and then the company goes out of business and the recordings are bought by others. The purchasing company buys the assets - the recordings - but not the burden, the obligation to pass on royalties to the original artists. Coe himself, along with many others, is in this situation; some of his early recordings have been through two such bankruptcies. The company that owns them now owns many other folk releases of the period and either refuses to re-release the recordings or refuses to provide sales figures or pay royalties, and is not a member of MCPS. Coe points out that this company would certainly refuse to cooperate with any effort to claim the reversion of rights.

In a similar case, Nic Jones, a fine and widely admired folk guitarist who played almost exclusively traditional music, was in a terrible car accident in about 1981 that left him unable to play. Over the following years his recordings were bought up but not rereleased, so that an artist now unable to work could not benefit from his back catalogue. It is only in the last few years, with the cost of making and distributing music falling, that he and his wife have managed to release old live recordings on their own label. Term extension would, if anything, hurt Jones's ability to regain control over and exploit his own work. (Note: I have not canvassed Jones's opinion.)

The artists in these cases, like any group of cats, have reacted in different ways. Gillette, who comments also that in general it's the smaller operators who are the biggest problem, says that term extension "only benefits the corporate media, and in my experience only serves to lend energy to turning the public trust into company assets".

Coe, on the other hand, favors term extension. "We determined," he said by email in 2006, "that once we'd regained our rights, publishing and recording, that they were never again to pass out of our control."

Coe's reaction is understandable. But I think many problems could be solved by forcing the industry to treat musicians and artists more fairly. It's notable that folk artists, through necessity, pioneered what's becoming commonplace now: releasing their own albums to sell to audiences directly at their gigs and via mail order, now Web order.

What the musicians of the future want and need, in my opinion, is the same thing that the musicians of the present and past wanted: control. In my view, there is no expansion of copyright that will give it to them.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 7, 2008

Techitics

This year, 2008, may go down in history as the year geeks got politics. At etech this week I caught a few disparaging references to hippies' efforts to change politics. Which, you know, seemed kind of unfair, for two reasons. First: the 1960s generation did change an awful lot of things, though not nearly as many as they hoped. Second: a lot of those hippies are geeks now.

But still. Give a geek something that's broken and he'll itch to fix it. And one thing leads to another. Which is why on Wednesday night Lawrence Lessig explained in an hour-long keynote that got a standing ovation how he plans to fix what's wrong with Congress.

No, he's not going to run. Some 4,500 people on Facebook were trying to push him into it, and he thought about it, but preliminary research showed that his chances of beating the popular Silicon Valley favorite Jackie Speier were approximately zero.

"I wasn't afraid of losing," he said, noting ruefully that in ten years of copyfighting he's gotten good at it. Instead, the problem was that Silicon Valley insiders would have known that no one was going to beat Jackie Speier. But outsiders would have pointed, laughed, and said, "See? The idea of Congressional reform has no legs." And on to business as usual. So, he said, counterproductive to run.

Instead, he's launching Change Congress. "Obama has taught us that it's possible to imagine many people contributing to real change."

The point, he said, will be to provide a "signalling function". Like Creative Commons, Change Congress will give candidates an easy way to show what level of reform they're willing to commit to. The system will start with three options: 1) refuse money from lobbyists and political action committees (private funding groups); 2) ban earmarks (money allocated to special projects in politicians' home states); 3) commit to public financing for campaigns. Candidates can then display the badge generated from those choices on their campaign materials.
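
Purely as an illustration of that Creative Commons-style signalling, here is a minimal sketch; the one-letter pledge codes and wording are my invention, not Change Congress's.

    # Hypothetical encoding of the three pledges into a badge label,
    # in the spirit of Creative Commons' BY/NC/SA codes.
    PLEDGES = {
        "L": "refuses money from lobbyists and PACs",
        "E": "will work to ban earmarks",
        "P": "commits to public financing of campaigns",
    }

    def badge(*codes):
        """Return a displayable badge line for the chosen pledges."""
        unknown = [c for c in codes if c not in PLEDGES]
        if unknown:
            raise ValueError(f"unknown pledge codes: {unknown}")
        label = "-".join(codes)  # e.g. "L-E-P"
        details = "; ".join(PLEDGES[c] for c in codes)
        return f"[Change Congress: {label}] This candidate {details}."

    print(badge("L", "P"))

As with Creative Commons, the value is not in the code but in the convention: a compact, comparable label voters can check across candidates.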

From there, said Lessig, layer something like Emily's List on top, to help people identify candidates they're willing to support with monthly donations, thereby subsidizing reform.

Money, he admitted, isn't the entire problem. But, like drinking for an alcoholic, it's the first problem you must solve to be able to tackle any of the others with any hope of success.

In a related but not entirely similar vein, the guys who brought us They Work For You nearly four years ago are back with UN democracy, an attempt to provide a signalling function to the United Nations by making it easy to find out how your national representatives are voting in UN meetings. The driving force behind UNdemocracy.com is Liverpool's Julian Todd, who took the UN's URL obscurantism as a personal challenge. Since he doesn't fly, presenting the new service were Tom Loosemore, Stefan Mogdalinski, and Danny O'Brien, who pointed out that when you start looking at the decisions and debates you start to see strange patterns: what do the US and Israel have in common with Palau and Micronesia?

The US Congress and the British Parliament are both, they said, now well accustomed to being televised, and their behaviour has adapted to the cameras. At the UN, "They don't think they're being watched at all, so you see horse trading in a fairly raw form."

The meta-version they believe can be usefully and widely applied: 1) identify broken civic institution; 2) liberate data from said institution. There were three more ingredients, but the slide vanished too quickly. Mogdalinski noted that in the past they would say "Ask forgiveness, not permission", alluding to the fact that most institutions, if approached, will behave as though they own the data; now he's less inclined even to apologise. After all, isn't it *our* data that's being released in the public interest?
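
In code terms, step 2 is mostly scraping: fetch the institution's public pages and re-publish them as data anyone can query. A minimal sketch, with an invented URL and invented markup (the UN's real pages were far less cooperative), using the third-party requests and beautifulsoup4 packages:

    import json
    import requests
    from bs4 import BeautifulSoup

    def liberate(url):
        """Turn one public voting-record page into structured data."""
        html = requests.get(url, timeout=30).text
        soup = BeautifulSoup(html, "html.parser")
        # Assume each vote sits in a <tr class="vote"> row with country
        # and choice cells; real institutional markup varies wildly.
        votes = [
            {"country": row.find("td", class_="country").get_text(strip=True),
             "vote": row.find("td", class_="choice").get_text(strip=True)}
            for row in soup.find_all("tr", class_="vote")
        ]
        return json.dumps(votes, indent=2)

    if __name__ == "__main__":
        print(liberate("https://example.org/record/A-62-PV.1"))  # invented URL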

Data isn't everything. But the Net community has come a long way since the early days, when the prevailing attitude was that technological superiority would wash away politics-as-usual by simply making an end run around any laws governments tried to pass. Yes, technology can change the equation a whole lot. For example, once PGP escaped, laws limiting the availability of strong encryption were pretty much doomed to fail (though not without a lot of back-and-forth before it became official). Similarly, in the copyright wars it's clear that copyrighted material will continue to leak out no matter how hard anyone tries to protect it.

But those are pretty limited bits of politics. Technology can't make such an easy end run around laws that keep shrinking the public domain. Nor can it by itself fix policies that deny the reality of global climate change or that, in one of Lessig's examples, weakened government dietary recommendations from a limit of 10 percent of daily calories from sugar to one of 25 percent. Or that, in another of his examples, kept then-Vice-President Al Gore from adding a seventh part to the 1996 Communications Act deregulating ADSL and cable: with nothing left to regulate, where would Congressmen get the funds those lobbyists were sending their way? Hence, the new approach.

"Technology," Lessig said, "doesn't solve any problems. But it is the only tool we have to leverage power to effect change."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 23, 2007

Road block

There are many ways for a computer system to fail. This week's disclosure that Her Majesty's Revenue and Customs has played lost-in-the-post with two CDs holding the nation's Child Benefit data is one of the stranger ones. The Child Benefit database includes names, addresses, identifying numbers, and often bank details, covering 25 million people - every UK family with a child under 16. The National Audit Office requested a subset for its routine audit; HMRC sent the entire database off by TNT post.

There are so many things wrong with this picture that it would take a village of late-night talk show hosts to make fun of them all. But the bottom line is this: when the system was developed no one included privacy or security in the specification or thought about the fundamental change in the nature of information when paper-based records are transmogrified into electronic data. The access limitations inherent in physical storage media must be painstakingly recreated in computer systems or they do not exist. The problem with security is it tends to be inconvenient.

With paper records, the more data you provide the more expensive and time-consuming it is. With computer records, the more data you provide the cheaper and quicker it is. The NAO's file of email relating to the incident (PDF) makes this clear. What the NAO wanted (so it could check that the right people got the right benefit payments): national insurance numbers, names, and benefit numbers. What it got: everything. If the discs hadn't gotten lost, we would never have known.
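
The gap between what was asked for and what was shipped is, technically, a few lines of filtering. Assuming the records could be exported as CSV (the column names here are invented), producing the stripped-down subset the NAO wanted is about this hard:

    import csv

    # The three fields the auditors asked for; everything else --
    # addresses, bank details, children's names -- never leaves.
    WANTED = ["national_insurance_number", "name", "benefit_number"]

    def desensitise(full_export, subset_for_audit):
        """Copy only the audit-relevant columns to a new file."""
        with open(full_export, newline="") as src, \
             open(subset_for_audit, "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=WANTED)
            writer.writeheader()
            for record in reader:
                writer.writerow({field: record[field] for field in WANTED})

    # desensitise("child_benefit_full.csv", "nao_subset.csv")

Which is the point: with electronic records, minimising the data is cheap; it simply has to occur to someone to do it.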

Ironically enough, this week in London also saw at least three conferences on various aspects of managing digital identity: Digital Identity Forum, A Fine Balance, and Identity Matters. All these events featured the kinds of experts the UK government has been ignoring in its mad rush to create and collect more and more data. The workshop on road pricing and transport systems at the second of them, however, was particularly instructive. The workshop, led by science advisor Brian Collins, was most notable for the fact that its 15 or 20 participants couldn't agree on a single aspect of such a system.

Would it run on GPS or GSM/GPRS? Who or what is charged, the car or the driver? Do all roads cost the same or do we use differential pricing to push traffic onto less crowded routes? Most important, is the goal to raise revenue, reduce congestion, protect the environment, or rebalance the cost of motoring so the people who drive the most pay the most? The more purposes the system is intended to serve, the more complicated and expensive it will become, and the less likely it is to answer any of those goals successfully. This point has of course also been made about the National ID card by the same sort of people who have warned about the security issues inherent in large databases such as the Child Benefit database. But it's clearer when you start talking about something as limited as road charging.

For example: if you want to tag the car you would probably choose a dashboard-top box that uses GPS data to track the car's location. It will have to store and communicate location data to some kind of central server, which will use it to create a bill. The data will have to be stored for at least a few billing cycles in case of disputes. Security services and insurers alike would love to have copies. On the other hand, if you want to tag the driver it might be simpler just to tie the whole thing to a mobile phone. The phone networks are already set up to do hand-off between nodes, and tracking the driver might also let you charge passengers, or might let you give full cars a discount.
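
To see why the GPS-box option raises the privacy stakes so much, consider the record it has to accumulate. A deliberately minimal model (the field names and tariff are invented):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class LocationPing:
        vehicle_id: str      # the billable identity - car or driver
        timestamp: datetime
        lat: float
        lon: float
        road_class: str      # needed for differential pricing

    PENCE_PER_KM = {"motorway": 3.0, "a_road": 5.0, "urban": 8.0}  # invented

    def bill(pings, km_between):
        """Sum charges over a journey; km_between(a, b) is assumed to
        map-match two consecutive pings to a road distance."""
        return sum(
            PENCE_PER_KM[a.road_class] * km_between(a, b)
            for a, b in zip(pings, pings[1:])
        )

The billing arithmetic is trivial; the problem is its input - a time-stamped trail of everywhere the vehicle has been - which must be retained for those billing disputes, and retained data attracts new customers.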

The problem is that the discussion is coming from the wrong angle. We should not be saying, "Here is a clever technological idea. Oh, look, it makes data! What shall we do with it?" We should be defining the problem and considering alternative solutions. The people who drive most already pay most via the fuel pump. If we want people to drive less, maybe we should improve public transport instead. If we're trying to reduce congestion, getting employers to be more flexible about working hours and telecommuting would be cheaper, provide greater returns, and, crucially for this discussion, not create a large database system that can be used to track the population's movements.

(Besides, said one of the workshop's participants: "We live with the congestion and are hugely productive. So why tamper with it?")

It is characteristic of our age that the favored solution is the one that creates the most data and the biggest privacy risk. No one in the cluster of organisations opposing the ID card - No2ID, Privacy International, Foundation for Information Policy Research, or Open Rights Group - wanted an incident like this week's to happen. But it is exactly what they have been warning about: large data stores carry large risks that are poorly understood, and it is not enough for politicians to wave their hands and say we can trust them. Information may want to be free, but data want to leak.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 3, 2007

Amateur hour

If you really want to date yourself, admit that you remember Ted Mack's Amateur Hour. Running from 1949 to 1970, it was the first televised amateur talent competition, the granddaddy of today's reality TV. What's new about the Internet isn't that amateurs can create content people will look at but the ability to access an audience without going through an older-media gatekeeper.

But even on the Internet, user-generated content (as the kids are calling it these days) is not new: user-uploaded messages and files are how services like CompuServe made money. But that was user-originated content. Today's user-generated content on sites like YouTube includes a mass of uploaded video, audio, and text that in fact does not belong to the users but to third parties. These issues are contentious; so much so that Ian Fletcher, the CEO of the UK's Intellectual Property Office, bailed at the thought of appearing before an audience that might publish his remarks out of context on the Net.

To hear media representatives tell it at today's Amateur Hour conference, they regarded it with a pretty benign eye for quite a while.

It wasn't, said Lisa Stancati, assistant general counsel for ESPN, until Google bought YouTube that everyone got mad. "If Google is going to be making money from my content I have a serious problem with that."

Well, fair enough. But how did it get to be your content? Media companies love to invoke paying artists when they want to expand copyright. Come contract time it's a different story, as the tableful from Actors Equity knew all too well. And what about the content of the future?

Marni Pedorella, a vice president at NBC Universal, noted that the site the company runs for Battlestar Galactica fans provides raw materials for users to play with. If they upload the mashed-up results, however, NBC takes a royalty-free license in perpetuity. Are older media hoping new media will become a source of what Brian Murphy calls CGC – "cheaply generated content"? Like reality TV?

Heather Moosnick, vice president of business development for CBS Interactive, recounted CBS's moves to share its content more widely around the Net: you can watch current shows on its Web site, for example (unless you live outside the US). But, she said sadly, if people don't care about copyright – well, there might be fewer CSIs. (Threat or promise? There are three CSI shows. At least she didn't say that less "expert content" will deprive us of Cavemen.)

Because the conference was sponsored by a law school, a lot of the moderators' questions centered on things like: How do you see your risks developing? What is your liability? What about international laws?

And: what is the difference between a professional and an amateur? You might argue that it doesn't matter as long as the content is interesting, but when it comes to the shield laws that allow journalists to protect their sources the difference is important. Should every blogger – hundreds of millions of them – have that right? Just the ones with mass audiences who make a living from running AdSense alongside their postings? None? Is a blogger with an audience of 100,000 of the most important people in American politics more or less worthy of protection than a guy writing for a local paper with a circulation of 10,000? Is a fan taking pictures of Lindsay Lohan with a cell phone subject to California's new law limiting paparazzi?

To me, the key difference between an amateur and a professional is that the professional does the job even when he doesn't feel like it.

The source of this idea is Agatha Christie, who defined the moment she became a professional writer, some ten or 15 books into her career. She was mid-divorce, and she liked neither the book nor her work on it – but she had a contract. The amateur can say, Screw the contract, I don't feel like getting up this morning. The professional makes the work arrive, even if it stinks. Unfortunately, that practical distinction is not easily describable in law.

You could define it a different way: a professional is the guy you'll miss if he goes on strike, as TV writers are about to do over residual payments for digital reuse.

Another line: a lot of large companies operate their message boards on the basis of the safe harbor protections in the DMCA, under which you're not liable as long as you take down material when notified of infringement or other legal problems. What about mixed content? There's a case pending between the Fair Housing Council and Roommates.com because the latter site gave users a questionnaire asking such roommate-compatibility questions as age, race, gender, sexual orientation… All these are questions that landlords are not allowed to ask under the Fair Housing Act. At what point is someone looking for a roommate subject to that act? Are we really going to deny people all control over whom they live with?

These aren't problems that have solutions, at least yet. They're the user-generated lawsuits of the future.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 26, 2007

Tomorrow's world

"It's like 1994," Richard Bartle, the longest-serving virtual world creator, said this week. We were at the Virtual Worlds Forum. Sure enough: most of the panels were about how businesses could make money! in virtual worlds! Substitute Web! and Bartle was right.

"Virtual worlds are poised to revolutionize today's Web ecommerce," one speaker said enthusiastically. "They will restore to ecommerce the social and recreational aspect of shopping, the central element in the real world, which was stripped away when retailers went online."

There's gold in them thar cartoon hills.

But which hills? Second Life is, to be sure, the virtual world du jour, and it provides the most obviously exploitable platform for businesses. But in 1994 so did CompuServe. It was only three years later – ten years ago last month – that it had shrunk sufficiently for AOL to buy it as revenge. In turn, AOL is itself shrinking – its subscription revenues for the quarter ending June 30, 2007 were half those in the same quarter in 2006.

If there is one thing we know about Internet communities it's that they keep reforming in new technologies, often with many of the same people. Today's kids bop from world to world in groups, every few months. The people I've known on CIX or the WELL turn up on IRC, LiveJournal, Facebook, and IM. Sometimes you flee, as Corey Bridges said of social networks, because your friends list has become "crufted" up with people you don't like. You take your real friends somewhere else until, mutatis mutandis, the same thing happens again. In the older text-based conferencing systems, same pattern: public conferences filled up with too many annoying people, sending old-timers to gated communities like mailing lists or closed conferences. And so it goes.

In a post pointed at by the VWF blog, Metaversed's Nick Wilson defines social virtual worlds and concludes that there are only eight of them – the rest are either not yet available to the general public, children's worlds, or simply development platforms. "The virtual worlds space," he concludes, "is not as large as many people think."

Probably anyone who's tried to come to grips with Second Life, number one on Wilson's list, without the benefit of friends to go there with knows that. Many parts of SL are resoundingly empty much of the time, and it seems inarguable that most of SL's millions of registered users try it out a few times and then leave their avatars as records in the database. Nonetheless, companies keep experimenting and find the results valuable. A batch of Italian IBMers even used the world to stage a strike last month. Naturally it crashed IBM's SL Business Center: the 1,850 strikers were spread around seven IBM locations, but you can only put about 50 avatars on an island before server lag starts to get you. Strikes: the original denial-of-service attacks.

But questioning whether there's a whole lot of there there is a nice reminder that in another sense, it's 1999. Perfect World, a Chinese virtual world, went public at the end of July, and is currently valued at $1.6 billion. It is, of course, losing money. Meanwhile Microsoft has invested $240 million of the change rattling around the back of its sofas in Facebook to become its exclusive "advertising partner", giving that company an overall value of $15 billion. That should do nicely to ensure that Google or Yahoo! doesn't buy it outright, anyway. Rupert Murdoch bought MySpace only two years ago for $580 million – which sounds like a steal by comparison, except that Murdoch has made many online plays and so far they've all been wrong.

Two big issues seem to be dominating discussions about "the virtual world space". One: how to make money. Two: how and whether to make worlds interoperable, so when you get tired of one you can pick up your avatar and reputation and take them somewhere new. It was in discussing this latter point that Bridges made the comment noted above: after a while in a particular world, shedding that world's character might be the one thing you really want to do. In real life, wherever you go, there you are. Freely exploring your possible selves is what Richard Bartle had in mind when he wrote the first MUD.

The first of those is, of course, the pesky thing only a venture capitalist or a journalist would ask. So far, in general game worlds make their money on subscriptions, and social worlds make their money selling non-existent items like land and maintenance fees thereupon (actually, says Linden Labs, "server resources"). But Asia seems already to be moving toward free play with the real money coming from in-game item sales: 80 million Koreans are buying products in and from Cyworld.

But the two questions are related. If your avatar only functions in a single world, the argument goes, that makes virtual worlds closed environments like the ones CompuServe and AOL failed with. That is of course true – but only after someone comes up with an open platform everyone can use. Unlike the Internet at large, though, it's hard to see who would benefit enough from building one to actually do it.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

September 21, 2007

The summer of lost hats

I seem to have spent the summer dodging in and out of science fiction novels featuring four general topics: energy, security, virtual worlds, and what someone at the last conference called "GRAIN" technologies (genetic engineering, robotics, AI, and nanotechnology). So the summer started with doom and gloom and got progressively more optimistic. Along the way, I have mysteriously lost a lot of hats. The phenomena may not be related.

I lost the first hat in June, a Toyota Motor Racing hat (someone else's joke; don't ask) while I was reading the first of many very gloomy books about the end of the world as we know it. Of course, TEOTWAWKI has been oft-predicted, and there is, as Damian Thompson, the Telegraph's former religious correspondent, commented when I was writing about Y2K, a "wonderful and gleeful attention to detail" in these grand warnings. Y2K was a perfect example: a timetable posted to comp.software.year-2000 had the financial system collapsing around April 1999 and the cities starting to burn in October…

Energy books can be logically divided into three categories. One, apocalyptics: fossil fuels are going to run out (and sooner than you think), the world will continue to heat up, billions will die, and the few of us who survive will return to hunting, gathering, and dying young. Two, deniers: fossil fuels aren't going to run out, don't be silly, and we can tackle global warming by cleaning them up a bit. Here. Have some clean coal. Three, optimists: fossil fuels are running out, but technology will help us solve both that and global warming. Have some clean coal and a side order of photovoltaic panels.

I tend, when not wracked with guilt for having read 15 books and written 30,000 words on the energy/climate crisis and then spent the rest of the summer flying approximately 33,000 miles, toward optimism. People can change – and faster than you think. Ten years ago, you'd have been laughed off the British Isles for suggesting that in 2007 everyone would be drinking bottled water. Given the will, ten years from now everyone could have a solar collector on their roof.

The difficulty is that at least two of those takes on the future of energy encourage greater consumption. If we're all going to die anyway and the planet is going inevitably to revert to the Stone Age, why not enjoy it while we still can? All kinds of travel will become hideously expensive and difficult; go now! If, on the other hand, you believe that there isn't a problem, well, why change anything? The one group who might be inclined toward caution and saving energy is the optimists – technology may be able to save us, but we need time to create and deploy it. The more careful we are now, the longer we'll have to do that.

Unfortunately, that's cautious optimism. While technology companies, who have to foot the huge bills for their energy consumption, are frantically trying to go green for the soundest of business reasons, individual technologists don't seem to me to have the same outlook. At Black Hat and Defcon, for example (lost hats number two and three: a red Canada hat and a black Black Hat hat), among all the many security risks that were presented, no one talked about energy as a problem. I mean, yes, we have all those off-site backups. But you can take out a border control system as easily with an electrical power outage as you can by swiping an infected RFID passport across a reader to corrupt the database. What happens if all the lights go out, we can't get them back on again, and everything was online?

Reading all those energy books changes the lens through which you view technical developments somewhat. Singapore's virtual worlds are a case in point (lost hat: a navy-and-tan Las Vegas job): everyone is talking about what kinds of laws should apply to selling magic swords or buying virtual property, and all the time in the back of your mind is the blog posting that calculated that the average Second Life avatar consumes as much energy as the average Brazilian. And emits as much carbon as driving an SUV for 2,000 miles. Bear in mind that most SL avatars aren't fired up that often, and the suggestion that we could curb energy consumption by having virtual conferences instead of physical ones seems less realistic. (Though we could, at least, avoid airport security.) In this, as in so much else, the science fiction writer Vernor Vinge seems to have gotten there first: his book Marooned in Realtime looks at the plight of a bunch of post-Singularity augmented humans knowing their technology is going to run out.

It was left to the most science fictional of the conferences, last week's Center for Responsible Nanotechnology conference (my overview is here) to talk about energy. In wildly optimistic terms: technology will not only save us but make us all rich as well.

This was the one time all summer I didn't lose any hats (red Swiss everyone thought was Red Cross, and a turquoise Arizona I bought just in case). If you can keep your hat while all around you everyone is losing theirs…

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

August 10, 2007

Wall of sheep

Last week at Defcon, my IM ID - and just enough of the password to show they knew what it was - appeared on the Wall of Sheep, the screen projection of user IDs, partial passwords, and activities captured by a sniffer that inevitably runs on the conference network throughout the event.

It's not that I forgot the sniffer was there, or that there is a risk in logging onto an IM client unencrypted over a Wi-Fi hot spot (at a hacker conference!), but that I had forgotten that it was set to log in automatically whenever it could. Easily done.

It's strange to remember now that once upon a time this crowd – or at least, type of crowd – was considered the last word in electronic evil. In 1995 the capture of Kevin Mitnick made headlines everywhere because he was supposed to be the baddest hacker ever. Yet other than gaining online access and free phone calls, Mitnick is not known to have ever profited from his crimes – he didn't sell copied source code to its owners' competitors, and he didn't rob bank accounts. We would be grateful – really grateful – if Mitnick were the worst thing we had to deal with online now.

Last night, the House of Lords Science and Technology Committee released its report on Personal Internet Security. It makes grim reading even for someone who's just been to Defcon and Black Hat. The various figures the report quotes, assembled after what seems to have been an excellent information-gathering process (that is, they name-check a lot of people I know and would have picked for them to talk to), are pretty depressing. Phishing has cost US banks around $2 billion, and although the UK lags well behind - £33.5 million in bank fraud in 2006 – here, too, it's on the rise. Team Cymru found (PDF) that on IRC channels dedicated to the underground you could buy credit card account information for between $1 (basic information on a US account) and $50 (full information for a UK account); $1,599,335.80 worth of accounts was for sale on a single IRC channel in one day. Those are among the few things that can be accurately measured: the police don't keep figures breaking out crimes committed electronically; there are no good figures on the scale of identity theft (interesting, since this is one of the things the government has claimed the ID card will guard against); and no one's really sure how many personal computers are infected with some form of botnet software – and available for control at four cents each.

The House of Lords recommendations could be summed up as "the government needs to do more". Most of them are unexceptional: fund more research into IT security, keep better statistics. Some measures will be welcomed by a lot of us: make banks responsible for losses resulting from electronic fraud (instead of allowing them to shift the liability onto consumers and merchants); criminalize the sale or purchase of botnet "services" and require notification of data breaches. (Now I know someone is going to want to say, "If you outlaw botnets, only outlaws will have botnets", but honestly, what legitimate uses are there for botnets? The trick is in defining them to include zombie PCs generating spam and exclude PCs intentionally joined to grids folding proteins.)

Streamlined Web-based reporting for "e-crime" could only be a good thing. Since the National High-Tech Crime Unit was folded into the Serious Organised Crime Agency there is no easy way for a member of the public to report online crime. Bringing in a central police e-crime unit would also help. The various kite mark schemes – for secure Internet services and so on – seem harmless but irrelevant.

The more contentious recommendations revolve around the idea that we the people need to be protected, and that it's no longer realistic to lay the burden of Internet security on individual computer users. I've said for years that ISPs should do more to stop spam (or "bad traffic") from exiting their systems; this report agrees with that idea. There will likely be a lot of industry ink spilled over the idea of making hardware and software vendors liable if "negligence can be demonstrated". What does "vendor" mean in the context of the Internet, where people decide to download software on a whim? What does it mean for open source? If I buy a copy of Red Hat Linux with a year's software updates, that company's position as a vendor is clear enough. But if I download Ubuntu and install it myself?

Finally, you have to twitch a bit when you read, "This may well require reduced adherence to the 'end-to-end' principle." That is the principle that holds that the network should carry only traffic, and that services and applications sit at the end points. The Internet's many experiments and innovations are due to that principle.

The report's basic claim is this: criminals are increasingly rampant and increasingly rapacious on the Internet, and if this continues people will catastrophically lose confidence in it. So we must preserve that confidence by making the Internet safer. Couldn't we just make people safer by letting them stop using it? That's what people tell you to do when you're going to Defcon.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

December 29, 2006

Resolutions for 2007

A person can dream, right?

- Scrap the UK ID card. Last week's near-buried Strategic Action Plan for the National Identity Scheme (PDF) included two big surprises. First, that the idea of a new, clean, all-in-one National Identity Register is being scrapped in favor of using systems already in use in government departments; second, that foreign residents in the UK will be tapped for their biometrics as early as 2008. The other thing that's new: the bald, uncompromising statement that it is government policy to make the cards compulsory.

No2ID has pointed out the problems with the proposal to repurpose existing systems, chiefly that they were not built to provide the security the legislation promised. The notion is still that everyone will be re-enrolled with a clean, new database record (at one of 69 offices around the country), but we still have no details of what information will be required from each person or how the background checks will be carried out. And yet, this is really the key to the whole plan: the project to conduct background checks on all 60 million people in the UK and record the results. I still prefer my idea from 2005: have the ID card if you want, but lose the database.

The Strategic Action Plan includes the list of purposes of the card; we're told it will prevent illegal immigration and identity fraud, become a key "defence against crime and terrorism", "enhance checks as part of safeguarding the vulnerable", and "improve customer service".

Recall that none of these things was the stated purpose of bringing in an identity card when all this started, back in 2002. Back then, first it was to combat terrorism, then it was an "entitlement card" and the claim was that it would cut benefit fraud. I know only a tiny mind criticizes when plans are adapted to changing circumstances, but don't you usually expect the purpose of the plans to be at least somewhat consistent? (Though this changing intent is characteristic of the history of ID card proposals going back to the World Wars. People in government want identity cards, and try to sell them with the hot-button issue of the day, whatever it is.)

As far as customer service goes, William Heath has published some wonderful notes on the problem of trust in egovernment that are pertinent here. In brief: trust is in people, not databases, and users trust only systems they help create. But when did we become customers of government, anyway? Customers have a choice of supplier; we do not.

- Get some real usability into computing. In the last two days, I've had distressed communications from several people whose computers are, despite their reasonable and best efforts, virus-infected or simply non-functional. My favourite recent story, though, was the US Airways telesales guy who claimed that it was impossible to email me a ticket confirmation because according to the information in front of him it had already been sent automatically and bounced back, and they didn't keep a copy. I have to assume their software comes with a sign that says, "Do not press this button again."

Jakob Nielsen published a fun piece this week, a list of top ten movie usability bloopers. Throughout movies, computers only crash when they're supposed to, there is no spam, on-screen messages are always easily readable by the camera, and time travellers have no trouble puzzling out long-dead computer systems. But of course the real reason computers are usable in movies isn't some marketing plot by the computer industry but the same reason William Goldman gave for the weird phenomenon that movie characters can always find parking spaces in front of their destination: it moves the plot along. Though if you want to see the ultimate in hilarious consumer struggles with technology, go back to the 1948 version of Unfaithfully Yours (out on DVD!) starring Rex Harrison as a conductor convinced his wife is having an affair. In one of the funniest scenes in cinema, ever, he tries to follow printed user instructions to record a message on an early gramophone.

- Lose the DRM. As Charlie Demerjian writes, the high-def wars are over: piracy wins. The more hostile the entertainment industries make their products to ordinary use, the greater the motivation to crack the protective locks and mass-distribute the results. It's been reasonably argued that Prohibition in the US paved the way for organized crime to take root because people saw bootleggers as performing a useful public service. Is that the future anyone wants for the Internet?

Losing the DRM might also help with the second item on this list, usability. If Peter Gutmann is to be believed, Vista's usability will take a nosedive because of embedded copy protection requirements.

- Converge my phones. Please. Preferably so people all use just the one phone number, but all routing is least-cost to both them and me.

- One battery format to rule them all. Wouldn't life be so much easier if there were just one battery size and specification, and to make a bigger battery you'd just snap a bunch of them together?

Happy New Year!

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).