net.wars: October 2015 Archives


October 30, 2015

The magic kingdom

Note: This is a slightly edited version of a talk I gave (via recorded video), at this week's Safer Internet Forum.

We used to talk about "cyberspace" as separate from the physical world. Those of us who remember the world before the internet remember why: we had to dial up and wait to connect in order to talk to strangers. For today's young people, brought up with pervasive broadband and smartphones, the internet is a conduit to their real-life friends and helps them navigate the real world; a separate "cyberspace" is meaningless.

The babies being born now will learn to assume that everything in their world is connected to everything else. What's left of the division between virtual and physical worlds is being further eroded by the "Internet of Things", the effort to add sensors and "intelligence" to physical objects.

A lot of adoption is being driven by environmental concerns and cost. In Glasgow, for example, sensors inside streetlights save energy (and money) by dimming them when no one's around. Inside homes, new "smart" electrical meters have wireless connections so they can be read remotely, and TVs have internet connections. New cars are clusters of computers on wheels with wireless entertainment systems and keyless entry. The overall result will be a much more responsive physical world, but also one that is far better equipped to scrutinize every aspect of our lives. The "Internet of Toys", as manufacturers like Disney are calling it, will be today's babies' first experience of this.

Every device that functions autonomously in response to changes in its environment does so via sensors. TVs that respond to voice commands have microphones, Glasgow's streetlights have microphones and cameras, and your smartphone has all those plus GPS, accelerometers, gyroscopes, and much more. Fetch Robotics' pet robot dog, demonstrated at We Robot 2013 by Ian Danforth, has all of these; the idea is that owners teach their pets new tricks and then share them via the internet with other owners. What could possibly go wrong with that?

All these sensors collect a steady stream of data. Twenty years ago, lacking today's bandwidth, all that data would have been processed locally on either the device itself or a physically attached computer. Today, the data is uploaded to the cloud for processing on remote, much more powerful computers. This is what happens with voice-activated assistants like Apple's Siri, Microsoft's Cortana, Google Now, or Samsung's smart TVs: what you say in front of the microphone is uploaded to the cloud, where it is added to the ever-growing pile of information the system has about you. You will not necessarily know how, when, or with whom this data is shared. This is exactly the design of Mattel's new Hello Barbie.

Mattel's marketing information for Hello Barbie indicates that the doll uploads the speech it hears when someone presses a button on its belt buckle, that the information is stored in the cloud, and that parents have access via online accounts. For me, this creates the image of a parent being able to monitor all conversations their child has with the doll. Some parents may like this; a friend and I sometimes used to eavesdrop with vast amusement when her young son enacted full-bore military battles in the bathtub. But even a very young child deserves to have private and privately expressed thoughts. The US Campaign for a Commercial-Free Childhood has called the doll "creepy" and points out that while the child thinks they're talking to a doll, in reality they're talking to a commercial conglomerate whose only interest in them is financial.

More interesting may be Nübi, which teaches kids programming by using a "magic flower" to help a newly arrived cuddly alien navigate our world, and Dynepic, which is launching a Kickstarter campaign to build a modular platform intended to encourage exploration, programming, and engineering. My personal skepticism kicks in with Dynepod's claim that the "toy will grow with the child" because the company will be able to add new functions and capabilities via the cloud. There have always been toys that grew with the child: the toys that allowed kids to exercise their own imagination.

As it becomes ever easier for parents to monitor their children's thoughts and as governments adopt increasingly powerful surveillance technologies, we are teaching our kids that constant monitoring is normal and that being watched means they're safe. This message is all around us - think how many times a day you hear that "CCTV is in operation for your safety". Today's kids are growing up with a very different model of what "freedom" looks like than we did.

There really is a great educational opportunity here, but it's unintended by the manufacturer: we can use these toys to teach kids about the way technology design constrains what they can do with the gadgets around them and how to work through privacy policies and understand the tradeoffs involved in accepting them. This kind of teaching really can grow with the child, since most companies change their policies from time to time, almost always in the direction of retaining more data and sharing it more widely. These kids are growing up in a world where some commercial company is trying to intermediate and monetize every relationship they have; to live in this future they need to understand this in a fundamental, instinctive way.


Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


October 23, 2015

War of the worlds

Tuesday, in the middle of a symposium on the Internet of Things run by the Marconi Society, a thought occurred: if vendors get their way, other people's homes will be impenetrable fortresses in which nothing works for you, the guest, even if everything works for them, the owners.

I imagine the experience would be like using someone else's mobile phone. Take an example: twenty years ago, left alone in someone's home, you could swiftly execute a decision to watch TV by pushing a button. Today, the buttons you need are scattered among an array of remotes, none of which match any system component. You can watch something online on your computer only if your host has given you the wifi password. And so, tomorrow: the refrigerator will only open for the right RFID card, the washing machine door will remain closed unless the refrigerator authenticates you, and the thermostat will identify you as an interloper (because it has learned the residents' work schedules) and call the police. Speaking as a professional guest of some decades' experience, the prospect is dismaying. You will have to be introduced to the house as if it were the resident alpha dog and you were a new cat.

"People will not buy in unless it's user-friendly," said Thibault Kleiner, head of the unit within the European Commission that's in charge of the Internet of Things and 5G. ("In charge of" is the program's description; my translation is "Ha!")

Internet pioneer Vint Cerf, on hand to help present the 2015 Marconi prize to fellow pioneer Peter Kirstein, said he wants everything in the Internet of Things to "just work". As Google's internet evangelist, he of course has Nest thermostats that even he admits "learn the wrong things". His nightmare, unlike mine, focuses on his *own* house, where he imagines having to configure 100-plus different devices, spending all day typing in IPv6 addresses. "There's a scaling issue for the individual." Ross Anderson has a solution: introduce Internet of Things devices to each other by touching them together, thereby building up a physically self-configuring system. That works at home, but, as Cerf pointed out, what about gas stations, where the device - that is, the gas pump - needs to talk to relatively few other devices but encounters new humans all day long? Will it need its own "right to be forgotten" mechanism, or will we assume that (like today's online retailers that require you to create an account before you can order anything) every gas pump should remember everyone it interacts with in the interests of saving a few seconds if and when you return?

Going back to the home, will your house, like today's desktops, have user groups: Guests, Owners, Administrators? Will that mean that a snooping guest will be reprimanded by the bathroom medicine cabinet if they dare try to open it, like a high-tech version of filling it with marbles?

Security experts have been warning for some time of the dangers inherent in the Internet of Things. These include the mismatch between software update and appliance replacement cycles; the unlikelihood that anyone will want to risk bricking their car by updating the firmware; and the reality that all software has bugs. What life will be like if it all works may be even worse. In 2003, smart home projects imagined local processing. Today it's all in the cloud: for most people Google intermediates navigating information (both virtual and physical, via maps, cars, and Android), companies like Apple and Amazon are gatekeepers to the content we wish to access, and Facebook sticks its nose into all our personal relationships. So tomorrow: companies like Samsung, IBM, and Siemens will intermediate our interactions with even mundane objects like tea kettles and hair dryers. Cue the first "thing" on the internet: the Cambridge coffee machine.

The good news is that it really may not work. At the Marconi event, Vint Cerf's TCP/IP co-inventor, Bob Kahn, recounted a 15-year-old's recent question: "How did you manage to convince all the governments of the world to build the internet?" Of course, they didn't: to a very large extent the internet grew from the bottom up as everyone adopted TCP/IP to get away from proprietary networking protocols that locked them expensively into particular vendors and limited their reach. Kahn is proposing Digital Object Architecture, which relies on repositories, registries, and resolution mechanisms to facilitate interoperability between the myriad protocols whose owners hope they can become monopolies. Can the open internet survive the influx of pwned "things"?

The bigger question is this: whereas the internet grew like wildfire as people rapidly opted for the benefits of being connected, who really wants the Internet of Things other than vendors? "Smart" toys like Hello Barbie are hella creepy, and the rest seems...unnecessary, something most people will have to be forced to adopt by cities and companies determined to collect more data about everything we do.

It took Wendy Hall to remind everyone of the obvious: "It only works if people use it." She went on to ask, "Are we creating the environment for anti-social machines to grow, where our only control is to buy them or not?" In focusing on technology - yet again - instead of its impact, we have as usual missed the point. The Internet of Things is the wrong goal.




October 16, 2015

A different kind of disruption

"How might this be used to further marginalize the already marginalized?" At this week's Computers, Freedom, and Privacy conference, Jon Gosier proposed that all technology designers should ask themselves this question as they begin work.

The example he presented was this: while working on a project in Uganda, his small, under-resourced group found that dissident texts were being blocked; in response they developed a peer-to-peer secure messaging system but never rolled it out. As Gosier told the story, the reasons were partly the concern that the platform could be used by extremist radical groups, and partly financial, in that the group's cautious funders felt the technology was too risky to back. This example was not completely apt: it's not clear who the marginalized are in that story.

I also suspect the Silicon Valley response would echo Henry Higgins in George Bernard Shaw's Pygmalion: "Do any of us understand what we are doing? If we did, would we ever do it?" And to a large extent that would be right. Technology developers are consistently surprised by the uses people find for their work post-release: they write letters in spreadsheets, they use tennis racquets to strain spaghetti, and the phonograph that changed the history of music was originally designed as a device to record telegrams. People are weird and inventive about their tools. If Phil Zimmermann had thought like Gosier's cautious funders he would not have launched PGP. On the other hand, if the internet pioneers had foreseen the co-option of their new medium, which they imagined as a force for democracy, as a mass surveillance platform, they might have designed it differently. Certainly, as Michael Froomkin pointed out in April at this year's We Robot, the presence of a single lawyer would have substantially changed the design of the domain name system.

Gosier's other examples were bitcoin, which as he said is too technically complicated for most of the world's population to use directly, and wearable health monitors, which he suggested risk making healthcare reliant on gadgets that many people cannot afford.

Gosier was just one of a number of speakers pointing out the limitations of the way a lot of us think about computers, freedom, and privacy - as well as the risks posed by AI, machine learning, and predictive analytics.

The Pakistani lawyer Hamid Khan, head of the Stop LAPD Spying Coalition, prefers to call the latter "speculative" rather than "predictive", as in "speculative policing". Centuries of racism are embedded in the data on which the Los Angeles Police Department bases its judgments of how people walk and move their hands. "What is being sold is the claim that technology is race neutral," he said. Instead, it's garbage in, garbage out. The reality on the ground may be quite different from what the designers had in mind.

"New technology is doing very old work," said Malkia Cyril, executive director of the Center for Media Justice. Communities such as black and Muslim Americans "live scanned, tracked, and traced today," she said. "It did not begin with NSA revelations." Reframing the argument as she does changes the focus entirely. In arguing against mass surveillance, many (mostly white) privacy advocates have accepted that targeted surveillance has its place. For Cyril, "It's not a useful distinction." Because: whom do you target? Disproportionately, the groups you always have. Similarly, network neutrality is usually presented as a question of business opportunity; for Cyril it's a matter of freedom and social justice.

The discussion led the author and former software engineer Ellen Ullman to ask this: "How do we reach venture capitalists to say, 'Why is disruption the goal?'" She was referring to the current mantra of Silicon Valley venture capitalists, who select investments based on their potential to up-end established industries and sectors: to disrupt. (I imagine the word has a very different meaning for Cyril and Gosier.) Uber, for example, doesn't just disrupt the taxi trade, though of course it does that; it's also frequently seen as a threat to public transport, the delivery business, and possibly municipal government itself. Why else does a six-year-old company have a market cap in the tens of billions?

Anika Collier Navaroli noted that she spends a lot of time talking to technologists. "Lots of the time it's the first time they've heard the civil rights implications." Part of Ullman's answer is to encourage everyone to learn to code as the only route to self-determination in the digital world.

All these pieces taken together - especially when stood next to EPIC international policy fellow Fanny Hidvegi's account of Hungary's increasingly restrictive government and Gosier's references to "trickle-down techonomics" - are an important reminder. It's not just that technology reaches the streets where ordinary citizens get their hands on it - GPS, the internet, and mobile telephony all began with the military and are now ubiquitous. It's that missions creep, and before that same technology gets to us it will be deployed by much lower layers of law enforcement. We've seen this in the UK: half of local councils have been caught using the Regulation of Investigatory Powers Act to tackle...littering. Paging Arlo Guthrie.


October 9, 2015

Unsafe harbor

A few days ago, the EU Court of Justice ruled, as the advocate-general recommended two weeks ago, that the Safe Harbor deal negotiated in 2000 is invalid. Safe Harbor was needed for a simple reason: under the 1995 EU Data Protection Directive it's illegal to transfer personal data to countries that do not have adequate protections in place. We're looking at you, United States - which, as Marc Rotenberg, director of the Electronic Privacy Information Center, will remind you, actually led the world in privacy legislation back in the 1970s.

The up-ending of Safe Harbor has been generally welcomed by privacy and human rights advocates: the Trans Atlantic Consumer Dialogue, European Digital Rights, the Bureau Européen des Unions de Consommateurs, and EPIC. Open source advocate Simon Phipps helpfully highlights key parts of the decision.

The Open Rights Group in particular notes that this decision, in conjunction with the April 2014 data retention decision, leads in the direction of requiring targeted surveillance and independent access rather than indiscriminate, bulk collection. Privacy International places the blame squarely on the US itself, arguing that the decision puts the onus on the US to change its laws to join the modern world and calling the Safe Harbor agreement a "pretense".

And quite right. The reality - and Austrian law student Max Schrems' point in bringing the case in the first place - is that US law, specifically the Foreign Intelligence Surveillance Act and its 2008 Amendment, requires US companies and their subsidiaries to comply with government access requests no matter where the data is located. The late independent privacy advocate Caspar Bowden was the first to point this out, months before Edward Snowden's revelations of the NSA's PRISM program made plain the extent to which the US intelligence agencies are in fact violating the privacy protections that Safe Harbor was created to guarantee.

A blog posting I now can't find suggested that at last the US is being faced with real economic damage from the NSA's actions. This is just as optimistic as some companies' belief that the estimated 4,000-plus companies currently using Safe Harbor can continue business as usual via contract law. The reality is that no contract can supersede a US company's obligations under the law to turn over data, nor can it supersede the EU's requirements for data protection. Hard rock, meet immovable place.

So what does happen now? At a press conference a couple of days ago, the Justice and Home Affairs commissioner, Vera Jourová, talked optimistically about renegotiating with the US to create a new agreement that will offer the protection Safe Harbor did not. She also stressed the Commission's commitment to finding a solution that allows data transfers to the US to continue. We'll call this plan Safer Harbor.

In the meantime, she said, data protection law does allow some exceptions that will allow transfers to continue:
- In performance of a contract such as a hotel booking;
- On important public interest grounds such as law enforcement cooperation in fighting fraud or cartels;
- The data subject's own vital interest, such as medical records to help in urgent life or death situations;
- The free and informed consent of the individual.

So, most likely: businesses such as Facebook and Google that rely on "the free flow of data" will probably start thinking up contingency plans, but for the moment will proceed as usual, while arguing that the CJEU ruling threatens the free and open internet. EU - and especially UK - politicians will try to come up with patches that change as little as possible. There will probably have to be more court challenges. Another look will have to be taken at the data protection reform package, which has been in negotiation for some years now. One clause would require European companies to tell their relevant data protection authorities if they are being compelled to turn over data - even if they have been forbidden to disclose this under US law. Sounds nice, but it doesn't mobilize the rock or soften the hard place, since companies will still have to pick a law to violate. I imagine the internal discussions there revolving around two questions: which violation is less likely to land the CEO in jail, and which set of fines can we afford?

Behind the scenes, the big data companies will continue to lobby for trade agreements such as TTIP and TPP to include provisions that require nations and regions to get rid of interference such as data protection laws.

Other options:
- Encrypt the data and give the key to an independent party under contract not to surrender it, unlikely because it won't end the originating company's liability;
- Reorganize IT systems to opt for local storage, deeply difficult for a company like Facebook;
- Expatriation, whereby companies remove themselves from the US and spin out their US operations.

Or the US can, as Privacy International suggests, change its laws to become more like those of the rest of the world. There is a glimmer of hope in that direction: California's new privacy law. But the big sticking point for EU courts is and will continue to be companies' obligations under FISAA. Impasse.



October 2, 2015

Not encyclopedic

There seems to have been a spate of articles lately pointing out various things wrong with Wikipedia: the culture is sexist; other encyclopedias are better; it has a pronounced Western bias; it's being subverted by paid PR people; it's sexist some more.

A few weeks ago, a more serious accusation surfaced, when UC Berkeley biologist and open science proponent Michael Eisen tweeted the news that the journal publisher Elsevier had announced it would donate 45 free accounts to Wikipedia, so that editors could access - and therefore reference - its expensively paywalled science papers. Chagrined at the ensuing fuss, which included coverage at Gizmodo and Ars Technica, Eisen clarified a week later: a) he loves Wikipedia; b) he wasn't attacking Wikipedia for making *any* deals with *any* publishers, but criticizing the quality of this particular deal. Part of his argument was that making it easier to cite paywalled journal articles will prolong the life of the paywalled business model, which he would like to see die off in favor of "truly open scientific literature". He also thought the deal gave Elsevier great PR but didn't give Wikipedia enough return.

Wikipedia has a pretty clear statement of what it intends to be, written as the "five pillars": a neutrally written encyclopedia that is free for anyone to use, edit, and distribute, with guidelines rather than rules, and a culture in which editors treat each other civilly. If you want your science coverage to be the best it can be, you want access to the best-quality research, as Pete Forsyth argues at WikiStrategies. Although it seems logical for Wikipedia to support open access, that's not really its mission.

What didn't come out in those discussions is something I remember seeing discussed some 20 years ago: the Western world's "best-quality" is already biased, because the requirements that make a journal reputable enough to cite may be out of reach in some cases. I recall that one example was a requirement for a minimum period of regular publication - difficult to meet in a country in turmoil. But leaving that aside, it became known during the ebola outbreak that began in 2014 that better access to journal articles would have saved lives, as the chief medical officer of Liberia's Ministry of Health, Bernice Dahn, wrote in the New York Times. While they were trying to formulate a response to the rapidly spreading but as-yet undiagnosed illness, ebola was believed not to exist in Liberia. The journal article that corrected this mistaken impression was written in 1982; but no Liberian scientists participated in the research, and the results were published in a European journal. In Liberian terms, downloading that single article would have cost a physician half a week's salary. Wikipedia already has a strong Western bias; but this deal could save lives if it can provide leads to published work that no one knows about.

A recent run-in of my own with Wikipedia - trying to add accessibility detail to the pages for London tube stations - taught me that, like a large city, what to outsiders appears to be a single community is in fact thousands of smaller ones, each with its own norms and understandings. This is what makes Wikipedia so hard for aspiring editors to navigate. Just adding stuff may lead to abrupt reversion - not because the resident editors think the information is bad (although they may) but because reverting is the quickest solution to a perceived deviation from the norm. I thought someone would see my addition, note its usefulness, and move it to the right place, providing a guideline I could follow. Instead: revert. I asked an inside acquaintance: what do I do now? He provided pointers to the relevant talk pages, where I was finally told accessibility information was "not encyclopedic".

Sources were always going to be an issue: Wikipedia's firm policy is published sources only, and personally counting the numbers of steps clearly fails. But the information seemed so obviously valuable that I was baffled. Why is the history of a station's platforms encyclopedic but accessibility information is not? Is it because the pages are written by trainspotters or because there's some neutral definition of "encyclopedic" that I don't understand?

In a lecture this week called The Ugly Truth (the Guardian has a summary), Sense About Science director Tracey Brown argued that scientists must learn to be more honest about uncertainty and more open with their data. She offered the following principles: seek accountability and change; clarify the evidence; admit failings; do not keep things hidden.

These are good principles for all curators of knowledge. The point is not to create certainty. As Brown said, certainty is a characteristic of believers, not scientists (or skeptics). By embracing uncertainty, scientists can answer the more important question: do we know enough? Science - except for mathematics, where a proof is final - is never finished. Failures that are visible can be remedied. And that's the point about Wikipedia; everything can be audited. That its failings are known is its biggest strength.

