Main

December 23, 2022

An inherently adverse environment

Earlier this year, I wrote a short story/provocation for the recent book 22 Ideas About the Future. My story imagined a future in which the British central government had undermined local authorities by allowing local communities to opt out and contract for their own services. One of the consequences was to carve London up into tiny neighborhoods, each with its own rules and sponsorships, making it difficult to plot a joined-up route across town. Like an idiot, I entirely overlooked the role facial recognition would play in such a scenario. Community blocs like these, some openly set up to exclude unwanted diversity, would absolutely grab at facial recognition to repel - or charge - unwelcome outsiders.

Most discussion of facial recognition to date has focused on privacy: that it becomes impossible to move around public spaces without being identified and tracked. We haven't thought enough about the potential use of facial recognition to underpin a broad permission-based society in which our presence in any space can be detected and terminated at any time. In such a society, we are all migrants.

That particular unwanted dystopian future is upon us. This week, we learned that a New Jersey lawyer was blocked from attending the Radio City Music Hall Christmas show with her daughter because the venue's facial recognition system identified her as a member of a law firm involved in litigation against Radio City's owner, MSG Entertainment. Security denied her entry, despite her protests that she was not involved in the litigation. Whether she was or wasn't shouldn't really matter; she had committed no crime, she was causing no disturbance, she was granted no due process, and she had no opportunity for redress.

Soon after she told her story, a second instance emerged: a male lawyer who was blocked from attending a New York Knicks basketball game at Madison Square Garden. Then, quickly, a third: a woman and her husband were removed from their seats at a Brandi Carlile concert, also at Madison Square Garden.

MSG later explained that litigation creates "an inherently adverse environment". I read that this way: the company has chosen to use developing technology in an abusive display of power. In other words, MSG is treating its venues as if they were the new-style airports Edward Hasbrouck has detailed, also covered here a few weeks back. In its original context, airport thinking is bad enough; expanded to the world's many privately-owned public venues, the potential is terrifying.

Early adopters of data sharing to exclude bad actors talked about barring known shoplifters from chains of pubs or supermarkets, or catching and punishing criminals much more quickly. The MSG story shows the mission creeping from "terrorist" to "don't like their employer" at unprecedented speed.

The right to navigate the world without interference is one that privileged folks have taken for granted. With some exceptions: in England, the right to ramble all parts of the countryside took more than a century to codify into law. To an American, exclusion from a public venue *feels* like it should be a Constitutional issue - but of course it's not, since the affected venues are owned by a private company. In the reactions I've seen to the MSG stories, people have called for a ban on live facial recognition. By itself that's probably not going to be enough, now that this compost heap of worms has been opened; we are going to need legislation to underpin the right to assemble in privately-owned public spaces. Such a right sort of exists already in the conditions baked into many relevant local licensing laws that require venue operators to be the real-world equivalent of common carriers in telecommunications, who are not allowed to pick and choose whose data they will carry.

In a fourth MSG incident, a lawyer who is suing Madison Square Garden for barring him from entering tricked the cameras at the MSG-owned Beacon Theater by disguising himself with a beard and a baseball cap. He didn't exactly need to, as his firm had won a restraining order requiring MSG to let its lawyers into its venues (the case continues).

In that case, MSG's lawyer told the court that barring opposition lawyers was essential to protect the company: "It's not feasible for any entertainment venue to operate any other way."

Since when? At the New York Times, Kashmir Hill explains that the company adopted this policy last summer and depends on the photos displayed on law firms' websites to feed into its facial recognition to look for matches. But really the answer can only be: since the technology became available to enforce such a ban. It is a clear case where the availability of a technology leads to worse behavior on the part of its owner.

In 1996, the software engineer turned essayist and novelist Ellen Ullman wrote about exactly this with respect to databases: they infect their owners with the desire to use their new capabilities. In one of her examples, a man suddenly realized he could monitor what his long-trusted secretary did all day. In another, a system to help ensure AIDS patients were getting all the benefits they were entitled to slowly morphed into a system for checking entitlement. In the case of facial recognition, its availability infinitely extends the British Tories' concept of the hostile environment.


Illustrations: The Rockettes performing in 2008 (via skividal at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 4, 2022

Sovereign stack

But first, a note: Cliff Stanford, who in 1993 founded the first Internet Service Provider to offer access to consumers in the UK, died this week. Stanford's Demon Internet was my first ISP, and I well remember having to visit their office so they could personally debug my connection, which required users to precisely configure a bit of code designed for packet radio (imagine getting that sort of service now!). Simon Rockman has a far better-informed obit than I could ever write.

***

On Monday, four days after Russia invaded Ukraine, the Ukrainian minister for digital transformation, Mykhailo Fedorov, sent a letter (PDF) to the Internet Corporation for Assigned Names and Numbers and asked it to shut down Russian country code domains such as .ru, .рф, and .su. Quick background: ICANN manages the Internet's domain name system, the infrastructure that turns the human-readable names for websites and email addresses that you type in into the routing numbers computers actually use to get your communications to where you want them to go. Fedorov also asked ICANN to shut down the DNS root servers located in Russia, and plans a separate letter to request the revocation of all numbered Internet addresses in use by Russian members of RIPE-NCC, the registry that allocates Internet numbers in Europe and West Asia.

Shorn of the alphabet soup, what Fedorov is asking ICANN to do is sanction Russia by using technical means to block both incoming (we can't get to their domains) and outgoing (they can't get to ours) Internet access, on the basis that Russia uses the Internet to spread propaganda, disinformation, hate speech and the promotion of violence.

ICANN's refusal (PDF) came quickly. For numerous reasons, ICANN is right to refuse, as the Internet Society, Access Now, and others have all said.

Internet old-timers would say that ICANN's job is management, not governance. This is a long-running argument going all the way back to 1998, when ICANN was created to take over from the previous management, the University of Southern California computer scientist Jon Postel. Among other things, Postel set up much of the domain name system, selecting among submitted proposals to run registries for both generic top-level domains (.com and .net, for example) and country code domains (such as .uk and .ru). Especially in its early years, digital rights groups watched ICANN with distrust, concerned that it would stray into censorship at the behest of one or another government instead of focusing on its actual job, ensuring the stability and security of the network's operation.

For much of its history ICANN was accountable to the US National Telecommunications and Information Administration, part of the Department of Commerce. It became formally independent as a multistakeholder organization in 2016, after much wrangling over how to construct the new model.

This history matters because the alternative to ICANN was transitioning its functions to the International Telecommunications Union, an agency of the United Nations, a solution the Internet community generally opposed, then and now. Just a couple of weeks ago, Russia and China began a joint push towards greater state control, which they intended to present this week to the ITU's World Telecommunication Standardization Assembly. Their goal is to redesign the Internet to make it more amenable to government control, exactly the outcome everyone from Internet pioneers to modern human rights activists seeks to avoid.

So, now. Shutting down the DNS at the request of one country would put ICANN exactly where it shouldn't be: making value judgments about who should have access.

More to the specific situation, shutting off Russian access would be counterproductive. The state shut down the last remaining opposition TV outlet on Thursday, along with the last independent radio station. Many of the remaining independent journalists are leaving the country. Recognizing this, the BBC is turning its short-wave radio service back on. But other than that, the Internet is the only remaining possibility most Russians have of accessing independent news sources - and Russia's censorship bureau is already threatening to block Wikipedia if it doesn't cover the Ukraine invasion to its satisfaction.

In fact, Russia has long been working towards a totally-controlled national network that can function independently of the rest of the Internet, like the one China already has. As The Economist writes, China is way ahead; it has 25 years of investment in its Great Firewall, and owns its entire national "stack". That is, it has domestic companies that make chips, write software, and provide services. Russia is far more dependent on foreign companies to provide many of the pieces necessary to fill out the "sovereign stack" it mandated in 2019 legislation. In July 2021, Russia tested disconnecting its nascent "Runet" from the Internet, though little is known about the results.

There are other, more appropriate channels for achieving Fedorov's goal. The most obvious are the usual social media suspects and their ability to delete fake accounts and bots and label or remove misinformation. Facebook, Google, and Twitter all moved quickly to block Russian state media from running ads on their platforms or, in Facebook's case, monetizing content. Since then, Google has paused all ad sales in Russia. The economic sanctions enacted by many countries and the crash in the ruble should shut down Russians' access to most Western ecommerce. Many countries are kicking Russia's state-media channels off the air.

This war is a week old. It will end - sometime. It will not pay in the long term (assuming we have one) to lock Russian citizens, many of whom oppose the war, into a state media-controlled echo chamber. Our best hope is to stay connected and find ways to remediate the damage, as painful as that is.


Illustrations: Sunflowers under a blue sky (by Inna Radetskaya at Wikimedia).


May 14, 2021

Pre-crime

Much is being written about this week's Queen's speech, which laid out plans to restrict protests (the Police, Crime, Sentencing, and Courts bill), relax planning measures to help developers override communities, and require photo ID in order to vote even though millions of voters have neither passport nor driver's license and there was just one conviction for voting fraud in the 2019 general election. We, however, will focus here on the Online Safety bill, which includes age verification and new rules for social media content moderation.

At Politico, technology correspondent Mark Scott picks three provisions: the exemption granting politicians free rein on social media; the move to require moderation of content that is not illegal or criminal (however unpleasant it may be); and the carve-outs for "recognised news publishers". I take that to mean they wanted to avoid triggering the opposition of media moguls like Rupert Murdoch. Scott read it as "journalists".

The carve-out for politicians directly contradicts a crucial finding in last week's Facebook oversight board ruling on the suspension of former US president Donald Trump's account: "The same rules should apply to all users of the platform; but context matters when assessing issues of causality and the probability and imminence of harm. What is important is the degree of influence that a user has over other users." Politicians, in other words, may not be more special than other influencers. Given the history of this particular government, it's easy to be cynical about this exemption.

In 2019, Heather Burns, now policy manager for the Open Rights Group, predicted this outcome while watching a Parliamentary debate on the white paper: "Boris Johnson's government, in whatever communication strategy it is following, is not going to self-regulate its own speech. It is going to double down on hard-regulating ours." At ORG's blog, Burns has critically analyzed the final bill.

Few have noticed the not-so-hidden developing economic agenda accompanying the government's intended "world-leading package of online safety measures". Jen Persson, director of the children's rights advocacy group DefendDigitalMe, is the exception, pointing out that in May 2020 the Department of Culture, Media, and Sport released a report that envisions the UK as a world leader in "Safety Tech". In other words, the government views online safety (PDF; see Annex C) as not just an aspirational goal for the country's schools and citizens but also as a growing export market the UK can lead.

For years, Persson has been tirelessly highlighting the extent to which children's online use is monitored. Effectively, monitoring software watches every use of any school-owned device and whenever the child is logged into their school G Suite account; some types can even record photos of the child at home, a practice that became notorious when it was tried in Pennsylvania.

Meanwhile, outside of DefendDigitalMe's work - for example its case study of eSafe and discussion of NetSupport DNA and this discussion of school safeguarding - we know disturbingly little about the different vendors, how they fit together in the education ecosystem, how their software works, how capabilities vary from vendor to vendor, how well they handle multiple languages, what they block, what data they collect, how they determine risk, what inferences are drawn and retained and by whom, and the rate of errors and their consequences. We don't even really know if any of it works - or what "works" means. "Safer online" does not provide any standard against which the cost to children's human rights can be measured. Decades of government policy have all trended toward increased surveillance and filtering, yet wherever "there" is we never seem to arrive. DefendDigitalMe has called for far greater transparency.

Persson notes both mission creep and scope creep: "The scope has shifted from what was monitored to who is being monitored, then what they're being monitored for." The move from harmful and unlawful content to lawful but "harmful" content is what's being proposed now, and along with that, Persson says, "children being assessed for potential risk". The controversial Prevent program is about this: monitoring children for signs of radicalization. For their safety, of course.

UK children's rights campaigners have long said that successive UK governments have consistently used children as test subjects for the controversial policies they wish to impose on adults, normalizing them early. Persson suggests the next market for safetytech could be employers monitoring employees for mental health issues. I imagine elderly people.

DCMS's comments support market expansion: "Throughout the consultations undertaken when compiling this report there was a sector consensus that the UK is likely to see its first Safety Tech unicorn (i.e. a company worth over $1bn) emerge in the coming years, with three other companies also demonstrating the potential to hit unicorn status within the early 2020s. Unicorns reflect their namesake - they are incredibly rare, and the UK has to date created 77 unicorn businesses across all sectors (as of Q4 2019)." (Are they counting the much-litigated Autonomy?)

There's something peculiarly ghastly about this government's staking the UK's post-Brexit economic success on exporting censorship and surveillance to the rest of the world, especially alongside its stated desire to opt out of parts of human rights law. This is what "global Britain" wants to be known for?

Illustrations: Unicorn sculpture at York Crown Court (by Tim Green via Wikimedia).


January 22, 2021

In the balance

As the year gets going, two conflicts look like setting precedents for Internet regulation: Australia's push to require platforms to pay license fees for linking to their articles; and Facebook's pending decision whether to make former president Donald Trump's ban permanent, as Twitter already has.

Facebook has referred Trump's case to its new Oversight Board and asked it to make policy recommendations for political leaders. The Board says it will consider whether Trump's content violated Facebook community standards and "values", and whether its removal respected human rights standards. It expects to report within 90 days; the decision will be binding on Facebook.

On Twitter, Kate Klonick, an assistant professor at St. John's University of Law, who has been following the Oversight Board's creation and development in detail, says the important aspect is not the inevitably polarizing decision itself, but the creation of what she hopes will be a "transparent global process to adjudicate these human rights issues of speech". In a Yale Law Journal article documenting the board's history so far, she suggests that it could set a precedent for collaborative governance of private platforms.

Or - and this seems more likely - it could become the place where Facebook dumps the controversial cases where making its own decision gains the company nothing. Trump is arguably one of these. No matter how much money Trump's presidential campaign (which seems unlikely to have any future) netted the company, it surely must be a drop in the ocean of its overall revenues. With antitrust suits pending and a politically controversial decision, why *wouldn't* Facebook want to hand it off? Would the company do the same in a case where the company's business model was at stake, though? If it does and the decision goes against Facebook's immediate business interests, will shareholders sue?

Those questions won't be answered for some years. Meanwhile, this initial case will be a milestone in Internet history, as Klonick says. If the board does not create durable principles that can be applied across other countries and political systems, it will have failed. The larger question, however, which is the circulation of deliberate lies and misinformation, is more complex.

For that, letters sent this week by US Congress members Anna Eshoo (D-CA) and Tom Malinowski (D-NJ) may be more germane: they have asked the CEOs of Facebook, Google, YouTube, and Twitter to alter their algorithms to stop promoting conspiracy theories at scale. Facebook has been able to ignore previous complaints it was inciting violence in markets less essential to its bottom line and of less personal significance.

The Australian case is smaller, and kind of a rerun, but still interesting. We noted in September that the Australian government had announced the draft News Media Bargaining Code, a law requiring Google and Facebook (to start with) to negotiate license fees for displaying snippets of news articles. By including YouTube, user postings, and search engine results, Australia hoped to ensure the companies could not avoid the law by shutting down, which was what happened in 2014 when Spain enacted a similar law that caught only Google News. Early reports indicated that its withdrawal resulted in a dramatic loss of traffic to publishers' sites.

However, by 2015, Spain's Association of Newspaper Editors was saying members were reporting just a 12% loss of traffic, and a 2019 assessment argues that in fact the closure (which persists) made little long-term difference to publishers. If this is true, it's unarguably better for publishers not to be dependent on a third-party company to send them traffic out of the goodness of their hearts. The more likely underlying reality, however, is that people have learned to use generic search engines and social media to find news stories - in which case the Australian law could still be damaging to publishers' revenues.

It is, as journalist Michael West points out, exceptionally difficult to tease out what portion of Google's or Facebook's revenues are attributable to news content. West argues that a better solution to those companies' rise is regulating their power and taxing them appropriately; neither Google nor Facebook is in the business of reporting the news, nor in direct competition with the traditional publishers - the biggest of which, in Australia, are owned by Rupert Murdoch and so filled with climate change denial that Murdoch's own son left the company because of it.

In December, Google and Facebook won a compromise that will allow Google to include in the negotiations the value it brings in the form of traffic; limit the data it has to share with publishers; and lower the requirement for platforms to share algorithm changes with the publishers. Prediction: the publishers aren't going to wind up getting much out of this.

For the rest of us, though, the notion that users could be stopped from sharing news links (as Facebook is threatening) should be alarming; open, royalty-free linking, as web inventor Tim Berners-Lee told Bloomberg, is the fundamental characteristic of the web. We take the web so much for granted now that it's easy to forget that the biggest decision Berners-Lee made, with the backing of his employers at CERN, was to make it open instead of proprietary. The Australian law is the latest attempt to modify that decision. I wish I could say it will never catch on.

Illustrations: Justitia outside the Delft Town Hall, the Netherlands (via Dennis Jarvis at Wikimedia).


May 24, 2019

Name change

In 2014, six months after the Snowden revelations, engineers began discussing how to harden the Internet against passive pervasive surveillance. Among the results have been efforts like Let's Encrypt, EFF's Privacy Badger, and HTTPS Everywhere. Real inroads have been made into closing some of the Internet's affordances for surveillance and improving security for everyone.

Arguably the biggest remaining serious hole is the domain name system, which was created in 1983. The DNS's historical importance is widely underrated; it was essential in making email and the web usable enough for mass adoption before search engines. Then it stagnated. Today, this crucial piece of Internet infrastructure still behaves as if everyone on the Internet can trust each other. We know the Internet doesn't live there any more; in February the Internet Corporation for Assigned Names and Numbers, which manages the DNS, warned of large-scale spoofing and hijacking attacks. The NSA is known to have exploited it, too.

The problem is the unprotected channel between the computer into which we type human-readable names such as pelicancrossing.net and the computers that translate those names into numbered addresses the Internet's routers understand, such as 216.92.220.214. The fact that routers all trust each other is routinely exploited for the captive portals we often see when we connect to public wi-fi systems. These are the pages that universities, cafes, and hotels set up to redirect Internet-bound traffic to their own page so they can force us to log in, pay for access, or accept terms and conditions. Most of us barely think about it, but old-timers and security people see it as a technical abuse of the system.
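To make the "unprotected channel" concrete, here is a minimal sketch of what a classic DNS query actually looks like on the wire, per RFC 1035. The function name and query ID are my own illustration; the point is that this message travels as cleartext UDP, readable and spoofable by anyone on the path between you and the resolver.

```python
import struct

def build_dns_query(name, qtype=1, query_id=0x1234):
    """Build a minimal DNS query message (RFC 1035) for an A record.

    The returned bytes are exactly what classic DNS sends over UDP
    port 53 - in the clear, with nothing authenticating the answer.
    """
    # Header: ID, flags (recursion desired), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME: each dot-separated label prefixed by its length, ending in 0x00
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    # Question section: QNAME + QTYPE (1 = A record) + QCLASS (1 = Internet)
    return header + qname + struct.pack(">HH", qtype, 1)

query = build_dns_query("pelicancrossing.net")
```

Because the domain name sits in the packet as plain labels ("pelicancrossing", "net"), any on-path observer - an ISP, a hotel network, a passive tap - can log every site a user looks up.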

Several hijacking incidents raised awareness of DNS's vulnerability as long ago as 1998, when security researchers Matt Blaze and Steve Bellovin discussed it at length at Computers, Freedom, and Privacy. Twenty-one years on, there have been numerous proposals for securing the DNS, most notably DNSSEC, which offers an upwards chain of authentication. However, while DNSSEC solves validation, it still leaves the connection open to logging and passive surveillance, and the difficulty of implementing it has meant that since 2010, when ICANN signed the global DNS root, uptake has barely reached 14% worldwide.

In 2018, the IETF adopted DNS-over-HTTPS as a standard. Essentially, this sends DNS requests over the same secure channel browsers use to visit websites. Adoption is expected to proceed rapidly because it's being backed by Mozilla, Google, and Cloudflare, who jointly intend to turn it on by default in Chrome and Firefox. In a public discussion at this week's Internet Service Providers Association conference, a fellow panelist suggested that moving DNS queries to the application level opens up the possibility that two different apps on the same device might use different DNS resolvers - and get different responses to the same domain name.
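The mechanics of "sending DNS over the same secure channel browsers use" are specified in RFC 8484: the unchanged DNS wire message is base64url-encoded and carried in an ordinary HTTPS GET request. A rough sketch (the helper name and the choice of Cloudflare's public endpoint are illustrative, not prescriptive):

```python
import base64
import struct

def doh_get_url(resolver, name):
    """Form an RFC 8484 DNS-over-HTTPS GET URL for an A-record lookup.

    The DNS message itself is identical to the classic wire format;
    it simply rides inside HTTPS, so on the network it looks like
    any other web request.
    """
    # RFC 8484 recommends query ID 0 for GET requests, to aid caching
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(p)]) + p.encode("ascii") for p in name.split(".")
    ) + b"\x00"
    wire = header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    # base64url without padding, carried in the "dns" query parameter
    dns_param = base64.urlsafe_b64encode(wire).rstrip(b"=").decode("ascii")
    return f"{resolver}?dns={dns_param}"

url = doh_get_url("https://cloudflare-dns.com/dns-query", "example.com")
```

This is why DoH traffic can't be filtered or logged by inspecting port 53: the lookup is indistinguishable from the rest of a user's HTTPS traffic, and it goes to whichever resolver the application chose rather than the one the local network provides.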

Britain's first public notice of DoH came a couple of weeks ago in the Sunday Times, which billed it as "Warning over Google Chrome's new threat to children". This is a wild overstatement, but it's not entirely false: DoH will allow users to bypass the parts of Britain's filtering system that depend on hijacking DNS requests to divert visitors to blank pages or warnings. An engineer would probably argue that if Britain's many-faceted filtering system is affected it's because the system relies on workarounds that shouldn't have existed in the first place. In addition, because DoH sends DNS requests over web connections, the traffic can't be logged or distinguished from the mass of web traffic, so it will also render moot some of the UK's (and EU's) data retention rules.

For similar reasons, DoH will break captive portals in unfriendly ways. A browser with DoH turned on by default will ignore the hotel/cafe/university settings and instead direct DNS queries via an encrypted channel to whatever resolver it's been set to use. If the network requires authentication via a portal, the connection will fail - a usability problem that will have to be solved.

There are other legitimate concerns. Bypassing the DNS resolvers run by local ISPs in favor of those belonging to, say, Google, Cloudflare, and Cisco, which bought OpenDNS in 2015, will weaken local ISPs' control over the connections they supply. This is both good and bad: ISPs will be unable to insert their own ads - but they also can't use DNS data to identify and block malware as many do now. The move to DoH risks further centralizing the Internet's core infrastructure and strengthening the power of companies most of us already feel have too much control.

The general consensus, however, is that like it or not, this thing is coming. Everyone is still scrambling to work out exactly what to think about it and what needs to be done to mitigate accompanying risks, as well as find solutions to the resulting problems. It was clear from the ISPA conference panel that everyone has mixed feelings, though the exact mix of those feelings - and which aspects are identified as problems - differs among ISPs, rights activists, and security practitioners. But it comes down to this: whether you like this particular proposal or not, the DNS cannot be allowed to remain in its present insecure state. If you don't want DoH, come up with a better proposal.


Illustrations: DNS diagram (via Б.Өлзий at Wikimedia).


January 25, 2019

Reversal of fortunes

It may seem unfair to keep busting on the explosion of the Internet's origin myths, but documenting what happens to the beliefs surrounding the beginning of a new technology may help foster more rational thinking next time.

Today's two cherished early-Internet beliefs: 1) the Internet was designed to withstand a bomb outage; 2) the Internet is impossible to censor. The first of these is true - the history books are clear on this - but it was taken to mean that the Internet could withstand all damage. That's just not true; it can certainly be badly disrupted on a national or regional basis.

While the Internet was new, a favorite route to overload was introducing a new application - the web, for example. Around 1996, Peter Dawe, the founder of one of Britain's first two ISPs, predicted that video would kill the Internet. For "kill" read "slow down horribly". Bear in mind that this was BB - before broadband - so an 11MB video file took hours to trickle in. Stream? Ha!

In 1995, Bob Metcalfe, the co-inventor of ethernet, predicted that the Internet would start to collapse in 1996. In 1997, he literally ate his column as penance for being wrong.

It was weird: with one part of their brains people were staking their livelihoods on online businesses, yet with another part they believed the Internet was always vulnerable. My favorite was Simson Garfinkel, writing "Fifty Ways to Kill the Internet" for Wired in 1997, who nailed the best killswitch: "Buy ten backhoes." Underneath all the rhetoric about virtuality the Internet remains a physical network of cables. You'd probably need more than ten backhoes today, but it's still a finite number.

People have given up these worries even though parts of the Internet are actually being blacked out - by governments. In the acute form either access providers (ISPs, mobile networks) are ordered to shut down, or the government orders blocks on widely-used social media that people use to distribute news (and false news) and coordinate action, such as Twitter, Facebook, or WhatsApp.

In 2018, governments shutting down "the Internet" became an increasingly frequent fixture of the fortnightly Open Society Foundation Information Program News Digest. The list for 2018 is long, as Access Now says. At New America, Justin Sherman predicts that 2019 will see a rise in Internet blackouts - and I doubt he'll have to eat his pixels. The Democratic Republic of Congo was first, on January 1, soon followed by Zimbabwe.

There's general agreement that Internet shutdowns are bad for both democracy and the economy. In a 2016 study, the Brookings Institution estimated that Internet shutdowns cost countries $2.4 billion in 2015 (PDF), an amount that surely rises as the Internet becomes more deeply embedded in our infrastructure.

But the less-worse thing about the acute form is that it's visible to both internal and external actors. The chronic form, the second of our "things they thought couldn't be done in 1993", is long-term and less visible, and for that reason is the more dangerous of the two. The notion that censoring the Internet is impossible was best expressed by EFF co-founder John Gilmore in 1993: "The Internet perceives censorship as damage and routes around it". This was never a happy anthropomorphization of a computer network; more correctly, *people* on the Internet perceive censorship as damage and route around it. Even today, ejected Twitterers head to Gab; disaffected 4chan users create 8chan. But "routing around the damage" only works as long as open protocols permit anyone to build a new service. No one suggests that *Facebook* regards censorship as damage and routes around it; instead, Facebook applies unaccountable censorship we don't see or understand. The shift from hundreds of dial-up ISPs to a handful of broadband providers is part of this problem: centralization.

The country that has most publicly and comprehensively defied Gilmore's aphorism is China; in the New York Times, Raymond Zhong recently traced its strategy. At Technology Review, James Griffiths reports that the country is beginning to export its censorship via malware infestations and DDoS attacks, while Abdi Latif Dahir writes at Quartz that it is also exporting digital surveillance to African countries such as Morocco, Egypt, and Libya inside the infrastructure it's helping them build as part of its digital Silk Road.

The Guardian offers a guide to what Internet use is like in Russia, Cuba, India, and China. Additional insight comes from Chinese professor Bai Tongdong, who complains in the South China Morning Post that Westerners opposing Google's Dragonfly censored search engine project do not understand the "paternalism" they are displaying in "deciding the fate of Chinese Internet users" without considering their opinion.

Mini-shutdowns are endemic in democratic countries: unfair copyright takedowns, the UK's web blocking, and EU law limiting hate speech. "From being the colonizers of cyberspace, Americans are now being colonized by the standards adopted in Brussels and Berlin," Jacob Mchangama complains at Quillette.

In the mid-1990s, Americans could believe they were exporting the First Amendment. Another EFF co-founder, John Perry Barlow, was more right than he'd have liked when, in a January 1992 column for Communications of the ACM, he called the US First Amendment "a local ordinance". That is much less true of the control being built into our infrastructure now.


Illustrations: The old threat model: Seabees remove corroded zinc anodes from an undersea cable (via Wikimedia, from the US Navy site.)

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

January 17, 2019

Misforgotten

European_Court_of_Justice_(ECJ)_in_Luxembourg_with_flags.jpg"It's amazing. We're all just sitting here having lunch like nothing's happening, but..." This was on Tuesday, as the British Parliament was getting ready to vote down the Brexit deal. This is definitely a form of privilege, but it's hard to say whether it's confidence born of knowing your nation's democracy is 900 years old, or the aristocrats-on-the-verge denial seen as World War I or the US Civil War broke out.

Either way, it's a reminder that for many people historical events proceed in the background while they're trying to get lunch or take the kids to school. This despite the fact that all of us in the UK and the US are currently hostages to a paralyzed government. The only winner in either case is the politics of disgust, and the resulting damage will be felt for decades. Meanwhile, everything else is overshadowed.

One of the more interesting developments of the past digital week is the European advocate general's preliminary opinion that the right to be forgotten, part of data protection law, should not be enforceable outside the EU. In other words, Google, which brought the case, should not have to prevent access to material to those mounting searches from the rest of the world. The European Court of Justice - one of the things British prime minister Theresa May has most wanted the UK to leave behind since her days as Home Secretary - typically follows these preliminary opinions.

The right to be forgotten is one piece of a wider dispute that one could characterize as the Internet versus national jurisdiction. The broader debate includes who gets access to data stored in another country, who gets to crack crypto, and who gets to spy on whose citizens.

This particular story began in France, where the Commission Nationale de l'Informatique et des Libertés (CNIL), the French data protection regulator, fined Google €100,000 for selectively removing a particular person's name from its search results on just its French site. CNIL argued that instead the company should delink it worldwide. You can see their point: otherwise, anyone can bypass the removal by switching to .com or .co.jp. On the other hand, following that logic imposes EU law on other countries, overriding protections such as the US First Amendment. Americans in particular tend to regard the right to be forgotten with the sort of angry horror of Lady Bracknell contemplating a handbag. Google applied to the European Court of Justice to override CNIL and vacate the fine.

A group of eight digital rights NGOs, led by Article 19 and including Derechos Digitales, the Center for Democracy and Technology, the Clinique d'intérêt public et de politique d'Internet du Canada (CIPPIC), the Electronic Frontier Foundation, Human Rights Watch, Open Net Korea, and Pen International, welcomed the ruling. Many others would certainly agree.

The arguments about jurisdiction and censorship were, like so much else, foreseen early. By 1991 or thereabouts, the question of whether the Internet would be open everywhere or devolve to lowest-common-denominator censorship was frequently debated, particularly after the United States v. Thomas case that featured a clash of community standards between Tennessee and California. If you say that every country has the right to impose its standards on the rest of the world, it's unclear what would be left other than a few Disney characters and some cat videos.

France has figured in several of these disputes: in (I think) the first international case of this kind, in 2000, it was a French court that ruled that the sale of Nazi memorabilia on Yahoo!'s site was illegal; after trying to argue that France was trying to rule over something it could not control, Yahoo! banned the sales on its French auction site and then, eventually, worldwide.

Data protection law gave these debates a new and practical twist. The origins of this particular case go back to 2014, when the European Court of Justice ruled in Google Spain v AEPD and Mario Costeja González that search engines must remove links to web pages that turn up in a name search and contain information that is irrelevant, inadequate, or out of date. The ruling arguably sought to redress the imbalance of power between individuals and the corporations publishing information about them, while weighing that against free expression. Finding this kind of difficult balance, the law scholar Judith Rauhofer argued at that year's Computers, Freedom, and Privacy, is what courts *do*. The court required search engines to remove from the results that show up in a *name* search the link to the original material; it did not require the original websites to remove the material entirely or require the link's removal from other search results. The ruling removed, if you like, a specific type of power amplification, but not the signal.

How far the search engines have to go is the question the ECJ is now trying to settle. This is one of those cases where no one gets everything they want because the perfect is the enemy of the good. The people who want their past histories delinked from their names don't get a complete solution, and no one country gets to decide what people in other countries can see. Unfortunately, the real winner appears to be geofencing, which everyone hates.


Illustrations: The European Court of Justice in Luxembourg (via Wikimedia).


November 16, 2018

Septet

bush-gore-hanging-chad-florida.jpgThis week catches up on some things we've overlooked. Among them, in response to a Twitter comment: two weeks ago, on November 2, net.wars started its 18th unbroken year of Fridays.

Last year, the writer and documentary filmmaker Astra Taylor coined the term "fauxtomation" to describe things that are hyped as AI but actually rely on the low-paid labor of numerous humans. In The Automation Charade she examines the consequences: undervaluing human labor and making it both invisible and insecure. Along these lines, it was fascinating to read that in Kenya, workers drawn from one of the poorest places in the world are paid to draw outlines around every object in an image in order to help train AI systems for self-driving cars. How many of us look at a self-driving car and see someone tracing every pixel?

***

Last Friday, Index on Censorship launched Demonising the media: Threats to journalists in Europe, which documents journalists' diminishing safety in western democracies. Italy takes the EU prize, with 83 verified physical assaults, followed by Spain with 38 and France with 36. Overall, the report found 437 verified incidents of arrest or detention and 697 verified incidents of intimidation. It's tempting - as in the White House dispute with CNN's Jim Acosta - to hope for solidarity in response, but it's equally likely that years of politicization have left whole sectors of the press as divided as any bullying politician could wish.

***

We utterly missed the UK Supreme Court's June decision in the dispute pitting ISPs against "luxury" brands including Cartier, Mont Blanc, and International Watch Company. The goods manufacturers wanted to force BT, EE, and the three other original defendants, which jointly provide 90% of Britain's consumer Internet access, to block more than 46,000 websites that were marketing and selling counterfeits. In 2014, the High Court ordered the blocks. In 2016, the Court of Appeal upheld that on the basis that without ISPs no one could access those websites. The final appeal was solely about who pays for these blocks. The Court of Appeal had said: ISPs. The Supreme Court decided instead that under English law innocent bystanders shouldn't pay for solving other people's problems, especially when solving them benefits only those others. This seems a good deal for the rest of us, too: being required to pay may constrain blocking demands to reasonable levels. It's particularly welcome after years of expanded blocking for everything from copyright, hate speech, and libel to data retention and interception that neither we nor ISPs much want in the first place.

***

For the first time the Information Commissioner's Office has used the Computer Misuse Act rather than data protection law in a prosecution. Mustafa Kasim, who worked for Nationwide Accident Repair Services, will serve six months in prison for using former colleagues' logins to access thousands of customer records and spam the owners with nuisance calls. While the case reminds us that the CMA still catches only the small fry, we see the ICO's point.

***

In finally catching up with Douglas Rushkoff's Throwing Rocks at the Google Bus, the section on cashless societies and local currencies reminded us that in the 1960s and 1970s, New Yorkers considered it acceptable to tip with subway tokens, even in the best restaurants. Who now would leave a Metro Card? Currencies may be local or national; cashlessness is global. It may be great for those who don't need to think about how much they spend, but it means all transactions are intermediated, with a percentage skimmed off the top for the middlefolk. The costs of cash have been invisible to us, as Dave Birch says, but it is public infrastructure. Cashlessness privatizes that without any debate about the social benefits or costs. How centralized will this new infrastructure become? What happens to sectors that aren't commercially valuable? When do those commissions start to rise? What power will we have to push back? Even on-the-brink Sweden is reportedly rethinking its approach for just these reasons. In a survey, only 25% wanted a fully cashless society.

***

Incredibly, 18 years after chads hung and people deposed in Bush versus Gore, ballots are still being designed in ways that confuse voters, even in Broward County, which should have learned better. The Washington Post tells us that in both New York and Florida ballot designs left people confused (seeing them, we can see why). For UK voters accustomed to a bit of paper with big names and boxes to check with a stubby pencil, it's baffling. Granted, the multiple federal races, state races, local officers, judges, referendums, and propositions in an average US election make ballot design a far more complex problem. There is advice available from the US Election Assistance Commission, which publishes design best practices, but I'm reliably told it's nonetheless difficult to do well. On Twitter, Dana Chisnell provides a series of links that taken together explain some background. Among them is this one from the Center for Civic Design, which explains why voting in the US is *hard* - and not just because of the ballots.

***

Finally, a word of advice. No matter how cool it sounds, you do not want a solar-powered, radio-controlled watch. Especially not for travel. TMOT.

Illustrations: Chad 2000.


July 13, 2018

Exporting the Second Amendment

Ultimaker_3D_Printer_(16656068207).jpgOne thing about a fast-moving world in a time of technological change is that it's easy to lose track of things in the onslaught. This week alone: the UK Information Commissioner's Office fined Facebook the pre-GDPR maximum £500,000; Uber is firing human safety drivers because it's scaling back its tests of autonomous vehicles; and Twitter is currently deleting more than 1 million fake accounts a *day*.

Until a couple of days ago, one such forgotten moment in internet history was the 2013 takedown of the 3D printing designs site Defcad after the US government demanded it remove its published blueprints for various guns. In the years since, Andy Greenberg writes at Wired, Defcad owner Cody Wilson went on to sue the US Department of Justice, arguing that in demanding removal of his gun blueprints from the internet the DoJ was violating both the First Amendment (freedom of speech) and the Second (the right to bear arms). Wilson has now won his case in a settlement.

It's impossible for anyone with a long memory of the internet's development to read this and not be immediately reminded of the early 1990s battles surrounding the PC-based encryption software PGP. In a 1993 interview for a Guardian piece about the investigation into him, PGP creator Phil Zimmermann explicitly argued that keeping strong cryptography available for public use was, like the right to bear arms enshrined in the Second Amendment, essential to limit the power of the state.

The reality is that crypto is much more of a leveler than guns are. Few governments are so small that a group of civilians can match their military might. In World War II, only governments had enough resources to devise and crack the strongest encryption. Today, which government has a cluster the size of GAFA's?

More immediately relevant is the fact that the law the DoJ used in both cases - Wilson and Zimmermann - is the same one: the International Traffic in Arms Regulations. Based on crypto's role in World War II, ITAR restricted strong encryption as a weapon of strategic importance. The Zimmermann investigation focused on whether he had exported PGP to other countries by uploading it to the internet. The contemporaneous Computers, Freedom, and Privacy conferences quivered with impassioned fury over the US's insistence that export restrictions were essential. It all changed around 1996, when cryptographer Daniel Bernstein won his court case against the US government over ITAR's restrictions. By then cryptography's importance in ecommerce made restrictions untenable anyway. Lifting the restrictions did not end the arguments over law enforcement access; these continue today.

The battles over cryptography, however, are about a technology that is powerfully important in preserving the privacy and security of everyone's data, from banks to retailers to massive numbers of innocent citizens. Human rights organizations argue that the vast majority, who are innocent, have a right to protect the confidentiality of the records of their conversations with their doctors, lawyers, and best friends. In addition, the issues surrounding encryption are the same irrespective of location and timing. For nearly three decades myriad governments have cited the dangers of terrorists, drug dealers, pedophiles, and organized crime in demanding free access to encrypted data. Similarly, privacy activists worldwide have responded with the need to protect journalists, whistleblowers, human rights activists, victims of domestic violence, and other vulnerable people from secret snooping and the wrongness of mass surveillance.

Arguments over guns, however, play out as differently outside the US as arguments about data protection, competition, and antitrust laws do. Put simply, outside the US there is no Second Amendment, and the idea that guns should be restricted is much less controversial. European friends often comment on how little Americans trust their government.

For this reason, it's likely that publishing blueprints for DIY guns, though now explicitly ruled legal in the US, will become a new excuse for censoring the internet for other governments. In the US, the Electronic Frontier Foundation backed Wilson as a matter of protecting free speech; it's doubtful that human rights organizations elsewhere will see gun designs in the same way.

One major change since this case first came up: 3D printing has not become anything like the mass phenomenon its proponents were predicting in 2013. Then, many thought 3D printing was the coming thing. Scientists like Hod Lipson were imagining the new shapes and functions strange materials composited molecule by molecule would imminently create. Then, few people had 3D printers in their homes.

But today...although 3D printing has made some inroads in manufacturing and prototyping, consumers still find 3D printers too expensive for their limited usefulness, even though they can be fun. Some gain access to them through Hackspaces/FabLabs/Makerspaces, but that movement, though important and valuable, seems similarly to have largely stalled a few years back. Lipson's future may still happen. But it isn't happening yet to any appreciable degree.

Instead, the future that's rushing at us is the Internet of Things, where the materials are largely familiar and what's different is that they're laced with electronics that make them programmable. There is more to worry about in "smart" guns than in readily downloadable designs for guns.


Illustrations: Ultimaker 3D printer in London, 2014 (via Wikimedia)


March 9, 2018

Signaling intelligence

smithsam-ASIdemo-slides.pngLast month, the British Home Office announced that it had a tool that can automatically detect 94% of Daesh propaganda with 99.995% accuracy. Sophos summarizes the press release to say that only 50 out of 1 million videos would require human review.

"It works by spotting subtle patterns in the extremist videos that distinguish them from normal content..." Marc Warner, CEO of London-based ASI Data Science, the company that developed the classifier, told Buzzfeed.

Yesterday, ASI, which numbers Skype co-founder Jaan Tallinn among its investors, presented its latest demo day in front of a packed house. Most of the lightning presentations focused on various projects its Fellows have led using its tools in collaboration with outside organizations such as Rolls Royce and the Financial Conduct Authority. Warner gave a short presentation of the Home Office extremism project that included little more detail than the press reports a month ago, to which my first reaction was: it sounds impossible.

That reaction is partly due to the many problems with AI, machine learning, and big data that have surfaced over the last couple of years. Either there are hidden biases, or the media reports are badly flawed, or the system appears to be telling us only things we already know.

Plus, it's so easy - and so much fun! - to mock the flawed technology. This week, for example, neural network trainer Janelle Shane showed off the results of some of her pranks. After confusing image classifiers with sheep that don't exist, goats in trees (birds! or giraffes!) and sheep painted orange (flowers!), she concludes, "...even top-notch algorithms are relying on probability and luck." Even more than humans, it appears that automated classifiers decide what they see based on what they expect to see and apply probability. If a human is holding it, it's probably a cat or dog; if it's in a tree it's not going to be a goat. And so on. The experience leads Shane to surmise that surrealism might be the way to sneak something past a neural net.

Something like this approach is probably part of what ASI's classifier does too (we were shown no details). As Sophos suggests, a lot of the signals ASI's algorithm is likely to use have nothing to do with the computer "seeing" or "interpreting" the images. Instead, it likely looks for known elements such as logos and facial images matched against known terrorism photos or videos. In addition, it can assess the cluster of friends surrounding the account that posted the video and look for profile information showing that the source has been known to post such material in the past. And some will be based on analyzing the language used in the video. From what ASI was saying, the claim the company is making is fairly specific: the algorithm is supposed to detect (specifically) Daesh videos, with a false positive rate of 0.005% and 94% of true positives.

These numbers - assuming they're not artifacts of computerish misunderstanding about what it's looking for - of course represent tradeoffs, as Patrick Ball explained to us last year. Do we want the algorithm to block all possible Daesh videos? Or are we willing to allow some through in the interests of honoring the value of freedom of expression and not blocking masses of perfectly legal and innocent material? That policy decision is not ASI's job.
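A little base-rate arithmetic makes that tradeoff concrete. This sketch assumes a prevalence of 100 genuine Daesh videos per million uploads - a number the press reports never supplied - and applies the claimed 94% detection rate and 0.005% false-positive rate:

```python
# Hedged sketch: what the claimed rates imply for human reviewers.
# The base rate (1 in 10,000) is an assumption, not a published figure.
def triage(total, base_rate, tpr, fpr):
    positives = total * base_rate          # genuine Daesh videos
    negatives = total - positives          # everything else
    true_pos = positives * tpr             # correctly flagged
    false_pos = negatives * fpr            # innocent videos wrongly flagged
    precision = true_pos / (true_pos + false_pos)
    return false_pos, true_pos, precision

fp, tp, prec = triage(1_000_000, 0.0001, 0.94, 0.00005)
print(round(fp), round(tp), round(prec, 2))  # ~50 false positives: the "50 out of 1 million"
```

On these assumptions, roughly a third of the flagged videos are innocent; make the true material rarer and precision drops further. That is exactly the tradeoff the policy-maker, not ASI, has to own.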

What was more confusing in the original reports is that the training dataset was said to have been "over 1,000 videos". That seems an incredibly small sample for testing a classifier that's going to be turned loose on a dataset of millions. At the demonstration, Warner's one new piece of information is that because that training set was indeed small, the project developed "synthetic data" to enlarge the training set to sufficient size. As gaming-the-system as that sounds, creating synthetic data to augment training data is a known technique. Without knowing more about the techniques ASI used to create its synthetic data it's hard to assess that work.
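To make "synthetic data" concrete, here is a generic sketch of one standard augmentation technique: jittering existing examples with small random noise to multiply a small training set. This is illustrative only; ASI did not disclose its actual method, and the feature vectors and labels below are invented:

```python
import random

# Generic data augmentation: enlarge a small labeled set by adding
# noise-jittered copies of each example. Not ASI's disclosed method.
def augment(samples, copies=10, noise=0.05, seed=0):
    rng = random.Random(seed)
    out = list(samples)                       # keep the originals
    for features, label in samples:
        for _ in range(copies):
            jittered = [x + rng.gauss(0, noise) for x in features]
            out.append((jittered, label))     # label carries over unchanged
    return out

# Two invented examples stand in for the real training set.
base = [([0.2, 0.9, 0.4], "extremist"), ([0.1, 0.0, 0.3], "benign")]
bigger = augment(base)
print(len(bigger))  # 2 originals + 2 * 10 jittered copies = 22
```

Scaled up, the same idea turns "over 1,000 videos" into tens of thousands of training examples - which is precisely why, without knowing how the synthetic examples were generated, it's hard to judge whether the enlarged set tells the classifier anything new.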

We would feel a lot more certain of all of these claims if the classifier had been through an independent peer review. The sensitivity of the material involved makes this tricky, and if there has been an outside review, we haven't been told about it.

But beyond that, the project to remove this material rests on certain assumptions. As speakers noted at the first conference run by VOX-Pol, an academic research network studying violent online political extremism, the "lone wolf" theory posits that individuals can be radicalized at home by viewing material on the internet. The assumption that this is true underpins the UK's censorship efforts. Yet this theory is contested: humans are highly social animals. Radicalization seems unlikely to take place in a vacuum. What - if any - is the pathway from viewing Daesh videos to becoming a terrorist attacker?

All these questions are beyond ASI's purview to answer. They'd probably be the first to say: they're only a hill of technology beans being asked to solve a mountain of social problems.

Illustrations: Slides from the demonstration (Sam Smith).


October 14, 2011

Think of the children

Give me smut and nothing but! - Tom Lehrer

Sex always sells, which is presumably why this week's British headlines have been dominated by the news that the UK's ISPs are to operate an opt-in system for porn. The imaginary sales conversations alone are worth any amount of flawed reporting:

ISP Customer service: Would you like porn with that?

Customer: Supersize me!

Sadly, the reporting was indeed flawed. Cameron, it turns out, was merely saying that new customers signing up with the four major consumer ISPs would be asked if they want parental filtering. So much less embarrassing. So much less fun.

Even so, it gave reporters such as Violet Blue, at ZDNet UK, a chance to complain about the lack of transparency and accountability of filtering systems.

Still, the fact that so many people could imagine that it's technically possible to turn "Internet porn" on and off as if by flipping a switch is alarming. If it were that easy, someone would have a nice business by now selling strap-on subscriptions the way cable operators do for "adult" TV channels. Instead, filtering is just one of several options for which ISPs, Web sites, and mobile phone operators do not charge.

One of the great myths of our time is that it's easy to stumble accidentally upon porn on the Internet. That, again, is television, where idly changing channels on a set-top box can indeed land you on the kind of smut that pleased Tom Lehrer. On the Internet, even with safe search turned off, it's relatively difficult to find porn accidentally - though very easy to find on purpose. (Especially since the advent of the .xxx top-level domain.)

It is, however, very easy for filtering systems to remove non-porn sites from view, which is why I generally turn off filters like "Safe search" or anything else that will interfere with my unfettered access to the Internet. I need to know that legitimate sources of information aren't being hidden by overactive filters. Plus, if it's easy to stumble over pornography accidentally, then as a journalist writing about the Net and in general opposing censorship, I should know that. I am better than average at constraining my searches so that they retrieve only the information I really want, which is a definite bias in this minuscule sample of one. But I can safely say that the only time I encounter unwanted anything-like-porn is in display ads on some sites that assume their primary audience is young men.

Eli Pariser, whose The Filter Bubble: What the Internet is Hiding From You I reviewed recently for ZDNet UK, does not talk in his book about filtering systems intended to block "inappropriate" material. But surely porn filtering is a broad-brush subcase of exactly what he's talking about: automated systems that personalize the Net based on your known preferences by displaying content they already "think" you like at the expense of content they think you don't want. If the technology companies were as good at this as the filtering people would like us to think, this weekend's Singularity Summit would be celebrating the success of artificial intelligence instead of still looking 20 to 40 years out.

If I had kids now, would I want "parental controls"? No, for a variety of reasons. For one thing, I don't really believe the controls keep them safe. What keeps them safe is knowing they can ask their parents about material and people's behavior that upsets them so they can learn how to deal with it. The real world they will inhabit someday will not obligingly hide everything that might disturb their equanimity.

But more important, our children's survival in the future will depend on being able to find the choices and information that are hidden from view. Just as the children of 25 years ago should have been taught touch typing, today's children should be learning the intricacies of using search to find the unknown. If today's filters have any usefulness at all, it's as a way of testing kids' ability to think ingeniously about how to bypass them.

Because: although it's very hard to filter out only *exactly* the material that matches your individual definition of "inappropriate", it's very easy to block indiscriminately according to an agenda that cares only about what doesn't appear. Pariser worries about the control that can be exercised over us as consumers, citizens, voters, and taxpayers if the Internet is the main source of news and personalization removes the less popular but more important stories of the day from view. I worry that as people read and access only the material they already agree with our societies will grow more and more polarized with little agreement even on basic facts. Northern Ireland, where for a long time children went to Catholic or Protestant-owned schools and were taught that the other group was inevitably going to Hell, is a good example of the consequences of this kind of intellectual segregation. Or, sadly, today's American political debates, where the right and left have so little common basis for reasoning that the nation seems too polarized to solve any of its very real problems.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 1, 2011

Free speech, not data

Congress shall make no law...abridging the freedom of speech...

Is data mining speech? This week, in issuing its ruling in the case of Sorrell v IMS Health, the Supreme Court of the United States took the view that it can be. The majority (6-3) opinion struck down a Vermont law that prohibited drug companies from mining physicians' prescription data for marketing purposes. While the ruling of course has no legal effect outside the US, the primary issue in the case - the use of aggregated patient data - is being considered in many countries, including the UK, and the key technical debate is relevant everywhere.

IMS Health is a new species of medical organization: it collects aggregated medical data and mines it for client pharmaceutical companies, who use the results to determine their strategies for marketing to doctors. Vermont's goal was to save money by encouraging doctors to prescribe lower-cost generic medications. The pharmaceutical companies know, however, that marketing to doctors is effective. IMS Health accordingly sued to get the law struck down, claiming that the law abrogated the company's free speech rights. NGOs from the digital - EFF and EPIC - to the not-so-digital - AARP - along with a host of medical organizations, filed amicus briefs arguing that patient information is confidential data that has never before been considered to fall within "free speech". The medical groups were concerned about the threat to trust between doctors and patients; EPIC and EFF added the more technical objection that the deidentification measures taken by IMS Health are inadequate.

At first glance, the SCOTUS ruling is pretty shocking. Why can't a state protect its population's privacy by limiting access to prescription data? How do marketers have free speech?

The court's objection - or rather, the majority opinion - was that the Vermont law is selective: it prohibits the particular use of this data for marketing but not other uses. That, to the six-judge majority, made the law censorship. The three remaining judges dissented, partly on privacy grounds, but mostly on the well-established basis that commercial speech typically enjoys a lower level of First Amendment protection than non-commercial speech.

When you are talking about traditional speech, censorship means selectively banning a type or source of content. Let's take Usenet in the early 1990s as an example. When spam became a problem, a group of community-minded volunteers devised cancellation practices that took note of this principle and defined spam according to the behavior involved in posting it. Deciding whether a particular posting was spam required no subjective judgments about who posted the message or whether it was a commercial ad. Instead, postings were scored against a set of published, objective criteria: x number of copies, posted to y number of newsgroups, over z amount of time; or off-topic for that particular newsgroup; or a binary file posted to a text-only newsgroup. In the Vermont case, if you can accept the argument that data mining is speech, as SCOTUS did, then the various uses of the data are content and therefore a law that bans only one of many possible uses or bans use by specified parties is censorship.
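The behavior-based approach described above can be sketched in a few lines of code. This is an illustration, not the actual Usenet cancellation conventions: the thresholds and the ".binaries" naming convention are hypothetical, but the principle is the one the column describes - a posting is judged purely on objective, published criteria, never on who posted it or whether it is commercial.

```python
# Sketch of behavior-based spam scoring: objective criteria only.
# Thresholds and the ".binaries" group-naming convention are invented
# for illustration.

from dataclasses import dataclass

@dataclass
class Posting:
    msg_id: str
    newsgroups: list
    is_binary: bool = False

def is_spam(copies, max_copies=20, max_groups=20):
    """Judge a set of substantively identical postings purely on
    objective criteria, never on author or content."""
    # x copies of the same message over z amount of time...
    if len(copies) > max_copies:
        return True
    # ...posted to y newsgroups in total
    groups = {g for p in copies for g in p.newsgroups}
    if len(groups) > max_groups:
        return True
    # a binary file posted to a text-only newsgroup
    # (hypothetical convention: groups without ".binaries" in the
    # name are text-only)
    for p in copies:
        if p.is_binary and any(".binaries" not in g for g in p.newsgroups):
            return True
    return False
```

Note that nothing in the check looks at the message body or the poster's identity - which is exactly what made the Usenet rules defensible as something other than censorship.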

The decision still seems intuitively wrong to me, as it apparently also did to the three remaining judges, who wrote a dissenting opinion that instead viewed the Vermont law as an attempt to regulate commercial activity, something that has never been covered by the First Amendment.

But note this: the concern for patient privacy that animated much of the interest in this case was only a bystander (which must surely have pleased the plaintiffs).

Obscured by this case, however, is the technical question that should be at the heart of such disputes (several other states have passed Vermont-style laws): how effectively can data be deidentified? If it can be easily reidentified and linked to specific patients, making it available for data mining ends medical privacy. If it can be effectively anonymized, then the objections go away.

At this year's Computers, Freedom, and Privacy there was some discussion of this issue; an IMS Health representative and several of the experts EPIC cited in its brief were present and disagreeing. Khaled El Emam, from the University of Ottawa, filed a brief (PDF) opposing EPIC's analysis; Latanya Sweeney, who did the seminal work in this area in the early 2000s, followed with a rebuttal. From these, my non-expert conclusion is that just as you cannot trust today's secure cryptographic systems to remain unbreakable in the future as computing power continues to increase in speed and decrease in price, you cannot trust today's deidentification to remain robust against the increasing masses of data available for matching to it.
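The reidentification risk at the heart of that disagreement is easy to illustrate. The toy example below uses entirely invented data, but the mechanism is the one Sweeney's early work made famous: a "deidentified" record can be linked back to a named individual by joining on quasi-identifiers such as ZIP code, birth date, and sex, using any public dataset (such as a voter roll) that carries both names and those same fields.

```python
# Toy illustration (all data invented) of reidentification by linking
# quasi-identifiers in a "deidentified" record to a public dataset.

deidentified_prescriptions = [
    {"zip": "05401", "dob": "1961-07-31", "sex": "F", "drug": "brandX"},
]

public_voter_roll = [
    {"name": "Jane Doe", "zip": "05401", "dob": "1961-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "05402", "dob": "1975-01-02", "sex": "M"},
]

def reidentify(record, roll):
    """Return the names of everyone in the public roll whose
    quasi-identifiers match the supposedly anonymous record."""
    keys = ("zip", "dob", "sex")
    return [v["name"] for v in roll
            if all(v[k] == record[k] for k in keys)]

matches = reidentify(deidentified_prescriptions[0], public_voter_roll)
```

When the match list has exactly one entry, the "anonymous" prescription record has been tied to a named person - and the more auxiliary data is available, the more often that happens, which is the point of the conclusion above.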

But it seems the technical and privacy issues raised by the Vermont case are yet to be decided. Vermont is free to try again to frame a law that has the effect the state wants but takes a different approach. As for the future of free speech, it seems clear that it will encompass many technological artefacts still being invented - and that it will be quite a fight to keep it protecting individuals instead of, increasingly, commercial enterprises.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 16, 2011

The democracy divide (CFP2011 Day 2)

Good news: the Transportation Security Administration audited itself and found it was doing pretty well. At least, so said Kimberly Walton, special counsellor to the administrator for the TSA.

It's always tough when you're the raw meat served up to the Computers, Freedom, and Privacy crowd, and Walton was appropriately complimented for her courage in appearing. But still: we learned little that was new, other than that the TSA wants to move to a system of identifying people who need to be scrutinized more closely.

Like CAPPS-II? asked the ACLU's Daniel Mach. "It was a terrible idea."

No. It's different. Exactly how, Walton couldn't say. Yet.

Americans spent the latter portion of last year protesting the TSA's policies - but little has happened. Why? It's arguable that much of it has to do with those protests being online complaints rather than massed ranks of rebellious passengers at airport terminals. And a lot has to do with the fact that FOIA requests and lawsuits move slowly. EPIC, said Ginger McCall, has been unable to get any answers from the TSA except by lawsuit.

Apparently it's easier to topple a government.

"Instead of the reign of terror, the reign of terrified," said Deborah Hurley (CFP2011 chair) during the panel considering the question of social media's role in the upheavals in Egypt and Tunisia. Those on the ground - Jillian York, Nasser Weddady, Mona Eltahawy - say instead that social media enabled little pockets of protest, sometimes as small as just one individual, to find each other and coalesce like the pooling blobs reforming into the liquid metal man in Terminator 2. But what appeared to be sudden reversals of rulers' fortunes to outsiders who weren't paying attention were instead the culmination of years of small rebellions.

The biggest contributor may have been video, providing non-repudiable evidence of human rights abuses. When Tunisia's President Zine al-Abidine Ben Ali blocked video sharing sites, Tunisians turned to Facebook.

"Facebook has a lot of problems with freedom of expression," said York, "but it became the platform of choice because it was accessible, and Tunisia never managed to block it for more than a couple of weeks because when they did there were street protests."

Technology may or may not be neutral, but its context never is. In the US for many years, Section 230 of the Communications Decency Act has granted somewhat greater protection to online speech than to that in traditional media. The EU long ago settled these questions by creating the framework of notice-and-takedown rules and generally refusing to award online speech any special treatment. (You may like to check out EDRI's response to the ecommerce directive (PDF).)

Paul Levy, a lawyer with Public Citizen and organizer of the S230 discussion, didn't like the sound of this. It would be, he argued, too easy for the unhappily criticized to contact site owners and threaten to sue: the heckler's veto can trump any technology, neutral or not.

What, Hurley asked Google's policy director, Bob Boorstin, to close the day, would be the one thing he would do to improve individuals' right to self-determination? Give them more secure mobile devices, he replied. "The future is all about what you hold in your hand." Across town, a little earlier, Senators Franken and Blumenthal introduced the Location Privacy Protection Act 2011.

Certainly, mobile devices - especially Talk to Tweet - gave Africa's dissidents a direct way to get their messages out. But at the same time, the tools used by dictators to censor and suppress Internet speech are those created by (almost entirely) US companies.

Said Weddady in some frustration, "Weapons are highly regulated. If you're trading in fighter jets there are very stringent frames of regulations that prevent these things from falling into the wrong hands. What is there for the Internet? Not much." Worse, he said, no one seems to be putting political will behind enforcing the rules that do exist. In the West we argue about filtering as a philosophical issue. Elsewhere, he said, it's life or death. "What am I worth if my ideas remain locked in my head?"

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 25, 2011

Return to the red page district

This week's agreement to create a .xxx generic top-level domain (generic in the sense of not being identified with a particular country) seems like a quaint throwback. Ten or 15 years ago it might have mattered. Now, for all the stories rehashing the old controversies, it seems to be largely irrelevant to anyone except those who think they can make some money out of it. How can it be a vector for censorship if there is no prohibition on registering pornography sites elsewhere? How can it "validate" the porn industry any more than printers and film producers did? Honestly, if it didn't have sex in the title, who would care?

I think it was about 1995 when a geekish friend said, probably at the Computers, Freedom, and Privacy conference, "I think I have the solution. Just create a top-level domain just for porn."

It sounded like a good idea at the time. Many of the best ideas are simple - with a kind of simplicity mathematicians like to praise with the term "elegant". Unfortunately, many of the worst ideas are also simple - with a kind of simplicity we all like to diss with the term "simplistic". Which this is depends to some extent on when you're making the judgement.

In 1995, the sense was that creating a separate pornography domain would provide an effective alternative to broad-brush filtering. It was the era of Time magazine's Cyberporn cover story, which Netheads thoroughly debunked, and which led up to the passage of the Communications Decency Act in 1996. The idea that children would innocently stumble upon pornography was entrenched and not wholly wrong. At that time, as PC Magazine points out while outlining the adult entertainment industry's objections to the new domain, a lot of Web surfing was done by guesswork, which is how the domain whitehouse.com became famous.

A year or two later, I heard that one of the problems was that no one wanted to police domain registrations. Sure. Who could afford the legal liability? Besides, limiting who could register what in which domain was not going well: .com, which was intended to be for international commercial organizations, had become the home for all sorts of things that didn't fit under that description, while the .us country code domain had fallen into disuse. Even today, with organizations controlling every top-level domain, the rules keep having to adapt to user behavior. Basically, the fewer people interested in registering under your domain, the more likely it is that your rules will continue to work.

No one has ever managed to settle - again - the question of what the domain name system is for, a debate that's as old as the system itself: its inventor, Paul Mockapetris, still carries the scars of the battles over whether to create .com. (If I remember correctly, he was against it, but finally gave in on the basis that: "What harm can it do?") Is the domain name system a directory, a set of mnemonics, a set of brands/labels, a zoning mechanism, or a free-for-all? ICANN began its life, in part, to manage the answers to this particular controversy; many long-time watchers don't understand why it's taken so long to expand the list of generic top-level domains. Fifteen years ago, finding a consensus and expanding the list would have made a difference to the development of the Net. Now it simply does not matter.

I've written before now that the domain name system has faded somewhat in importance as newer technologies - instant messaging, social networks, iPhone/iPad apps - bypass it altogether. And that is true. When the DNS was young, it was a perfect fit for the Internet applications of the day for which it was devised: Usenet, Web, email, FTP, and so on. But the domain name system enables email and the Web, which are typically the gateways through which people make first contact with those services (you download the client via the Web, email your friend for his ID, use email to verify your account).

The rise of search engines - first Altavista, then primarily Google - did away with much of consumers' need for a directory. Also a factor was branding: businesses wanted memorable domain names they could advertise to their customers. By now, though, most people probably don't bother to remember more than a tiny handful of domain names - Google, Facebook, perhaps one or two more. Anything else they either put into a search engine or get from either a bookmark or, more likely, their browser history.

Then came sites like Facebook, which take an approach akin to CompuServe in the old days or mobile networks now: they want to be your gateway to everything online (Facebook is going to stream movies now, in competition with NetFlix!) If they succeed, would it matter if you had - once - to teach your browser a user-unfriendly long, numbered address?

It is in this sense that the domain name system competes with Google and Facebook as the gateway to the Net. Of all the potential gateways, it is the only one that is intended as a public resource rather than a commercial company. That has to matter, and we should take seriously the threat that all the Net's entrances could become owned by giant commercial interests. But .xxx missed its moment to make history.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 18, 2011

Block party

When last seen in net.wars, the Internet Watch Foundation was going through the most embarrassing moment of its relatively short life: the time it blocked a Wikipedia page. It survived, of course, and on Tuesday this week it handed out copies of its latest annual report (PDF) and its strategic plan for the years 2011 to 2014 (PDF) in the Strangers Dining Room at the House of Commons.

The event was, more or less, the IWF's birthday party: in August it will be 15 years since the first outline of the IWF was presented, in 1996, to a suspicious, even hostile reception. It was an uneasy compromise between an industry accused of facilitating child abuse, law enforcement threatening technically inept action, and politicians anxious to be seen to be doing something, all heightened by some of the worst mainstream media reporting I've ever seen.

Suspicious or not, the IWF has achieved traction. It has kept government out of the direct censorship business and politicians and law enforcement reasonably satisfied. Without - as was pointed out - cost to the taxpayer, since the IWF is funded from a mix of grants, donations, and ISPs' subscription fees.

And to be fair, it has been arguably successful at doing what it set out to do, which is to disrupt the online distribution of illegal pornographic images of children within the UK. The IWF has reported for some years now that the percentage of such images hosted within the UK is near zero. On Tuesday, it said the time it takes to get foreign-hosted content taken down has halved. Its forward plan includes more of the same, plus pushing more into international work by promoting the use of its URL list abroad and developing partnerships.

Over at The Register Jane Fae Ozniek has done a good job of tallying up the numbers the IWF reported, and also of following up on remarks made by Culture Minister Ed Vaizey and Home Office Minister James Brokenshire that suggested the IWF or its methods might be expanded to cover other categories of material. So I won't rehash either topic here.

Instead, what struck me is the IWF's report that a significant percentage of its work now concerns sexual abuse images and videos that are commercially distributed. This news offered a brief glance into a shadowy world that is illegal for any of us to study since under UK law (and the laws of many other countries) it's illegal to access such material. If this is a correct assessment, it certainly follows the same pattern as the world of malware writing, which has progressed from the giggling, maladjusted teenager writing a bit of disruptive code in his bedroom to a highly organized, criminal, upside-down image of the commercial software world (complete, I'm told by experts from companies like Symantec and Sophos, with product trials, customer support, and update patches). Similarly, our, or at least my, image was always of like-minded amateurs exchanging copies of the things they managed to pick up rather like twisted stamp collectors.

The IWF report says it has identified 715 such commercial sources, 321 of which were active in 2010. At least 47.7 percent of the commercially branded material is produced by the top ten, and the most prolific of these brands used 862 URLs. The IWF has attempted to analyze these brands, and believes that they are operated in clusters by criminals. To quote the report:

Each of the webpages or websites is a gateway to hundreds or even thousands of individual images or videos of children being sexually abused, supported by layers of payment mechanisms, content stores, membership systems, and advertising frames. Payment systems may include pre-pay cards, credit cards, "virtual money" or e-payment systems, and may be carried out across secure webpages, text, or email.

This is not what people predicted when they warned at the original meeting that blocking access to content would drive it underground into locations that were harder to police. I don't recall anyone saying: it will be like Prohibition and create a new Mafia. How big a problem this is and how it relates to events like yesterday's shutdown of boylovers.net remains to be seen. But there's logic to it: anything that's scarce attracts a high price and anything high-priced and illegal attracts dedicated criminals. So we have to ask: would our children be safer if the IWF were less successful?

The IWF will, I think, always be a compromise. Civil libertarians will always be rightly suspicious of any organization that has the authority and power to shut down access to content, online or off. Still, the IWF's ten-person board now includes, alongside the representatives of ISPs, top content sites, and academics, a consumer representative, and seems to be less dominated by repressive law enforcement interests. There's an independent audit in the offing, and while the IWF publishes no details of its block list for researchers to examine, it advocates transparency in the form of a splash screen that tells users that a site is blocked and why. They learned, the IWF's departing head, Peter Robbins, said in conversation, a lot from the Wikipedia incident.

My summary: the organization will know it has its balance exactly right when everyone on all sides has something to complain about.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

February 18, 2011

What is hyperbole?

This seems to have been a week for over-excitement. IBM gets an onslaught of wonderful publicity because it built a very large computer that won at the archetypal American TV game, Jeopardy. And Eben Moglen proposes the Freedom box, a more-or-less pocket ("wall wart") computer you can plug in and that will come up, configure itself, and be your Web server/blog host/social network/whatever and will put you and your data beyond the reach of, well, everyone. "You get no spying for free!" he said in his talk outlining the idea for the New York Internet Society.

Now I don't mean to suggest that these are not both exciting ideas and that making them work is/would be an impressive and fine achievement. But seriously? Is "Jeopardy champion" what you thought artificial intelligence would look like? Is a small "wall wart" box what you thought freedom would look like?

To begin with Watson and its artificial buzzer thumb. The reactions display everything that makes us human. The New York Times seems to think AI is solved, although its editors focus on our ability to anthropomorphize an electronic screen with a smooth, synthesized voice and a swirling logo. (Like HAL, R2D2, and Eliza Doolittle, its status is defined by the reactions of the surrounding humans.)

The Atlantic and Forbes come across as defensive. The LA Times asks: how scared should we be? The San Francisco Chronicle congratulates IBM for suddenly becoming a cool place for the kids to work.

If, that is, they're not busy hacking up Freedom boxes. You could, if you wanted, see the past twenty years of net.wars as a recurring struggle between centralization and distribution. The Long Tail finds value in selling obscure products to meet the eccentric needs of previously ignored niche markets; eBay's value is in aggregating all those buyers and sellers so they can find each other. The Web's usefulness depends on the diversity of its sources and content; search engines aggregate it and us so we can be matched to the stuff we actually want. Web boards distributed us according to niche topics; social networks aggregated us. And so on. As Moglen correctly says, we pay for those aggregators - and for the convenience of closed, mobile gadgets - by allowing them to spy on us.

An early, largely forgotten net.skirmish came around 1991 over the asymmetric broadband design that today is everywhere: a paved highway going to people's homes and a dirt track coming back out. The objection that this design assumed that consumers would not also be creators and producers was largely overcome by the advent of Web hosting farms. But imagine instead that symmetric connections were the norm and everyone hosted their sites and email on their own machines with complete control over who saw what.

This is Moglen's proposal: to recreate the Internet as a decentralized peer-to-peer system. And I thought immediately how much it sounded like...Usenet.

For those who missed the 1990s: invented and implemented in 1979 by three students, Tom Truscott, Jim Ellis, and Steve Bellovin, the whole point of Usenet was that it was a low-cost, decentralized way of distributing news. Once the Internet was established, it became the medium of transmission, but in the beginning computers phoned each other and transferred news files. In the early 1990s, it was the biggest game in town: it was where Linus Torvalds and Tim Berners-Lee announced their inventions of Linux and the World Wide Web.

It always seemed to me that if "they" - whoever they were going to be - seized control of the Internet we could always start over by rebuilding Usenet as a town square. And this is to some extent what Moglen is proposing: to rebuild the Net as a decentralized network of equal peers. Not really Usenet; instead a decentralized Web like the one we gave up when we all (or almost all) put our Web sites on hosting farms whose owners could be DMCA'd into taking our sites down or subpoena'd into turning over their logs. Freedom boxes are Moglen's response to "free spying with everything".

I don't think there's much doubt that the box he has in mind can be built. The Pogoplug, which offers a personal cloud and a sort of hardware social network, is most of the way there already. And Moglen's argument has merit: that if you control your Web server and the nexus of your social network law enforcement can't just make a secret phone call, they'll need a search warrant to search your home if they want to inspect your data. (On the other hand, seizing your data is as simple as impounding or smashing your wall wart.)

I can see Freedom boxes being a good solution for some situations, but like many things before it they won't scale well to the mass market because they will (like Usenet) attract abuse. In cleaning out old papers this week, I found a 1994 copy of Esther Dyson's Release 1.0 in which she demands a return to the "paradise" of the "accountable Net"; 'twill be ever thus. The problem Watson is up against is similar: it will function well, even engagingly, within the domain it was designed for. Getting it to scale will be a whole 'nother, much more complex problem.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


February 4, 2011

Blackout

They didn't even have to buy ten backhoes.

The most fundamental mythology of the Net goes like this. The Internet was built to withstand bomb outages. Therefore, it can withstand anything. Defy authority. Whee!

This basic line of thinking underlay a lot of early Net hyperbole, most notably Grateful Dead lyricist John Perry Barlow's Declaration of the Independence of Cyberspace. Barlow's declaration was widely derided even at the time; my favorite rebuttal was John Gilmore's riposte at Computers, Freedom, and Privacy 1995, that cyberspace was just a telephone network with pretensions. (Yes, the same John Gilmore who much more famously said, "The Internet perceives censorship as damage, and routes around it.")

Like all the best myths, the idea of the Net's full-bore robustness was both true and not true. It was true in the sense that the first iteration of the Net - ARPAnet - was engineered to share information and enable communications even after a bomb outage. But it was not true in the sense that there have always been gods who could shut down their particular bit of communications heaven. There are, in networking and engineering terms, central points of failure. It is also not true in the sense that a bomb is a single threat model, and the engineering decisions you make to cope with other threat models - such as, say, a government - might be different.

The key to withstanding a bomb outage - or in fact any other kind of outage - is redundancy. There are no service-level agreements for ADSL (at least in the UK), so if your business is utterly dependent on having a continuous Internet connection you have two broadband suppliers and a failover set-up for your router. You have a landline phone and a mobile phone, an email connection and private messaging on a social network, you have a back-up router, and a spare laptop. The Internet's particular form of redundancy comes from the way data is transmitted: the packets that make up every message do not have to follow any particular route when the sender types in a destination address. They just have to get there, just as last year passengers stranded by the Icelandic volcano looked for all sorts of creative alternative routes when their original direct flights were canceled.
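The principle in the paragraph above - never depend on a single link, and let traffic take whichever path currently works - can be reduced to a trivial sketch. The provider names below are invented, and real router failover happens at the network layer rather than in application code; this just shows the logic.

```python
# Minimal sketch of redundancy-based failover: try each independent
# route in turn and use the first one whose health check succeeds.
# Provider names are invented for illustration.

def first_working_route(routes, is_up):
    """Return the first route that passes the health check,
    or None if every redundant path has failed."""
    for route in routes:
        if is_up(route):
            return route
    return None

routes = ["adsl-provider-a", "adsl-provider-b", "mobile-fallback"]
status = {"adsl-provider-a": False,   # the primary line is down
          "adsl-provider-b": True,
          "mobile-fallback": True}
chosen = first_working_route(routes, lambda r: status[r])
```

The point, as with the stranded volcano passengers, is that no individual path matters as long as at least one path still exists.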

Even in 1995, when Barlow and Gilmore were having that argument, the Internet had some clear central points of failure - most notably the domain name system, which relies on updates that ultimately come from a single source. At the physical level, it wouldn't take cutting too many cables - those ten backhoes again - to severely damage data flows.

But back then all of today's big, corporate Net owners were tiny, and the average consumer had many more choices of Internet service provider than today. In many parts of the US consumers are lucky to have two choices; the UK's rather different regulatory regime has created an ecology of small xDSL suppliers - but behind the scenes a great deal of their supply comes from BT. A small number of national ISPs - eight? - seems to be the main reason the Egyptian government was able to shut down access. Former BT Research head Peter Cochrane writes that Egyptians-in-the-street managed to find creative ways to get information out. But if the goal was to block people's ability to use social networks to organize protests, the Egyptian government may indeed have bought itself some time. Though I liked late-night comedian Conan O'Brien's take: "If you want people to stay at home and do nothing, turn the Internet back on."

While everyone is publicly calling foul on Egypt's actions, can there be any doubt that there are plenty of other governments who will be eying the situation with a certain envy? Ironically, the US government is the only one known to be proposing a kill switch. We have to hope that the $110 million the five-day outage is thought to have cost Egypt will give them pause.

In his recent book The Master Switch, Columbia professor Tim Wu uses the examples set by the history of radio, television, and the telephone network to argue that all media started their lives as open experiments but have gone on to become closed and controlled as they mature. The Internet, he says there, and again this week in the press, is likely on the verge of closing.

What would the closed Internet look like? Well, it might look something like Apple's ecology: getting an app into the app store requires central approval, for example. Or it might look something like the walled gardens to which many mobile network operators limit their customers' access. Or perhaps something like Facebook, which seeks to mediate its users' entire online experience: one reason so many people use it for messaging is that it's free of spam. In the history of the Internet, open access has beaten out such approaches every time. CompuServe and AOL's central planning lost to the Web; general purpose computers ruled.

I don't think it's clear which way the Internet will wind up, and it's much less clear whether it will follow the same path in all countries or whether dissidents might begin rebuilding the open Net by cracking out the old modems and NNTP servers. But if closure does happen, this week may have been the proof of concept.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

December 17, 2010

Sharing values

And then they came for Google...

The notion that the copyright industries' war on file-sharing would eventually rise to the Google level of abstraction used to be a sort of joke. It was the kind of thing the owners of torrent search sites (and before them, LimeWire and Gnutella nodes) said as an extreme way of showing how silly the whole idea was that file-sharing could be stamped out by suing people. It was the equivalent in airport terms of saying, "What are they going to do? Have us all fly naked?"

This week, it came true. You can see why: the annual report of the British Phonographic Industry (BPI) cites research it commissioned from Harris Interactive showing that 58 percent of "illegal downloaders" used Google to find free music. (Of course, not all free music is an unauthorized copy, but we'll get to that in a minute.)

The rise of Google in particular (it has something like 90 percent of the UK search market, somewhat less in the US) and of search engines in general as the main gateway through which people access the Internet made it, I think, inevitable that at some point the company would become a focus for the music industry. And Google is responding, announcing on December 2 that it would favor authorized content in its search listings and prevent terms "closely associated with piracy" from appearing in Autocomplete.
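The mechanism being described is simple enough: Autocomplete filtering amounts to a denylist check applied to suggestions before they are shown. A minimal sketch in Python - the term list and function name here are my own invention, purely illustrative, not Google's actual list or code:

```python
# Illustrative sketch of denylist-based autocomplete filtering.
# The DENYLIST terms and this function are hypothetical examples,
# not Google's actual list or implementation.

DENYLIST = {"torrent", "bittorrent", "rapidshare", "megaupload"}

def filter_suggestions(suggestions):
    """Drop any suggestion containing a denylisted term."""
    return [s for s in suggestions
            if not any(term in s.lower() for term in DENYLIST)]

queries = ["lady gaga torrent", "lady gaga tickets", "lady gaga lyrics"]
print(filter_suggestions(queries))
# → ['lady gaga tickets', 'lady gaga lyrics']
```

Note that a scheme like this affects only the suggestions: a user who types "torrent" in full still gets search results.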

Is this censorship? Perhaps, but I find it hard to get too excited about, partly because Autocomplete is the annoying boor who's always finishing my sentences wrongly, partly because having to type "torrent" doesn't seem like much of a hardship, and partly because I don't believe this action will make much of a difference. Still, as Google's design shifts more toward the mass market, such subtle changes will create ever-larger effects.

I would be profoundly against demonizing file-sharing technology by making it technically impossible to use Google to find torrent/cyberlocker/forum sites - because such sites are used for many things that have nothing to do with distributing music - but that's not what's being talked about here. It's worth noting, however, that this is (yet another) example of Google's double standard on copyright. Obliging the music industry costs the company very little and creates the opportunity to nudge its own YouTube a little further up the listings. Compare and contrast with its protracted legal battle over having digitized and made publicly available millions of books without the consent of the rights holders.

If I were the music industry I think I'd be generally encouraged by the BPI's report. It shows that paid, authorized downloads are really beginning to take off; digital now accounts for nearly 25 percent of UK record industry revenues. Harris Interactive found that approximately 7.7 million people in the UK continue to download music "illegally". Jupiter Research estimated the foregone revenues at £219 million. The BPI's arithmetic estimates that paid, authorized downloads represent about a quarter of all downloads. Seems to me that's all moving in the right direction - without, mind you, assistance from the draconian Digital Economy Act.

The report also notes the rise of unauthorized, low-cost pay sites that siphon traffic away from authorized pay services. These are, in my view, the equivalent of selling counterfeit CDs, and I have no problem with regarding them as legitimately lost sales or with seeing them shut down.

Is the BPI's glass half-empty or half-full? I think it's filling up, just like we told them it would. They are progressively competing successfully with free, and they'd be a lot further along that path if they had started sooner.

As a former full-time musician with many friends still in the trade, it's hard to argue that encouraging people towards services that pay the artist at the expense of those that don't is a bad principle. What I really care about is that it should be as easy to find Andy Cohen playing "Oh, Glory" as it is to find Lady Gaga singing anything. And that's an area where the Internet is the best hope for parity we've ever had; as a folksinger friend of mine said a couple of years back, "The music business never did anything for us."

I've been visiting Cohen this week, and he's been explicating the German sociologist Ferdinand Tönnies' distinction, with the music business as gesellschaft (society) versus folk music as gemeinschaft (community).

"Society has rules, communities have customs," he said last night. "When a dispute over customs has to be adjudicated, that's the border of society." Playing music for money comes under society's rules - that is, copyright. But for Cohen, a professional musician for more than 40 years with multiple CDs, music is community.

We've been driving around Memphis visiting his friends, all of whom play music themselves, some easily, some with difficulty. Music is as much a part of their active lives as breathing. This is a fundamental disconnect from the music industry, which sees us all as consumers and every unpaid experience of music as a lost sale. This is what "sharing music" really means: playing and singing together - wherever.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 12, 2010

Just between ourselves

It is, I'm sure, pure coincidence that a New York revival of Vaclav Havel's wonderfully funny and sad 1965 play The Memorandum was launched while the judge was considering the Paul Chambers "Twitter joke trial" case. "Bureaucracy gone mad," they're billing the play, and they're right, but what that slogan omits is that the bureaucracy in question has gone mad because most of its members don't care and the one who does has been shut out of understanding what's going on. A new language, Ptydepe, has been secretly invented and introduced as a power grab by an underling claiming it will improve the efficiency of intra-office communications. The hero discovers the shift only when he receives a memorandum written in the new language and can't get it translated due to carefully designed circular rules. When these are abruptly changed, the translated memorandum restores him to his original position.

It is one of the salient characteristics of Ptydepe that it has a different word for every nuance of the characters' natural language - Czech in the original, but of course English in the translation I read. Ptydepe didn't work for the organization in the play because it was too complicated for anyone to learn, but perhaps something like it that removes all doubt about nuance and context would assist older judges in making sense of modern social interactions over services such as Twitter. Clearly any understanding of how people talk and make casual jokes was completely lacking yesterday when Judge Jacqueline Davies upheld the conviction of Paul Chambers in a Doncaster court.

Chambers' crime, if you blinked and missed those 140 characters, was to post a frustrated message about snowbound Doncaster airport: "Crap! Robin Hood airport is closed. You've got a week and a bit to get your shit together otherwise I'm blowing the airport sky high!" Everyone along the chain of accountability up to the Crown Prosecution Service - the airport duty manager, the airport's security personnel, the Doncaster police - seems to have understood he was venting harmlessly. And yet prosecution proceeded and led, in May, to a conviction that was widely criticized both for its lack of understanding of new media and for its failure to take Chambers' lack of malicious intent into account.

By now, everyone has been thoroughly schooled in the notion that it is unwise to make jokes about bombs, plane crashes, knives, terrorists, or security theater - when you're in an airport hoping to get on a plane. No one thinks any such wartime restraint need apply in a pub or its modern equivalent, the Twitter/Facebook/online forum circle of friends. I particularly like Heresy Corner's complaint that the judgement makes it illegal to be English.

Anyone familiar with online writing style immediately and correctly reads Chambers' Tweet for what it was: a perhaps ill-conceived expression of frustration among friends that happens to also be readable (and searchable) by the rest of the world. By all accounts, the judge seems to have read it as if it were a deliberately written personal telegram sent to the head of airport security. The kind of expert explanation on offer in this open letter apparently failed to reach her.

The whole thing is a perfect example of the growing danger of our data-mining era: that casual remarks are indelibly stored and can be taken out of context to give an utterly false picture. One of the consequences of the Internet's fundamental characteristic of allowing the like-minded and like-behaved to find each other is that tiny subcultures form all over the place, each with its own set of social norms and community standards. Of course, niche subcultures have always existed - probably every local pub had its own set of tropes that were well-known to and well-understood by the regulars. But here's the thing: they weren't permanently visible to outsiders. A regular who, for example, chose to routinely indicate his departure for the Gents with the statement, "I'm going out to piss on the church next door" could be well-known in context never to do any such thing. But if all outsiders saw was a ten-second clip of that statement and the others' relaxed reaction posted to YouTube, they might legitimately assume that pub was a shocking hotbed of anti-religious slobs. Context is everything.

The good news is that the people on the ground whose job it was to protect the airport read the message, understood it correctly, and did not overreact. The bad news is that when the CPS and courts did not follow their lead it opened up a number of possibilities for the future, all bad. One, as so many have said, is that anyone who now posts anything online while drunk, angry, stupid, or sloppy-fingered is at risk of prosecution - with the consequence of wasting huge amounts of police and judicial time that would be better spent spotting and stopping actual terrorists. The other is that everyone up the chain felt required to cover their ass in case they were wrong.

Chambers still may appeal to the High Court; Stephen Fry is offering to pay his fine (the Yorkshire Post puts his legal bill at £3,000), and there's a fund accepting donations.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 30, 2010

Child's play

In the TV show The West Wing (Season 6, Episode 17, "A Good Day") young teens tackle the president: why shouldn't they have the right to vote? There's probably no chance, but they made their point: as a society we trust kids very little and often fail to take them or their interests seriously.

That's why it was so refreshing to read in 2008's <a href="http://www.dcsf.gov.uk/byronreview/actionplan/">Byron Review</a> the recommendation that we should consult and listen to children in devising programs to ensure their safety online. Byron made several thoughtful, intelligent analogies: we supervise as kids learn to cross streets; we post warning signs at swimming pools but also teach children to swim.

She also, more controversially, recommended that all computers sold for home use in the UK should have Kitemarked parental control software "which takes parents through clear prompts and explanations to help set it up and that ISPs offer and advertise this prominently when users set up their connection."

The general market has not adopted this recommendation; but it has been implemented with respect to the free laptops issued to low-income families under Becta's £300 million Home Access Laptop scheme, announced last year as part of efforts to bridge the digital divide. The recipients - 70,000 to 80,000 so far - have a choice of supplier, of ISP, and of hardware make and model. However, the laptops must meet a set of functional technical specifications, one of which is compliance with PAS 74:2008, the British Internet safety standard. That means anti-virus, access control, and filtering software: NetIntelligence.

Naturally, there are complaints; these fall precisely in line with the general problems with filtering software, which have changed little since 1996, when the passage of the Communications Decency Act inspired 17-year-old Bennett Haselton to start Peacefire to educate kids about the inner workings of blocking software - and how to bypass it. Briefly:

1. Kids are often better at figuring out ways around the filters than their parents are, giving parents a false sense of security.

2. Filtering software can't block everything parents expect it to, adding to that false sense of security.

3. Filtering software is typically overbroad, becoming a vehicle for censorship.

4. There is little or no accountability about what is blocked or the criteria for inclusion.

This case looks similar - at first. Various reports claim that as delivered NetIntelligence blocks social networking sites and even Google and Wikipedia, as well as Google's Chrome browser because the way Chrome installs allows the user to bypass the filters.

NetIntelligence says the Chrome issue is only temporary; the company expects a fix within three weeks. Marc Kelly, the company's channel manager, also notes that the laptops that were blocking sites like Google and Wikipedia were misconfigured by the supplier. "It was a manufacturer and delivery problem," he says; once the software has been reinstalled correctly, "The product does not block anything you do not want it to." Other technical support issues - trouble finding the password, for example - are arguably typical of new users struggling with unfamiliar software and inadequate technical support from their retailer.

Both Becta and NetIntelligence stress that parents can reconfigure or uninstall the software, even if some are confused about how to do it. First, they must activate the software by typing in the code the vendor provides; that gets them password access to change the blocking list or uninstall the software.

The list of blocked sites, Kelly says, comes from several sources: the Internet Watch Foundation's list and similar lists from other countries; a manual assessment team also reviews sites. Sites that feel they are wrongly blocked should email NetIntelligence support. The company has, he adds, tried to make it easier for parents to implement the policies they want; originally social networks were not broken out into their own category. Now, they are easily unblocked by clicking one button.

The simple reaction is to denounce filtering software and all who sail in her - censorship! - but the situation is arguably more complicated than that. Research Becta conducted on the pilot group found that 70 percent of the parents surveyed felt that the built-in safety features were very important. Even the most technically advanced parents struggle to balance their legitimate concerns in protecting their children with the complex reality of their children's lives.

For example: will what today's children post to social networks damage their chances of entry into a good university or a job? What will they find? Not just pornography and hate speech; some parents object to creationist sites, some to scary science fiction, others to Fox News. Yesterday's harmless flame wars are today's more serious cyber-bullying and online harassment. We must teach kids to be more resilient, Byron said; but even then kids vary widely in their grasp of social cues, common sense, emotional make-up, and technical aptitude. Even experts struggle with these issues.

"We are progressively adding more information for parents to help them," says Kelly. "We want the people to keep the product at the end. We don't want them to just uninstall it - we want them to understand it and set the policies up the way they want them." Like all of us, Kelly thinks the ideal is for parents to engage with their children on these issues, "But those are the rules that have come along, and we're doing the best we can."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 23, 2009

The power of Twitter

It was the best of mobs, it was the worst of mobs.

The last couple of weeks have really seen the British side of Twitter flex its 140-character muscles. First, there was the next chapter of the British Chiropractic Association's ongoing legal action against science writer Simon Singh. Then there was the case of Jan Moir, who wrote a more than ordinarily Daily Mailish piece for the Daily Mail about the death of Boyzone's Stephen Gately. And finally, the shocking court injunction that briefly prevented the Guardian from reporting on a Parliamentary question for the first time in British history.

I am on record as supporting Singh, and I, too, cheered when, ten days ago, Singh was granted leave to appeal Justice Eady's ruling on the meaning of Singh's use of the word "bogus". Like everyone, I was agog when the BCA's press release called Singh "malicious". I can see the point in filing complaints with the Advertising Standards Authority over chiropractors' persistent claims, unsupported by the evidence, to be able to treat childhood illnesses like colic and ear infections.

What seemed to edge closer to a witch hunt was the gleeful take-up of George Monbiot's piece attacking the "hanging judge", Justice Eady. Disagree with Eady's ruling all you want, but it isn't hard to find libel lawyers who think his ruling was correct under the law. If you don't like his ruling, your correct target is the law. Attacking the judge won't help Singh.

The same is not true of Twitter's take-up of the clues in the Guardian's original story about the gag, which allowed users to identify the Parliamentary Question concerned and unmask Carter-Ruck, the lawyers who served the injunction, and their client, Trafigura. Fueled by righteous and legitimate anger at the abrogation of a thousand years of democracy, Twitterers found and republished the PQ thousands of times practically within seconds. Yeah!

Of course, this phenomenon (as I'm so fond of saying) is not new. Every online social medium, going all the way back to early text-based conferencing systems like CIX, the WELL, and, of course, Usenet, when it was the Internet's town square (the function in fact that Twitter now occupies) has been able to mount this kind of challenge. Scientology versus the Net was probably the best and earliest example; for me it was the original net.war. The story was at heart pretty simple (and the skirmishes continue, in various translations into newer media, to this day). Scientology has a bunch of super-secrets that only the initiate, who have spent many hours in expensive Scientology training, are allowed to see. Scientology's attempts to keep those secrets off the Net resulted in their being published everywhere. The dust has never completely settled.

Three people can keep a secret if two of them are dead, said Benjamin Franklin. That was before the Internet. Scientology was the first to learn - nearly 15 years ago - that the best way to ensure the maximum publicity for something is to try to suppress it. It should not have been any surprise to the BCA, Trafigura, or Trafigura's lawyers. Had the BCA ignored Singh's article, far fewer people would know now about science's dim view of chiropractic. Trafigura might have hoped that a written PQ would get lost in the vastness that is Hansard; but they probably wouldn't have succeeded in any case.

The Jan Moir case and the demonstration outside Carter-Ruck's offices are, however, rather different. These are simply not the right targets. As David Allen Green (Jack of Kent) explains, there's no point in blaming the lawyers; show your anger to the client (Trafigura) or to Parliament.

The enraged tweets and Facebook postings about Moir's article helped send a record number of over 25,000 complaints to the Press Complaints Commission, whose Web site melted down under the strain. Yes, the piece was badly reasoned and loathsome, but isn't that what the Daily Mail lives for? Tweets and links create hits and discussion. The paper can only benefit. In fact, it's reasonable to suppose that in the Trafigura and Moir cases both the Guardian and the Daily Mail manipulated the Net perfectly to get what they wanted.

But the stupid part about let's-get-Moir is that she does not *matter*. Leave aside emotional reactions, and what you're left with is someone's opinion, however distasteful.

This concerted force would be more usefully turned against the truly dangerous. See, for example, the AIDS denialism on parade by Fraser Nelson at The Spectator. The "come-get-us" tone suggests that they saw the attention New Humanist got for Caspar Melville's mistaken - and quickly corrected - endorsement of the film House of Numbers and said, "Let's get us some of that." There is no more scientific dispute about whether HIV causes AIDS than there is about climate change or evolutionary theory.

If we're going to behave like a mob, let's stick to targets that matter. Jan Moir's column isn't going to kill anybody. AIDS denialism will. So: we'll call Trafigura a win, chiropractic a half-win, and Moir a loser.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

August 7, 2009

The five percent solution

So much has been said about Australia's Internet filtering this year that nearby New Zealand's project has mostly escaped notice. The plan is to implement filtering sometime in the next couple of months. Unlike the UK, where the blocklist is maintained by the Internet Watch Foundation under a voluntary arrangement, in New Zealand the list is being administered by the Department of Internal Affairs.

It turns out that the technology New Zealand is putting in place is coming into use in the UK, courtesy of Watchdog International, which recently signed a deal to supply it to Talk Internet.

Watchdog's managing director, Peter Mancer, says the idea for the technical implementation comes from Sweden.

"I was impressed at the cooperation of police and NGOs," he said of the work he observed there, "but I don't like DNS poisoning. It's not effective enough and it's too broad a brush, and my ten-year-old can bypass it by putting someone else's DNS servers in the browser settings. But it's easy to employ from the ISP's point of view." DNS poisoning - or rather, blocking selected domains - is, of course, what is implemented in the UK through BT's Cleanfeed.
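Mancer's point about DNS poisoning is easy to demonstrate in miniature: the block lives entirely in the ISP's resolver, so a client pointed at any other resolver never sees it. A toy model in Python (all domains, addresses, and resolver behavior here are invented for illustration, using reserved documentation names and addresses; real DNS blocking operates on live resolver infrastructure, not dictionaries):

```python
# Toy model of resolver-level DNS blocking ("DNS poisoning") and its bypass.
# Domains and addresses are reserved documentation examples (RFC 2606 /
# RFC 5737), invented purely for illustration.

REAL_RECORDS = {
    "blockedsite.example": "203.0.113.7",
    "news.example": "198.51.100.2",
}
BLOCKLIST = {"blockedsite.example"}
BLOCK_PAGE_IP = "192.0.2.1"

def isp_resolver(domain):
    """The ISP's resolver redirects blocked domains to a block page."""
    if domain in BLOCKLIST:
        return BLOCK_PAGE_IP
    return REAL_RECORDS[domain]

def third_party_resolver(domain):
    """A resolver outside the ISP's control answers honestly."""
    return REAL_RECORDS[domain]

# The block only holds while clients use the ISP's resolver:
print(isp_resolver("blockedsite.example"))          # → 192.0.2.1 (block page)
print(third_party_resolver("blockedsite.example"))  # → 203.0.113.7 (real host)
```

Switching the resolver setting on the client - exactly what Mancer's ten-year-old did - is the whole bypass, which is why a routing-layer approach appealed to him instead.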

The system Mancer was shown by the Swedish royal technical college and now supplies via his company relies instead on Border Gateway Protocol, or BGP, the core routing protocol of the Internet. Users don't interact with it directly; it's used among ISPs to route traffic correctly. In New Zealand's case, the necessary servers are all managed and hosted by the government. Mancer's explanation: "All ISPs connect to those servers via Internet tunnels using BGP, so the URL list is managed independently of the ISPs, and there is very little cost to the ISP - a few configurations and they're connected to it."

The point for the UK: Cleanfeed requires implementation effort from the ISP. If you're Virgin or another huge ISP, you have sufficient resources and in-house expertise to do it. But the difficulty and expense is, says Mancer, one of the reasons why smaller ISPs haven't adopted it - and why the percentage of British consumer broadband users covered by the IWF blocklist has remained stuck at 90 to 95 percent for years.

Smaller ISPs, says Mancer, "find it quite a challenge. Cleanfeed is not suitable for a lot of ISPs, and there's no commercially available system." So, he says, for the "remaining 5 percent tail" that the Home Office and the government keep jumping up and down about, a commercially available solution is more attractive. Watchdog's system starts at €2,000 per year, or about £200 per month, and the cost per user goes down as the number of users goes up. Despite the horrid economics of running a small ISP, 5p per customer per month ought in theory to be affordable.

All of this leads back to the question we posed in a panel at this year's Computers, Freedom, and Privacy conference: can the Internet still route around censorship? Images of child abuse (the IWF's preferred term) are illegal in most countries.

Even the US is beginning to show signs of moving in the hotline-voluntary blocklist direction. Last year, for example, Qwest began blocking access to a list of sites that the National Center for Missing and Exploited Children has identified as containing child pornography. (This is not, by the way, a violation of the First Amendment right to free speech as far as I can make out. The First Amendment says, "Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances." It does not prohibit private companies like Qwest from making their own rules, a reality that seems to be widely misunderstood.)

Mancer himself is passionate on the topic: "I sat on a Swedish hotline and took some of the reports and looked at sites. It really does impact you, and it's worth fighting against." He adds, "We're a bit frustrated. We believe we have a good solution that's affordable, but a lot of ISPs are sitting on the fence." There isn't, he concludes, enough pressure.

Given some odds and ends of possible failures - the link to Watchdog's servers has to stay up, the ISP has to configure its systems correctly - Watchdog's system seems likely to be hard for Web users to bypass, although Richard Clayton, the expert in these matters, queries whether the technology will be able to track changes fast enough to deal with the fast-flux technology in use on botnets.

But Clayton also suggests that blocking Web sites is becoming quaintly old-fashioned.

"The IWF list is down to c. 400 sites (from 1500+, of which about 1/3 are 'free' sites - ie: a single phone call would remove the material)," he said by email. In other words, the Web may not be able to bypass the technology - but things like Tor, Freenet, closed peer-to-peer networks, and that wacky darknet-in-a-browser project shown off at Black Hat last week probably can, because they were deliberately created to bypass the domain name system entirely. The Web is not the Internet. The Web may no longer be able to route around censorship, but the Internet still can in the time-honored way: by changing technologies. Originally, John Gilmore's aphorism referred to...Usenet.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

January 2, 2009

No rest for 2009

It's been a quiet week, as you'd expect. But 2009 is likely to be a big year in terms of digital rights.

Both the US and the UK are looking to track non-citizens more closely. The UK has begun issuing foreigners with biometric ID cards. The US, which began collecting fingerprints from visiting tourists two years ago, says it wants to do the same with green card holders. In other words, you can live in the US for decades, you can pay taxes, you can contribute to the US economy - but you're still not really one of us when you come home.

The ACLU's Barry Steinhardt has pointed out, however, that the original US-VISIT system actually isn't finished: there's supposed to be an exit portion that has yet to be built. The biometric system is therefore like a Roach Motel: people check in but they never leave.

That segues perfectly into the expansion of No2ID's "database state". The UK is proceeding with its plan for a giant shed to store all UK telecommunications traffic data. Building the data shed is a lot like saying we're having trouble finding a few needles in a bunch of haystacks, so the answer is to build a much bigger haystack.

Children in the UK can also look forward to ContactPoint (budget £22.4 million) going live at the end of January, only the first of several such databases. The Conservatives apparently have pledged to scrap ContactPoint in favor of a less expensive system that would track only children deemed to be at risk. If the Conservatives don't get their chance to scrap it - probably even if they do - the current generation may be the last that doesn't get to grow up taking for granted that their every move is being tracked. Get 'em young, as the Catholic church used to say, and they're yours for life.

The other half of that is, of course, the National Identity Register. Little has been heard of the ID card in recent months, although the Home Office says 1,000 people have actually requested one. Since the cards have begun rolling out to foreigners, it's probably best to keep an eye on them.

On January 19, look for the EU to vote on copyright term extension in sound recordings. They have now: 50 years. They want: 95 years. The problem: all the independent reviewers agree it's a bad idea economically. Why does this proposal keep dogging us? Especially given that even the UK government accepts that recording contracts mean that little of the royalties will go to the musicians the law is supposedly trying to help, why is the European Parliament even considering it? Write your MEP. Meanwhile, the economic downturn reaches Cliff Richard; his earliest recordings begin entering the public domain...oh, look - yesterday, January 1, 2009.

Those interested in defending file-sharing technology, the public domain, or any other public interest in intellectual property will find themselves on the receiving end of a pack of new laws and initiatives out to get them.

The RIAA recently announced it would cease suing its customers in the US. It plans to "work with ISPs". Anyone who's been around the UK and France in recent months should smell the three-strikes policy that the Open Rights Group has been fighting against. ORG's going to find it a tougher battle, now that the government is considering a stick and carrot approach: make ISPs liable for their users' copyright infringement, but give them a slice of the action for legal downloads. One has to hope that even the most cash-strapped ISPs have more sense.

Last year's scare over the US's bald statement that customs authorities have the right to search and impound computers and other electronic equipment carried by travellers across national borders will probably be followed by lengthy protest over the Anti-Counterfeiting Trade Agreement (ACTA), new rules being negotiated by the US, EU, Japan, and other countries. We don't know as much as we'd like about what the proposals actually are, though some information escaped last June. Negotiations are expected to continue in 2009.

The EU has said that it has no plans to search individual travellers, which is a relief; in fact, in most cases it would be impossible for a border guard to tell whether files on a computer were copyright violations. Nonetheless, it seems likely that this and other laws will make criminals of most of us; almost everyone who owns an MP3 player has music on it that technically infringes the copyright laws (particularly in the UK, where there is as yet no exemption for personal copying).

Meanwhile, Australia's new $44 million "great firewall" is going ahead despite known flaws in the technology. Nearer home, British Culture Secretary Andy Burnham would like to rate the Web, lest it frighten the children.

It's going to be a long year. But on the bright side, if you want to make some suggestions for the incoming Obama administration, head over to Change.org and add your voice to those assembling under "technology policy".

Happy new year!

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

December 19, 2008

Backbone

There's a sense in which you haven't really arrived as a skeptic until someone's sued you. I've never had more than a threat, so as founder of The Skeptic, I'm almost a nobody. But by that standard Simon Singh, co-author with alternative medicine professor Edzard Ernst of the really excellent Trick or Treatment: The Undeniable Facts about Alternative Medicine, has arrived.

I think of Singh as one of the smarter, cooler generation of skeptics, who combine science backgrounds, good writing, and the ability to make their case in the mass media. Along with Ben Goldacre, Singh has proved that I was wrong when I thought, ten years ago, that getting skepticism into the national press on a regular basis was just too unlikely.

It's probably no coincidence that both cover complementary and alternative medicine, one of the biggest consumer issues of our time. We have a government that wants to save money on the health service. We have consumers who believe, after a decade or more of media insistence, that medicine is bad (BSE, childhood vaccinations, mercury fillings) and alternative treatments that defy science (homeopathy, faith healing) are good. We have overworked doctors who barely know their patients and whose understanding of the scientific process is limited. We have patients who expect miraculous cures like the ones they see on the increasingly absurd House. Doctors recommend acupuncture, and Prince Charles, possessed of the finest living standards and medical treatment money can buy, promotes everything *else*. And we have medical treatments whose costs spiral ever upwards, and constant reports of new medicines that fail to live up to their promise in one way or another.

But the trouble with writing for major media in this area is that you run across the litigious, and so has Singh: as Private Eye has apparently reported, he is being sued for libel by the British Chiropractic Association. The original article was published by the Guardian in April; it's been pulled from the site, but the BCA's suit has made reposting it a cause célèbre. (Have they learned *nothing* about the Net?) This annotated version details the evidence behind Singh's rather critical assessment of chiropractic, and there are many other critical assessments besides, from New Zealand among other places. And people complain about Big Pharma - the people alternative-medicine folks are supposed to be saving us from.

I'm not even sure how much sense it makes as a legal strategy. As the comments on the "gimpy" blog point out, most of Singh's criticisms were based on evidence; a few were personal opinion. He mentioned no specific practitioners. Where exactly is the libel? (Non-UK readers may like to glance at the trouble with UK libel laws, recently criticized by the UN as operating against the public interest.)

All science requires a certain openness to criticism. The whole basis of the scientific method is that independent researchers should be able to replicate each other's results. You accept a claim on that basis and only that basis - not because someone says it on their Web site and then sues anyone who calls it lacking in evidence. If the BCA has evidence that Singh is wrong, why not publish it? The answer to bad speech, as Mike Godwin, now working at Wikimedia, is so fond of saying, is more speech. Better speech. Or (for people less fond of talking) a dignified silence in the confidence that the evidence you have to offer is beyond argument. But suing people - especially individual authors rather than major media such as national newspapers - smacks of attempted intimidation. Though I couldn't possibly comment.

Ever since science became a big-prestige, big-money game we've seen angry fights and accusations - consider, for example, the ungracious and inelegant race to the Nobel prize on the part of some early HIV researchers. Scientists are humans, too, with all the ignoble motives that implies.

But many alternative remedies are not backed by scientific evidence, partly because they are often not studied by scientists in any great depth. The question of whether to allocate precious research money and resources to these treatments is controversial. Large pharmaceutical companies are unlikely to do it, for similar reasons to those that led them to research pills to reverse male impotence instead of new antibiotics. Scientists in related research areas may prefer to study bigger problems. Medical organizations are cautious. The British Medical Association has long called for complementary therapies to be regulated to the same standards as orthodox medicine or denied NHS funding. As the General Chiropractic Council notes, NHS funding is so far not widespread for chiropractic.

If chiropractors want to play with the big boys - the funded treatments, the important cures - they're going to have to take their lumps with the rest of them. And that means subluxing a little backbone and stumping up the evidence, not filing suit.


December 12, 2008

Watching the Internet

It is more than ten years since it was possible to express dissent about the rights and wrongs of controlling the material available on the Net without being identified as either protecting child abusers or being one. Even the most radical of civil liberties organisations flinch at the thought of raising a challenge to the Internet Watch Foundation. Last weekend's discovery that the IWF had added a page from Wikipedia to its filtering list was accordingly the best possible thing that could have happened. It is our first chance since 1995 to have a rational debate about whether the IWF is successfully fulfilling the purpose for which it was set up, and about the near-nationwide coverage of BT's Cleanfeed, despite the problems Cambridge researcher Richard Clayton has highlighted (PDF).

The background: the early 1990s was full of media scare stories about the Internet. In 1996, the police circulated a list of 133 Usenet newsgroups they claimed hosted child pornography, and threatened seizures of equipment. The government threatened regulation. And in that very tense climate, Peter Dawe, the founder of Pipex, called a meeting to announce an initiative he had sketched out on the back of an envelope called SafetyNet, aimed at hindering the spread of child pornography over the Internet. He was willing to stump up £500,000 to get it off the ground.

Renamed the IWF, the system still operates largely as he envisioned it would: it runs a hotline to which the public can report the objectionable material they find. If the IWF believes the material is illegal under UK law and it's hosted in the UK, the ISP is advised to remove it and the police are notified. If it's hosted elsewhere, the IWF adds it to the list of addresses that it recommends for blocking. ISPs must pay to join the IWF to subscribe to the list, and the six biggest ISPs, who have 90 to 95 percent of the UK's consumer accounts, are all members. Cleanfeed is BT's implementation of the list. Of course, despite its availability via Google Groups, Usenet hardly matters any more, and ISPs are beginning to drop it quietly from their offerings as a cost with little return.
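The triage logic just described is simple enough to sketch in a few lines of Python. This is purely a hedged illustration of the decision flow as reported above; all the names are invented, not anything the IWF actually runs:

```python
# Hypothetical sketch of the IWF hotline triage flow: UK-hosted material
# triggers notice-and-takedown plus a police report; foreign-hosted
# material goes onto the recommended blocklist instead.

def triage_report(url: str, is_illegal_under_uk_law: bool,
                  hosted_in_uk: bool, blocklist: set) -> str:
    """Return the action taken for one public hotline report."""
    if not is_illegal_under_uk_law:
        # Most reports fall at this hurdle and go no further.
        return "no action"
    if hosted_in_uk:
        # The hosting ISP is advised to remove it; police are notified.
        return "notice-and-takedown + police"
    # Foreign-hosted: the IWF cannot order removal, so the URL is added
    # to the list it recommends its member ISPs block.
    blocklist.add(url)
    return "added to blocklist"
```

The asymmetry is the whole story: inside the UK the IWF can get material removed; outside it, all it can do is hide the material from UK subscribers.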

The IWF's statement when it eventually removed the block is rather entertaining: it says, essentially, "We were right, but we'll remove the block anyway." In other words, the IWF still believes the image is "potentially illegal" - which provides a helpful, previously unavailable window into its thinking - but it recognises the foolishness of banning a page on the world's fourth biggest Web site, especially given that the same image can be purchased in large British record shops in situ on the cover of the 32-year-old album for which it was commissioned.

We've also learned that the most thoughtful debate on these issues is actually available on Wikipedia itself, where the presence of the image had been discussed at length from a variety of angles.

At the free speech end of the spectrum, the IWF is an unconscionable form of censorship. It operates a secret blocklist, it does not notify non-UK sites that they are being blocked, and it operates an equally secret appeals process. Some of this criticism is silly. If the blocklist is going to exist it has to be confidential: a list of Internet links is actions, not words. The links can be emailed across the world in seconds, and their targets downloaded in minutes. Plus, publishing the list might itself be a crime: under UK law it is illegal to take, make, distribute, show, or possess indecent images of children, and that includes accessing such images.

At the control end of the spectrum, the IWF is probably too limited. There have been calls for it to add hate speech and racial abuse to its mandate, calls that as far as we know it has so far largely resisted. Pornography involving children - or, in the IWF's preferred terminology, "child sexual abuse images" - is the one thing that most people can agree on.

When the furor dies down and people can consider the matter rationally, I think there's no chance that the IWF will be disbanded. The compromise is too convenient for politicians, ISPs, and law enforcement. But some things could usefully change. Here's my laundry list.

First, this is the first mistake that's come to light in the 12 years of the IWF's existence. The way it was caught should concern us: Wikipedia's popularity and technical incompatibilities between the way Wikipedia protects itself from spam edits and the way UK ISPs have implemented the block list. Other false positives may not be so lucky. The IWF has been audited twice in 12 years; this should be done more frequently and the results published.

The IWF board should be rebalanced to include at least one more free speech advocate and a representative of consumer interests. Currently, it is heavily overbalanced in the direction of law enforcement and child protection representatives.

There should be judicial review and/or oversight of the IWF. In other areas of censorship, it's judges who make the call.

The IWF's personnel should have an infusion of common sense.


October 3, 2008

Deprave and corrupt

It's one of the curiosities of being a free speech advocate that you find yourself defending people for saying things you'd never say yourself.

I noticed this last week when a friend, after delivering an impassioned defense of the right of bloggers to blog about the world around them - say, recounting the Nazi costumes people were wearing to the across-the-street neighbor's party last weekend, or detailing the purchases your friend made in the drugstore - turned around and said she didn't know why she was defending it because she wouldn't actually put things like that in her blog. (Unless, I suppose, her neighbor was John McCain.)

Probably most bloggers have struggled at one point or another with the collision these tell-the-world-your-private-thoughts technologies create between freedom of speech and privacy. Usually, though, invading your own privacy is reasonably safe, even if that invasion takes the form of revealing your innermost fantasies. Yes, there's a lot of personal information in them thar hills, and the enterprising data miner could certainly find out a lot about me by going through my 17-year online history via Google searches and intelligent matching. But that's nothing to the situation Newcastle civil servant Darryn Walker finds himself in after allegedly posting a 12-page kidnap, torture, and murder fantasy about the pop group Girls Aloud.

As unwise postings go, this one sounds like a real winner. It was (reports say) on a porn site; it named a real pop group (making it likely to pop up in searches by the group's fans); and identified as the author was a real, findable person - a civil servant, no less. A member of the public reported the story to the Internet Watch Foundation, who reported it to the police, who arrested Walker under the Obscene Publications Act.

The IWF's mission in life is to get illegal content off the Net. To this end, it operates a public hotline to which anyone can report any material they think might be illegal. The IWF's staff sift through the reports - 31,776 in 2006, the last year for which their Web site shows statistics - and determine whether the material is "potentially illegal". If it is, the IWF reports it to the police and also recommends to the many ISPs who subscribe to its service that the material be removed from their servers. The IWF so far has focused on clearly illegal material, largely pornographic images, both photographic and composited, of children. Since 2003, less than 1 percent of illegal images involving children has been hosted in the UK.
As a cloistered folksinger I had never heard of the very successful group Girls Aloud; apparently they were created like synthetic gemstones in 2002 by the TV show Popstars: the Rivals. According to their Wikipedia entries, they're aged 22 to 26 - hardly children, no matter how unpleasant it is to be the heroines of such a violent fantasy.

So the case poses the question: is posting such a story illegal? That is, in the words of the Obscene Publications Act, is it likely to "deprave and corrupt"? And does it matter that the site to which it was posted is not based in the UK?

It is now several decades since any text work was prosecuted under the Obscene Publications Act, and much longer since any such prosecution succeeded. The last such court case, the 1976 prosecution against the publishers of Inside Linda Lovelace, apparently left the Metropolitan Police believing they couldn't win. In 1977, a committee recommended excluding novels from the Act. Novels, not blog postings.

Succeeding in this case would therefore potentially extend the IWF's - and the Obscene Publications Unit's - remit by creating a new and extremely large class of illegal material. The IWF prefers to use the term "child abuse images" rather than "child pornography"; in the case of actual photographs of real incidents this is clearly correct. The argument for outlawing composited or wholly created images as well as photographs of actual children is that pedophiles can use them to "groom" their targets - that is, to encourage their participation in child abuse by convincing them that these are activities that other children have engaged in and showing them how. Outlawing text descriptions of real events could block child abuse victims from publishing their own personal stories; outlawing fiction, however disgusting, seems a wholly ineffectual way of preventing child abuse. Bad things happen to good fictional characters all the time.

So, as a human being I have to say that I not only wouldn't write this piece, I don't even want to have to read it. But as a free speech advocate I also have to say that the money spent tracking down and prosecuting its writer would have been more effectively spent on...well, almost anything. The one thing the situation has done is widely publicize a story that otherwise hardly anyone knew existed. Suppressing material just isn't as easy as it used to be when all you had to do was tell the publisher to get it off the shelves.

Of course, for Walker none of this matters. The most likely outcome for him in today's environment is a ruined life.



July 27, 2007

There ain't no such thing as a free Benidorm

This has been the week for reminders that the border between real life and cyberspace is a permeable blood-brain barrier.

On Wednesday, Linden Labs announced that it was banning gambling in Second Life. The resentment expressed by some SL residents is understandable but naive. We're not at the beginning of the online world any more; Second Life is going through the same reformation to take account of national laws as Usenet and the Web did before it.

Second, this week MySpace deleted the profiles of 29,000 American users identified as sex offenders. That sounds like a lot, but it's a tiny percentage of MySpace's 180 million profiles. None of them, be it noted, are Canadian.

There's no question that gambling in Second Life spills over into the real world. Linden dollars, the currency used in-world, have active exchange rates, like any other currency, currently running about L$270 to the US dollar. (When I was writing about a virtual technology show, one of my interviewees was horrified that my avatar didn't have any distinctive clothing; she was and is dressed in the free outfit you are issued when you join. He insisted on giving me L$1,000 to take her shopping. I solemnly reported the incident to my commissioning editor, who felt this wasn't sufficiently corrupt to worry about: US$3.75! In-world, however, that could buy her several cars.) Therefore: the fact that the wagering takes place online in a simulated casino with pretty animated decorations changes nothing. There is no meaningful difference between craps on an island in Second Life and poker on an official Web-based betting site. If both sites offer betting on real-life sporting events, there's even less difference.

But the Web site will, these days, have gone through considerable time and expense to set up its business. Gaming, even outside the US, is quite difficult to get into: licenses are hard to get, and without one banks won't touch you. Compared to that, the $3,800 and 12 to 14 hours a day Brighton's Anthony Smith told Information Week he'd invested in building his SL Casino World is risibly small. You have to conclude that there are only two possibilities. Either Smith knew nothing about the gaming business - had he known anything, he'd have known that the US has repeatedly cracked down on online gambling over the last ten years, that ultimately US companies will be forced to decide to live within US law, and how hard and how expensive it is to set up an online gambling operation even in Europe. Or he did know all those things and thought he'd found a loophole he could exploit to avoid all the red tape and regulation and build a gaming business on the cheap.

I have no personal interest in gaming; risking real money on the chance draw of a card or throw of dice seems to me a ridiculous waste of the time it took to earn it. But any time you have a service that involves real money, whether that service is selling an experience (gaming), a service, or a retail product, when the money you handle reaches a certain amount governments are going to be interested. Not only that, but people want them involved; people want protection from rip-off artists.

The MySpace decision, however, is completely different. Child abuse is, rightly, illegal everywhere. Child pornography is, more controversially, illegal just about everywhere. But I am not aware of any laws that ban sex offenders from using Web sites, even if those Web sites are social networks. Of course, in the moral panic following the MySpace announcement, someone is proposing such a law. The MySpace announcement sounds more like corporate fear (since the site is now owned by News International) than rational response. There is a legitimate subject for public and legislative debate here: how much do we want to cut convicted sex offenders out of normal social interaction? And a question for scientists: will greater isolation and alienation be effective strategies to keep them from reoffending? And, I suppose, a question for database experts: how likely is it that those 29,000 profiles all belonged to correctly identified, previously convicted sex offenders? But those questions have not been discussed. Still, this problem, at least in regards to MySpace, may solve itself: if parents become better able to track their kids' MySpace activities, all but the youngest kids will surely abandon it in favour of sites that afford them greater latitude and privacy.

A dozen years ago, John Perry Barlow (in)famously argued that national governments had no place in cyberspace. It was the most hyperbolic demonstration of what I call the "Benidorm syndrome": every summer thousands of holidaymakers descend on Benidorm, in Spain, and behave in outrageous and sometimes lawless ways that they would never dare indulge in at home in the belief that since they are far away from their normal lives there are no consequences. (Rinse and repeat for many other tourist locations worldwide, I'm sure.) It seems to me only logical that existing laws apply to behaviour in cyberspace. What we have to guard against is deforming cyberspace to conform to laws that don't exist.



June 15, 2007

Six degrees of defamation

We used to speculate about the future of free speech on the Internet if every country got to impose its own set of cultural quirks and censorship dreams on the Net. The lowest common denominator would win – probably Singapore.

We forgot Canada. Michael Geist, the Canada Research Chair of Internet and E-Commerce Law at the University of Ottawa, is being sued for defamation by Wayne Crookes, a Vancouver businessman (it says here). You might think that Geist, who doubles as a columnist for the Toronto Star (so enlightened, a newspaper with a technology law column!), had slipped up and said something unfortunate in one of his public pronouncements. But no. Geist is part of an apparently unlimited number of targets that have linked to other sites that have linked to sites that allegedly contained defamatory postings.

In Geist's words on his blog at the end of May, "I'm reportedly being sued for maintaining a blogroll that links to a site that links to a site that contains some allegedly defamatory third party comments." (Geist has since been served.)
Crookes is also suing Yahoo!, MySpace, and Wikipedia. (If you followed the link to the Wikipedia stub identifying Wayne Crookes, now you know why it's so short. Wikipedia's own logs, searchable via Google, show that it replaced the previous entry.) Plus P2Pnet, OpenPolitics.ca, DomainsByProxy, and Google. In fact, it's arguable that if Crookes isn't suing you, your Net presence is so insignificant that you should put your head in a bucket.

One of the things about a very young medium – as the Net still is – is that the legal precedents about how it operates may be set by otherwise obscure individuals. In Britain, one of the key cases determining the liability of ISPs for material they distribute was 1999's Laurence Godfrey vs Demon Internet. Godfrey was, or is, an otherwise unremarkable British physics lecturer working in Canada until he discovered Usenet; his claim to fame (see for example the Net.Legends FAQ) is a series of libel suits he launched to protect his reputation after a public dispute whose details probably few remember or understand. In 2000 Demon settled the case, paying Godfrey £15,000 and legal costs. And thus were today's notice and takedown rules forged.

The truly noticeable thing about Godfrey's case against Demon was that Demon was not Godfrey's ISP, nor was it the ISP used by the poster whose 1997 contributions to soc.culture.thai were at issue. Demon was merely the largest ISP in Britain that carried the posting, along with the rest of the newsgroup, on its servers. The case therefore is one of a string of cases that loosely circled a single issue: the liability of service providers for the material they host. US courts decided in 1991, in Cubby vs Compuserve, that an online service provider was more like a bookstore than a publisher. But under the Digital Millennium Copyright Act it has become alarmingly easy to frighten individuals and service providers into taking down material based on an official-looking lawyer's letter. (The latest target, apparently, is guitar tablature, which, speaking as a musician myself, I think is shameful.)

But the more important underlying thread is the attempt to keep widening the circle of liability. In Cubby, at least the material at issue appeared on the Journalism Forum which, though independently operated, was part of CompuServe's service. That particular judgement would not have helped any British service provider: in Britain, bookstores, as well as publishers, can be held responsible for libels that appear in the books they sell, a fact that didn't help Demon in the Godfrey case.

In the US, the next step was the 2600 DeCSS case (formally known as Universal City vs Reimerdes), which covered not only posting copies of the DVD-decrypting software but linking to sites that had it available. This, of course, was a copyright infringement case, not a libel case; with respect to libel the relevant law seems to be, of all things, the 1996 Communications Decency Act, which allocated sole responsibility to the original author. Google itself has already won at least one lawsuit over including allegedly defamatory material in its search results.

But legally Canada is more like Britain than like the US, so the notion of making service providers responsible may be a more comfortable one. In his column on the subject, Geist argues that if Crookes' suits are successful Canadian free speech will be severely curtailed. Who would dare run a wiki or allow comments on their blog if they are to be held to a standard that makes them liable for everything posted there? Who would even dare put a link to a third-party site on a Web site or in a blogroll if they are to be held liable for all the content not only on that site but on all sites that site links to? Especially since Crookes's claim against Wikimedia is not that the site failed to remove the offending articles when asked, but that the site failed to monitor itself proactively to ensure that the statements did not reappear.

The entire country may have to emigrate virtually. Are you now, or have you ever been, Canadian?


November 24, 2006

The Great Firewall of Britain

We may joke about the "Great Firewall of China", but by the end of 2007 content blocking will be a fact of Internet life in the UK. In June, Vernon Coaker, Parliamentary Under-Secretary for the Home Department, told Parliament, "I have recently set the UK Internet industry a target to ensure that by the end of 2007 all Internet service providers offering broadband Internet connectivity to the UK public prevent their customers from accessing those Web sites." By "those", he means Web sites carrying pornographic images of children.

Coaker went on to say that by the end of 2006 he expects 90 percent of ISPs to have blocked "access to sites abroad", and that, "We believe that working with the industry offers us the best way forward, but we will keep that under review if it looks likely that the targets will not be met."

The two logical next questions: How? And How much?

Like a lot of places, the UK has two major kinds of broadband access: cable and DSL. DSL is predominantly provided by BT, either retail directly to customers or wholesale to smaller ISPs. Since 2004, BT's retail service has been filtered by its Cleanfeed system, which last February the company reported was blocking about 35,000 attempts to access child pornography sites per day. The list of sites to block comes from the Internet Watch Foundation, and is compiled from reports submitted by the public. ISPs pay the IWF £5,000 a year to be supplied with the list – insignificant to a company like BT but not necessarily to a smaller one. But the raw cost of the IWF list is insignificant compared to the cost of reengineering a network to do content blocking.

How much will it cost for the entire industry?

Malcolm Hutty, head of public affairs at Linx, says he can't even begin to come up with a number. BT, he thinks, spent something like £1 million in creating and deploying Cleanfeed – half on original research and development, half on deployment. Most of the first half of that would not now be necessary for an ISP trying to decide how to proceed, since a lot more is known now than back in 2003.

Although it might seem logical that Cleanfeed would be available to any DSL provider reselling BT's wholesale product, that's not the case.

"You can be buying all sorts of different products to be able to provide DSL service," he says. A DSL provider might simply rebrand BT's own service – or it might only be paying BT to use the line from your home to the exchange. "You have to be pretty close to the first extreme before BT Cleanfeed can work for you." So adopting Cleanfeed might mean reengineering your entire product.

In the cable business, things are a bit different. There, an operator like ntl or Telewest owns the entire network, including the fibre to each home. If you're a cable company that implemented proxy caching in the days when bandwidth was expensive and caching was fashionable, the technology you built then will make it cheap to do content blocking. According to Hutty, ntl is in this category – but its Telewest and DSL businesses are not.

So the expense to a particular operator varies for all sorts of reasons: the complexity of the network, how it was built, what technologies it's built on. This mandate, therefore, has no information behind it as to how much it might cost, or the impact it might have on an industry that other sectors of government regard as vital for Britain's economic future.

The How question is just as complicated.

Cleanfeed itself is insecure (PDF), as Cambridge researcher Richard Clayton has recently discovered. Cleanfeed was intended to improve on previous blocking technologies by being both accurate and inexpensive. However, Clayton has found that not only can the system be circumvented but it also can be used as an "oracle to efficiently locate illegal websites".
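Clayton's paper describes Cleanfeed as a two-stage hybrid: a cheap first stage diverts only traffic addressed to suspect IP addresses into a web proxy, and the proxy then blocks exact URL matches while passing everything else. A hedged sketch of that general design, with invented names and documentation-reserved example addresses:

```python
# Illustrative two-stage hybrid filter in the style Clayton describes.
# All names and addresses here are invented for illustration.

BLOCKED_URLS = {"http://198.51.100.7/bad/page.html"}  # the secret blocklist
SUSPECT_IPS = {"198.51.100.7"}  # every IP that appears in a blocked URL

def stage_one(dest_ip: str) -> str:
    """Cheap router-level test: divert only suspect IPs to the proxy."""
    return "proxy" if dest_ip in SUSPECT_IPS else "route normally"

def stage_two(url: str) -> str:
    """Inside the proxy: block exact URL matches, fetch everything else."""
    return "blocked" if url in BLOCKED_URLS else "fetched"
```

The point of the split is economy and precision: only a sliver of traffic is proxied, and an innocent page on a shared suspect server still gets fetched. But the detectable difference between diverted and undiverted traffic is exactly the kind of signal that, as Clayton found, lets the system be used as an oracle for locating the sites it blocks.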

Content blocking is going to be like every other security system: it must be constantly monitored and updated as new information and attacks become known or are developed. You cannot, as Clayton says, "fit and forget".

The other problem in all this is the role of the IWF. It was set up in 1996 as a way for the industry to regulate itself; the meeting where it was proposed came after threats of external regulation. If all ISPs are required to implement content blocking, and all content blocking is based on the IWF's list, the IWF will have considerable power to decide what content should be blocked. So far, the IWF has done a respectable job of sticking to clearly illegal pornography involving children. But its ten years have been marked by occasional suggestions that it should broaden its remit to include hate speech and even copyright infringement. Proposals are circulating now that the organisation should become an independent regulator rather than an industry-owned self-regulator. If the IWF is not accountable to the industry it regulates, if it's not governed by Parliamentary legislation, and if it's not elected… then we will have handed control of the British Internet over to a small group of people with no accountability and no transparency. That sounds almost Chinese, doesn't it?

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

October 27, 2006

Crossfire

The Sky News host looked horrified. How, he asked, could anyone claim that getting rid of child pornography online had anything to do with freedom of speech? Surely, he added, anyone would know child pornography when they see it. "Of course," he added, "I've never seen any…"

We are all against child abuse.

The occasion for this discussion, which also included John Carr, from the NCH: ten years ago this week, a bunch of us sat in a room somewhere in Central London while Peter Dawe explained his back-of-the-envelope scheme for combating child pornography online. Most of us thought it was a bad idea, and against Net freedoms, but the then very real threat of regulation was worse. Carr and I were both there, arguing on opposite sides.

In honor of the tenth anniversary, the Internet Watch Foundation released a bunch of statistics. It has removed 30,000 Web sites from the British Internet, and seen Britain's share of such sites shrink from 18 percent to 0.2 percent. A third of the Web sites reported to the IWF are found to be "potentially illegal". (These get forwarded to the police.)

The IWF includes a note at the end of the press release to the effect that it doesn't like the term "child pornography" because it "can act to legitimise images which are not pornography." IWF prefers the term "child abuse", because, it says, "they are permanent records of children being sexually abused." But that isn't necessarily so: digital composites do not document anything at all. No one, as far as I'm aware, has done – or legally been able to do – a study of the images of this type that circulate. It would be valuable to know what percentage are images from known cases, for example, or how many can be identified as not children at all, either because they are clearly digital composites or because they use young-seeming adults.

I am willing to stipulate that the images the IWF inspects are, as I have been told they are, horrendously upsetting to look at. But we will never really know; there can be no transparency in this situation. I am also willing to stipulate that in spite of our fears in 1996, other than a few wobbles of purpose, the IWF seems to have stuck to its very narrow remit. It has not, as it has a couple of times suggested it might, branched out into hate speech and copyright violations. It has stuck, basically, to the one thing most people agree is wrong, though of course there is no external review of what's being removed.

Carr noted two things I didn't know. First, that the US is now the world leader in child pornography sites. Second, that Congress is now considering initiatives to change that. Until now, there's been this little problem of the First Amendment which, Carr said, is sacred to Americans. This was the moment when our host looked so horrified.

I'm not sure the First Amendment, which includes freedoms of assembly, religion, and the press as well as free speech, is as sacred as it used to be. For one thing, freedom of speech is one of those things that everyone wants for themselves but not so much for other people. For another, it's commonly misunderstood. The First Amendment doesn't guarantee free speech of all types and in all circumstances. What it actually says is that "Congress shall make no law…abridging the freedom of speech." It may well be that had the Founding Fathers lived in a time when giant corporations were as much of a threat as governments, they would have drafted that differently. But the Constitution, like the Bible or Shakespeare, lives on interpretation and textual analysis. What the First Amendment bans, therefore, is legislation that limits free speech.

Which is why, when I went to look up the Congressional moves mentioned by Carr (which I've been unable to find and which even he suggested might be just midterm election posturing), I discovered that this week the ACLU is in court with the government over the 1998 Child Online Protection Act. In its action, ACLU is representing a host of well-respected plaintiffs, including Salon, Dictionary.com, and Powell's Bookstores.

The point of the ACLU's action is not to defend child abuse – we are all against child abuse. The point is that it is very, very difficult to draft a law that only, narrowly, bans child pornography and therefore could pass the First Amendment test in court. And COPA didn't manage it; instead, it banned material that might be "harmful to minors", whether or not that material might be valuable to adults. Clinton, who signed it into law in 1998, ought to be ashamed of himself. But I suppose politically it's a valid strategy: win votes for yourself and your party by creating a law that looks like you're doing something to protect children; let the ACLU be the bad guy later by getting it overturned.

So what I should have said to our host is this: it's freedom of speech that's allowing us to have this discussion. But freedom of speech does not mean condoning child abuse.


October 20, 2006

Spam, spam, spam, and spam

Illinois is a fine state. It is the Land of Lincoln. It has been home to such well-known Americans as Oprah Winfrey, Roger Ebert, and Ronald Reagan. It has a baseball team so famous that even I know it's called the Chicago Cubs. The philosopher John Dewey made his name in Illinois. So did the famous pro-evolution lawyer Clarence Darrow, Mormon church founder Joseph Smith, the nuclear physicist Enrico Fermi, transistor co-inventor William Shockley, and Frank Lloyd Wright.

I say all this because I don't want anyone to think I don't like or respect Illinois or the intelligence and honor of its judges, including Charles Kocoras, who awarded $11.7 million in damages to e360Insight, a company branded a spammer by the Spamhaus Project.

The story has been percolating for a while now, but is reasonably simple. e360Insight says it's not a bad spammer guy but a good opt-in marketing guy; Spamhaus first said the Illinois court didn't have jurisdiction over a British company with no offices, staff, or operations in the US, then decided to appeal against the court's $11.7 million judgement. e360Insight filed a motion asking the court to have ICANN and/or Spamhaus's domain registrar, the Canadian company Tucows, remove Spamhaus's domain from the Net. The judge refused to grant this request, partly because doing so would cut off Spamhaus's lawful activities, not just those in contravention of the order he issued against Spamhaus. And a good time is being had by all the lawyers.

The case raises so many problems you almost don't know where to start. For one thing, there's the arms race that is spam and anti-spam. This lawsuit escalates it, in that if you can't get rid of an anti-spammer through DDoS attacks, well, hey, bankrupt them through lawsuits.

Spam, as we know, is a terrible, intractable problem that has broken email, and is trying to break blogs, instant messaging, online chat, and, soon, VOIP. (The net.wars blog, this week, has had hundreds of spam comments, all appearing to come from various Gmail addresses, all landing in my inbox, breaking both blogs and email in one easy, low-cost plan.) The breakage takes two forms. One is the spam itself – up to 90 percent of all email. But the second is the steps people take to stop it. No one can use email with any certainty now.

Some have argued that real-time blacklists are censorship. I don't think it's fair to invoke the specter of Joseph McCarthy. For one thing, using these blacklists is voluntary. No one is forced to subscribe, not even free Webmail users. That single fact ought to be the biggest protection against abuse. For another thing, spam email in the volumes it's now going out is effectively censorship in itself: it fills email boxes, often obscuring and sometimes blocking entirely wanted email. The fact that most of it either is a scam or advertises something illegal is irrelevant; what defines spam, I have long argued, is the behavior that produces it. I have also argued that the most effective way to put spammers out of business is to lean on the credit card companies to pull their authorisations.

Mail servers are private property; no one has the automatic right to expect mine to receive unwanted email just as I am not obliged to speak to a telemarketer who phones during dinner.

That does not mean all spambusters are perfect. Spamhaus provides a valuable public service. But not all anti-spammers are sane; in 2004 journalist Brian McWilliams made a reasonable case in his book Spam Kings that some anti-spammers can be as obsessive as the spammers they chase.

The question that's dominated a lot of the Spamhaus coverage is whether an Illinois court has jurisdiction over a UK-based company with no offices or staff in the US. In the increasingly connected world we live in, there are going to be a lot of these jurisdictional questions. The first one I remember – the 1996 case United States vs. Thomas – came down in favor of the notion that Tennessee could impose its community decency standards on a bulletin board system in California. It may be regrettable – but consumers are eager enough for their courts to have jurisdiction in case of fraud. Spamhaus is arguably as much in business in the US as any foreign organisation whose products are bought or used in the US. Ultimately, "Come here and say that" just isn't much of a legal case.

The really tricky and disturbing question is: how should blacklists operate in future? Publicly listing the spammers whose mail is being blocked is an important – even vital – way of keeping blacklists honest. If you know what's being blocked and can take steps to correct it, it's not censorship. But publishing those lists makes legal action against spam blockers of all types – blacklists, filtering software, you name it – easier.

Spammers themselves, however, should not rejoice if Spamhaus goes down. Spam has broken email, that's not news. But if Spamhaus goes and we actually receive all the spam it's been weeding out for us – the flood will be so great that spam will finally break spam itself.


June 2, 2006

Boob job

Back in about 1978, the wonderful actress Diana Rigg did a full half-hour with the American talk show interviewer Dick Cavett, during which she told the story of the Avengers episode in which she had to do a belly dance (Honey for the Prince). The American network executives reacted with some of the horror with which Oscar Wilde's Lady Bracknell said, "A handbag?"

The problem was navels. You can't, the network executives told Diana Rigg, show your navel on television. They insisted she wear a jewel to cover up her navel, and it had to be glued in place, and the glue didn't work…but I digress. "Where did that come from, I wonder?" Cavett asked, speculating that somewhere back in the mists of time some executive had decreed, "I don't want navels!" I'm working from memory here, but I think Rigg replied, "I think it's a lot of men who don't want to know where they come from."

Apparently even if the navel reference is just a black dot: the press barons who ran the comic strip Beetle Bailey kept erasing the navels off Miss Buxley, the blonde, bikini-clad secretary whose job it was to be ogled by the general.

Eventually, the navelphobics lost. Enter their descendants, the nipplephobics (there's apparently an entire department on Desperate Housewives whose job it is to blur the actresses' nipples), some of whom are running things at LiveJournal, which recently declared some kind of war on icons depicting breastfeeding mothers. Even when those mothers are in medieval paintings.

That is, of course, a vast over-simplification. According to a comment in Teresa Nielsen Hayden's blog by a member of LiveJournal's abuse team, in fact no rules have changed. LiveJournal always banned nipples (and areolae) in default icons in its terms and conditions. All that happened recently was that the site altered its FAQ to reflect that ban – which is when people noticed. That's online community for you. Things are going fine until suddenly someone reads an FAQ, at which point they behave as though you've just shot their mother.

What is a default icon? Well you may ask. When you search LiveJournal you get pages showing user profiles. Each of these has a small, square picture depicting…anything the user happens to like. One of my friends has a picture of something that looks like a ferret holding a rifle. Another has a picture of herself piloting a boat. Many users have a clutch of these pictures, and attach one to every blog entry.

The default icon is the picture that by default shows up on one of those profile pages. Banning nipples from default icons in no way stops users from putting up pictures of nipples with their postings, or linking to pictures of nipples, or talking about nipples, or even having nipples in real life. The idea, I guess, is that people should be able to conduct searches in the complete confidence that they will not see anything that offends them. Like nipples. It's the same reasoning by which the Federal Communications Commission bans terrestrial broadcast television from showing nudity, pornography, extreme violence, and swearing: someone could turn on their TV and accidentally see something that offends them. We can't have that.

Giggle.

So some people got cease and desist notices from the LiveJournal abuse team asking them to remove their lactating mother default icons. They took umbrage. There was discussion. And now there's going to be a protest: on 6/6/6, that is, Tuesday, when an indeterminate number of people are going to delete their LiveJournals to protest this discrimination against nipples, or at least against the ones that are in babies' mouths, and a fine, old time is going to be had by all. There is a subset of protesters who believe they are striking a blow for breastfeeding and against bottle feeding, but this is clearly a confusion between cyberspace and real life and beyond the reach of LiveJournal rules. They plan to restore their LiveJournals 24 hours later, since deletions are not permanent for 30 days.

My guess is that the number of protesters won't even make a dent in LiveJournal's 10 million bloggers. But the complaint isn't, ultimately, really trivial: the underlying reality is that LiveJournal isn't a small, open-source cooperative whose rules and standards are formed by the community any more. It's a business with a venture capital-funded owner that is trying to figure out how to "monetize" what it's bought. There will be many more disputes like this as the business develops, because the dispute is really about who owns LiveJournal: the users or the business. Every online community goes through this, and some even survive. Groups who really can't stand it break off and form their own spaces, such as free-association.net, which broke off from The Tribe when that service abruptly changed its terms and conditions.

One of the big adjustments the US is going through is that sometime in the last century it stopped being possible to deal with disagreements with your neighbors by moving 20 miles up the road and starting your own new town. But cyberspace is infinite. We can do the town right here. Posters, unite! You have nothing to lose but your nipples.
