
July 22, 2022

Parting gifts

All national constitutions are written to a threat model, one that becomes clearly visible when you compare what they say to how they are put into practice. Ireland, for example, has the same right to freedom of religion embedded in its constitution as the US Bill of Rights does. Both were reactions to English abuse, yet they chose different remedies. The nascent US's threat model was a power-abusing king, and that focus coupled freedom of religion with a bar on the establishment of a state religion. Although the Founding Fathers were themselves Protestants and likely imagined a US filled with people in their likeness, their threat model was not other beliefs or non-belief but the creation of a supreme superpower derived from merging state and church. In Ireland, for decades, "freedom of religion" meant "freedom to be Catholic". Campaigners for the separation of church and state in 1980s Ireland, when I lived there, advocated fortifying the constitutional guarantee with laws that would make it true in practice for everyone from atheists to evangelical Christians.

England, famously, has no written constitution to scrutinize for such basic principles. Instead, its present Parliamentary system has survived for centuries under a "gentlemen's agreement" - a term that in our modern era translates to "the good chaps rule of government". Many feel Boris Johnson has exposed the limitations of this approach. Yet it's not clear that a written constitution would have prevented this: a significant lesson of Donald Trump's US presidency is how many of the systems protecting American democracy rely on "unwritten norms" - the "gentlemen's agreement" under yet another name.

It turns out that tinkering with even an unwritten constitution is tricky. One such attempt took place in 2011, with the passage of the Fixed-term Parliaments Act. Without the act, a general election must be held at least once every five years, but may be called earlier if the prime minister advises the monarch to do so; one may also be called at any time following a vote of no confidence in the government. Because past prime ministers were felt to have abused their prerogative by timing elections for their political benefit, the act removed it in favor of a set five-year interval, unless two-thirds of MPs voted for an early election or the government lost a confidence vote. There were general elections in 2010 and in 2015, the first held under the act. The next should have been in 2020. Instead...

No one counted on the 2016 vote to leave the EU or David Cameron's next-day resignation. In 2017, Theresa May, trying to negotiate a deal with an increasingly divided Parliament and thinking an election would win her a more workable majority and a mandate, got the necessary super-majority to call a snap election. Her reward was a hung Parliament; she spent the rest of her time in office hamstrung by having to depend on the good will of Northern Ireland's Democratic Unionist Party to get anything done. Under the act, the next election should have been 2022. Instead...

In 2019, a Conservative party leadership contest replaced May with Boris Johnson, who, after several attempts to win the necessary two-thirds majority were blocked by opposition MPs determined to stop the most reckless Brexit possibilities, pushed through a one-off act requiring only a simple majority and called a snap election, winning a majority of 80 seats. The next election should be in 2024. Instead...

They repealed the act in March 2022. As we were. Now, Johnson is going, leaving both party and country in disarray. An election in 2023 would be no surprise.

Watching the FTPA in action led me to this conclusion: British democracy is like a live frog. When you pin down one bit of it, as the FTPA did, it throws the rest into distortion and dysfunction. The obvious corollary is that American democracy is a *dead* frog that is being constantly dissected to understand how it works. The disadvantage to a written constitution is that some parts will always age badly. The advantage is clarity of expectations. Yet both systems have enabled someone who does not care about norms to leave behind a generation's worth of continuing damage.

All this is a long preamble to saying that last year's concerns about the direction of the UK's computers-freedom-privacy travel have not abated. In this last week before Parliament rose for the summer, while the leadership contest and the heat saturated the news, Johnson's government introduced the Data Protection and Digital Information bill, which will undermine the rights granted by 25 years of data protection law. The widely disliked Online Safety bill was postponed until September. The final two leadership candidates are, to varying degrees, determined to expunge EU law, revamp the Human Rights Act, and withdraw from the European Convention on Human Rights. In addition, lawyer Gina Miller warns, the Northern Ireland Protocol bill expands executive power by giving ministers Henry VIII powers to make changes without Parliamentary consent: "This government of Brexiteers are eroding our sovereignty, our constitution, and our ability to hold the government to account."

The British convention is that "government" is collective: the government *are*. Trump wanted to be a king; Johnson wishes to be a president. The coming months will require us to ensure that his replacement knows their place.


Illustrations: Final leadership candidates Rishi Sunak and Liz Truss in debate on ITV.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 15, 2022

Online harms

An unexpected bonus of the gradual-then-sudden disappearance of Boris Johnson's government, followed by his own resignation, is that the Online Safety bill is being delayed until after Parliament's September return with a new prime minister and, presumably, cabinet.

This is a bill almost no one likes: child safety campaigners think it doesn't go far enough; digital and human rights campaigners - Big Brother Watch, Article 19, Electronic Frontier Foundation, Open Rights Group, Liberty, a coalition of 16 organizations (PDF) - oppose it because it threatens freedom of expression and privacy while failing to tackle genuine harms such as the platforms' business model; and technical and legal folks object because it's largely unworkable.

The DCMS Parliamentary committee sees it as wrongly conceived. The UK Independent Reviewer of Terrorism Legislation, Jonathan Hall QC, says it's muddled and confused. Index on Censorship calls it fundamentally broken, and The Economist says it should be scrapped. The minister whose job it has been to defend it, Nadine Dorries (C-Mid Bedfordshire), remains in place at the Department for Digital, Culture, Media, and Sport, but her insistence that resigning-in-disgrace Johnson was brought down by a coup probably won't do her any favors in the incoming everything-that-goes-wrong-was-Johnson's-fault era.

In Wednesday's Parliamentary debate on the bill, the most interesting speaker was Kirsty Blackman (SNP-Aberdeen North), whose Internet usage began 30 years ago, when she was younger than her children are now. Among her passionate pleas that her children should be protected from some of the high-risk encounters she experienced was this: "Every person, nearly, that I have encountered talking about this bill who's had any say over it, who continues to have any say, doesn't understand how children actually use the Internet." She called this the bill's biggest failing. "They don't understand the massive benefits of the Internet to children."

This point has long been stressed by academic researchers Sonia Livingstone and Andy Phippen, both of whom actually do talk to children. "If the only horse in town is the Online Safety bill, nothing's going to change," Phippen said at last week's Gikii, noting that Dorries' recent cringeworthy TikTok "rap" promoting the bill focused on platform liability. "The liability can't be only on one stakeholder." His suggestion: a multi-pronged harm reduction approach to online safety.

UK politicians have publicly wished to make "Britain the safest place in the world to be online" all the way back to Tony Blair's 1997-2007 government. It's a meaningless phrase. Online safety - however you define "safety" - is like public health; you need it everywhere to have it anywhere.

Along those lines, "Where were the regulators?" Paul Krugman asked in the New York Times this week, as the cryptocurrency crash continues to unfold. The cryptocurrency market, now down to $1 trillion from its peak of $3 trillion, is recapitulating all the reasons why we regulate the financial sector. Given the ongoing collapses, it may yet fully vaporize. Krugman's take: "It evolved into a sort of postmodern pyramid scheme". The crash, he suggests, may provide the last, best opportunity to regulate it.

The wild rise of "crypto" - and of the now-defunct Theranos - was partly fueled by trusted individuals who boosted the apparent trustworthiness of dubious claims. The same, we learned this week, was true of Uber from 2014 to 2017. Based on the Uber files - 124,000 documents provided by whistleblower Mark MacGann, a lobbyist for Uber from 2014 to 2016 - the Guardian exposes the falsity of Uber's claims that its gig economy jobs were good for drivers.

The most startling story - which transport industry expert Hubert Horan had already published in 2019 - is the news that the company paid academic economists six-figure sums to produce reports it could use to lobby governments to change the laws it disliked. Other things we knew about - for example, Greyball, the company's technology for denying rides to regulators and police so they couldn't document Uber's regulatory violations, and Uber staff's abuse of customer data - are now shown to have been more widely used than we knew. Further appalling behavior, such as that of former CEO Travis Kalanick, who was ousted in 2017, has been thoroughly documented in Mike Isaac's 2019 book, Super Pumped, and the 2022 TV series based on it.

But those scandals - and Thursday's revelation that 559 passengers are suing the company for failing to protect them from rape and assault by drivers - aren't why Horan described Uber as a regulatory failure in 2019. For years, he has been indefatigably charting Uber's eternal unprofitability. In his latest, he notes that Uber has lost over $20 billion since 2015 while cutting driver compensation by 40%. The company's share price today is less than half its 2019 IPO price of $45 - and a third of its 2021 peak of $60. The "misleading investors" kind of regulatory failure.

So, returning to the Online Safety bill, if you undermine existing rights and increase the large platforms' power by devising requirements that small sites can't meet *and* do nothing to rein in the platforms' underlying business model...the regulatory failure is built in. This pause is a chance to rethink.

Illustrations: Boris Johnson on his bike (European Cyclists' Federation via Wikimedia).


June 10, 2022

Update needed

In public discussions of Internet governance, only two organizations feature much: the Internet Corporation for Assigned Names and Numbers (ICANN), founded in 1998, and the Internet Governance Forum (IGF), set up in 2005. The former performs the crucial technical role of ensuring that the domain name system - which allows humans to enter a word-like Internet address and computers to translate and route it to a numbered device - continues to function correctly. The latter...well, it hosts interesting conferences on Internet governance.
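That name-to-number step can be sketched in a couple of lines. As a minimal illustration (using "localhost", which resolves locally rather than through the global DNS hierarchy ICANN stewards, so it works without a network connection):

```python
import socket

# A person types a word-like name; the resolver returns a numeric
# address that routers can actually deliver packets to. "localhost"
# is resolved locally, so this sketch runs offline; a lookup for a
# real domain would go out to the DNS hierarchy.
address = socket.gethostbyname("localhost")
print(address)  # commonly 127.0.0.1
```

A real-world lookup works the same way from the caller's point of view; the complexity - root servers, registries, caching resolvers - is all hidden behind that one call.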

Neither is much known to average users, who would probably guess the Internet is run by one or more of the big technology companies. Yet they're the best-known of a clutch of engineering-led organizations that set standards and make decisions that affect all of us. In 2011, the Economist described the Internet as shambolically governed (yet concluded that multistakeholder "chaos" is preferable to the alternative of government control).

In a report for the Tony Blair Institute, journalist and longstanding ICANN critic Kieren McCarthy considers that much of Internet governance as currently practiced needs modernization. This is not about the application-layer debates such as content moderation and privacy that occupy the minds of rights activists and governments. Instead, McCarthy is considering the organizations that devised and manage the technical underpinnings that most people ignore. These things matter; the fact that any computer can join the Internet and set up a service without asking anyone's permission, or that a website posted in 1995 remains readable, is due to the efforts of organizations like the Internet Engineering Task Force, the Internet Architecture Board, the Internet Society, the World Wide Web Consortium (W3C), and so on. And those are just part of the constellation of governance organizations, well-known compared to the Regional Internet Registries or the tiny group of root server operators.

As unknown as these organizations are to most people (even W3C is vastly less famous than its founder, Tim Berners-Lee), they still have decisive power over the Internet's development. Shortly after February's Russian invasion, a Ukrainian minister asked ICANN to block Internet traffic to and from Russia; ICANN, prioritizing the openness, interconnectedness, and unity of the global network, correctly said no. But note: ICANN, whose last ties to the US government were severed in 2016, made its decision without consulting either governments or a United Nations committee.

McCarthy's main points: these legacy organizations do not coordinate their efforts; they lack strategy beyond maintaining and evolving the network as it stands; they are internally disorganized; and they are increasingly resistant to new ideas and new participants. They are "essential to maintaining a global, interoperable Internet" - yet McCarthy finds a growing list of increasingly contentious topics and emerging technologies that escape the current ecosystem: censorship, content moderation, AI, web3 and blockchain, privacy and data protection. If these organizations don't rise to those occasions, governments will seek to fill the gap, most likely creating a more fragmented and less functional network. Even now this happens in small ways: four years after the EU's GDPR came into force, many US media sites still block European readers rather than find a compliant way to serve us.

From the beginning, ensuring that the technical organizations remain narrowly focused has been seen as essential. See for example the critics who monitored ICANN's development during its first decade, suspicious that it might stray into enforcing government-mandated censorship.

The guiding principles of new governments are always based on a threat model. The writers of the US Constitution, for example, feared the installation of a king and takeover by a foreign country (England). Internet organizations' threat model also has two prongs: first, fragmentation, and second, takeover by governments, specifically the International Telecommunication Union, the United Nations agency that manages worldwide telecommunications and which regards itself as the Internet's natural governor. Internet pioneers still believe there could be no worse fate, citing decades of pre-Internet stagnation in the fully-controlled telephone networks.

The ITU has come sort-of-close several times: in 1997, when widespread opposition led instead to ICANN's creation; in the early 2000s, when the World Summit on the Information Society instead created the IGF; and in 2012, when a meeting to update the ITU's regulations led many, including the Trades Union Congress, to fear a coup. Currently, concern that governments will carve things up surrounds negotiations over cybersecurity.

The approach that created today's multistakeholder organizations is, however, just one of four that University of Southampton professors Wendy Hall and Kieron O'Hara examine in their 2021 book, The Four Internets, and find are now in contention. Our legacy version they dub the "open Internet", connecting it with San Francisco and libertarian ideology. The other three: the "bourgeois Brussels" Internet that the EU is trying to regulate into being with laws like the Digital Services Act, the AI Act, and the Digital Markets Act; the commercial ("DC") Internet; and the "paternalistic" Internet of countries like China and Russia, which want to ringfence what their citizens can access. Any of them, singly or jointly, could lead to the long-feared "splinternet".

McCarthy concludes that the threat now is that Internet governance as practiced to date will fail through stagnation. His proposal is to create a new oversight body which he compares to a root server that provides coordination and authoritative information. Left for another time: who? And how?




June 3, 2022

Nine meals from anarchy*

The untutored, asked how to handle a food crisis, are prone to suggest that everyone grow vegetables in their backyards. They did it in World War II!

I fell into this trap once myself, during the 2008 financial crisis, when I heard a Russian commentator explain on US talk radio that Americans could never survive because we were too soft and individualistic, whereas Russians were used to helping each other out in hard times, living together in cramped conditions, and working around shortages. Nonsense, I thought. Americans are quite capable of turning off their TVs, getting up off their couches, and doing useful stuff when they need to. Wishing for a Plan B, I thought of the huge backyard some Pennsylvania friends had, which backed onto three more similarly-sized backyards, and imagined a cooperative arrangement in which one family kept chickens and another grew some things, and a third grew some complementary things...and they swapped and so on.

My Pennsylvania friends were not impressed. "Is this a joke?"

"It's a Plan B! It's good to have a Plan B!"

It's not a Plan B.

A couple of years ago, at the annual conference convened by the Cybernetics Society, I learned it wasn't even a Plan Y.

"It's subsistence farming," Kate Cooper explained as part of her talk on food security. The grueling full-time unpredictability of that is what most of us gave up in favor of selecting items off grocery store shelves once or twice a week.

The point about subsistence farming is that it's highly unreliable, highly individual, and doesn't scale to the levels required for a modern society, still less for a densely populated modern British society that imports almost all its food. Yes, people were encouraged to grow vegetables in World War II, but although the net effect was good for morale and for helping people better understand the foods they eat, it didn't help anyone understand the food system and its scale and complexity. Basically, in terms of the problem of feeding the nation, it was a rounding error. Worth doing, but not a solution.

Cooper is the executive director of the Birmingham Food Council, a community interest company that grew out of efforts to think about the future of Birmingham. "It's our job to be an exemplar of how to think about the food system," she explained.

Two years later, with stories everywhere about escalating food prices and dangerous shortages, the interdependencies that underlie our food supply are being exposed by the intermingling of three separate crises, each of which would be bad on its own: the pandemic, Russia's invasion of Ukraine, and climate change. A source I can't recall calls this constellation a "polycrisis" - multiple simultaneous crises that interact to make them all worse. Plus, while the present government doesn't admit it, *Brexit*, by fracturing trade relationships and pushing workers out of the industry, has added substantially to the challenge of maintaining the UK's increasingly brittle, highly complex food system, which few of us understand.

As part of its research, the Council created The Game, a scenario-based role-playing game for decision makers, food sector leaders, researchers, and other policy influencers in which teams of four to six are put in charge of a city and must maintain the residents' access to enough safe and nutritious food.

I felt better about my own level of ignorance when I learned that one player's idea for combating shortages was to grow potatoes along the A38, a major route that runs from Bodmin, Cornwall, to Mansfield, Nottinghamshire. No idea of scale, you see, or of the toxins passing automobiles deposit in the soil. (To say nothing of the inefficiencies of trying to farm a plot of land that's 292 miles long and a few hundred yards wide...) Another player wanted to get the national government to send in the army. Also not helping...but they were not alone, as many players found it difficult to feed their populations. When the pandemic began forcing lockdowns and hourly changes to the food system, "nothing had surprised [the people who had played The Game]," Cooper said. Even so, the lockdowns showed the fragility of the food system and how powerless local officials are to do anything about it.

There are options at the national level. If you are lucky enough to have a government with both the resources and the will to plan for the future, you can create buffer stocks to tide you through a crisis. You need a plan to rotate and resupply, since some things (grain) store much better than others (fresh produce). Cooper has a simple rule for deciding which foodstuffs should be stored and which not: is it subject to VAT? That would lead to storing essentials - the healthy, nutritious stuff - and not candy, alcohol, caffeine, sugar, or potato chips. Cooper calls those "drug foods", and notes that over 50% of most household budgets are spent on them, that 6% of the potato crop goes to making Walkers potato chips, and that a 2012 estimate found Coca-Cola's global consumption of water was enough to meet the daily needs of more than 2 billion people.
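Cooper's rule of thumb is simple enough to sketch as a filter. The items and VAT flags below are illustrative only, not an official list (in the UK, most staple foods are zero-rated while "drug foods" such as candy and alcohol carry VAT):

```python
# A sketch of the buffer-stock heuristic: store a foodstuff only if it
# carries no VAT. These foods and flags are illustrative, not official.
foods = [
    ("wheat flour", False),   # zero-rated staple
    ("tinned beans", False),  # zero-rated staple
    ("potato chips", True),   # standard-rated "drug food"
    ("chocolate", True),      # standard-rated "drug food"
    ("whisky", True),         # standard-rated "drug food"
]

buffer_stock = [name for name, has_vat in foods if not has_vat]
print(buffer_stock)  # ['wheat flour', 'tinned beans']
```

The appeal of the rule is that the hard classification work - essential versus indulgence - has already been done, for entirely unrelated reasons, by the tax code.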

"Is this a sensible use of increasingly scarce land and water?" she asked.

Put like that, what can you say?


Illustrations: Kate Cooper. *Quote attributed to Alfred Henry Lewis, 1906.


October 29, 2021

Majority report

How do democracy and algorithmic governance live together? This was the central question of a workshop this week on computational governance. This is only partly about the Internet; new tools for governance are appearing all the time: smart contracts, for example, and AI-powered predictive systems. Many of these are being built with little idea of how they can go wrong.

The workshop asked three questions:

- What can technologists learn from other systems of governance?
- What advances in computer science would be required for computational systems to be useful in important affairs like human governance?
- Conversely, are there technologies that policy makers can use to improve existing systems?

Implied is this: who gets to decide? On the early Internet, for example, decisions were reached by consensus among engineers who all knew each other, funded by hopeful governments. Mass adoption, not legal mandate, helped the Internet's TCP/IP protocols dominate the many other 1990s networking systems: TCP/IP was free, it worked well enough, and it was *there*. The same factors applied to other familiar protocols and applications: the web, email, communications between routers and other pieces of infrastructure. Proposals circulated as Requests for Comments, and those that found the greatest acceptance were adopted. In those early days, as I was told in a nostalgic moment at a conference in 1998, anyone pushing a proposal because it was good for their company would have been booed off the stage. It couldn't last; incoming new stakeholders demanded a voice.

If you're designing an automated governance system, the fundamental question is this: how do you deal with dissenting minorities? In some contexts - most obviously the US Supreme Court - dissenting views stay on the record alongside the majority opinion. In the long run of legal reasoning, it's important to know how judgments were reached and what issues were considered. You must show your work. In other contexts where only the consensus is recorded, minority dissent is disappeared - AI systems, for example, where the labelling that's adopted is the result of human votes we never see.

In one intriguing example, a panel of judges may rule a defendant is guilty or not guilty depending on whether you add up votes by premise - the defendant must have both committed the crime and possessed criminal intent - or by conclusion, in which each judge casts a final vote and only these are counted. In a small-scale human system the discrepancy is obvious. In a large-scale automated system, which type of aggregation do you choose, and what are the consequences, and for whom?
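The discrepancy is easy to demonstrate. In this sketch (the three judges' votes are a standard illustration of the paradox, not drawn from any real case), the same panel convicts under premise-based aggregation and acquits under conclusion-based aggregation:

```python
# Three judges vote on two premises: did the defendant commit the act,
# and did they have criminal intent? Guilt requires both.
judges = [
    {"act": True,  "intent": True},   # judge 1: guilty on both premises
    {"act": True,  "intent": False},  # judge 2: act yes, intent no
    {"act": False, "intent": True},   # judge 3: act no, intent yes
]

def majority(votes):
    """True if more than half the votes are True."""
    return sum(votes) > len(votes) / 2

# Premise-based: take the majority on each premise, then combine.
premise_verdict = (majority([j["act"] for j in judges])
                   and majority([j["intent"] for j in judges]))

# Conclusion-based: each judge reaches a verdict, then take the majority.
conclusions = [j["act"] and j["intent"] for j in judges]
conclusion_verdict = majority(conclusions)

print(premise_verdict)     # True: 2/3 affirm the act, 2/3 affirm intent
print(conclusion_verdict)  # False: only 1/3 conclude guilt
```

Same votes, opposite verdicts - which is precisely why the choice of aggregation rule in an automated system is a governance decision, not an implementation detail.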

Decentralization poses a similarly knotty conundrum. We talk about the Internet's decentralized origins, but its design fundamentally does not prevent consolidation. Centralized layers such as the domain name system and anti-spam blocking lists are single points of control and potential failure. If decentralization is your goal, the Internet's design has proven to be fundamentally flawed. Lots of us have argued that we should redecentralize the Internet, but if you adopt a truly decentralized system, where do you seek redress? In a financial system running on blockchains and smart contracts, this is a crucial point.

Yet this fundamental flaw in the Internet's design means that over time we have increasingly become second-class citizens on the Internet, all without ever agreeing to any of it. Some US newspapers are still, three and a half years on, ghosting Europeans for fear of GDPR; videos posted to web forums may be geoblocked from playing in other regions. Deeper down the stack, design decisions have enabled surveillance and control by exposing routing metadata - who connects to whom. Efforts to superimpose security have led to a dysfunctional system of digital certificates that average users either don't know is there or don't know how to use to protect themselves. Efforts to cut down on attacks and network abuse have spawned a handful of gatekeepers like Google, Akamai, Cloudflare, and SORBS that get to decide what traffic gets to go where. Few realize how much Internet citizenship we've lost over the last 25 years; in many of our heads, the old cooperative Internet is just a few steps back. As if.

As Jon Crowcroft and I concluded in our paper on leaky networks for this year's Gikii, "leaky" designs can be useful in speeding development early on even though they pose problems later, when issues like security become important. The Internet was built by people who trusted each other and did not sufficiently imagine it being used by people who didn't, shouldn't, and couldn't. You could say it this way: in the technology world, everything starts as an experiment, and by the time there are problems it's lawless.

So this was the main point of the workshop: how do you structure automated governance to protect the rights of minorities? Opting to slow decisions so the minority report can be considered impedes action in emergencies. If you limit Internet metadata exposure, security people lose some of their ability to debug problems and trace attacks.

We considered possible role models: British corporate governance; smart contracts; and, presented by Miranda Mowbray, the wacky system by which Venice elected a new Doge. It could not work today: it's crazily complex, and impossible to scale. But you could certainly code it.
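And indeed, a toy version fits in a page. This sketch follows the commonly cited reconstruction of the protocol's ten alternating lottery-and-election stages; the "elect" steps here just pick at random, where the real electoral colleges deliberated (and the final 41 needed at least 25 votes to seat a Doge):

```python
import random

def by_lot(pool, n):
    """Reduce a pool to n members by drawing lots."""
    return random.sample(pool, n)

def elect(electors, pool, n):
    """Stand-in for an electoral college: choose n from the pool.
    (Real electors deliberated; here we simply sample at random.)"""
    return random.sample(pool, n)

# An illustrative Great Council of 500 eligible nobles.
great_council = [f"noble-{i}" for i in range(500)]

stage = by_lot(great_council, 30)          # 1. 30 chosen by lot
stage = by_lot(stage, 9)                   # 2. reduced by lot to 9
stage = elect(stage, great_council, 40)    # 3. the 9 elect 40
stage = by_lot(stage, 12)                  # 4. reduced by lot to 12
stage = elect(stage, great_council, 25)    # 5. the 12 elect 25
stage = by_lot(stage, 9)                   # 6. reduced by lot to 9
stage = elect(stage, great_council, 45)    # 7. the 9 elect 45
stage = by_lot(stage, 11)                  # 8. reduced by lot to 11
stage = elect(stage, great_council, 41)    # 9. the 11 elect 41
doge = elect(stage, great_council, 1)[0]   # 10. the 41 elect the Doge
print(doge)
```

The alternation is the interesting design choice: each lottery stage made it expensive for any faction to guarantee control of the next electoral stage.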


Illustrations: Monument to the Doge Giovanni Pesaro (via Didier Descouens at Wikimedia).


September 10, 2021

Globalizing Britain

Brexit really starts now. It was easy to forget, during the dramas that accompanied the passage of the Withdrawal Agreement and the disruption of the pandemic, that the really serious question had still not been answered: given full control, what would Britain do with it? What is a reshaped "independent global Britain" going to be when it grows up? Now is when we find out, as this government, which has a large enough majority to do almost anything it wants, pursues the policies it announced in the Queen's Speech last May.

Some of the agenda is depressingly cribbed from the current US Republican playbook. First and most obvious in this group is the Elections bill. The most contentious change is requiring voter ID at polling stations (even though there was a total of one conviction for voter fraud in 2019, the year of the last general election). What those in other countries may not realize is how many eligible voters in Britain lack any form of photo ID. The Guardian reports that 11 million people - a fifth of eligible voters - have neither driver's license nor passport. Naturally, they are disproportionately from black and Asian backgrounds, older and disabled, and/or poor. The expected general effect, especially coupled with the additional proposal to remove the 15-year cap on expatriate voting, is to put the thumb on the electoral scale in favor of the Conservatives.

More nettishly, the government is gearing up for another attack on encryption, pulling out all the same old arguments, mixed with some "going dark" rhetoric copied from the FBI. As Gareth Corfield explains at The Register, the current target is Facebook, which intends to roll out end-to-end encryption for messaging and other services.

This is also the moment when the Online Safety bill (previously online harms) is taking shape. The push against encryption, which includes funding technical development, is part of that, because the bill makes service providers responsible for illegal content users post - and also, as Heather Burns points out at the Open Rights Group, legal but harmful content. Burns also details the extensive scope of the bill's age verification plans.

These moves are not new or unexpected. Slightly more so was the announcement that the UK will review data protection law with an eye to diverging from the EU; the consultation opened today. This is, as many have pointed out before, dangerous for UK businesses that rely on data transfers to the EU for survival. The EU's decision a few months ago to grant the UK an adequacy decision - that is, the EU's acceptance of the UK's data protection laws as providing equivalent protection - will last for four years. It seems unlikely the EU will revisit it before then, but even before divergence Ian Brown and Douwe Korff argued that the UK's data protection framework should be ruled inadequate. It *sounds* great when they say divergence will mean getting rid of the incessant cookie pop-ups, but at risk are privacy protections that have taken years to build. The consultation document wants to promise everything: an "even better data protection regime" and "unlocking the power of data" appear in the same paragraph, and the new regime will somehow both be "pro-growth and innovation-friendly" and "maintain high data protection standards".

Recent moves have not made it easier to trust this government with respect to personal data: first the postponed-for-now medical data fiasco, and second this week's revelation that the government is increasingly using our data and hiring third-party marketing firms to target ads and develop personalized campaigns to manipulate the country's behavior. This "influence government" is the work of the ten-year-old Behavioural Insights Team - the "nudge unit", whose thinking is summed up in its behavioral economy report.

Then there's the Police, Crime, Sentencing, and Courts bill currently making its way through Parliament. This one has been the subject of street protests across the UK because of provisions that permit police and Home Secretary Priti Patel to impose various limits on protests.

Patel's Home Office also features in another area of contention, the Nationality and Borders bill. This bill would make it a criminal offense to arrive in the UK without permission or to help an asylum seeker enter the UK. The latter raises many questions, and the Law Society lists many legal issues that need clarification. Accompanying this is this week's proposal to turn back migrant boats, which breaks maritime law.

A few more entertainments lurk; for one, the plan to review network neutrality announced by Ofcom, the communications regulator. At this stage, it's unclear what dangers loom, but it's another thing to watch, along with the ongoing consultation on digital identity.

More expected, no less alarming, this government also has an ongoing independent review of the 1998 Human Rights Act, which Conservatives such as former prime minister Theresa May have long wanted to scrap.

Human rights activists in this country aren't going to get much rest between now and (probably) 2024, when the next general election is due. Or maybe ever, looking at this list. This is the latest step in a long march, and it reminds us that underneath Britain's democracy lies its ancient feudalism.


Illustrations: Derbyshire stately home Chatsworth (via Trevor Rickards at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 23, 2021

Internet fragmentation as a service

Screenshot from 2021-07-23 11-48-13.png"You spend most of your day telling a robot that you're not a robot. Think about that for two minutes and tell me you don't want to walk into the ocean," the comedian John Mulaney said in his 2018 comedy special, Kid Gorgeous. He was talking about captchas.

I was reminded of this during a recent panel at the US Internet Governance Forum hosted by Mike Nelson. Nelson's challenge to his panelists: imagine alternative approaches to governments' policy goals that won't damage the Internet. They talked about unintended consequences (and the exploitation thereof) of laws passed with good intentions, governments' demands for access to data, ransomware, content blocking, multiplying regional rulebooks, technical standards and interoperability, transparency, and rising geopolitical tensions, which cyberspace policy expert Melissa Hathaway suggested should be thought about by playing a mash-up of the games Risk and Settlers of Catan. The main topic: is the Internet at risk of fragmentation?

So much depends on what you mean by "fragmentation". No one mentioned the physical damage achievable by ten backhoes. Nor the domain name system that allows humans and computers to find each other; "splitting the root" (that is, the heart of the DNS) used to dominate such discussions. Nor captchas, but the reason Mulaney sprang to mind was that every day (in every way) captchas frustrate access. Saying that makes me privileged; in countries where Facebook is zero-rated but the rest of the Internet costs money people can't afford on their data plans, the Internet is as cloven as it can possibly be.

Along those lines, Steve DelBianco raised the idea of splintering-by-local-law, the most obvious example being the demand in many countries for data localization. DelBianco, however, cited Illinois' Biometric Information Privacy Act (2008), which has been used to sue platforms on behalf of unnamed users for automatically tagging their photos online. Result: autotagging is not available to Illinois users on the major platforms, and neither is the Google Nest and Amazon Ring doorbells' facility for recognizing and admitting friends and family. See also GDPR, noted above, which three and a half years after taking force still has US media sites blocking access by insisting that our European visitors are important to us.

You could also say that the social Internet is splintering along ideological lines as the extreme right continue to build their own media and channels. In traditional media, this was Roger Ailes' strategy. Online, the medium designed to connect people doesn't care who it connects or for what purpose. Commercial social media engagement algorithms have exacerbated this, as many current books make plain.

Nelson, whose Internet policy experience goes back to the Clinton administration, suggested that policy change is generally driven by a big event: 9/11, for example, which led promptly to the passage of the PATRIOT Act (US) and the Anti-Terrorism, Crime, and Security Act (UK), or the Colonial Pipeline hack that has made ransomware an urgent mainstream concern. So, he asked: what kind of short, sharp shock would cause the Internet to fracture? If you see data protection law as a vector, the 2013 Snowden revelations were that sort of event; a year earlier, GDPR had looked likely to fade away.

You may be thinking, as I was, that we're literally soaking in global catastrophes: the COVID-19 pandemic, and climate change. Both are slow-burning issues, unlike the high-profile drivers of legislative panic Nelson was looking for, but both generate dozens of interim shocks.

I'm always amazed so little is said about climate change and the future of the Internet; the IT industry's emissions just keep growing. China's ban on cryptocurrency mining, which it attributes to environmental concerns, may be the first of many such limits on the use of computing power. Disruptions to electricity supplies - just yesterday, the UK's National Grid warned there may be blackouts this winter - don't "break" the Internet, but they do make access precarious.

So far, the pandemic's effect has mostly been to exacerbate ideological splits and accelerate efforts to curb the spread of misinformation via social media. It's also led to increased censorship in some places; early on, China banned virus-related keywords on WeChat, and this week the Indian authorities raided a newspaper that criticized the government's pandemic response. In addition, the exposure and exacerbation of social inequalities brought by the pandemic may, David Bray suggested in the panel, be contributing to the increase in cybercrime, as "failed states" struggle to rescue their economies. This week's revelation of the database of numbers of interest to NSO Group clients since 2016 doesn't fragment the Internet as a global communications system, but it might in the sense that some people may not be able to afford the risk of being on it.

This is where Mulaney comes in. Today, robots gatekeep web pages. Three trends seem likely to expand their role: age verification and online safety laws; covid passports, which are beginning to determine access to physical-world events; and the Internet of Things, which is bridging what's left of the divide between cyberspace and the real world. In the Internet-subsumed-into-everything of our future, "splitting the Internet" may no longer be meaningful as the purely virtual construct Nelson's panel was considering. In the cyber-physical world, Internet fragmentation must also be hybrid.


Illustrations: The IGF-USA panel in action.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 31, 2020

Build back

New_Years_2014_Fireworks_-_London_Eye-WM.jpgIn my lifetime there has never been a New Year that has looked so bleak. At 11pm last night, Big Ben tolled the final severance of the UK's participation in the European Union. For the last few days, as details of the trade agreement have become known, Twitter has been filling up with graphics and text explaining the new bureaucracy that will directly or indirectly affect every UK resident and the life complications still facing the 3 million EU citizens resident in the UK and the UK expatriates in the EU. Those who have pushed for this outcome for many years will, I'm sure, rejoice, but for many of us it's a sad, sad moment and we fear the outcome.

The bright spot of the arriving vaccines is already being tarnished by what appears to be a panic response pushing to up-end the conditions under which they were granted an emergency license. Case numbers are rising out of control, and Twitter is filled with distress signals from exhausted, overwhelmed health care workers. With Brexit completed and Trump almost gone, 2021 will be a year of - we hope - renewed sanity and sober remediation, not just of the damage done this year specifically but of the accrued societal and infrastructural technical debt that made everything in 2020 so much worse. It is already clear that the cost of this pandemic will be greater than all the savings ever made by cuts to public health and social welfare systems.

Still, it *is* a new year (because of human-made calendars), and because we love round numbers - defining "round" as the number of digits our hands happen to have - there's a certain amount of "that was the decade" about it. There is oddly less chatter about the twenty years since the turn of the millennium, which surprises me a bit: we've completed two-fifths of the 21st century!

Even the pre-pandemic change was phenomenal. Ten years ago - 2010 - was when smartphones really took off, pouring accelerant on Facebook, Twitter, and other social media, which were over-credited for 2011's "Arab Spring" ("useful but not sufficient", the linked report concludes). At Gikii 2019, Andres Guadamuz described this moment as "peak cyber-utopia". In fact, it was probably the second peak, the first having been circa 1999, but who's counting? Both waves of cyber-utopianism seem quaint now, in the face of pandemic-fueled social and economic disruption. We may - we do - look to social media for information - but we've remembered we need governments for public health measures, economic support, and leadership. The deliberate thinning of the institutions we now need to save us in countries like the US and UK is one legacy of the last 30 years of technology-fueled neoliberalism. Ronald Reagan, US president from 1981 to 1989, liked to say that the most frightening words in the English language were "I'm from the government and I'm here to help". Far more frightening is the reality of a government that can't, won't, or chooses not to help.

Twenty years ago - 2000 - was the year of the dot-com peak, when AOL disastrously merged with Time-Warner. The crash was well underway when 9/11 happened and ushered in 20 years of increasing surveillance: first an explosion of CCTV cameras in the physical world and, on the Internet, data retention and interception, and finally, in the last year or so, the inescapability of automated facial recognition, rolled out without debate or permission.

Despite having argued against all these technologies as they've come along, I wish I could report that investing in surveillance instead of public health had paid dividends in the Year of Our Pandemic 2020. Contact tracing apps, which we heard so much about earlier in the year, have added plenty of surveillance capabilities and requirements to our phones and lives, but appear to have played little part in reducing infection rates. Meanwhile, the pandemic is fueling the push to adopt the sort of MAGIC flowthrough travel industry execs have imagined since 2013. Airports and our desire to travel will lead the way to normalizing pervasive facial recognition, fever-scanning cameras, and, soon, proof of vaccination.

This summer, many human rights activists noted the ethical issues surrounding immunity passports. Early in the year this was easy pickings because the implementations were in China. Now, however, anyone traveling from the UK to countries like Canada and the US must be able to show a negative covid test taken within 72 hours of travel. Demand for vaccination certificates is inevitable. Privacy International has taken the view that "Until everyone has access to an effective vaccine, any system requiring a passport for entry or service will be unfair." Being careful about this is essential, because unfairness entrenched while we rebuild will be *very* hard to dislodge.

So, two big things to work towards in 2021. The first is to ensure that new forms of unfairness do not become the new normal. The second, which will take a lot of luck, even more diligence, and a massive scientific effort, is to ensure that one item on the Mindset list of 2040's 18-year-olds will be "There has never been a pandemic."

Happy new year.

Illustrations: New year's eve fireworks in London, 2014 (via Clarence Ji).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 9, 2020

Incoming

fifth-element-destroys-evil.pngThis week saw the Antitrust Subcommittee of the (US) House Judiciary Committee release the 449-page report (PDF) on its 16-month investigation into Google, Apple, Facebook, and Amazon - GAFA, as we may know them. Or, if some of the recommendations in this report get implemented, *knew* them. The committee has yet to vote on the report, and the Republican members have yet to endorse it. So this is very much a Democrats' report...but depending how things go over the next month, come January that may be sufficient to ensure action.

At BIG, Matt Stoller has posted a useful and thorough summary. As he writes, the subcommittee focused on a relatively new idea of "gatekeeper power", which each of the four exercises in its own way (app stores, maps, search, phone operating systems, personal connections), and each of which is aided by its ability to surveil the entirety of the market and undermine current and potential rivals. It also attacks the agencies tasked with enforcing the antitrust laws for permitting the companies to make some 500 acquisitions. The resulting recommendations fall into three main categories: restoring competition in the digital economy, strengthening the antitrust laws, and reviving antitrust enforcement.

In a discussion while the report was still just a rumor, a group of industry old-timers seemed dismayed at the thought of breaking up these companies. A major concern was the impact on research. The three great American corporate labs of the 1950s to 1980s were AT&T's Bell Labs, Xerox PARC, and IBM's Watson. All did basic research, developing foundational ideas that would matter for decades to come but might never provide profits for the company itself. The 1984 AT&T breakup effectively killed Bell Labs. Xerox famously lost out on the computer market. IBM redirected its research priorities toward product development. GAFA and Microsoft operate substantial research labs today, but they are more focused on the technologies, such as AI and robotics, that they envision as their own future.

The AT&T case is especially interesting. Would the Internet have disrupted AT&T's business even without the antitrust case, or would AT&T, kept whole, have been able to use its monopoly power to block the growth of the Internet? Around the same time, European countries were deliberately encouraging competition by ending the monopolies of their legacy state telcos. Without that - or with AT&T left intact - anyone wanting to use the arriving Internet would have been paying a small fortune to the telcos just to buy a modem to access it with. Even as it was, the telcos saw Voice over IP as a threat to their lucrative long distance business, and it was only network neutrality that kept them from suppressing it. Today, Zoom-like technology might be available, but likely out of reach for most of us.

The subcommittee's enlistment of Lina Khan as counsel suggests GAFA had this date from the beginning. Khan made waves while still a law student by writing a lengthy treatise on Amazon's monopoly power and its lessons for reforming antitrust law, back when most of us still thought Amazon was largely benign. One of her major points was that much opposition to antitrust enforcement in the technology industry is based on the idea that every large company is always precariously balanced because at any time, a couple of guys in a garage could be inventing the technology that will make them obsolete. Khan argued that this was no longer true, partly because those two garage guys were enabled by antitrust enforcement that largely ceased after the 1980s, and partly because GAFA are so powerful that few start-ups can find funding to compete with them directly, and rich enough to buy and absorb or shut down anyone who tries. The report, like the hearings, notes the fear of reprisal among business owners asked for their experiences, as well as the disdain with which these companies - particularly Facebook - have treated regulators. All four companies have been repeat offenders, apparently not inspired to change their behavior by even the largest fines.

Stoller thinks that we may now see real action because our norms have shifted. In 2011, admiration for monopolists was so widespread, he writes, that Occupy Wall Street honored Steve Jobs' death, whereas today US and EU politicians of all stripes are targeting monopoly power and intermediary liability. Stoller doesn't speculate about causes, but we can think of several: the rapid post-2010 escalation of social media and smartphones; Snowden's 2013 revelations; the 2016 Cambridge Analytica scandal; and the widespread recognition that, as Kashmir Hill found, it's incredibly difficult to extricate yourself from these systems once you are embedded in them. Other small things have added up, too, such as Mark Zuckerberg's refusal to appear in front of a grand committee assembled by nine nations.

Put more simply, ten years ago GAFA and other platforms and monopolists made the economy look good. Today, the costs they impose on the rest of society - precarious employment, lost privacy, a badly damaged media ecosystem, and the difficulty of containing not just misinformation but anti-science - are clearly visible. This, too, is a trend that the pandemic has accelerated and exposed. When the cost of your doing business is measured in human deaths, people start paying attention pretty quickly. You should have paid your taxes, guys.


Illustrations: The fifth element breaks up the approaching evil in The Fifth Element.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

September 18, 2020

Systems thinking

Official_portrait_of_Chi_Onwurah_crop_3.jpgThere's a TV ad currently running on MSNBC that touts the services of a company that makes custom T-shirts to help campaigns raise funds for causes such as climate change.

Pause. It takes 2,700 liters of water to make a cotton T-shirt - water that, the Virtual Water project would argue, is virtually exported from cotton-growing nations to those earnest climate change activists. Plus other environmental damage relating to cotton; see also the recent paper tracking the pollution impact of denim microfibers. So the person buying the T-shirt may be doing a good thing on the local level by supporting climate change activism while simultaneously exacerbating the climate change they're trying to oppose.

The same sort of issue arose this week at the UK Internet Governance Forum with respect to what the MP and engineer Chi Onwurah (Labour-Newcastle upon Tyne Central) elegantly called "data chaos" - that is, the confusing array of choices and manipulations we're living in. Modern technology design has done a very good job of isolating each of us into a tiny silo, in which we attempt to make the best decisions for ourselves and our data without any real understanding of the impact on wider society.

UCL researcher Michael Veale expanded on this idea: "We have amazing privacy technologies, but what we want to control is the use of technologies to program and change entire populations." Veale was participating in a panel on building a "digital identity layer" - that is, a digital identity infrastructure to enable securely authenticated interactions on the Internet. So if we focus on confidentiality we miss the danger we're creating in allowing an entire country to rely on intermediaries whose interests are not ours but whose actions could - for example - cause huge populations to self-isolate during a pandemic. It is incredibly hard just to get a half-dozen club tennis players to move from WhatsApp to something independent of Facebook. At the population level, lock-in is far worse.

Third and most telling example. Last weekend, at the 52nd annual conference of the Cybernetics Society, Kate Cooper, from the Birmingham Food Council, made a similar point when, after her really quite scary talk, she was asked whether we could help improve food security if those of us who have space started growing vegetables in our gardens. The short answer: no. "It's subsistence farming," she said, going on to add that although growing your own food helps you understand your own relationship with food and where it comes from and can be very satisfying to do, it does nothing at all to help you gain a greater understanding of the food system and the challenges of keeping it secure. This is - or could be - another of Yes, Minister's irregular verbs: I choose not to eat potato chips; you very occasionally eat responsibly-sourced organic potato chips; potato chips account for 6% of Britain's annual crop of potatoes. This was Cooper's question: is that a good use of the land, water, and other resources? Growing potatoes in your front garden will not lead you to this question.

Cybernetics was new to me two years ago, when I was invited to speak at the 50th anniversary conference. I had a vague idea it had something to do with Isaac Asimov's robots. In its definition, Wikipedia cites the MIT scientist Norbert Wiener in 1948: "the scientific study of control and communication in the animal and the machine". So it *could* be a robot. Trust Asimov.

Attending the 2018 event, followed by this year's, which was shared with the American Society for Cybernetics, showed cybernetics up as a slippery transdiscipline. The joint 2020 event veered from a case study of IBM to choreography, taking in subjects like the NHS Digital Academy, design, family therapy, social change, and the climate emergency along the way. Cooper, who seemed as uncertain as I was two years ago whether her work really had anything to do with cybernetics, fit right in.

The experience has led me to think of cybernetics as a little like Bayes' Theorem as portrayed in Sharon Bertsch McGrayne's book The Theory That Would Not Die. As she tells the story, for two and a half centuries after its invention, select mathematicians kept the idea alive but rarely dared to endorse it publicly - and today it's everywhere. The cybernetics community feels like this, too: a group who are nurturing an overlooked, poorly understood-by-the-wider-world, but essential field waiting for the rest of us to understand its power.

For a newcomer, getting oriented is hard; some of the discussion seems abstract enough to belong in a philosophy department. Other aspects - such as Ray Ison's description of his new book, The Hidden Power of Systems Thinking - smack of self-help, especially when he describes it: "The contention of the book is that systems thinking in practice provides the means to understand and fundamentally alter the systems governing our lives."

At this stage, however, with the rolling waves of crises hitting our societies (which Ison helpfully summed up in an apt cartoon), if this is cybernetics, it sounds like exactly what we need. "Why," asked the artist Vanilla Beer, whose father was the cybernetics pioneer Stafford Beer, "is something so useful unused?" Beats me.


Illustrations: Chi Onwurah (official portrait, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

July 17, 2020

Flying blind

twitter-bird-flipped.jpgQuick update to last week: the European Court of Justice has ruled in favor of Max Schrems a second time and struck down Privacy Shield, the legal framework that allowed data transfers from the EU to the US (and other third countries); businesses can still use Standard Contractual Clauses, subject to some conditions. TL;DR: Sucks even more to be the UK, caught in the middle between the EU and US demands regarding data flows. On to this week...

This week's Twitter hack is scary. Not, obviously, because it was a hack; by this time we ought to be too used to systems being penetrated by attackers to panic. We know technology is insecure. That's not news.

The big fear should be the unused potential.

Twitter's influence has always been disproportionate to its size. By Big Social Media standards, Twitter is small - a mere snip at 330 million users, barely bigger than Pinterest. TikTok has 800 million, Instagram has 1 billion, YouTube 2 billion, and Facebook 2.5 billion. But Twitter is addictively home to academics, politicians, and entertainers - and journalists, who monitor Twitter constantly for developments to report on. A lot of people feel unable to mention Twitter these days without stressing how much of a sinkhole they think it is (the equivalent of, in decades past, boasting how little TV you watched), but for public information in the West Twitter is a nerve center. We talk a lot about how Facebook got Trump elected, but it was Twitter that got him those acres of free TV and print coverage.

I missed most of the outage. According to Vice, on Wednesday similarly-worded tweets directing followers to send money in the form of bitcoin began appearing in the feeds coming from the high-profile, high-follower accounts belonging to Joe Biden, Elon Musk, Uber, Apple, Bill Gates, and others. Twitter had to shut down a fair bit of the service for a while and block verified users - high-profile public figures that Twitter deems important enough to make sure they're not fakes - from posting. The tweets have been removed, and some people who tried to change their passwords - presumably trying to follow standard practice in a data breach - got locked out. Some people must have sent money, since Vice reported the Bitcoin wallet in question had collected $100,000. But overall not much harm was done.

This time.

Most people, when they think about their social media account or email being hacked, think first of the risk that their messages will be read. This is always a risk, and it's a reason not to post your most sensitive secrets to technology and services you don't control. But the even bigger problem many people overlook is exactly what the attackers did here: spoofed messages that fool friends and contacts - in this case, the wider public - into thinking they're genuine. This is not a new problem; hackers have sought to take advantage of trust relationships to mount attacks ever since Kevin Mitnick popularized the term "social engineering" circa 1990.

In his detailed preliminary study of the attack, Brian Krebs suggests the attack likely came from people who've "typically specialized in hijacking social media accounts via SIM swapping". Whoever did it and whatever route they took, it seems clear they gained access to Twitter's admin tools, which enabled them to change the email address associated with accounts and either turn off or capture the two-factor authentication that might alert the actual owners. (And if, like many people, you operate Twitter, email, and 2FA on your phone, you actually don't *have* two factors, you have one single point of failure - your phone. Do not do this if you can avoid it.)

In the process of trying to manage the breach, Eric Geller reports at Politico, Twitter silenced accounts belonging to numerous politicians including US president Donald Trump and the US National Weather Service tornado alerts, among many others that routinely post public information, in some cases for more than 24 hours. You can argue that some of these aren't much of a loss, but the underlying problem is a critical one, in that organizations and individuals of all stripes use Twitter as an official outlet for public information. Forget money: deployed with greater subtlety at the right time, such an attack could change the outcome of elections by announcing false information about polling places (Geller's suggestion), or kill people simply by suppressing critical public safety warnings.

What governments and others don't appear to have realized is that in relying on Twitter as a conduit to the public they are effectively outsourcing their security to it without being in a position to audit or set standards beyond those that apply to any public company. Twitter, on the other hand, should have had more sense: if it created special security arrangements for Trump's account, as the New York Times says it did, why didn't it occur to the company to come up with a workable system for all its accounts? How could it not have noticed the need? The recurring election problems around the world weren't enough of a clue?

Compared to what the attackers *could* have wanted, stealing some money is trivial. Twitter, like others before it, will have to rethink its security to match its impact.


Illustrations:

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

June 19, 2020

The science

paddington-2020-06-13.jpgWhat I - and I suspect a lot of other people - would love to have right now is an online calculator into which you could put where you were going, the time of day, the length of time you expect to spend there, and the type of activity and get back out a risk estimate of acquiring coronavirus infection given various mitigations. I write this as the UK government announces that the "threat level" is dropping from "4" to "3", which tells me more or less precisely nothing useful.

Throughout the pandemic, the British government has explained every decision by saying it's led by the science. I'm all for following the advice of scientists - particularly, in our present situation, public health experts, virologists, and epidemiologists - but "the science" implies there's a single received monolithic truth even while failing to identify any particular source for it. Which science? Whose research? Based on what evidence? Funded by whom? How does it fit in with what we were told before?

Boris Johnson's government spent much of the early months avoiding answering those questions, which has led, as the biologist Ian Boyd complains, to the characterization of the Scientific Advisory Group for Emergencies (SAGE) as "secretive". As the public trusts this government less and less, showing their work has become increasingly important, especially when those results represent a change of plan.

The last four months have seen two major U-turns in "the science" that's governing our current lives, and a third may be in progress: masks, contact tracing apps, and the two-meter rule. Meanwhile, the pieces that are supposed to be in place for reopening - a robust contact tracing system, for example - aren't.

We'll start with masks. Before this thing started, the received wisdom was that masks protected other people from you, but not you from them. This appears to still be the generally accepted case. But tied in with that was the attitude that wearing masks while ill was something only Asians did; Westerners...well, what? Knew better? Were less considerate? Were made of tougher stuff and didn't care if they got sick? In mid-March, Zeynep Tufekci got a certain amount of stick on Twitter for her impassioned plea in the New York Times that public health authorities should promote wearing masks and teach people how to do it properly. "Of course masks work," she wrote, "maybe not perfectly, and not all to the same degree, but they provide some protection."

But we had to go on arguing about it back and forth. There is, says Snopes, no real consensus on how effective they are. Nonetheless, it seems logical they ought to help, and both WHO and CDC now recommend them while mayors of crowded cities are increasingly requiring them. In this case, there's no obvious opportunity for profiteering and for most people the inconvenience is modest. The worst you can suspect is that the government is recommending them so we'll feel more confident about resuming normal activity.

Then, for the last four months we've been told to stay two meters from everyone else except fellow household members. During the closures, elves - that is, people who took on the risks of going to work - have been busy painting distancing indicators on underground platforms, sidewalks, and park benches and sticking decals to train windows. They've set up hand sanitizer stations in London's stations, and created new bike lanes and pedestrian areas. Now, the daily news includes a drumbeat of pressure on government to reduce that recommended distance to one meter. Is this science or economics? The BBC has found a study that says that standing one meter apart carries ten times the risk of two meters. But how significant is that?

I'm all for "the science", but there's so much visible vested interest that I want details. What are the tradeoffs? How does the drop in distance change R0, the reproduction number? The WHO recommends one meter - but it assumes that people are wearing masks - which, in London, on public transport they will be but in restaurants they can't be.

Finally, when last seen, the UK's contact tracing app was being trialed on the Isle of Wight and was built in-house using a centralized design despite the best efforts of privacy advocates and digital rights activists to convince NHSx it was a bad idea. Yesterday, this app was officially discarded.

The relevant scientific aspect, however, is how much apps matter. In April, an Oxford study suggested that 60% of the population would have to use the app for it to be effective.

We should have read the study, as MIT Technology Review did this week to find that it actually says contact tracing apps can be helpful at much lower levels of takeup. It is still clear that human tracers with local knowledge are more effective and there are many failings in the tracing system, as the kibitzing scientific group Independent SAGE says, but *some* help is better than no help.
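The arithmetic behind uptake is worth spelling out. A contact can be logged digitally only if *both* people involved run the app, so coverage falls off roughly as the square of uptake - which is both why 60% sounded necessary and why lower takeup still traces *some* contacts. A back-of-envelope sketch (my own illustration, not the Oxford model):

```python
# Back-of-envelope arithmetic (an illustration, not the Oxford model):
# a contact is digitally traceable only if *both* parties run the app,
# so with uptake p the traceable share of contacts is roughly p squared.
def traceable_fraction(p):
    return p * p

for p in (0.15, 0.30, 0.60):
    print(f"uptake {p:.0%} -> ~{traceable_fraction(p):.0%} of contacts traceable")
```

Even 30% uptake leaves roughly 9% of contacts traceable - a long way from 60%'s 36%, but, as Independent SAGE might put it, *some* help rather than no help.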

"The science" unfortunately can't offer us what we really want: certainty. Instead, we have many imperfect but complementary tools and must hope they add up to something like enough. The science will only become fully clear much later.


Illustrations: London's Paddington station on June 13.


June 12, 2020

Getting out the vote

Thumbnail image for bush-gore-hanging-chad-florida.jpg"If voting changed anything, they'd abolish it," the maverick British left-wing politician Ken Livingstone wrote in 1987.

In 2020, the strategy appears to be to lecture people about how they should vote if they want to change things, and then make sure they can't. After this week's denial-of-service attack on Georgia voters and widespread documentation of voter suppression tactics, there should be no more arguments about whether voter suppression is a problem.

Until a 2008 Computers, Freedom, and Privacy tutorial on "e-deceptive campaign practices", organized by Lillie Coney, I had no idea how much effort was put into disenfranchising eligible voters. The tutorial focused on the many ways new technology - the pre-social media Internet - was being adapted to do very old work to suppress the votes of those who might have undesired opinions. The images from the 2018 mid-term elections and from this week in Georgia tell their own story.

In a presentation last week, Rebecca Mercuri noted that there are two types of fraud surrounding elections. Voter fraud, which is efforts by individuals to vote when they are not entitled to do so and is the stuff proponents of voter ID requirements get upset about, is vanishingly rare. Election fraud, where one group or another tries to game the election in their favor, is and has been common throughout history, and there are many techniques. Election fraud is the big thing to keep your eye on - and electronic voting is a perfect vector for it. Paper ballots can be reexamined, recounted, and can't easily be altered without trace. Yes, they can be stolen or spoiled, but it's hard to do at scale because the boxes of ballots are big, heavy, and not easily vanished. Scale is, however, what computers were designed for, and just about every computer security expert agrees that computers and general elections do not mix. Even in a small, digitally literate country like Estonia, a study found enormous vulnerabilities.

Mercuri, along with longtime security expert Peter Neumann, was offering an update on the technical side of voting. Mercuri is a longstanding expert in this area; in 2000, she defended her PhD thesis, the first serious study of the security problems for online voting, 11 days before Bush v. Gore burst into the headlines. TL;DR: electronic voting can't be secured.

In the 20 years since, the vast preponderance of computer security experts have continued to agree with her. Naturally, people keep trying to find wiggle room, as if some new technology will change the math; besides election systems vendors there are well-meaning folks with worthwhile goals, such as improving access for visually impaired people, ensuring access for a widely scattered membership, such as unions, or motivating younger people.

Even apart from voter suppression tactics, US election systems continue to be a fragmented mess. People keep finding new ways to hack into them; in 2017, Bloomberg reported that Russia hacked into voting systems in 39 US states before the US presidential election and targeted election systems in all 50. Defcon has added a voting machine hacking village, where, in 2018, an 11-year-old hacked into a replica of the Florida state voting website in under ten minutes. In 2019, Defcon hackers were able to buy a bunch of voting machines and election systems on eBay - and cracked every single one for the Washington Post. The only sensible response: use paper.

Mercuri has long advocated for voter-verified paper ballots (including absentee and mail-in ballots) as the official votes that can be recounted or audited as needed. The complexity and size of US elections, however, means electronic counting.

In Congressional testimony, Matt Blaze, a professor at Georgetown University, has made three recommendations (PDF): immediately dump all remaining paperless direct recording electronic voting machines; provide resources, infrastructure, and training to local and state election officials to help them defend their systems against attacks; and conduct risk-limiting audits after every election to detect software failures and attacks. RLAs, which were proposed in a 2012 paper by Mark Lindeman and Philip B. Stark (PDF), involve counting a statistically significant random sample of ballots and checking the results against the machine counts. The proposal has a fair amount of support, including from the Electronic Frontier Foundation.
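To make the mechanism concrete, here is a toy sketch in Python of the comparison step - my own illustration, not the Lindeman-Stark procedure, which also specifies statistically how large the sample must be for a chosen risk limit and when to escalate to a full hand count:

```python
import random

def toy_audit(paper, machine, sample_size, seed=1):
    """Draw a random sample of ballot positions and compare the paper
    record with the machine record, returning mismatched positions.
    A real risk-limiting audit derives sample_size and an escalation
    rule from a chosen risk limit; this sketch only reports discrepancies."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    positions = rng.sample(range(len(paper)), sample_size)
    return [i for i in positions if paper[i] != machine[i]]

# 10,000 ballots; the machine record silently flips three votes.
paper = ["A"] * 6000 + ["B"] * 4000
machine = list(paper)
for i in (12, 345, 4999):
    machine[i] = "B"

print(toy_audit(paper, machine, sample_size=500))
```

Mercuri's "dispersed fraud" worry also drops out of this sketch: three flipped ballots among 10,000 will usually escape a 500-ballot sample entirely, which is why the statistics behind the sample size matter so much.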

Mercuri has doubts; she argues that election administrators don't understand the math that determines how many ballots to count in these audits, and thinks the method will fail to catch "dispersed fraud" - that is, a few votes changed across many precincts rather than large clumps of votes changed in a few places. She is undeniably right when she says that RLAs are intended to avoid counting the full set of ballots; proponents see that as a *good* thing - faster, cheaper, and just as good. As a result, some states - Michigan, Colorado (PDF) - are beginning to embrace the approach. My guess is there will be many mistakes in implementation and resulting legal contests until everyone either finds a standard for best practice or decides they're too complicated to make work.

Even more important, however, is whether RLAs can successfully underpin public confidence in election integrity. Without that, we've got nothing.

Illustrations: Hanging chad, during the 2000 Bush versus Gore vote.


June 5, 2020

Centralized stupidity

private-eye-contact-tracing.jpegAs a friend with greater experience with lockdowns might have said, when you see one coming be careful not only who you get locked down with, but where. People with strong local neighborhoods and personal relationships with independent local shops have had a vastly easier time through the last couple of months than most others.

My lifetime has seen everything progressively centralize. In the 1970s, someone living in Ithaca, New York, population about 30,000, could visit the phone company and negotiate billing with the same woman they dealt with several months previously. The guy who came to read the electric meter this month was the same guy you saw every month. And when you called the telephone operator to check on a phone number, they would confirm the address and speculate with you how to get there because they knew your town. Forty years later, if you *can* make a call to a utility company you're probably dealing with someone to whom your town is a dot they can't find on a map...

...which all brings me to this week, when a Twitter account that seemed to be from the National Health Service posted a note to the effect that we might get a message or call from "NHS" and if we did we should follow the instructions. The tweet also published the number we could expect to hear from. Because the immediate follow-up was a few people saying they would immediately block the number, I commented that the smart thing to do seemed to me to be to put the number in a phone's contacts so the call would be recognized.

But, the security folks reminded: SIM spoofing. True. Hello, phishing attacks.

Does the NHS employ no security experts?

Here are the NHS's published instructions for what to do if you're contacted. Note what's missing: a way to verify the call is genuine. Sure, they tell you they won't ask for bank details or other accounts, payment, or ask you to call premium rate numbers or set up a password or PIN over the phone. But they still miss the main point; that is, like a celebrity, they assume that because any call they make will be genuine, any call you get will be genuine. This is Ravenous Bugblatter Beast of Traal reasoning. I recommend wrapping a towel around your head.

As others have pointed out, you could quite effectively mount a denial-of-livelihood attack on someone by reporting them as an exposed contact so they are required to self-isolate for 14 days. Even 30 years ago the world contained people highly skilled at the kind of social engineering that would enable someone to pose effectively as a contact tracer. The NHS needs to do the obvious: publish a number people can call back to verify.

The press appeared to understand the possibilities, and had this exchange with the deputy chief medical officer for England, Jenny Harries:

A question about how to know if a track and trace call is genuine, one person asks. Harries says there is a lot of confidentiality and it will be unlikely you will be contacted by someone with other motives. She says it will be clear that they are genuine - they are professionally trained individuals.

I don't know how to rate the ignorant stupidity of this comment. The satirical magazine Private Eye, however, managed (see above).

This gathering of power to the center was on display elsewhere this week, as Jacob Rees-Mogg, the leader of the House of Commons, pushed to end remote participation and voting in Parliamentary debates. No one is saying that remote participation is ideal, but it *does* permit MPs to represent their constituents who shouldn't be traveling and taking health risks. Even more ridiculous is Rees-Mogg's refusal to countenance electronic voting, with replacement arrangements so absurd and time-wasting that one can only assume he fears losing control otherwise.

Contact tracing is one area where staying local makes all the difference. Anyone who lives in my little area, for example, would know to ask a senior testing positive whether they've been to the local club that (normally) provides classes (dancing, Pilates, photography), social lunches, and entertainment to hundreds of people, chiefly seniors. They know the local independent shops are community hubs as well as sources of essential items and would ask which ones the infected person uses. And they know the spot where homeless people who might struggle to find testing are often to be found selling The Big Issue. The local council, which UK epidemiologists have repeatedly said has the necessary contact tracing expertise, knows all this. Serco certainly doesn't.

We've written before about the dangers of centralizing the Net. What we've previously failed to recognize is how dangerous it can be when combined with politically convenient stupidity.

The UK government, which has been gathering power to the center ever since Margaret Thatcher disbanded the Greater London Council, is outsourcing contact tracing to Serco, which has proved so inept as to be genuinely dangerous. The result is to treat contact tracing as if it were calls to customer service at a phone company and to mistake efficiency for effectiveness. Centralization was bad for the Internet. It's even worse for real life.


Illustrations: Private Eye explains contact tracing.


March 27, 2020

The to-do list

Thumbnail image for casablanca-dooley-wilson-as-time-goes-by.pngWith so much insecurity and mounting crisis, there's no time now to think about a lot of things that will matter later. But someday there will be. And at that time...

Remember that health workers - doctors, nurses, technicians, ambulance drivers - matter just as much every day as they do during a crisis. Six months after everyone starts feeling safe and starts to forget, remind them how much we owe health workers.

The same goes for other essential services workers, the ones who keep the food stores open, the garbage and recycling being picked up, who harvest the crops, catch the fish, and raise and slaughter the animals and birds, who drive the trucks and supply the stores, and deliver post, takeout, and packages from Amazon et al., and keep the utilities running, and the people who cook the takeout food, and clean the hospitals and streets. Police. Fire. Pharmacists. Journalists. Doubtless scores of other people doing things I haven't thought of. In developed countries, we forget how our world runs until something breaks, evidenced by Steve Double (Con-St Austell and Newquay), the British MP who said on Monday, "One of the things that the current crisis is teaching us is that many people who we considered to be low-skilled are actually pretty crucial to the smooth running of our country - and are, in fact, recognised as key workers." (Actually, a lot of us knew this.)

Stop taking travel, particularly international travel, for granted. Even when bans and lockdowns are eventually fully lifted, it's likely that pre-boarding and immigration health checks will become as routine as security scanning and showing ID have since 2001. Even if governments don't mandate it the public will demand it: who will sit crammed next to a random stranger unless they can believe it's safe?

Demand better travel conditions. Airlines are likely to find the population is substantially less willing to be crammed in as tightly as we have been.

Along those lines, I'm going to bet that today's children and young people, separated from older relatives by travel bans and lockdowns in this crisis, will think very differently about moving across the country or across the world, where they might be cut off in a future health crisis. Families and friends have been separated before by storms, earthquakes, fires, and floods - but travel links have rarely been down this far for this long - and never so widely. The idea of travel as conditional has been growing through security and notification requirements (I'm thinking of the US's ESTA requirements), but health will bring a whole new version of requiring permission.

Think differently about politicians. For years now it's been fashionable for people to say it doesn't matter who gets in because "they're all the same". You have only to compare US governors' different reactions to this crisis to see how false that is. As someone said on Twitter the other day, when you elect a president you are choosing a crisis manager, not a friend or favorite entertainer.

Remember the importance of government and governance. The US's unfolding disaster owes much of its amplitude to the fact that the federal government has become, as Ed Yong, writing in The Atlantic, calls it, "a ghost town of scientific expertise".

Stop asking "How much 'excess' can we trim from this system?" and start asking "What surge capacity do we need, and how can we best ensure it will be available?" This will apply not only to health systems, hospitals, and family practices but to supply chains. The just-in-time fad of the 1990s and the outsourcing habits of the 2000s have left systems predictably brittle and prone to failure. Much of the world - including the US - depends on China to supply protective masks rather than support local production. In this crisis, Chinese manufacturing shut down just before every country in the world began to realize it had a shortage. Our systems are designed for short, sharp local disasters, not expanding global catastrophes where everyone needs the same supplies.

Think collaboratively rather than competitively. In one of his daily briefings this week, New York State governor Andrew Cuomo said forthrightly that sending ventilators to New York now, as its crisis builds, did not mean those ventilators wouldn't be available for other places where the crisis hasn't begun yet. It means New York can send them on when the need begins to drop. More ventilators for New York now is more ventilators for everyone later.

Ensure that large companies whose policies placed their staff at risk during this time are brought to account.

Remember these words from Nancy Pelosi: "And for those who choose prayer over science, I say that science is the answer to our prayers."

Reschedule essential but timing-discretionary medical care you've had to forego during the emergency. Especially, get your kids vaccinated so no one has to fight a preventable illness and an unpreventable one at the same time.

The final job: remember this. Act to build systems so we are better prepared for the next one before we forget. It's only 20 years since Y2K, and what people now claim is that "nothing happened"; the months and person-millennia that went into remediating software to *make* "nothing" happen have faded from view. If we can remember old movies, we can remember this.

Illustrations: Dooley Wilson, singing "As Time Goes by", from Casablanca (1942).


March 20, 2020

The beginning of the world as we don't know it

magnolia-1.jpgOddly, the most immediately frightening message of my week was the one from the World Future Society, subject line "URGENT MESSAGE - NOT A DRILL". The text began, "The World Future Society over its 60 years has been preparing for a moment of crisis like this..."

The message caused immediate flashbacks to every post-disaster TV show and movie, from The Leftovers (in which 2% of the world's population mysteriously vanishes) to The Last Man on Earth (in which everyone who isn't in the main cast has died of a virus). In my case, it also reminded me, unfortunately, of the very detailed scenarios I saw posted in the late 1990s to the comp.software.year-2000 Usenet newsgroup, in which survivalists were certain that the Millennium Bug would cause the collapse of society. In one scenario I recall, that collapse was supposed to begin with the banks failing, pass through food riots and cities burning, and end with four-fifths of the world's population dead: the end of the world as we know it (TEOTWAWKI). So what I "heard" in the World Future Society's tone was that the "preppers", who built bunkers, stored sacks of beans, rice, dried meat, and guns, were finally right and this was their chance to prove it.

Naturally, they meant no such thing. What they *did* mean was that futurists have long thought about the impact of various types of existential risks, and that what they want is for as many people as possible to join their effort to 1) protect local government and health authorities, 2) "co-create back-up plans for advanced collaboration in case of societal collapse", and 3) collaborate on possible better futures post-pandemic. Number two still brings those flashbacks, but I like the first goal very much, and the third is on many people's minds. If you want to see more, it's here.

It was one of the notable aspects of the early Internet that everyone looked at what appeared to be a green field for development and sought to fashion it in their own desired image. Some people got what they wanted: China, for example, defying Western pundits who claimed it was impossible, successfully built a controlled national intranet. Facebook, while coming along much later, through zero rating deals with local telcos for its Free Basics, is basically all the Internet people know in countries like Ghana and the Philippines, a phenomenon Global Voices calls "digital colonialism". Something like that mine-to-shape thinking is visible here.

I don't think WFS meant to be scary; what they were saying is in fact what a lot of others are saying, which is that when we start to rebuild after the crisis we have a chance - and a need - to do things differently. At Wired, epidemiologist Larry Brilliant tells Steven Levy he hopes the crisis will "cause us to reexamine what has caused the fractional division we have in [the US]".

At Singularity University's virtual summit on COVID-19 this week, similar optimism was on display (some of it probably unrealistic, like James Ehrlich's land-intensive sustainable villages). More usefully, Jamie Metzl compared the present moment to 1941, when US president Franklin Delano Roosevelt, in the Atlantic Charter, began to imagine how the world might be reshaped after the war ended. Today, Metzl said, "We are the beneficiaries of that process." Therefore, like FDR we should start now to think about how we want to shape our upcoming different geopolitical and technological future. Like net.wars last week and John Naughton at the Guardian, Metzl is worried that the emergency powers we grant today will be hard to dislodge later. Opportunism is open to all.

I would guess that the people who think it's better to bail out businesses than support struggling people also fear permanence will become true of the emergency support measures being passed in multiple countries. One of the most surreal aspects of a surreal time is that in the space of a few weeks actions that a month ago were considered too radical to live are suddenly happening: universal basic income, grounding something like 80% of aviation, even support for *some* limited free health care and paid sick leave in the US.

The crisis is also exposing a profound shift in national capabilities. China could build hospitals in ten days; the US, which used to be able to do that sort of thing, is instead the object of charity from Chinese billionaire Alibaba founder Jack Ma, who sent over half a million test kits and 1 million face masks.

Meanwhile, all of us, with a few billionaire exceptions, are turning to the governments we held in so little regard a few months ago to lead, provide support, and solve problems. Libertarians who want to tear governments down and replace all their functions with free-market interests are exposed as a luxury none of us can afford. Not that we ever could; read Paulina Borsook's 1996 Mother Jones article Cyberselfish if you doubt this.

"It will change almost everything going forward," New York State governor Andrew Cuomo said of the current crisis yesterday. Cuomo, who is emerging as one of the best leaders the US has in an emergency, and his counterparts are undoubtedly too busy trying to manage the present to plan what that future might be like. That is up to us to think about while we're sequestered in our homes.


Illustrations: A local magnolia tree, because it *is* spring.


February 14, 2020

Pushy algorithms

cyberporn.jpgOne consequence of the last three and a half years of British politics, which saw everything sucked into the Bermuda Triangle of Brexit debates, is that things that appeared to have fallen off the back of the government's agenda are beginning to reemerge like so many sacked government ministers hearing of an impending cabinet reshuffle and hoping for reinstatement.

One such is age verification, which was enshrined in the Digital Economy Act (2017) and last seen being dropped to wait for the online harms bill.

A Westminster Forum seminar on protecting children online, held shortly before the UK's December 2019 general election, reflected that uncertainty. "At one stage it looked as if we were going to lead the world," Paul Herbert lamented before predicting it would be back "sooner or later".

The expectation for this legislation was set last spring, when the government released the Online Harms white paper. The idea was that a duty of care should be imposed on online platforms, effectively defined as any business-owned website that hosts "user-generated content or user interactions, for example through comments, forums, or video sharing". Clearly they meant to target everyone's current scapegoat, the big social media platforms, but "comments" is broad enough to include any ecommerce site that accepts user reviews. A second difficulty is the variety of harms they're concerned about: radicalization, suicide, self-harm, bullying. They can't all have the same solution even if, like one bereaved father, you blame "pushy algorithms".

The consultation exercise closed in July, and this week the government released its response. The main points:

- There will be plentiful safeguards to protect freedom of expression, including distinguishing between illegal content and content that's legal but harmful; the new rules will also require platforms to publish and transparently enforce their own rules, with mechanisms for redress. Child abuse and exploitation and terrorist speech will have the highest priority for removal.

- The regulator of choice will be Ofcom, the agency that already oversees broadcasting and the telecommunications industry. (Previously, enforcing age verification was going to be pushed to the British Board of Film Classification.)

- The government is still considering what liability may be imposed on senior management of businesses that fall under the scope of the law, which it believes is less than 5% of British businesses.

- Companies are expected to use tools to prevent children from accessing age-inappropriate content "and protect them from other harms" - including "age assurance and age verification technologies". The response adds, "This would achieve our objective of protecting children from online pornography, and would also fulfill the aims of the Digital Economy Act."

There are some obvious problems. The privacy aspects of the mechanisms proposed for age verification remain disturbing. The government's 5% estimate of businesses that will be affected is almost certainly a wild underestimate. (Is a Patreon page with comments the responsibility of the person or business that owns it, or of Patreon itself?) At the Guardian, Alex Hern explains the impact on businesses. The nastiest tabloid journalism is not within scope.

On Twitter, technology lawyer Neil Brown identifies four fallacies in the white paper: the "Wild West web"; that privately operated computer systems are public spaces; that those operating public spaces owe their users a duty of care; and that the offline world is safe by default. The bigger issue, as a commenter points out, is that the privately operated computer systems the UK government seeks to regulate are foreign-owned. The paper suggests enforcement could include punishing company executives personally and ordering UK ISPs to block non-compliant sites.

More interesting and much less discussed is the push for "age-appropriate design" as a method of harm reduction. This approach was proposed by Lorna Woods and Will Perrin in January 2019. At the Westminster eForum, Woods explained, "It is looking at the design of the platforms and the services, not necessarily about ensuring you've got the latest generation of AI that can identify nasty comments and take it down."

It's impossible not to sympathize with her argument that the costs of "move fast and break things" are imposed on the rest of society. However, when she started talking about doing risk assessments for nascent products and services, I could only conclude she's never worked closely with software developers, who have known for decades that from the instant software reaches users' hands they will use it in ways no one ever imagined. So it's hard to see how this will work in practice, though last year the ICO proposed a code of practice.

The online harms bill also has to be seen in the context of all the rest of the monitoring that is being directed at children in the name of keeping them - and the rest of us - safe. DefendDigital.me has done extensive work to highlight the impact of such programs as Prevent, which requires schools and libraries to monitor children's use of the Internet to watch for signs of radicalization, and the more than 20 databases that collect details of every aspect of children's educational lives. Last month, one of these - the Learning Records Service - was caught granting betting companies access to personal data about 28 million children. DefendDigital.me has called for an Educational Rights Act. This idea could be usefully expanded to include children's online rights more broadly.


Illustrations: Time magazine's 1995 "Cyberporn" cover, which marked the first children-Internet panic.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

October 4, 2019

Digital London

Anyone studying my travel patterns on the London Underground will encounter a conundrum: what makes a person undertake, once or twice a week, a one-way journey into the center of town? How do they get home?

For most people, it would remain, like Sudoku, a pointless puzzle. For Transport for London, the question is of greater importance: what does it plan for? On Monday, at an event run by the Greater London Authority intelligence unit to showcase its digital tools, a TfL data analyst expressed just this sort of conundrum. I had asked, "What's the hardest problem you're working on?" And he said, "Understanding human behavior." Data shows what happened. It gives no clue as to *why* unless you can map the data to other clue-bearing streams. If you can match the dates, times, and weather reports, the flood onto and into buses, trains, tubes, and taxis may be clearly understood as: it was raining. But beyond that...people are weird.
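The analyst's point about mapping trip data to other clue-bearing streams can be sketched in a few lines: ridership counts alone say nothing about *why*, but matched against another stream keyed by date, a spike becomes explicable. The numbers and the 5% threshold below are invented for illustration and have nothing to do with TfL's actual data:

```python
# Toy join of two data streams: daily trip counts and weather records,
# both keyed by date. A day well above baseline pairs with "rain".
ridership = {"2019-09-30": 412_000, "2019-10-01": 455_000, "2019-10-02": 409_000}
weather = {"2019-09-30": "dry", "2019-10-01": "rain", "2019-10-02": "dry"}

baseline = min(ridership.values())
explained = {
    date: (trips, weather.get(date, "unknown"))
    for date, trips in ridership.items()
    if trips > baseline * 1.05  # flag days more than 5% above the quietest day
}
```

Running this flags only the rainy day, pairing the anomaly with its probable cause; everything beyond "it was raining" is where, as the analyst says, people get weird.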

And they're numerous. As London's chief digital officer, Theo Blackwell, said, London is now the largest it's ever been, only recently passing the peak it reached in 1939. Seventy-odd years of peace and improving public health have enabled uninterrupted growth to 9 million; five or six years hence it's expected to reach 11 million, a 20+% rise that will challenge the capacity of housing and transport, and exacerbate the impact of climate change. Think water: London is drier than you'd expect.

A fellow attendee summed it up this way: "London has put on an entire Birmingham in size in the last ten years." Two million more is approaching the size of greater Manchester. London, in a term used by Greenwood Strategic Advisors' Craig Stephens, is an "attractor city". People don't need a reason to come here, as they do when moving to smaller places. As a result, tracking and predicting migration is one of the thornier problems.

TfL's planning problems are, therefore, a subset of the greater range of conundrums facing London, some of them fueled by the length of the city's history. David Christie, TfL's demand forecasting and analytics manager, commented, for example, that land use was a challenge because there hasn't been an integrated system to track it. Mike Bracken, one of the founders of the Government Digital Service, reminded the audience that legacy systems and vendor lock-in are keeping the UK lagging well behind countries like Peru and Madagascar, which stand up services in 12 weeks. "We need to hurry up," he said, "because our mental model of where we stand in relationship to other nations is not going to stand for much longer." He had a tip for making things work: "Don't talk about blockchain. Just fix your website."

Christie's group does the technical work of modeling for TfL. In the 1970s, he said, his department would prepare an input file and send it off to the Driver and Vehicle Licensing Agency's computer and they'd get back results two months later. He still complains that run times for the department's models are an issue, but the existing model has been the basis for current schemes such as Crossrail 1 and the Northern Line extension. What made this model - the London Simulator - sound particularly interesting was Christie's answer to the question of how they validate the data. "The first requirement of the model is to independently recreate the history." Instead of validating the data, they validate the model by looking to see what it's wrong about in the last 25 years.
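The validation principle Christie describes - check the model against history rather than checking the inputs - can be illustrated with a toy growth model. The figures and functions here are invented for illustration; this is nothing like the real London Simulator:

```python
# Calibrate a trivially simple model on early history, then ask it to
# "recreate" the later history and measure where it drifts from what
# actually happened. The validation target is the model, not the data.

def fit_growth_rate(series):
    """Average year-on-year growth factor over the series."""
    rates = [b / a for a, b in zip(series, series[1:])]
    return sum(rates) / len(rates)

def recreate(start, rate, years):
    """Project forward from a known starting point."""
    out = [start]
    for _ in range(years):
        out.append(out[-1] * rate)
    return out

observed = [100, 104, 108, 113, 117, 123, 128]  # invented annual index
rate = fit_growth_rate(observed[:4])            # calibrate on early years
modeled = recreate(observed[3], rate, 3)        # recreate the last 3 years

# Where, and by how much, is the model wrong about known history?
errors = [m / o - 1 for m, o in zip(modeled, observed[3:])]
```

If the errors stay small over 25 years of known history, you trust the model's forward projections more; if they balloon, you know which mechanisms it is missing.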

Major disruptors TfL expects include increasingly flexible working patterns, autonomous vehicles, and more homes. Christie didn't mention it, but I imagine Uber's arrival was an unpredictable black swan event, abruptly increasing congestion and disrupting modal share. But is it any part of why car journeys per day have dropped 8% since 2000?

Refreshingly, the discussion focused on using technology in effective ways to achieve widely-held public goals, rather than biased black-box algorithms and automated surveillance, or the empty solutionist landscapes Ben Green objects to in The Smart-Enough City. Instead, they were talking about things like utilities sharing information about which roads they need to dig up when, intended to be a win for residents, who welcome less disruption, and for companies, which appreciate saving some of the expense. When, in a final panel, speakers were asked to name significant challenges they'd like to solve, they didn't talk about technology. Instead, Erika Lewis, the deputy director for data policy and strategy at the Department for Culture, Media, and Sport, said she wanted to improve how the local and city governments interface with central government and design services from the ground up around the potential uses for the data. "We missed the boat on smart meters," she said, "but we could do it with self-driving cars."

Similarly, Sarah Mulley, GLA's executive director for communities and intelligence, said engaging with civil society and the informal voluntary sector was a challenge she wanted to solve. "[They have] a lot to say, but there aren't ways to connect into it." Blackwell had the last word. "In certain areas, data has been used in a quite brutal way," he said. "How to gain trust is a difficult leadership challenge for cities."


Illustrations: London in 2017, looking south past London Bridge toward Southwark Cathedral and the Shard from the top of the Walkie-Talkie building.


May 10, 2019

Slime trails

In his 2000 book, Which Lie Did I Tell?, the late, great screenwriter William Goldman called the brilliant 1963 Stanley Donen movie Charade a "money-loser". Oh, sure, it was a great success - for itself. But it cost Hollywood hundreds of millions of dollars in failed attempts to copy its magical romantic-comedy-adventure-thriller mixture. (Goldman's own version, 1992's The Year of the Comet, was - his words - "a flop".) In this sense, Amazon may be the most expensive company ever launched in Silicon Valley because it encouraged everyone to believe losing money in 17 of its first 18 years doesn't matter.

Uber has been playing up this comparison in the run-up to its May 2019 IPO. However, two things make it clear the comparison is false. First - duh - losing money just isn't a magical sign of a good business, even in the Internet era. Second, Amazon had scale on its side, as well as a pioneering infrastructure it was able later to monetize. Nothing about transport scales, as Hubert Horan laid out in 2017; even municipalities can't make Uber cheaper than public transit. Horan's analysis of Uber's IPO filing is scathing. Investment advisers love to advise investing in companies that make popular products, but *not this time*.

Meanwhile, network externalities abound. The Guardian highlights the disparity between Uber's drivers, who have been striking this week, and its early investors, who will make billions even while the company says it intends to continue slicing drivers' compensation. The richest group, says the New York Times, have already decamped to lower-tax states.

If Horan is right, however, the impending shift of billions of dollars from drivers and greater fools to already-wealthy early investors will arguably be a regulatory failure on the part of the Securities and Exchange Commission. I know the rule of the stock market is "buyer beware", but without the trust conferred by regulators there will *be* no buyers, not even pension funds. Everyone needs government to ensure fair play.

Somewhere in one of his 500-plus books, the science/fiction writer Isaac Asimov commented that he didn't like to fly because in case of a plane crash his odds of survival were poor. "It's not sporting." In fact, most passengers survive, unharmed, but not, obviously, in the recent Boeing crashes. Blame, as Madeline Elish correctly predicted in her paper on moral crumple zones, is being sprayed widely, particularly among the humans who build and operate these things: faulty sensors, pilots, and software issues.

The reality seems more likely to be a perfect storm comprising numerous components: 1) the same kind of engineering-management disconnect that doomed Challenger in 1986, 2) trying to compensate with software for a hardware problem, 3) poorly thought-out cockpit warning light design, 4) the number and complexity of vendors involved, and 5) receding regulators. As hybrid cyber-physical systems become more pervasive, it seems likely we will see many more situations where small decisions made by different actors will collide to create catastrophes, much like untested drug interactions.

Again, regulatory failure is the most alarming. Any company can screw up. The failure of any complex system can lead to companies all blaming each other. There are always scapegoats. But in an industry where public perception of safety is paramount, regulators are crucial in ensuring trust. The flowchart at the Seattle Times says it all about how the FAA has abdicated its responsibility. It's particularly infuriating because many in the cybersecurity industry cite aviation as a fine example of what an industry can do to promote safety and security when the parties recognize their collective interests are best served by collaborating and sharing data. Regulators who audit and test provide an essential backstop.

The 6% of the world that flies relies on being able to trust regulators to ensure their safety. Even if the world's airlines now decide that they can't trust the US system, where are they going to go for replacement aircraft? Their own governments will have to step in where the US is failing, as the EU already does in privacy and antitrust. Does the environment win, if people decide it's too risky to fly? Is this a plan?

I want regulators to work. I want to be able to fly with reasonable odds of survival, have someone on the job to detect financial fraud, and be able to trust that medical devices are safe. I don't care how smart you are, no consumer can test these things for themselves, any more than we can tell if a privacy policy is worth the electrons it's printed on.

On that note, last week on Twitter Demos researcher Carl Miller, author of The Death of the Gods, made one of his less-alarming suggestions. Let's replace "cookie": "I'm willing to bet we'd be far less willing to click yes, if the website asked if we [are] willing to have a 'slime trail', 'tracking beacon' or 'surveillance agent' on our browser."

I like "slime trail", which extends to cover the larger use of "cookie" in "cookie crumbs" to describe the lateral lists that show the steps by which you arrived at the current page. Now, when you get a targeted ad, people will sympathize as you shout, "I've been slimed!"


Illustrations: Bill Murray, slimed in Ghostbusters (1984).


March 29, 2019

Doubt

A few months ago, Max Read published an article at New York Magazine commenting that increasingly most of the Internet is fake. He starts with Twitter bots and fake news and winds up contemplating the inauthenticity of self. Fake traffic, automatically-generated content, and advertising fraud are not necessarily lethal. What *is* lethal is the constant nagging doubt about what you're seeing or, as Read puts it, "the uncanny sense that what you encounter online is not 'real' but is also undeniably not 'fake', and indeed may be both at once, or in succession, as you turn it over in your head". It's good not to take everything you read at face value. But the loss of confidence can be poisonous. Navigating that is exhausting.

And it's pervasive. Last weekend, a couple of friends began arguing about whether the epetition to revoke Article 50 was full of fraud. At the time, 40,000 to 50,000 people were signing per hour and the total had just passed 4 million. The petition is hosted on a government site, whose managers have transparently explained their handling of the twin challenges of scale and combating fraud. JJ Patrick's independent analysis concurs: very little bot participation. Even Theresa May defended its validity during Wednesday's Parliamentary debate, even though her government went on to reject it.

Doubt is very easily spread, sometimes correctly. One of the most alarming things about the Boeing 737 MAX 8 crashes is the discovery that the Federal Aviation Administration is allowing the company to self-certify the safety of its planes. Even if the risk there is only perception, it's incredibly dangerous for an industry that has displayed enormous intelligence in cooperating to make air travel safe.

Another example: last year Nextdoor, a San Francisco-based neighborhood-focused social networking service, sent postcards inviting my area to join. Think of it as a corporately-owned chain of community bulletin boards. Across the tracks from me, one street gossips on a private bulletin board one of its residents has set up, where I'm told they have animated discussions of their micro issues. By comparison, Nextdoor sounded synthetic; still, I signed up.

Most postings ask for or recommend window cleaners and piano tuners, builders and babysitters. Yet the site's daily email persistently gives crime and safety the top spot: two guys on a moped peering into car windows reacted aggressively to challenges; car broken into at Tesco; stolen bike; knife attack; police notices. I can't tell if this is how the site promotes "engagement" or whether its origin is deliberate strategy or algorithmic accident. But I do note the rising anxiety among people's responses, and while crime is rising in the UK, likely attributable to police cuts, my neighborhood remains a safer place than Nextdoor suggests...I think. What is certain is that I doubt my neighbors more; you can easily imagine facing their hostile inquisition for being a perfectly innocent hoodie-wearing young male on a moped using a flashlight to look for dropped keys.

Years ago, some of us skeptics considered mounting a hoax - a fake UFO to be found in someone's garden, for example - to chart its progress through the ranks of believers and the media. In the end, we decided it was a bad idea, because such things never die, and then you have to spend the rest of your life debunking them. There are plenty of examples; David Langford's UFO hoax, published as an account written by one William Robert Loosley and discovered in an attic, still circulates as true, goosed since then by a mention in a best-selling book by Whitley Strieber. As the Internet now proves every day, once a false story is embedded, you can never fully dig it out again. Worse, even when you don't believe it, repeated encounters can provoke doubt despite yourself.

Andrew Wakefield is a fine case in point. Years after the British Medical Journal retracted his paper and called it a hoax, the damage continues to escalate. Recently, the World Health Organization called vaccine hesitancy a top-ten threat to health worldwide. Hesitancy is right. I am an oddity in having been born in 1954 but having somehow escaped all the childhood diseases. "You probably just don't remember you had them," the nurse said when I inquired about getting the MMR, now that every week you read of a measles outbreak somewhere. True, I *don't* remember having them. But I *do* remember, clearly, my mother asking me, "Were you *close* to them?" every time the note came home from school that another kid had one of them. We made an appointment for the shot.

I then realized that years of exposure to anti-vaccination arguments have had their effect. Hundreds of millions of people have had these vaccines with little to no ill-effects, and yet: what if? How stupid would I feel if I broke my own health? "It's just fear of illness," someone carped on Twitter, trying to convince me that vaccines were not safety-tested and only a fool would get one. Well, yes, and some illnesses *should* be feared, particularly as you age.

"I know," the nurse said, when I commented that spreading doubt has been a terrible effect of all this. She pushed the plunger. Two days later, a friend living in north London emailed: she has mumps. It feels like I had a close call.


Illustrations: Philip Seymour Hoffman in John Patrick Shanley's DOUBT.


March 22, 2019

Layer nine

Is it possible to regulate the internet without killing it?

Before you can answer that you have to answer this: what constitutes killing the Internet? The Internet Society has a sort of answer, which is a list of what it calls Internet invariants, a useful phrase that is less attackable as "solutionism" by Evgeny Morozov than alternatives that portray the Internet as if it were a force of nature instead of human-designed and human-made.

Few people watching video on their phones on the Underground care about this, but networking specialists view the Internet as a set of layers. I don't know the whole story, but in the 1980s researchers, particularly in Europe, put a lot of work into conceptualizing a seven-layer networking model, Open Systems Interconnection. By 1991, however, a company CEO told me, "I don't know why we need it. TCP/IP is here now. Why can't we just use that?" TCP/IP are the Internet protocols, so that conversation showed the future. However, people still use the concepts OSI built. The bottom, physical layers, are the province of ISPs and telcos. The ones the Internet Society is concerned about are the ones concerning infrastructure and protocols - the middle layers. Layer 7, "Application", is all the things users see - and politicians fight over.

We are at a layer the OSI model failed to recognize, identified by the engineer Evi Nemeth. We - digital and human rights activists, regulators, policy makers, social scientists, net.wars readers - are at layer 9.
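For readers who don't carry the layer numbers around in their heads, the standard seven OSI layers, plus the column's informal layer 9, might be tabulated like this. The layer names are the standard OSI ones; the "province" mapping is this column's rough characterization, not part of any standard:

```python
# The seven OSI layers, plus the informal layer 9 this column describes:
# the people arguing about policy. (Joke layers above 7 vary by telling;
# only layer 9's meaning here comes from the column itself.)
OSI_LAYERS = {
    1: "Physical",
    2: "Data Link",
    3: "Network",
    4: "Transport",
    5: "Session",
    6: "Presentation",
    7: "Application",
    9: "Policy",
}

def province(layer: int) -> str:
    """Roughly who 'owns' each layer, as the column sketches it."""
    if layer <= 2:
        return "ISPs and telcos"
    if layer <= 6:
        return "infrastructure and protocols (the Internet Society's concern)"
    if layer == 7:
        return "what users see - and politicians fight over"
    return "regulators, activists, policy makers"
```

TCP/IP collapses several of these layers together in practice, which is part of why OSI survives mainly as a vocabulary rather than as deployed protocols.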

So the question we started with might also be phrased, "Is it possible to regulate the application layer while leaving the underlying infrastructure undamaged?" Put like that, it feels like it ought to be. Yet aspects of Internet regulation definitely entangle downwards. Most are surveillance-related, such as the US requirement that ISPs enable interception and data retention. Emerging demands for localized data storage and the General Data Protection Regulation also may penetrate more deeply while raising issues of extraterritorial jurisdiction. GDPR seeds itself into other countries like the stowaway recursive clause of the GNU General Public License for software: both require their application to onward derivatives. Localized data storage demands blocks and firewalls instead of openness.

Twenty years ago, you could make this pitch to policy makers: if you break the openness of the Internet by requiring a license to start an online business, or implementing a firewall, or limiting what people can say and do, you will be excluded from the Internet's economic and social benefits. Since then, China has proved that a national intranet can still fuel big businesses. Meanwhile, as the retail sector craters and a new Facebook malfeasance surfaces near-daily, the policy maker might respond that the FAANGs pay far less in tax than the companies they've put out of business, employment precarity is increasing, and the FAANGs wield disproportionate power while enabling abusive behavior and the spread of extremism and violence. We had open innovation, and this is what it brought us.

To old-timers this is all kinds of confusion. As I said recently on Twitter, it's subsets all the way down: Facebook is a site on the web, and the web is an application that runs on the Internet. They are not equivalents - here, at least. In countries where Facebook's Free Basics is zero-rated, the two are functionally equivalent.

Somewhere in the midst of a discussion yesterday about all this, it was interesting to consider airline safety. That industry understood very early that safety was crucial to its success. Within 20 years of the Wright Brothers' first flight in 1903, the nascent industry was lobbying the US Congress for regulation; the first airline safety bill passed in 1926. If the airline industry had instead been founded by the sort of libertarians who have dominated large parts of Internet development...well, the old joke about the exchange between General Motors and Bill Gates applies. The computer industry has gotten away with refusing responsibility for 40 years because they do not believe we'll ever stop buying their products, and we have let them.

There's a lot to say about the threat of regulatory capture even in two highly regulated industries, medicine and air travel, and maybe we'll say it here one week soon, but the overall point is that outside of the open source community, most stakeholders in today's Internet lack the kind of overarching common goal that continues to lead airlines and airplane manufacturers to collaborate on safety despite also being fierce competitors. The computer industry, by contrast, has spent the last 50 years mocking government for being too slow to keep up with technological change while actively refusing to accept any product liability for software.

In our present context, the "Internet invariants" seem almost quaint. Yet I hope the Internet Society succeeds in protecting the Internet's openness because I don't believe our present situation means that the open Internet has failed. Instead, the toxic combination of neoliberalism, techno-arrogance, and the refusal of responsibility (by many industries - just today, see pharma and oil) has undermined the social compact the open Internet reflected. Regulation is not the enemy. *Badly-conceived* regulation is. So the question of what good regulation looks like is crucial.


Illustrations: Evi Nemeth's adapted OSI model, seen here on a T-shirt historically sold by the Internet Systems Consortium.


March 15, 2019

Schrödinger's Brexit


"What's it like over there now?" American friends keep asking as the clock ticks down to midnight on March 29. Even American TV seems unusually interested: last week's Full Frontal with Samantha Bee had Amy Hoggart explain in detail; John Oliver made it a centerpiece two weeks ago, and US news outlets are giving it as much attention as if it were a US story. They're even - so cute! - trying to pronounce "Taoiseach". Everyone seems fascinated by the spectacle of the supposedly stoic, intellectual British holding meaningless "meaningful" votes and avoiding making any decisions that could cause anyone to lose face. So this is what it's like to live through a future line in the history books: other countries fret on your behalf while you're trying to get lunch.

In 14 days, Britain will either still be a member of the European Union or it won't. It will have a deal describing the future relationship or it won't. Ireland will be rediscovering civil war or it won't. In two months, we will be voting in the European Parliamentary elections as if nothing has happened, or we won't. All possible outcomes lead to protests in Parliament Square.

No one expects it to be like Venezuela. But no one knows what will happen, either. We were more confident approaching Y2K. At least then you knew that thousands of people had put years of hard work into remediating the most important software that could fail. Here...in January, returning from CPDP and flowing seamlessly via Eurostar from Brussels to London, my exit into St Pancras station held the question: is this the last time this journey will be so simple? Next trip, will there be Customs channels and visa checks? Where will they put them? There's no space.

A lot of the rhetoric both at the time of the 2016 vote and since has been around taking back control and sovereignty. That's not the Britain I remember from the 1970s, when the sense of a country recovering from the loss of its empire was palpable, middle class people had pay-as-you-go electric and gas meters, and the owner of a Glasgow fruit and vegetable shop stared at me when I asked for fresh garlic. In 1974, a British friend visiting an ordinary US town remarked, "You can tell there's a lot more money around in this country." And another, newly expatriate and struggling: "But at least we're eating real meat here." This is the pre-EU Britain I remember.

"I've worked for them, and I know how corrupt they are," a 70-something computer scientist said to me of the EU recently. She would, she said, "man the barriers" if withdrawal did not go through. We got interrupted before I could ask if she thought we were safer in the hands of the Parliament whose incompetence she had also just furiously condemned.

The country remains profoundly in disagreement. There may be as many definitions of "Brexit" as there are Leave voters. But the last three years have brought everyone together on one thing: no matter how they voted, where they're from, which party they support, or where they get their news, everyone thinks the political class has disgraced itself. Casually-met strangers laugh in disbelief at MPs' inability to put country before party or self-interest, or say things like "It's sickening". Even Wednesday's hair's-breadth vote taking No Deal off the table is absurd: the clock inexorably ticks toward exiting the EU with nothing unless someone takes positive action, either by revoking Article 50, or by asking for an extension, or by signing a deal. But action can get you killed politically. I've never cared for Theresa May, but she's prime minister because no one else was willing to take this on.

NB for the confused: in the UK "tabling a motion" means to put it up for discussion; in the US it means to drop it.

Quietly, people are making just-in-case preparations. One friend scheduled a doctor's appointment to ensure that he'd have in hand six months' worth of the medications he depends on. Others stockpile EU-sourced food items that may become scarce or massively more expensive. Anyone who can is applying for a passport from an EU country; many friends are scrambling to research their Irish grandparents and assemble documentation. So the people in the best position are the recent descendants of immigrants who would not now be welcome. It is unfair and ironic, and everyone knows it. A critical underlying issue, Danny Dorling and Sally Tomlinson write in their excellent and eye-opening Rule Britannia: Brexit and the End of Empire, is an education system that stresses the UK's "glorious" imperial past. Within the EU, they write, UK MEPs make up much of the extreme right, and the EU may be better off - more moderate, less prone to populism - without the UK, while British people may achieve a better understanding of their undistinguished place in the world. Ouch.

The EU has never seemed irrelevant to digital rights activists. Computers, freedom, and privacy - that is, "net.wars" territory - show the importance of the EU in our time, when the US refuses to regulate and the Internet is challenging national jurisdiction. International collaboration matters.

Just as I wrote that, Parliament finally voted to take the smallest possible action and ask the EU for a two-month extension. Schrödinger needs a bigger box.

Illustrations: "Big Ben" (Aldaron, via Wikimedia).


March 8, 2019

Pivot

Would you buy a used social media platform from this man?

"As I think about the future of the internet, I believe a privacy-focused communications platform will become even more important than today's open platform," Mark Zuckerberg wrote this week at the Facebook blog, also summarized at the Guardian.

Zuckerberg goes on to compare Facebook and Instagram to "the digital equivalent of a town square".

So many errors, so little time. Neither Facebook nor Instagram is open. "Open information," Rufus Pollock explained last year in The Open Revolution, "...can be universally and freely used, built upon, and shared." By contrast, "In a Closed world information is exclusively 'owned' and controlled, its attendant wealth and power more and more concentrated".

The alphabet is open. I do not need a license from the Oxford English Dictionary to form words. The web is open (because Tim Berners-Lee made it so). One of the first social media, Usenet, is open. Particularly in the early 1990s, Usenet really was the Internet's town square.

*Facebook* is *closed*.

Sure, anyone can post - but only in the ways that Facebook permits. Running apps requires Facebook's authorization, and if Facebook makes changes, SOL. Had Zuckerberg said - as some have paraphrased him - "town hall", he'd still be wrong, but less so: even small town halls have metal detectors and guards to control what happens inside. However, they're publicly owned. Under the structure Zuckerberg devised when the company went public, even the shareholders have little control over Facebook's business decisions.

So, now: this week Zuckerberg announced a seeming change of direction for the service. Slate, the Guardian, and the Washington Post all find skepticism among privacy advocates that Facebook can change in any fundamental way, and they wonder about the impact on Facebook's business model of the shift to focusing on secure private messaging instead of the more public newsfeed. Facebook's former chief security officer Alex Stamos calls the announcement a "judo move" that both removes the privacy complaints (Facebook now can't read what you say to your friends) and allows the site to say that complaints about circulating fake news and terrorist content are outside its control (Facebook now can't read what you say to your friends *and* doesn't keep the data).

But here's the thing. Facebook is still proposing to unify the WhatsApp, Instagram, and Facebook user databases. Zuckerberg's stated intention is to build a single unified secure messaging system. In fact, as Alex Hern writes at the Guardian that's the one concrete action Zuckerberg has committed to, and that was announced back in January, to immediate privacy queries from the EU.

The point that can't be stressed enough is that although Facebook is trading away the ability to look at the content of what people post, it will retain oversight of all the traffic data. We have known for decades that metadata is even more revealing than content; I remember the late Caspar Bowden explaining the issues in detail in 1999. Even if Facebook's promise to vape the messages extends to keeping no copies for itself (a stretch, given that we found out in 2013 that the company keeps every character you type), it will be able to keep its insights into the connections between people and the conclusions it draws from them. Or, as Hern also writes, Zuckerberg "is offering privacy on Facebook, but not necessarily privacy from Facebook".

Siva Vaidhyanathan, author of Antisocial Media, seems to be the first to get this, and to point out that Facebook's supposed "pivot" is really just a decision to become more dominant, like China's WeChat. WeChat thoroughly dominates Chinese life: it provides messaging, payments, and a de facto identity system. This is where Vaidhyanathan believes Facebook wants to go, and if encrypting messages means it can't compete in China...well, WeChat already owns that market anyway. Let Google get the bad press.

Facebook is making a tradeoff. The merged database will give it the ability to inspect redundancy - are these two people connected on all three services or just one? - and therefore far greater certainty about which contacts really matter and to whom. The social graph that emerges from this exercise will be smaller because duplicates will have been merged, but far more accurate. The "pivot" does, however, look like it might enable Facebook to wriggle out from under some of its numerous problems - uh, "challenges". The calls for regulation and content moderation focus on the newsfeed. "We have no way to see the content people write privately to each other" ends both discussions, quite possibly along with any liability Facebook might have if the EU's copyright reform package passes with Article 11 (the "link tax") intact.

Even calls that the company should be broken up - appropriate enough, since the EU only approved Facebook's acquisition of WhatsApp when the company swore that merging the two databases was technically impossible - may founder against a unified database. Plus, as we know from this week's revelations, the politicians calling for regulation depend on it for re-election, and in private they accommodate it, as Carole Cadwalladr and Duncan Campbell write at the Guardian and Bill Goodwin writes at Computer Weekly.

Overall, then, no real change.


Illustrations: The international Parliamentary committee, with Mark Zuckerberg's empty seat.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

February 22, 2019

Metropolis

Metropolis-openingshot.png"As a citizen, how will I know I live in a smarter city, and how will life be different?" This question was probably the smartest question asked at yesterday's Westminster Forum seminar on smart cities (PDF); it was asked by Tony Sceales, acting as moderator.

"If I feel safe and there's less disruption," said Peter van Manen. "You won't necessarily know. Things will happen as they should. You won't wake up and say, 'I'm in the city of the future'," said Sam Ibbott. "Services become more personalized but less visible," said Theo Blackwell, the Chief Digital Officer for London.

"Frictionless," said Jacqui Taylor, offering it as the one common factor she sees in the wildly different smart city projects she has encountered. I am dubious that this can ever be achieved: one person's frictionless is another's desperate frustration. Streets cannot be frictionless for *both* cars and cyclists, just as a city that is predicted to add 2 million people over the next ten years can't simultaneously eliminate congestion. "Working as intended" was also heard. Isn't that what we all wish computers would do?

Blackwell had earlier mentioned the "legacy" of contactless payments for public transport. To Londoners smushed into stuffed Victoria Line carriages in rush hour, the city seems no smarter than it ever was. No amount of technological intelligence can change the fact that millions of people all want to go home at the same time or the housing prices that force them to travel away from the center to do so. We do get through the ticket barriers faster.

"It's just another set of tools," said Jennifer Schooling. "It should feel no different."

The notion of not knowing as the city you live in smartens up should sound alarm bells. The real reason for that hiddenness is, as Sara Degli Esposti pointed out at this year's Computers, Privacy, and Data Protection, that this whole area is a business-to-business market. "People forget that, especially at the European level. Users are not part of the picture, and that's why we don't see citizens engaged in smart city projects. Citizens are not the market. This isn't social media."

She was speaking at CPDP's panel on smart cities and governance, convened by the University of Stirling's William Webster, who has been leading a research project, CRISP, to study these technologies. CRISP asked a helpfully different question: how can we use smart city technologies to foster citizen engagement, coproduction of services, development of urban infrastructure, and governance structures?

The interesting connection is this: it's no surprise when CPDP's activists, regulators, and academics talk about citizen engagement and participation, or deplore a model in which smart cities are a business-led excuse for corporate and government surveillance. The surprise comes when, two weeks later, the same themes arise among Westminster Forum's private and public sector speakers and audience. These are the people who are going to build these new programs and services, and they, too, are saying they're less interested in technology and more interested in solving the problems that keep citizens awake at night: health, especially.

A paradigm shift appears to be under way as municipalities start to consider seriously where and on what to spend their funds.

However, the shift may be solely European. At CPDP, Canadian surveillance studies researcher David Murakami Wood told the story of Toronto, where (Google owner) Alphabet's subsidiary Sidewalk Labs swooped in circa 2017 with proposals to redevelop the Quayside area of Toronto in partnership with Waterfront Toronto. The project has been hugely controversial - there were hearings this week in Ottawa, the national capital.

As Murakami Wood tells it, for Sidewalk Labs the area is a real-world experiment using real people's lives as input to create products the company can later sell elsewhere. The company has made clear it intends to keep all the data the infrastructure generates on its servers in the US as well as all the intellectual property rights. This, Murakami Wood argued, is the real cost of the "free" infrastructure. It is also, as we're beginning to see elsewhere, the extension of online tracking - or, as Murakami Wood put it, surveillance capitalism - into the physical world: cultural appropriation at municipal scale from a company that has no track record in building buildings, or even publishing detailed development plans. Small wonder that Murakami Wood laughed when he heard Sidewalk Labs CEO Dan Doctoroff impress a group of enthusiastic young Canadian bankers with the news that the company had been studying cities for *two years*.

Putting these things together, we have, as Andrew Adams suggested, three paradigms, which we might call US corporate, Chinese authoritarian, and, emerging, European participatory and cooperative. Is this the choice?

Yes and no. Companies obviously want to develop systems once, sell them everywhere. Yet the biggest markets are one-off outliers. "Croydon," said Blackwell, "is the size of New Orleans." In addition, approaches vary widely. Some places - Webster mentioned Glasgow - are centralized command and control; others - Brazil - are more bottom-up. Rick Robinson finds that these do not meet in the middle.

The clear takeaway overall is that local context is crucial in shaping smart city projects, and, despite some common factors, each one is different. We should build on that.


Illustrations: Fritz Lang's Metropolis (1927).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

December 14, 2018

Entirely preventable

cropped-Spies_and_secrets_banner_GCHQ_Bude_dishes.jpgThis week, the US House of Representatives Committee on Oversight and Government Reform used this phrase to describe the massive 2017 Equifax data breach: "Entirely preventable." It's not clear that the ensuing recommendations, while all sensible and valuable stuff - improve consumers' ability to check their records, reduce the use of Social Security numbers as unique identifiers, improve oversight of credit reporting agencies, increase transparency and accountability, hold federal contractors liable, and modernize IT security - will really prevent another similar breach from taking place. A key element was a bit of unpatched software that left open a vulnerability used by the attackers to gain a foothold - in part, the report says, because the legacy IT systems made patching difficult. Making it easier to do the right thing is part of the point of the recommendation to modernize the IT estate.

How closely is it feasible to micromanage companies the size and complexity of Equifax? What protection against fraud will we have otherwise?

The massive frustration is that none of this is new information or radical advice. On the consumer rights side, the committee is merely recommending practices that have been mandated in the EU for more than 20 years in data protection law. Privacy advocates have been saying for more than *30* years that the SSN is a perfect example of how a unique identifier should *not* be used. Patching software is so basic that you can pick any random top ten security tips and find it in the top three. We sort of make excuses for small businesses because their limited resources mean they don't have dedicated security personnel, but what excuse can there possibly be for a company the size of Equifax that holds the financial frailty of hundreds of millions of people in its grasp?

The company can correctly say this: we are not its customers. It is not its job to care about us. Its actual customers - banks, financial services, employers, governments - are all well served. What's our problem? Zeynep Tufekci summed it up correctly on Twitter when she commented that we are not Equifax's customers but its victims. Until there are proportionate consequences for neglect and underinvestment in security, she said later, the companies and their departing-with-bonuses CEOs will continue scrimping on security even though the smallest consumer infraction means they struggle for years to reclaim their credit rating.

If Facebook and Google should be regulated as public utilities, the same is even more true for the three largest credit agencies, Equifax, Experian, and TransUnion, who all hold much more power over us, and who are much less accountable. We have no opt-out to exercise.

But even the punish-the-bastards approach merely smooths over and repaints the outside of a very ugly tangle of amyloid plaques. Real change would mean, as Mydex CEO David Alexander is fond of arguing, adopting a completely different approach that puts each of us in charge of our own data and avoids creating these giant attacker-magnet databases in the first place. See also data brokers, which are invisible to most people.

Meanwhile, in contrast to the committee, other parts of the Five Eyes governments seem set on undermining whatever improvements to our privacy and security we can muster. Last week the Australian parliament voted to require companies to back-door their encryption when presented with a warrant. While the bill stops short of requiring technology companies to build in such backdoors as a permanent fixture - it says the government cannot require companies to introduce a "systemic weakness" or "systemic vulnerability" - the reality is that being able to break encryption on demand *is* a systemic weakness. Math is like that: either you can prove a theorem or you can't. New information can overturn existing knowledge in other sciences, but math is built on proven bedrock. The potential for a hole is still a hole, with no way to ensure that only "good guys" can use it - even if you can agree who the good guys are.

In the UK, GCHQ has notified the intelligence and security committee that it will expand its use of "bulk equipment interference". In other words, having been granted the power to hack the world's computers - everything from phones and desktops to routers, cars, toys, and thermostats - when the 2016 Investigatory Powers Act was being debated, GCHQ now intends to break its promise to use that power sparingly.

As I wrote in a submission to the consultation, bulk hacking is truly dangerous. The best hackers make mistakes, and it's all too easy to imagine a hacking error becoming the cause of a 100-car pile-up. As smart meters roll out, albeit delayed, and the smart grid takes shape, these, too, will be "computers" GCHQ has the power to hack. You, too, can torture someone in their own home just by controlling their thermostat. Fun! And important for national security. So let's do more of it.

In a time when attacks on IT infrastructure are growing in sophistication, scale, and complexity, the most knowledgeable people in government, whose job it is to protect us, are deliberately advocating weakening it. The consequences that are doubtless going to follow the inevitable abuse of these powers - because humans are humans and the mindset inside law enforcement is to assume the worst of all of us - will be entirely preventable.


Illustrations: GCHQ listening post at dawn (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.


June 28, 2018

Divergence

siliconvalleyopoloy.jpgLast September, Open Markets Institute director of legal policy Lina M. Khan published a lengthy law review article discussing two related topics: American courts' narrowing of antitrust law to focus on pricing and profits rather than promoting competition and balancing market power, and the application of that approach to Amazon in particular. The US's present conception of antitrust law, she writes - and we see in action - provides no lens through which to curb the power of data-driven platforms. If cheaper is always better, what can possibly be wrong with free?

This week, the US Supreme Court provided another look at this sort of reasoning when it issued its opinion in Ohio v. American Express. The short version, as Henry Farrell explains on Twitter: SCOTUS sided with American Express, and in doing so made it harder to bring antitrust action against the big technology companies in the US *and* widened the gulf between the EU and US approaches to such things.

As Erik Hovenkamp explains in a paper analyzing the case (PDF), credit card transactions, like online platforms, are multi-sided markets. That is, they connect two distinguishable groups of customers whose relationships with the platform are markedly different. For Facebook and Google, these groups are consumers and advertisers; for credit card companies they're consumers and retailers. The intermediary platforms make their money by taking a small slice of each transaction. Credit card companies' slice is a percentage of the value of the transaction, paid directly by the merchant and indirectly by card holders through fees and interest; social media's slice is the revenue from the advertising it can display when you interact with your friends. Historically, American Express has charged merchants a higher commission than other cards, money the merchant can't reclaim by selectively raising prices. Network effects - the fact that the more people use them the more useful they are to users - mean all these platforms benefit hugely from scale.

American Express v. Ohio began in 2010, when the US Department of Justice, eventually joined by 17 states, filed a civil antitrust suit against American Express, Visa, and Mastercard. At issue were the "anti-steering" clauses in their merchant contracts, which barred merchants from steering customers toward cheaper (to the merchant) forms of payment.

Visa and Mastercard settled and removed the language. In 2015, the District Court ruled in favor of the DoJ. American Express then won in the 2nd Circuit Appeals Court in 2016; 11 of the states appealed. Now, SCOTUS has upheld the circuit court, and the precedent it sets, Beth Farmer writes at SCOTUSblog, suggests that the plaintiffs in future antitrust cases covering two-sided markets will have to show that both sides have suffered harm in order to succeed. Applied to Facebook, this judgment would appear to say that harm to users (the loss of privacy) or to society at large (gamed elections) wouldn't count if no advertisers were harmed.

Farrell goes on to note the EU's very different tack, like last year's fine against Google for abusing its market dominance. Americans also underestimate the importance of Max Schrems's case against Google, Instagram, WhatsApp, and Facebook, launched the day the General Data Protection Regulation came into force. For 20 years, American companies have tried to cut a deal with data protection law, but, as Simon Davies warned in 1999, this is no more feasible than Europeans doing the same to the US 1st Amendment.

Schrems's case is that the all-or-nothing approach ("give us your data or go away") is not the law's required meaningful consent. In its new report, Deceived by Design, the Norwegian Consumer Council finds plenty of evidence to back up this contention. After studying the privacy settings provided by Windows 10, Facebook, and Google, the NCC argues that the latter two in particular deliberately make it easier for users to accept the most intrusive options than to opt out; they also use "nudge" techniques to stress the benefits of accepting the intrusive defaults.

"This is not privacy by default," the authors conclude after showing that opting into Facebook's most open privacy setting requires only a single click, while opting out requires foraging through "Manage your privacy settings". These are dark patterns - nudges intended to mislead users into doing things they wouldn't normally choose. In advising users to turn on photo tagging, for example, Facebook implies that choosing otherwise will harm the visually impaired using screenreaders, a technique the Dark Patterns website calls confirmshaming. Five NGOs have written to EU Data Protection Board chair Andrea Jelinek to highlight the report and ask her to investigate further.

The US has no comparable GDPR on which to base regulatory action, even if it were inclined to do so, and the American Express case makes clear that it has little interest in applying antitrust law to curb market power. For now, the EU is the only other region or government large enough to push back and make it stick. The initial response from some American companies - ghosting everyone in the EU - is unlikely to be sufficient. It's hard to see a reconciliation of these diverging approaches any time soon.

Illustrations: Silicon Valleyopoly game, 1996.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

May 18, 2018

Fool me once

new-22portobelloroad.jpgMost of the "us" who might read this rarely stop to marvel at the wonder that is our daily trust in the society that surrounds us. One of the worst aspects of London Underground's incessant loud reminders to report anything suspicious - aside from the slogan, which is dumber than a bag of dead mice - is that it interrupts the flow of trust. It adds social friction. I hear it, because I don't habitually block out the world with headphones.

Friction is, of course, the thing that so many technologies are intended to eliminate. And they might, if only we could trust them.

Then you read things like this news, that Philip Morris wants to harvest data from its iQOS e-cigarette. If regulators allow, Philip Morris will turn on functions in the device's internal chips that capture data on its user's smoking habits, not unlike ebook readers' fine-grained data collection. One can imagine the data will be useful for testing strategies for getting people to e-smoke longer.

This example did not arrive in time for this week's Nuances of Trust event, hosted by the Alliance for Internet of Things Innovation (AIOTI) and aimed at producing intelligent recommendations for how to introduce trust into the Internet of Things. But, so often, it's the company behind the devices you can't trust. For another example: Volkswagen.

Partway through the problem-solving session, we realized we had regenerated three of Lawrence Lessig's four modalities of constraining behavior: technology/architecture, law, market, social norms. The first changes device design to bar shipping loads of data about us to parts unknown; law pushes manufacturers into that sort of design, even if it costs more; market would mean people refusing to buy privacy-invasive devices; and social norms used to be known as "peer pressure". Right now, technology is changing faster than we can create new norms. If a friend has an Amazon Echo at home, does entering their house constitute signing Amazon's privacy policy? Should they show me the privacy policy before I enter? Is it reasonable to ask them to turn it off while I'm there? We could have asked questions like "Are you surreptitiously recording me?" at any time since portable tape recorders were invented, but absent a red, blinking light we felt safe in assuming no. Now, suddenly, trusting my friend requires also trusting a servant belonging to a remote third party. If I don't, it's a social cost - to me, and maybe to my friend, but not to Amagoople.

On Tuesday, Big Brother Watch provided a far more alarming example when director Silkie Carlo launched BBW's report on automated facial recognition (PDF). Now, I know the technically minded will point out grumpily that all facial recognition is "automated" because it's a machine what does it, but what BBW means is a system in which CCTV and other cameras automatically feed everything they gather into a facial recognition system that sprinkles AI fairy dust and pops out Persons of Interest (I blame TV). Various UK police have deployed these AFR systems at concerts and football and rugby games; at the 2016 and 2017 Notting Hill Carnivals; on Remembrance Sunday 2017 to restrict "fixated individuals"; and at peaceful demonstrations. On average, fewer than 9% of matches were accurate; but that's little consolation when police pick you out of the hordes arriving by train for an event and insist on escorting you under watch. The system London's Met Police used had a false positive rate of over 98%! How does a system like that even get out of the lab?

Neither the police nor the Home Office seem to think that bringing in this technology requires any public discussion; when asked they play the Yes, Minister game of pass the policy. Within the culture of the police, it may in fact be a social norm that invasive technologies whose vendors promise magical preventative results should be installed as quickly as possible before anyone can stop them. Within the wider culture...not so much.

This is the larger problem with what AIOTI is trying to do. It's not just that the devices themselves are insecure, their risks capricious, and the motives of their makers suspect. It's that long after you've installed and stopped thinking about a system incorporating these devices someone else can come along to subvert the whole thing. How do you ensure that the promise you make today cannot be broken by yourself or others in future? The problem is near-identical to the one we face with databases: each may be harmless on its own, but mash them together and you have a GDPR fine-to-the-max dataset of reidentification.

Somewhere in the middle of this an AIOTI participant suggested that the IoT rests on four pillars: people, processes, things, data. Trust has pillars, too, that take a long time to build but that can be destroyed in an instant: choice, control, transparency, and, the one we talk about least, but perhaps the most important, familiarity. The more something looks familiar, the more we trust it, even when we shouldn't. Both the devices AIOTI is fretting about and the police systems BBW deplores have this in common: they center on familiar things whose underpinnings have changed without our knowledge - yet their owners want us to trust them. We wish we could.


Illustrations: Orwell's house at 22 Portobello Road, London.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 30, 2018

Conventional wisdom

Hague-Grote_Markt_DH3.JPGOne of the problems the internet was always likely to face as a global medium was the conflict over who gets to make the rules and whose rules get to matter. So far, it's been possible to kick the can down the road for Future Government to figure out while each country makes its own rules. It's clear, though, that this is not a workable long-term strategy, if only because the longer we go on without equitable ways of solving conflicts, the more entrenched the ad hoc workarounds and because-we-can approaches will become. We've been fighting the same battles for nearly 30 years now.

I didn't realize how much I longed for a change of battleground until last week's Internet Law Works-in-Progress paper workshop, when for the first time I heard an approach that sounded like it might move the conversation beyond the crypto wars, the censorship battles, and the what-did-Facebook-do-to-our-democracy anguish. The paper was presented by Asaf Lubin, a Yale JSD candidate whose background includes a fellowship at Privacy International. In it, he suggested that while each of the many cases of international legal clash has been considered separately by the courts, the reality is that together they all form a pattern.

asaf-lubin-yale.pngThe cases Lubin is talking about include the obvious ones, such as United States v. Microsoft, currently under consideration in the US Supreme Court, and Apple v. FBI. But they also include the prehistoric cases that created the legal environment we've lived with for the last 25 years: 1996's US v. Thomas, the first jurisdictional dispute, which pitted the community standards of California against those of Tennessee (making it a toss-up whether the US would export the First Amendment or Puritanism); 1995's Stratton Oakmont v. Prodigy, which established that online services could be held liable for the content their users posted; and 1991's Cubby v. CompuServe, which ruled that CompuServe was a distributor, not a publisher, and could not be held liable for user-posted content. The difference in those last two cases: Prodigy exercised some editorial control over postings; CompuServe did not. In the UK, notice-and-takedown rules were codified after Godfrey v. Demon Internet extended defamation law to the internet.

Both access to data - whether encrypted or not - and online content were always likely to repeatedly hit jurisdictional boundaries, and so it's proved. Google is arguing with France over whether right-to-be-forgotten requests should be deindexed worldwide or just in France or the EU. The UK is still planning to require age verification for pornography sites serving UK residents later this year, and is pondering what sort of regulation should be applied to internet platforms in the wake of the last two weeks of Facebook/Cambridge Analytica scandals.

The biggest jurisdictional case, United States v. Microsoft, may have been rendered moot in the last couple of weeks by the highly divisive Clarifying Lawful Overseas Use of Data (CLOUD) Act. Divisive because: the technology companies seem to like it, EFF and CDT argue that it's an erosion of privacy laws because it lowers the standard of review for issuing warrants, and Peter Swire and Jennifer Daskal think it will improve privacy by setting up a mechanism by which the US can review what foreign governments do with the data they're given; they also believe it will serve us all better than if the Supreme Court rules in favor of the Department of Justice (which they consider likely).

Looking at this landscape, "They're being argued in a siloed approach," Lubin said, going on to imagine the thought process of the litigants involved. "I'm only interested in speech...or I'm a Mutual Legal Assistance person and only interested in law enforcement getting data. There are no conversations across fields and no recognition that the problems are the same." In conversation at conferences, he's catalogued reasons for this. Most cases are brought against companies too small to engage in too-complex litigation and who fear antagonizing the judge. Larger companies are strategic about which cases they argue and in front of whom; they seek to avoid having "sticky precedents" issued by judges who don't understand the conflicts or the unanticipated consequences. Courts, he said, may not even be the right forums for debating these issues.

The result, he went on to say, is that these debates conflate first-order rules, such as the right balance on privacy and freedom of expression, with second-order rules, such as the right procedures to follow when there's a conflict of laws. To solve the first-order rules, we'd need something like a Geneva Convention, which Lubin thought unlikely to happen.

To reach agreement on the second-order rules, however, he proposes a Hague Convention - the "private international law treaties" that could address the problem of agreeing which rules to follow when laws conflict. To me, as neither a lawyer nor a policy wonk, the idea sounds plausible and interesting: these are not debates that should be settled by either "Our lawyers are bigger and more expensive than your lawyers" or "We have bigger bombs." (Cue Tom Lehrer: "But might makes right...") I have no idea if such an idea can work or be made to happen. But it's the first constructive new suggestion I've heard anyone make for changing the conversation in a long, long time.


Illustrations: The Hague's Grote Markt (via Wikimedia); Asaf Lubin.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

March 16, 2018

Homeland insecurity

"To the young people," a security practitioner said at a recent meeting, speaking of a group he'd been working with, "it's life lived on their phone."

He was referring to the tendency for adults to talk to kids about fake news, or sexting, or sexual abuse and recruitment, and so on as "online" dangers the adults want to protect them from. But, as this practitioner was trying to explain (and we have said here before), "online" isn't separate to them. Instead, all these issues are part of the context of pressures, relationships, economics, and competition that makes up their lives. This will become increasingly true as widely deployed sensors, hybrid cyber-physical systems, and tracking become the norm.

This is a real generation gap. Older adults have taken each of these phenomena on board as it arrived, adding it into our existing understanding of the world. Watching each arrive singly over time allows the luxury of consideration and the mental space in which to plot a strategy. If you're 12, all of these things are arriving at once as pieces that are coalescing into your picture of the world. Even if you've only just persuaded your parents to let you have your own phone, you've been watching videos on YouTube, FaceTiming your friends, and playing online games all your life.

An important part of "life lived on the phone" is the UK's data protection bill, which implements the General Data Protection Regulation and is now going through Parliament. The bill carves out some very broad exemptions. Most notably, and opposed by the Open Rights Group and the3million, the bill would remove a person's rights as a data subject in the interests of "effective immigration control". In other words, under this exemption the Home Office could make decisions about where and whether you were allowed to live without ever having to tell you the basis for those decisions. Having just had *another* long argument with a different company about whether or not I've ever lived in Iowa, I understand the problem of being unable to authenticate yourself because of poor-quality data.

It's easy for people to overlook laws that "only" affect immigrants, but as Gracie Mae Bradley, an advocacy and policy officer, made clear at this week's The State of Data 2018 event, hosted by Jen Persson, one of the consequences is to move the border from Britain's ports into its hospitals, schools, and banks, which are now supposed to check once a quarter that their 70 million account holders are legitimate. NHS Digital is turning over confidential patient information to help the Home Office locate and deport undocumented individuals. Britain's schools are being pushed to collect nationality. And, as Persson noted, remarkably few parents even know the National Pupil Database exists, and yet it catalogues highly detailed records of every schoolchild.

"It's obviously not limited to immigrants," Bradley said of the GDPR exemption. "There is no limit on the processes that might apply this exemption". It used to be clear when you were approaching a national border; under these circumstances the border is effectively gummed to your shoe.

The data protection bill also has the usual broad exemptions for law enforcement and national security.

Both this discussion (implicitly) and the security conversation we began with (explicitly) converged on security as a felt, emotional state. Even a British citizen living in their native country in conditions of relative safety - a rich country with good health care, stable governance, relatively little violence, mostly reasonable weather - may feel insecure if they're constantly being required to prove the legitimacy of their existence. Conversely, people may live in objectively more dangerous conditions and yet feel more secure because they know the local government is not eying them suspiciously with a view to telling them to repatriate post-haste.

Put all these things together with other trends, and you have the potential for a very high level of social insecurity that extends far outwards from the enemy class du jour, "illegal immigrants". This in itself is a damaging outcome.

And the potential for social control is enormous. Transport for London is progressively eliminating both cash and its Oyster payment cards in favor of direct payment via credit or debit card. What happens to people who fail the bank's quarterly inspection? How do they pay the bus or tube fare to get to work?

Like gender, immigration status is not the straightforward state many people think. My mother, brought to the US when she was four, often talked about the horror of discovering in her 20s that she was stateless: marrying my American father hadn't, as she imagined, automatically made her an American, and Switzerland had revoked her citizenship because she had married a foreigner. In the 1930s, she was naturalized without question. Now...?

Trying to balance conflicting securities is not new. The data protection bill-in-progress offers the opportunity to redress a serious imbalance, which Persson called, rightly, a "disconnect between policy, legislation, technological change, and people". It is, as she and others said, crucial that the balance of power that data protection represents not be determined by a relatively small, relatively homogeneous group.


Illustrations: 2008 map of nationalities of UK residents (via Wikipedia).


January 19, 2018

Expressionism

"Regulatory oversight is going to be inevitable," Adam Kinsley, Sky's director of policy, predicted on Tuesday. He was not alone in saying this is the internet's direction of travel, and we shouldn't feel too bad about it. "Regulation is not inherently bad," suggested Facebook's UK public policy manager, Karim Palant.

The occasion was the Westminster eForum's seminar on internet regulation (PDF). The discussion focused on the key question, posed at the outset by digital policy consultant Julian Coles: who is responsible, and for what? Free speech fundamentalists find it easy to condemn anything smacking of censorship. Yet even some of them are demanding proactive removal of some types of content.

Two government initiatives sparked this discussion. The first is the UK's Internet Safety Strategy green paper, published last October. Two aspects grabbed initial attention: a levy on social media companies and age verification for pornography sites, now assigned to the British Board of Film Classification to oversee. But there was always more to pick at, as Evelyn Douek helpfully summarized at Lawfare. Coles' question is fundamental, and 2018 may be its defining moment.

The second, noted by Graham Smith, was raised by the European Commission at the December 2017 Global Internet Forum, and aims to force technology companies to take down extremist content within one to two hours of posting. Smith's description: "...act as detective, informant, arresting officer, prosecutor, defense, judge, jury, and prison warder all at once." Open Rights Group executive director Jim Killock added later that it's unreasonable to expect technology companies to do the right thing perfectly within a set period at scale, making no mistakes.

As Coles said - and as Old Net Curmudgeons remember - the present state of the law was largely set in the mid-to-late 1990s, when the goal of fostering innovation led both the US Congress (via Section 230 of the Communications Decency Act, 1996) and the EU (via the Electronic Commerce Directive, 2000) to hold that ISPs are not liable for the content they carry.

However, those decisions also had precedents of their own. The 1991 US case Cubby v. CompuServe ended in CompuServe's favor, holding it not liable for defamatory content posted to one of its online forums. In 2000, the UK's Godfrey v. Demon Internet successfully applied libel law to Usenet postings, ultimately creating the notice and takedown rules we still live by today. Also crucial in shaping those rules was Scientology's actions in 1994-1995 to remove its top-level secret documents from the internet.

In the simpler landscape when these laws were drafted, the distinction between access providers and content providers was cleaner. Before then, the early online services - CompuServe, AOL, and smaller efforts such as the WELL, CIX, and many others - were hybrids - social media platforms by a different name - providing access and a platform for content providers, who curated user postings and chat.

Eventually, when social media were "invented" (Coles's term; more correctly, when everything migrated to the web), today's GAFA (or, in the US, FAANG) inherited that freedom from liability. GAFA/FAANG straddle that briefly sharp boundary between pipes and content like the dead body on the Quebec-Ontario boundary sign in the Canadian film Bon Cop, Bad Cop. The vertical integration that is proceeding apace - Verizon buying AOL and Yahoo!; Comcast buying NBC Universal; BT buying TV sports rights - is setting up the antitrust cases of 2030 and ensuring that the biggest companies - especially Amazon - play many roles in the internet ecosystem. They might be too big for governments to regulate on their own (see also: paying taxes), but public and advertisers' opinions are joining in.

All of this history has shaped the status quo that Kinsley seems to perceive as somewhat unfair when he noted that the same video that is regulated for TV broadcast is not regulated when streamed on Facebook. Palant noted that Facebook isn't exactly regulation-free. Contrary to popular belief, he said, many aspects of the industry, such as data and advertising, are already "heavily regulated". The present focus, however, is content, a different matter. It was Smith who explained why change is not simple: "No one is saying the internet is not subject to general law. But if [Kinsley] is suggesting TV-like regulation...where it will end up is applying to newspapers online." The Authority for Television on Demand, active from 2010 to 2015, already tested this, he said, and the Sun newspaper got it struck down. TV broadcasting's regulatory regime was the exception, Smith argued, driven by spectrum scarcity and licensing, neither of which applies to the internet.

New independent Internet Watch Foundation chair Andrew Puddephatt listed five key lessons from the IWF's accumulated 21 years of experience: removing content requires clear legal definitions; independence is essential; human analysts should review takedowns, which have to be automated for reasons of scale; outside independent audits are also necessary; companies should be transparent about their content removal processes.
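Puddephatt's pairing of automation for scale with human review of takedowns is essentially a two-stage pipeline. A minimal sketch, with a toy keyword classifier standing in for whatever detection a real system would use (no names here come from the IWF or any actual platform):

```python
def auto_flag(items, classifier):
    """First stage: automated flagging, because the volume is far too
    large for people to screen everything."""
    return [item for item in items if classifier(item)]

def human_review(flagged, analyst_confirms):
    """Second stage: a human analyst confirms each automated flag
    before anything is actually removed."""
    return [item for item in flagged if analyst_confirms(item)]

# Toy run: the classifier is a stand-in, not any real system's logic.
posts = ["holiday photos", "suspected unlawful image", "cat video"]
flagged = auto_flag(posts, lambda p: "unlawful" in p)
removed = human_review(flagged, lambda p: True)  # analyst agrees in this run
```

The point of the structure is that the two stages can be audited separately - which is where Puddephatt's calls for transparency and outside audits attach.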

If there is going to be a regulatory system, this list is a good place to start. So far, it's far from the UK's present system. As Killock explained, PIPCU, CTRIU, and Nominet all make censorship decisions - but transparency, accountability, oversight, and the ability to appeal are lacking.


Illustrations: "Escher background" (from Discarding Images, Boccaccio, "Des cleres et nobles femmes" (French version of "De mulieribus claris"), France ca. 1488-1496, BnF, Français 599, fol. 89v).




November 10, 2017

Regulatory disruption

The financial revolution due to hit Britain in mid-January has had surprisingly little publicity and has little to do with the money-related things making news headlines over the last few years. In other words, it's not a new technology, not even a cryptocurrency. Instead, this revolution is regulatory: banks will be required to open up access to their accounts to third parties.

The immediate cause of this change is two difficult-to-distinguish pieces of legislation, one UK-specific and one EU-wide. The EU piece is Payment Services Directive 2, which is intended to foster standards and interoperability in payments across Europe. In the UK, Open Banking requires the nine biggest retail banks to create APIs that, given customer consent, will give third parties certified by the Financial Conduct Authority direct access to customer accounts. Account holders have begun getting letters announcing new terms and conditions, although recipients report that the parts that refer to open banking and consent are masterfully vague.

As anyone attending the annual Tomorrow's Transactions Forum knows, open banking has been creeping up on us for the last few years. Consult Hyperion's Tim Richards has a good explanation of the story so far. At this year's event, Dave Birch, who has a blog posting outlining PSD2's background and context, noted that in China, where the majority of non-cash payments are executed via mobile, Alipay and Tencent are already executing billions of transactions a year, bypassing banks entirely. While the banks aren't thrilled about losing the transactions and their associated (dropping) revenue, the bigger issue is that they are losing the data and insight into their customers that traditionally has been exclusively theirs.

We could pick an analogy from myriad internet-disrupted sectors, but arguably the best fit is telecoms deregulation, which saw AT&T (in the US) and BT (in the UK) forced to open up their networks to competitors. Long distance revenues plummeted and all sorts of newcomers began leaching away their customers.

For banks, this story began the day Elon Musk's x.com merged with Peter Thiel's money transfer business to create the first iteration of Paypal so that anyone with an email address could send and receive money. Even then, the different approach of cryptocurrencies was the subject of experiments, but for most people the rhetoric of escaping government was less a selling point than being able to trade small sums with strangers who didn't take credit cards. Today's mobile payment users similarly don't care whether a bank is involved or not as long as they get their money.

Part of the point is to open up competition. In the UK, consumer-bank relationships tend to be lifelong, partly because so much of banking here has been automated for decades. For most people, moving their account involves not only changing arrangements for inbound payments like salary, but also all the outbound payments that make up a financial life. The upshot is to give the banks impressive customer lock-in, which the Competition and Markets Authority began trying to break with better account portability.

The larger point of Open Banking, however, is to drive innovation in financial services. Why, the reasoning goes, shouldn't it be easier to aggregate data from many sources - bank and other financial accounts, local transport, government benefits - and provide a dashboard to streamline management or automatically switch to the cheapest supplier of unavoidable services? At Wired, Rowland Manthorpe has a thorough outline of the situation and its many uncertainties. Among these are the impact on the banks themselves - will they become, as the project's leader and the telecoms analogy suggest, plumbing for the financial sector or will they become innovators themselves? Or, despite the talk of fintech startups, will the big winners be Google and Facebook?

The obvious concerns in all this are security and privacy. Few outside the technology sector understand what an API is; how do we explain it to the broad range of the population so they understand how to protect themselves? Assuming that start-ups emerge, what mechanisms will we have to test how well our data is secured or trace how it's being used? What about the potential for spoof apps that steal people's data and money?
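For readers in that position, an API is just a machine-readable doorway: a third party sends a structured request carrying proof of the customer's consent, and the bank's server answers with structured data rather than a web page. A sketch in Python - the endpoint path, account ID, and token here are invented for illustration; the real Open Banking standard defines its own URLs and schemas:

```python
import urllib.request

def build_balance_request(api_base, account_id, access_token):
    """Build the HTTP request a certified third party would send to a
    bank, presenting the customer's consent token in a header."""
    return urllib.request.Request(
        f"{api_base}/accounts/{account_id}/balances",
        headers={"Authorization": f"Bearer {access_token}"},
    )

# Hypothetical example: the bank checks the token before answering.
req = build_balance_request(
    "https://api.example-bank.co.uk/open-banking", "acct-123", "consent-token")
```

The security questions above then become concrete: who issued that token, how long it lasts, and who gets to see the answer on its way through.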

It's also easy to imagine that "consent" may be more than ordinarily mangled, a problem a friend calls the "tendency to mandatory". It's easy to imagine that the companies to whom we apply for insurance, a loan, or a job may demand an opened gateway to account data as part of the approvals process, which is extortion rather than consent.

This is also another situation where almost all of "my" data inevitably involves exposing third parties, the other halves of our transactions who have never given consent for that to happen. Given access to a large enough percentage of the population's banking data, triangulation should make it possible to fill in a fair bit of the rest. Amazon already has plenty of this kind of data from its own customers; for Facebook and Google this must be an exciting new vista.

Understanding what this will all mean will take time. But it represents a profound change, not only in the landscape of financial services but in the area of technical innovation. This time, those fusty old government regulators are the ones driving disruption.


Illustrations: Northern Rock in 2007 (Dominic Alves); Dave Birch.


November 23, 2012

Democracy theater

So Facebook is the latest to discover that it's hard to come up with a governance structure online that functions in any meaningful way. This week, the company announced plans to disband the system of voting on privacy changes that it put in place in 2009. To be honest, I'm surprised it took this long.

TechCrunch explains the official reasons. First, with 1 billion users, it's now too easy to hit the threshold of 7,000 comments that triggers a vote on proposed changes. Second, with 1 billion users, amassing the 30 percent of the user base necessary to make the vote count has become...pretty much impossible. (Look, if you hate Facebook's policy changes, it's easier to simply stop using the system. Voting requires engagement.) The company also complained that the system as designed encourages comments' "quantity over quality". Really, it would be hard to come up with an online system that didn't unless it was so hard to use that no one would bother anyway.
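The mismatch between those two thresholds is easy to make concrete. A back-of-the-envelope sketch using the figures above (the user count is approximate):

```python
# Figures from the voting rules described above.
users = 1_000_000_000      # roughly Facebook's user base at the time
comment_trigger = 7_000    # comments that force a vote on a proposed change
turnout_quota = 0.30       # share of all users who must vote for it to bind

votes_needed = int(users * turnout_quota)
comment_share = comment_trigger / users

print(f"{votes_needed:,}")  # 300,000,000 voters required
# while the trigger is met by just 0.0007% of users leaving a comment
```

So a sliver of the user base can force a vote that, for all practical purposes, can never be binding.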

The fundamental problem for any kind of online governance is that no one except some lawyers thinks governance is fun. (For an example of tedious meetings producing embarrassing results, see this week's General Synod.) Even online, where no one can tell you're a dog watching the Outdoor Channel while typing screeds of debate, it takes strong motivation to stay engaged. That in turn means that ultimately the people who participate, once the novelty has worn off, are either paid, obsessed, or awash in free time.

The people who are paid - either because they work for the company running the service or because they work for governments or NGOs whose job it is to protect consumers or enforce the law - can and do talk directly to each other. They already know each other, and they don't need fancy online governmental structures to make themselves heard.

The obsessed can be divided into two categories: people with a cause and troublemakers - trolls. Trolls can be incredibly disruptive, but they do eventually get bored and go away, IF you can get everyone else to starve them of the oxygen of attention by just ignoring them.

That leaves two groups: those with time (and patience) and those with a cause. Both tend to fall into the category Mark Twain neatly summed up in: "Never argue with a man who buys his ink by the barrelful." Don't get me wrong: I'm not knocking either group. The cause may be good and righteous and deserving of having enormous amounts of time spent on it. The people with time on their hands may be smart, experienced, and expert. Nonetheless, they will tend to drown out opposing views with sheer volume and relentlessness.

All of which is to say that I don't blame Facebook if it found the comments process tedious and time-consuming, and as much of a black hole for its resources as the help desk for a company with impenetrable password policies. Others are less tolerant of the decision. History, however, is on Facebook's side: democratic governance of online communities does not work.

Even without the generic problems of online communities, which have been replicated mutatis mutandis since the first modem uploaded the first bit, Facebook was always going to face problems of scale if it kept growing. As several stories have pointed out, how do you get 300 million people to care enough to vote? As a strategy, it's understandable why the company set a minimum percentage: so a small but vocal minority could not hijack the process. But scale matters, and that's why every democracy of any size has representative government rather than direct voting, like Athenian citizens in the assembly. (Pause to imagine the complexities of deciding how to divvy up Facebook into tribes: would the basic unit of membership be nation, family, or circle of friends, or should people be allocated into groups based on when they joined or perhaps their average posting rate?)

The 2009 decision to allow votes came at a time when Facebook was under recurring and frequent pressure over a multitude of changes to its privacy policies, all going one way: toward greater openness. That was the year, in fact, that the system effectively turned itself inside out. EFF has a helpful timeline of the changes from 2005 to 2010. Putting the voting system in place was certainly good PR: it made the company look like it was serious about listening to its users. But, as the Europe vs Facebook site says, the choice was always constrained to old policy or new policy, not new policy, old policy, or an entirely different policy proposed by users.

Even without all that, the underlying issue is this: what company would want democratic governance to succeed? The fact is that, as Roger Clarke observed before Facebook even existed, social networks have only one business model: to monetize their users. The pressure to do that has only increased since Facebook's IPO, even though founder Mark Zuckerberg created a dual-class structure that means his decisions cannot be effectively challenged. A commercial company - especially a *public* commercial company - cannot be run as a democracy. It's as simple as that. No matter how much their engagement makes them feel they own the place, the users are never in charge of the asylum. Not even on the WELL.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series.

November 16, 2012

Grabbing at governance

Someday the development of Internet governance will look like a continuous historical sweep whose outcome, in hindsight, is obvious. At the beginning will be one man, Jon Postel, who in the mid-1990s was, if anyone was, the god of the Internet. At the end will be...well, we don't know yet. And the sad thing is that the road to governance is so long and frankly so dull: years of meetings, committees, proposals, debate, redrafted proposals, diplomatic language, and, worst of all, remote from the mundane experience of everyday Internet users, such as spam and whether they can trust their banks' Web sites.

But if we care about the future of the Internet we must take an interest in what authority should be exercised by the International Telecommunications Union or the Internet Corporation for Assigned Names and Numbers or some other yet-to-be-defined body. In fact, we are right on top of a key moment in that developmental history: from December 3 to 14, the ITU is convening the World Conference on International Telecommunications (WCIT, pronounced "wicket"). The big subject for discussion: how and whether to revise the 1988 International Telecommunications Regulations.

Plans for WCIT have been proceeding for years. In May, civil society groups concerned with civil liberties and human rights signed a letter to ITU secretary-general Hamadoun Touré asking the ITU to open the process to more stakeholders. In June, a couple of frustrated academics changed the game by setting up WCITLeaks, asking anyone who had copies of the proposals being submitted to the ITU to send them in. Scrutiny of those proposals showed the variety and breadth of some countries' desires for regulation. On November 7, Touré wrote an op-ed for Wired arguing that nothing would be passed except by consensus.

On Monday, he got a sort of answer from the International Trade Union Confederation's general secretary, Sharon Burrow, who, together with former ICANN head Paul Twomey and, by video link, Internet pioneer Vint Cerf, launched the Stop the Net Grab campaign. The future of the Internet, they argued, is too important to too many stakeholders to leave decisions about it up to governments bargaining in secret. The ITU, in its response, argued that Greenpeace and the ITUC have their facts wrong; after the two sides met, the ITUC reiterated its desire for some proposals to be taken off the table.

But stop and think. Opposition to the ITU is coming from Greenpeace and the ITUC?

"This is a watershed," said Twomey. "We have a completely new set of players, nothing to do with money or defending the technology. They're not priests discussing their protocols. We have a new set of experienced international political warriors saying, 'We're interested'."

Explained Burrow, "How on earth is it possible to give the workers of Bahrain or Ghana the solidarity of strategic action if governments decide unions are trouble and limit access to the Internet? We must have legislative political rights and freedoms - and that's not the work of the ITU, if it requires legislation at all."

At heart for all these years, the debate remains the same: who controls the Internet? And does governing the Internet mean regulating who pays whom or controlling what behavior is allowed? As Vint Cerf said, conflating those two is confusing content and infrastructure.

Twomey concluded, "[Certain political forces around the world] see the ITU as the place to have this discussion because it's not structured to be (nor will they let it be) fully multi-stakeholder. They have taken the opportunity of this review to bring up these desires. We should turn the question around: where is the right place to discuss this and who should be involved?"

In the journey from Postel to governance, this is the second watershed. The first step change came in 1996-1997, when it was becoming obvious that governing the Internet - which at the time primarily meant managing the allocation of domain names and numbered Internet addresses (under the aegis of the Internet Assigned Numbers Authority) - was too complex and too significant a job for one man, no matter how respected and trusted. The Internet Society and IANA formed the Internet Ad-Hoc Committee, which, in a published memorandum, outlined its new strategy. And all hell broke loose.

Long-term, the really significant change was that until that moment no one had much objected to either the decisions the Internet pioneers and engineers made or their right to make them. After some pushback, in the end the committee was disbanded and the plan scrapped, and instead a new agreement was hammered out, creating ICANN. But the lesson had been learned: there were now more people who saw themselves as Internet stakeholders than just the engineers who had created it, and they all wanted representation at the table.

In the years since, the make-up of the groups demanding to be heard has remained pretty stable, as Twomey said: engineers and technologists, plus representatives of civil society groups, usually working in some aspect of human rights such as civil liberties - EFF, ORG, CDT, and Public Knowledge, all of whom signed the May letter. So yes, for labor unions and Greenpeace to decide that Internet freedoms are too fundamental to what they do to sit out the decision-making about the Internet's future is a watershed.

"We will be active as long as it takes," Burrow said Monday.


October 26, 2012

Lie to me

I thought her head was going to explode.

The discussion that kicked off this week's Parliament and Internet conference revolved around cybersecurity and trust online, harmlessly at first. Then Helen Goodman (Labour - Bishop Auckland), the shadow minister for Culture, Media, and Sport, raised a question: what was Nominet doing to get rid of anonymity online? Simon McCalla, Nominet's CTO, had some answers: primarily, they're constantly trying to improve the accuracy and reliability of the Whois database, but it's only a very small criminal element that engages in false domain name registration. Like that.

A few minutes later, Andy Smith, PSTSA Security Manager, Cabinet Office, in answer to a question about why the government was joining the Open Identity Exchange (as part of the Identity Assurance Programme) advised those assembled to protect themselves online by lying. Don't give your real name, date of birth, and other information that can be used to perpetrate identity theft.

Like I say, bang! Goodman was horrified. I was sitting near enough to feel the splat.

It's the way of now that the comment was immediately tweeted, picked up by the BBC reporter in the room, published as a story, retweeted, Slashdotted, tweeted some more, and finally boomeranged back to be recontextualized from the podium. Given a reporter with a cellphone and multiple daily newspaper editions, George Osborne's contretemps in first class would still have reached the public eye the same day 15 years ago. This bit of flashback couldn't have happened even five years ago.

For the record, I think it's clear that Smith gave good security advice, and that the headline - the greater source of concern - ought to be that Goodman, an MP apparently frequently contacted by constituents complaining about anonymous cyberbullying, doesn't quite grasp that this is a nuanced issue with multiple trade-offs. (Or, possibly, how often the cyberbully is actually someone you know.) Dates of birth, mother's maiden names, the names of first pets...these are all things that real-life friends and old schoolmates may well know, and lying about the answers is a perfectly sensible precaution given that there is often no choice about giving the real answers for more sensitive purposes, like interacting with government, medical, and financial services. It is not illegal to fake or refuse to disclose these things, and while Facebook has a real names policy it's enforced with so little rigor that it has a roster of fake accounts the size of Egypt.

Although: the Earl of Erroll might be a bit busy today changing the fake birth date - April 1, 1900 - he cheerfully told us and Radio 4 he uses throughout; one can only hope that he doesn't use his real mother's maiden name, since that, as Tom Scott pointed out later, is in Erroll's Wikipedia entry. Since my real birth date is also in *my* Wikipedia entry and who knows what I've said where, I routinely give false answers to standardized security questions. What's the alternative? Giving potentially thousands of people the answers that will unlock your bank account? On social networking sites it's not enough for you to be taciturn; your birth date may be easily outed by well-meaning friends writing on your wall. None of this is - or should be - illegal.
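Smith's precaution is easy to automate. Here is a minimal sketch in Python - the site names, question labels, and word list are all invented for illustration - that generates a random, unguessable answer for each security question and records it, as you would a password, in something like a password manager:

```python
import secrets

# Word list for generating memorable-but-random fake answers; any
# sufficiently large list would do.
WORDS = ["falcon", "quartz", "meadow", "ember", "tundra",
         "violet", "cobalt", "sparrow"]

def fake_answer(n_words: int = 3) -> str:
    """Return a random multi-word answer, e.g. for 'mother's maiden name'."""
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

def register(site: str, question: str, vault: dict) -> str:
    """Generate a fake answer for a site's security question and store it."""
    answer = fake_answer()
    vault[(site, question)] = answer
    return answer

# The "vault" stands in for a password manager's encrypted store.
vault = {}
register("examplebank.test", "mother's maiden name", vault)
register("examplebank.test", "first pet", vault)
```

The point is simply that the answer bears no relation to anything a schoolmate, a Wikipedia entry, or a friend's wall post could reveal; the only hard part is remembering to store it somewhere you can retrieve it.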

It turns out that it's still pretty difficult to explain to some people how the Internet works - or why. Nominet can work as hard as it likes on verifying its own Whois database, but it is powerless over the many UK citizens and businesses that choose to register under .com, .net, and other gTLDs and country codes. Making a law to enjoin British residents and companies from registering domains outside of .uk...well, how on earth would you enforce that? And then there's the whole problem of trying to check, say, registrations in Chinese characters. Can't computers read Chinese? Well, no, not really, no matter what Google Translate might lead you to believe.

Anonymity on the Net has been under fire for a long, long time. Twenty years ago, the main source of complaints was AOL, whose million-CD marketing program made it easy for anyone to get a throwaway email address for 24 hours or so until the system locked you out for providing an invalid credit card number. Then came Hotmail, and you didn't even need that. Then, as now, there were good and bad reasons for being anonymous. For every nasty troll who uses the cloak to hide there are many whistleblowers and people in private pain who need its protection.

Smith's advice only sounds outrageous if, like Goodman, you think there's a valid comparison between Nominet's registration activity and the function of the Driver and Vehicle Licensing Agency (and if you think the domain name system is the answer to ensuring a traceable online identity). And therein lies the theme of the day: the 200-odd Parliamentarians, consultants, analysts, government, and company representatives assembled repeatedly wanted incompatible things in conflicting ways. The morning speakers wanted better security, stronger online identities, and the resources to fight cybercrime; the afternoon folks were all into education and getting kids to hack and explore so they learn to build things and understand things and maybe have jobs someday, to their own benefit and that of the rest of the country. Paul Bernal has a good summary.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


October 12, 2012

My identity, my self

Last week, the media were full of the story that the UK government was going to start accepting Facebook logons for authentication. This week, in several presentations at the RSA Conference, representatives of the Government Digital Service begged to differ: the list of companies that have applied to become identity providers (IDPs) will be published at the end of this month, and until then they are not confirming the presence or absence of any particular company. According to several of the spokesfolks manning the stall and giving presentations, the press simply assumed that when they saw social media companies among the categories of organization that might potentially want to offer identity authentication, that meant Facebook. We won't know for another few weeks who has actually applied.

So I can mercifully skip the rant that hooking a Facebook account to the authentication system you use for government services is a horrible idea in both directions. What they're actually saying is, what if you could choose among identification services offered by the Post Office, your bank, your mobile network operator (especially for the younger generation), your ISP, and personal data store services like Mydex or small, local businesses whose owners are known to you personally? All of these sounded possible based on this week's presentations.

The key, of course, is what standards the government chooses to create for IDPs and which organizations decide they can meet those criteria and offer a service. Those are the details the devil is in: during the 1990s battles about deploying strong cryptography, the government wanted copies of everyone's cryptography keys to be held in escrow by a Trusted Third Party. At the time, the frontrunners were banks: the government certainly trusted those, and imagined that we did, too. The strength of the disquiet over that proposal took them by surprise. Then came 2008. Those discussions are still relevant, however; someone with a long memory raised the specter of Part I of the Electronic Communications Act 2000, modified in 2005, as relevant here.

It was this historical memory that made some of us so dubious in 2010, when the US came out with proposals rather similar to the UK's present ones, the National Strategy for Trusted Identities in Cyberspace (NSTIC). Ross Anderson saw it as a sort of horror-movie sequel. On Wednesday, however, Jeremy Grant, the senior executive advisor for identity management at the US National Institute for Standards and Technology (NIST), the agency charged with overseeing the development of NSTIC, sounded a lot more reassuring.

Between then and now came both US and UK attempts to establish some form of national ID card. In the US, "Real ID" focused on the state authorities that issue driver's licenses. In the UK, it was the national ID card and accompanying database. In both countries the proposals got howled down. In the UK especially, the combination of an escalating budget, a poor record with large government IT projects, a change of government, and a desperate need to save money killed it in 2010.

Hence the new approach in both countries. From what the GDS representatives - David Rennie (head of proposition at the Cabinet Office), Steven Dunn (lead architect of the Identity Assurance Programme; Twitter: @cuica), Mike Pegman (security architect at the Department for Work and Pensions, expected to be the first user service; Twitter: @mikepegman), and others manning the GDS stall - said, the plan is much more like the structure that privacy advocates and cryptographers have been pushing for 20 years: systems that give users choice about who they trust to authenticate them for a given role and that share no more data than necessary. The notion that this might actually happen is shocking - but welcome.

None of which means we shouldn't be asking questions. We need to understand clearly the various envisioned levels of authentication. In practice, will those asking for identity assurance ask for the minimum they need or always go for the maximum they could get? For example, a bar only needs relatively low-level assurance that you are old enough to drink; but will bars prefer to ask for full identification? What will be the costs; who pays them and under what circumstances?
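The bar example can be made concrete. The sketch below is purely conceptual - it is not any real IDP protocol, and a shared HMAC key stands in for a proper signature scheme - but it shows the shape of minimal disclosure: the relying party receives a signed over-18 claim and never sees the date of birth.

```python
import hmac, hashlib, json
from datetime import date

# Illustrative only: a real IDP would use public-key signatures, not a
# shared secret held by both parties.
IDP_KEY = b"demo-signing-key"

def issue_age_claim(dob: date, today: date) -> dict:
    """IDP side: derive an over-18 claim from the DOB, sign it, and
    return a token that contains no DOB at all."""
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    claim = {"over18": age >= 18}  # note: the DOB never leaves the IDP
    sig = hmac.new(IDP_KEY, json.dumps(claim).encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def bar_verifies(token: dict) -> bool:
    """Relying-party side: check the signature, then the single attribute."""
    expected = hmac.new(IDP_KEY, json.dumps(token["claim"]).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["claim"]["over18"]
```

A real deployment would use asymmetric signatures (or selective-disclosure credentials) so the bar could verify without holding the IDP's signing key; the HMAC here just keeps the sketch self-contained. The design question in the text is whether relying parties will settle for this minimum or demand the full record anyway.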

Especially, we need to know the detail of the standards organizations must meet to be accepted as IDPs - in particular, what kinds of organization they exclude. The GDS as presently constituted - composed, as William Heath commented last year, of all the smart, digitally experienced people you *would* hire to reinvent government services for the digital world if you had the choice - seems to have its heart in the right place. Their proposals as outlined - conforming, as Pegman explained happily, to Kim Cameron's seven laws of identity - pay considerable homage to the idea that no one party should have all the details of any given transaction. But the surveillance-happy type of government that legislates for data retention and CCDP might also at some point think, hey, shouldn't we be requiring IDPs to retain all data (requests for authentication, and so on) so we can inspect it should we deem it necessary? We certainly want to be very careful not to build a system that could support such intimate secret surveillance - the fundamental objection all along to key escrow.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series.


June 15, 2012

A license to print money

"It's only a draft," Julian Huppert, the Liberal Democrat MP for Cambridge, said repeatedly yesterday. He was talking about the Draft Communications Data Bill (PDF), which was published on Wednesday. Yesterday, in a room in a Parliamentary turret, Huppert convened a meeting to discuss the draft; in attendance were a variety of Parliamentarians plus experts from civil society groups such as Privacy International, the Open Rights Group, Liberty, and Big Brother Watch. Do we want to be a nation of suspects?

The Home Office characterizes the provisions in the draft bill as vital powers to help catch criminals, save lives, and protect children. Everyone else - the Guardian, ZDNet UK, and dozens more - is calling them the "Snooper's charter".

Huppert's point is important. Like the Defamation Bill before it, publishing a draft means there will be a select committee with 12 members, discussion, comments, evidence taken, a report (by November 30, 2012), and then a rewritten bill. This draft will not be voted on in Parliament. We don't have to convince 650 MPs that the bill is wrong; it's a lot easier to talk to 12 people. This bill, as is, would never pass either House in any case, he suggested.

This is the optimistic view. The cynic might suggest that since it's been clear for something like ten years that the British security services (or perhaps their civil servants) have a recurring wet dream in which their mountain of data is the envy of other governments, they're just trying to see what they can get away with. The comprehensive provisions in the first draft set the bar, softening us up to give away far more than we would have in future versions. Psychologists call this anchoring, and while probably few outside the security services would regard the wholesale surveillance and monitoring of innocent people as normal, the crucial bit is where you set the initial bar for comparison for future drafts of the legislation. However invasive the next proposals are, it will be easy for us to lose the bearings we came in with and feel that we've successfully beaten back at least some of the intrusiveness.

But Huppert is keeping his eye on the ball: maybe we can not only get the worst stuff out of this bill but make things actually better than they are now; it will amend RIPA. The Independent argues that private companies hold much more data on us overall, but that article misses the point that this bill intends to grant government access to all of it, at any time, without notice.

The big disappointment in all this, as William Heath said yesterday, is that it marks a return to the old, bad, government IT ways of the past. We were just getting away from giant, failed public IT projects like the late unlamented NHS National Programme for IT and the even more unlamented ID card towards agile, cheap public projects run by smart guys who know what they're doing. And now we're going to spend £1.8 billion of public money over ten years (draft bill, p92) building something no one much wants and that probably won't work? The draft bill claims - on what authority is unclear - that the expenditure will bring in £5 to £6 billion in revenues. From what? Are they planning to sell the data?

Or are they imagining the economic growth implied by the activity that will be necessary to build, install, maintain, and update the black boxes every ISP will need in order to comply with the law? The security consultant Alec Muffett has laid out the parameters for this SpookBox 5000: certified, tested, tamperproof, made by, say, three trusted British companies. Hundreds of them, legally required, with ongoing maintenance contracts. "A license to print money," he calls them. Nice work if you can get it, of course.

So we're talking - again - about spending huge sums of government money on a project that only a handful of people want and whose objectives could be better achieved by less intrusive means. Give police better training in computer forensics, for example, so they can retrieve the evidence they need from the devices they find when executing a search warrant.

Ultimately, the real enemy is the lack of detail in the draft bill. Using the excuse that the communications environment is changing rapidly and continuously, the notes argue that flexibility is absolutely necessary for Clause 1, the one that grants the government all the actual surveillance power, and so it's been drafted to include pretty much everything, like those contracts that claim copyright in perpetuity in all forms of media that exist now or may hereinafter be invented throughout the universe. This is dangerous because in recent years the use of statutory instruments to bypass Parliamentary debate has skyrocketed. No. Make the defenders of this bill prove every contention; make them show the evidence that makes every extra bit of intrusion necessary.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


April 13, 2012

The people perimeter

People with jobs are used to a sharp division between their working lives and their private lives. Even in these times, when everyone carries a mobile phone and may be on call at any moment, they still tend to believe that what they say to their friends is no concern of their employer's. (Freelances tend not to have these divisions; to a much larger extent we have always been "in public" most of the time.)

These divisions were always weaker in small towns, where teachers or clergy had little latitude, and where even lesser folk would be well advised to leave town before doing anything they wouldn't want discussed in detail. Then came social media, which turn everywhere into a small town, and where even if you behave impeccably details about you and your employer may be exposed without your knowledge.

That's all a roundabout way of leading to yesterday's London Tea camp, where the subject of discussion was developing guidelines for social media use by civil servants.

Civil servants! The supposedly faceless functionaries who, certainly at the senior levels, are probably still primarily understood by most people through the fictional constructs of TV shows like Yes, Minister and The Thick of It. All of the 50 or 60 people from across government who attended yesterday have Twitter IDs; they're on Facebook and Foursquare, and probably a few dozen other things that would horrify Sir Humphrey. And that's as it should be: the people administering the nation's benefits, transport, education, and health absolutely should live like the people they're trying to serve. That's how you get services that work for us rather than against us.

The problem with social media is the same as their benefit: they're public in a new and different way. Even if you never identify your employer, Foursquare or the geotagging on Twitter or Facebook checks you in at a postcode that's indelibly identified with the very large government building where your department is the sole occupant. Or a passerby photographs you in front of it and Facebook helpfully tags your photograph with your real name, which then pops up in outside searches. Or you say something to someone you know who tells someone else who posts it online for yet another person to identify, and finally the whole thing comes back and bites you in the ass. Even if your tweets are clearly personal, and even if your page says, "These are just my personal opinions and do not reflect those of my employer", the fact of where you can be deduced to work risks turning anything connected to you into something an - let's call it - excitable journalist can make into a scandal. Context is king.

What's new about this is the uncontrollable exposure of this context. Any Old Net Curmudgeon will tell you that the simple fact of people being caught online doing things their employers don't like goes back to the dawn of online services. Even now I'm sure someone dedicated could find appalling behavior in the Usenet archives by someone who is, 25 years on, a highly respected member of society. But Usenet was a minority pastime; Facebook, Twitter et al are mainstream.

Lots has been written by and about employers in this situation: they may suffer reputational damage, legal liability, or a breach that endangers their commercial secrets. Not enough has been written about individuals struggling to cope with sudden, unwanted exposure. Don't we have the right to private lives? someone asked yesterday. What they are experiencing is the same loss of border control that security engineers are trying to cope with. They call it "deperimeterization", because security used to mean securing the perimeter of your network and now security means coping with its loss. Wireless, remote access for workers at home, personal devices such as mobile phones, and links to supplier and partner networks have all blown holes in it.

There is no clear perimeter any more for networks - or individuals, either. Trying to secure one by dictating behavior, whether by education, leadership by example, or written guidelines, is inevitably doomed. There is, however, a very valid reason to have these things: to create a general understanding between employer and employee. It should be clear to all sides what you can and cannot get fired for.

In 2003, Danny O'Brien nailed a lot of this when he wrote about the loss of what he called the "private-intermediate sphere". In that vanishing country, things were private without being secret. You could have a conversation in a pub with strangers walking by and be confident that it would reach only the audience present at the time and that it would not unexpectedly be replayed or published later (see also Dan Harmon and Chevy Chase's voicemail). Instead, he wrote, the Net is binary: secret or public, no middle ground.

What's at stake here is really not private life, but *social* life. It's the addition of the online component to our social lives that has torn holes in our personal perimeters.

"We'll learn a kind of tolerance for the private conversation that is not aimed at us, and that overreacting to that tone will be a sign of social naivete," O'Brien predicted. Maybe. For now, hard cases make bad law (and not much better guidelines), and *first* cases are almost always hard cases.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


March 23, 2012

The year of the future

If there's one thing everyone seemed to agree on yesterday at Nominet's annual Internet policy conference, it's that this year, 2012, is a crucial one in the development of the Internet.

The discussion had two purposes. One is to feed into Nominet's policy-making as the body in charge of .uk, in which capacity it's currently grappling with questions such as how to respond to law enforcement demands to disappear domains. The other, which is the kind of exercise net.wars particularly enjoys and that was pioneered at the Computers, Freedom, and Privacy conference (next one spring 2013, in Washington, DC), is to peer into the future and try to prepare for it.

Vint Cerf, now Google's Chief Internet Evangelist, outlined some of that future, saying that this year, 2012, will see more dramatic changes to the Internet than anything since 1983. He had a list:

- The deployment of better authentication in the form of DNSSec;

- New certification regimes to limit damage in the event of more cases like 2011's Diginotar hack;

- Internationalized domain names;

- The expansion of new generic top-level domains;

- The switch to IPv6 Internet addressing, which happens on June 6;

- Smart grids;

- The Internet of things: cars, light bulbs, surfboards (!), and anything else that can be turned into a sensor by implanting an RFID chip.

Cerf paused to throw in an update on his long-running project, the interplanetary Internet, which he's been thinking about since 1998 (TXT).

"It's like living in a science fiction novel," he said yesterday as he explained about overcoming intense network lag by using high-density laser pulses. The really cool bit: repurposing space craft whose scientific missions have been completed to become part of the interplanetary backbone. Not space junk: network nodes-in-waiting.

The contrast to Ed Vaizey, the minister for culture, communications and the creative industries at the Department of Culture, Media, and Sport, couldn't have been more marked. He summed up the Internet's governance problem as the "three Ps": pornography, privacy, and piracy. It's nice rhetorical alliteration, but desperately narrow. Vaizey's characterization of 2012 as a critical year rests on the need to consider the UK's platform for the upcoming Internet Governance Forum leading to 2014's World Information Technology Forum. When Vaizey talks about regulating with a "light touch", does he mean the same things we do?

I usually place the beginning of the who-governs-the-Internet argument at 1997, the first time the engineers met rebellion when they made a technical decision (revamping the domain name system). Until then, if the pioneers had an enemy it was governments, memorably warned off by John Perry Barlow's 1996 Declaration of the Independence of Cyberspace. After 1997, it was no longer possible to ignore the new classes of stakeholders, commercial interests and consumers.

I'm old enough as a Netizen - I've been online for more than 20 years - to find it hard to believe that the Internet Governance Forum and its offshoots do much to change the course of the Internet's development: while they're talking, Google's self-driving cars rack up 200,000 miles on San Francisco's busy streets with just one accident (the car was rear-ended; not their fault) and Facebook sucks in 800 million users (if it were a country, it would be the world's third most populous nation).

But someone has to take on the job. It would be morally wrong for governments, banks, and retailers to push us all to transact with them online if they cannot promise some level of service and security for at least those parts of the Internet that they control. And let's face it: most people expect their governments to step in if they're defrauded and criminal activity is taking place, offline or on, which is why I thought Barlow's declaration absurd at the time.

Richard Allan, director of public policy for Facebook EMEA - or should we call him Lord Facebook? - had a third reason why 2012 is a critical year: at the heart of the Internet Governance Forum, he said, is the question of how to handle the mismatch between global Internet services and the cultural and regulatory expectations that nations and individuals bring with them as they travel in cyberspace. In Allan's analogy, the Internet is a collection of off-shore islands like Iceland's Surtsey, which has been left untouched to develop its own ecosystem.

Should there be international standards imposed on such sites so that all users know what to expect? Such a scheme would overcome the Balkanization problem that erupts when sites present a different face to each nation's users and the censorship problem of blocking sites considered inappropriate in a given country. But if that's the way it goes, will nations be content to aggregate the most open standards or insist on the most closed, lowest-common-denominator ones?

I'm not sure this is a choice that can be made in any single year - they were asking this same question at CFP in 1994 - but if this is truly the year in which it's made, then yes, 2012 is a critical year in the development of the Internet.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

January 21, 2012

Camping out

"Why hasn't the marvelous happened yet?" The speaker - at one of today's "unconference" sessions at this year's UK Govcamp - was complaining that with 13,000-odd data sets up on his organization's site there ought to be, you know, results.

At first glance, GovCamp seems peculiarly British: an incongruous mish-mash of government folks, coders, and activists, all brought together by the idea that technology makes it possible to remake government to serve us better. But the Web tells me that events like this are happening in various locations around Europe. James Hendler, who likes to collect government data sets from around the world (700,000 and counting now!), tells me that events like this are happening all over the US, too - except that there an event this size - a couple of hundred people - is just New York City.

That's both good and bad: a local area in the US can find many more people to throw at more discrete problems - but on the other hand the federal level is almost impossible to connect with. And, as Hendler points out, the state charters mean that there are conversations the US federal government simply cannot have with its smaller, local counterparts. In the UK, if central government wants a local authority to do something, it can just issue an order.

This year's GovCamp is a two-day affair. Today was an "unconference": dozens of sessions organized by participants to talk about...stuff. Tomorrow will be hands-on, doing things in the limited time available. By the end of the day, the Twitter feed was filling up with eagerness to get on with things.

A veteran camper - I'm not sure how to count how many there have been - tells me that everyone leaves the event full of energy, convinced that they can change the world on Monday. By later next week, they'll have come down from this exhilarated high to find they're working with the same people and the same attitudes. Wonders do not happen overnight.

Along those lines, Mike Bracken, the guy who launched the Guardian's open data platform, now at the Cabinet Office, acknowledges this when he thanks the crowd for the ten years of persistence and pain that created his job. The user, his colleague Mark O'Neill said recently, is at the center of everything they're working on. Are we, yet, past proving the concept?

"What should we do first?" someone I couldn't identify (never knowing who's speaking is a pitfall of unconferences) asked in the same session as the marvel-seeker. One offered answer was one any open-source programmer would recognize: ask yourself, in your daily life, what do you want to fix? The problem you want to solve - or the story you want to tell - determines the priorities and what gets published. That's if you're inside government; if you're outside, based on last summer's experience following the Osmosoft teams during Young Rewired State, often the limiting factor is what data is available and in what form.

With luck and perseverance, this should be a temporary situation. As time goes on, and open data gets built into everything, publishing it should become a natural part of everything government does. But getting there means eliminating a whole tranche of traditional culture and overcoming a lot of fear. If I open this data and others can review my decisions will I get fired? If I open this data and something goes wrong will it be my fault?

In a session on creative councils, I heard the suggestion that, in the interests of getting rid of gatekeepers who obstruct change, organizational structures should be transformed into networks with alternate routes to getting things done until the hierarchy is no longer needed. It sounds like a malcontent's dream for getting the desired technological change past a recalcitrant manager, but it is the kind of solution that solves one problem by breaking many other things. In such a set-up, who is accountable to taxpayers? Isn't some form of hierarchy inevitable given that someone has to do the hiring and firing?

It was in a session on engagement that it became apparent that, as much as this event seems to be focused on technological fixes, the real goal is far broader. The discussion veered into consultations and how to build persistent networks of people engaged with particular topics.

"Work on a good democratic experience," advised the session's leader. Make the process more transparent, make people feel part of the process even if they don't get what they want, create the connection that makes for a truly representative democracy. In her view, what goes wrong with the consultation process now - where, for example, advocates of copyright reform find themselves writing the same ignored advice over and over again in response to the same questions - is that it's trying to compensate for the poor connections to their representatives that most people have. Building those persistent networks and relationships is only a partial answer.

"You can't activate the networks and not at the same time change how you make decisions," she said. "Without that parallel change you'll wind up disappointing people."

Marvels tomorrow, we hope.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

December 30, 2011

Ignorance is no excuse

My father was not a patient man. He could summon up some compassion for those unfortunates who were stupider than himself. What he couldn't stand was ignorance, particularly willful ignorance. The kind of thing where someone boasts about how little they know.

That said, he also couldn't abide computers. "What can you do with a computer that you can't do with a paper and pencil?" he demanded to know when I told him I was buying a friend's TRS-80 Model III in 1981. He was not impressed when I suggested that it would enable me to make changes on page 3 of a 78-page manuscript without retyping the whole thing.

My father had a valid excuse for that particular bit of ignorance or lack of imagination. It was 1981, when most people had no clue about the future of the embryonic technology they were beginning to read about. And he was 75. But I bet if he'd made it past 1984 he'd have put some effort into understanding this technology that would soon begin changing the printing industry he worked in all his life.

While computers were new on the block, and their devotees were a relatively small cult of people who could be relatively easily spotted as "other", you could see the boast "I know nothing about computers" as a replay of high school. In American movies and TV shows that would be jocks and the in-crowd on one side, a small band of miserable, bullied nerds on the other. In the UK, where for reasons I've never understood it's considered more admirable to achieve excellence without ever being seen to work hard for it, the sociology plays out a little differently. I guess here the deterrent is less being "uncool" and more being seen as having done some work to understand these machines.

Here's the problem: the people who by and large populate the ranks of politicians and the civil service are the *other* people. Recent events such as the UK's Government Digital Service launch suggest that this is changing. Perhaps computers have gained respectability at the top level from the presence of MPs who can boast that they misspent their youth playing video games rather than, like the last generation's Ian Taylor, getting their knowledge the hard way, by sweating for it in the industry.

There are several consequences of all this. The most obvious and longstanding one is that too many politicians don't "get" the Net, which is how we get legislation like the DEA, SOPA, PIPA, and so on. The less obvious and bigger one is that we - the technology-minded, the early adopters, the educated users - write them off as too stupid to talk to. We call them "congresscritters" and deride their ignorance and venality in listening to lobbyists and special interest groups.

The problem, as Emily Badger writes for Miller-McCune as part of a review of Clay Johnson's latest book, is that if we don't talk to them, how can we expect them to learn anything?

This sentiment is echoed in a lecture given recently at Rutgers by the distinguished computer scientist David Farber on the technical and political evolution of the Internet (MP3) (the slides are here (PDF)). Farber's done his time in Washington, DC, as chief technical advisor to the Federal Communications Commission and as a member of the Presidential Advisory Board on Information Technology. In that talk, Farber makes a number of interesting points about what comes next technically - it's unlikely, he says, that today's Internet Protocols will be able to cope with the terabyte networks on the horizon, and reengineering is going to be a very, very hard problem because of the way humans resist change - but the more relevant stuff for this column has to do with what he learned from his time in DC.

Very few people inside the Beltway understand technology, he says there, citing the Congressman who asked him seriously, "What is the Internet?" (Well, see, it's this series of tubes...) And so we get bad - that is, poorly grounded - decisions on technology issues.

Early in the Net's history, the libertarian fantasy was that we could get on just fine without their input, thank you very much. But as Farber says, politicians are not going to stop trying to govern the Internet. And, as he doesn't quite say, it's not like we can show them that we can run a perfect world without them. Look at the problems techies have invented: spam, the flaky software infrastructure on which critical services are based, and so on. "It's hard to be at the edge in DC," Farber concludes.

So, going back to Badger's review of Johnson: the point is it's up to us. Set aside your contempt and distrust. Whether we like politicians or not, they will always be with us. For 2012, adopt your MP, your Congressman, your Senator, your local councilor. Make it your job to help them understand the bills they're voting on. Show them that even if they don't understand the technology there are votes in those who do. It's time to stop thinking of their ignorance as solely *their* fault.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


December 9, 2011

Reversal of government fortunes

What if - I say, what if? - a country in which government IT projects have always been marked as huge, expensive, lengthy failures could transform itself into a country where IT genuinely works for both government and the people? What if the cheeky guys who founded MySociety and made communicating with your MP or looking up his voting record as easy as buying a book from Amazon were given the task of digitizing government? The guys (which I use as a gender-neutral term) who made e-petitions, PledgeBank, and FixMyStreet? Who embarrassed dozens of big, fat, failed government IT projects? What would that look like?

Government IT in Britain has been an expensive calamity for so long that it's become generally accepted that it will fail, and the headlines describing the latest billions lost in taxpayers' money have become a national joke on a par with losing at sports. People complain that Andy Murray hasn't won anything big, but the near-miss is thoroughly ingrained in the British national consciousness; the complaints are as familiar and well-worn a track as the national anthem. No one is happy about it - but it's like comfort food.

It was gently explained to me this week - in a pub, of course - that my understanding of how the UK government operates, based as it is on a mish-mash of single readings of Anthony Trollope's Palliser novels, repeated viewings of the 1980s sitcom Yes, Minister, and the occasional patient explanation from friends and acquaintances, needs to be updated. The show was (and remains) a brilliant exposé of the inner workings of the civil service of the day, something that until then was completely obscure. Politicians repeatedly said it was a documentary, not fiction - and then they began to change in response to it. Who saw that coming? The Blair government bypassed the civil service by hiring outside consultants - who were expensive and, above all, not disinterested. The coalition has reacted by going the other way, thinking small, and hiring people who are good at doing things with all this fancy, new technology. Cheap things. Effective things. Even some of the MySociety people. I know, right?

The fact that people like Mike Bracken, who masterminded the Guardian's open platform and who is a founder of MySociety, are working in government is kind of astonishing. And not just him: also one of the people behind Wired UK, who has gone on to work for the BBC and advise Ofcom on digital strategy, and Richard Pope, another of the MySociety guys.

The question is, can a small cohort of clever people succeed in turning a lumbering ship like a national government, let alone one running a country so wedded to the traditional way of doing things as Britain is? This week, the UK government has seemed to embrace both the dysfunctional old, in the form of promising the nation's public health data to life sciences companies, and the new, in the form of launching the Government Digital Service. You almost want to make one of those old Tired/Wired tables. Tired: centralisation, big databases, the British population as assets to be sold off or given away to "users", who are large organisations. Wired: individual control, personal data stores, users who are citizens in charge of their own destinies.

Yesterday, Bracken was the one to announce the new Government Digital Service. William Heath, who founded the government consultancy Kable (since sold and now Guardian Government Computing) and, in 2004, the Ideal Government blog in pursuit of something exactly like this, could scarcely contain his excitement.

What's less encouraging is seeing health data mixed in with the Autumn Statement's open data provisions (PDF). As Heath wrote when the news broke, open data is about things, not people. Open data is: transport schedules, mapping data, lists of government assets, national statistics, and so on. This kind of data we want published as openly and in as raw a form as possible, so that it can be reused and form the basis for new businesses and economic growth. Health data, by contrast, is about people, and it is not the kind of data we want opened. Yes, there are many organisations that would like access to it: life sciences companies, researchers of all types, large pharmaceutical companies, and so on. This is a battle that has been going on in Europe for more than ten years and for a somewhat shorter amount of time in the US, where the lack of nationalized health insurance means that it's taken longer for the issue to come to the front. In the UK, Ross Anderson (see also here) and Fleur Fisher are probably the longest-running campaigners against the assembling of patient records into a single national database. As the case of Wikileaks and the diplomatic cables showed, it is hopeless to think that a system accessible by 800,000 people can keep a secret.

But let's wait to see the details before we get mad. For today, enjoy the moment. Change may happen! In a good way!

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

October 14, 2011

Think of the children

Give me smut and nothing but! - Tom Lehrer

Sex always sells, which is presumably why this week's British headlines have been dominated by the news that the UK's ISPs are to operate an opt-in system for porn. The imaginary sales conversations alone are worth any amount of flawed reporting:

ISP Customer service: Would you like porn with that?

Customer: Supersize me!

Sadly, the reporting was indeed flawed. Cameron, it turns out, was merely saying that new customers signing up with the four major consumer ISPs would be asked if they want parental filtering. So much less embarrassing. So much less fun.

Even so, it gave reporters such as Violet Blue, at ZDNet UK, a chance to complain about the lack of transparency and accountability of filtering systems.

Still, the fact that so many people could imagine that it's technically possible to turn "Internet porn" on and off as if by flipping a switch is alarming. If it were that easy, someone would have a nice business by now selling strap-on subscriptions the way cable operators do for "adult" TV channels. Instead, filtering is just one of several options for which ISPs, Web sites, and mobile phone operators do not charge.

One of the great myths of our time is that it's easy to stumble accidentally upon porn on the Internet. That, again, is television, where idly changing channels on a set-top box can indeed land you on the kind of smut that pleased Tom Lehrer. On the Internet, even with safe search turned off, it's relatively difficult to find porn accidentally - though very easy to find on purpose. (Especially since the advent of the .xxx top-level domain.)

It is, however, very easy for filtering systems to remove non-porn sites from view, which is why I generally turn off filters like "Safe search" or anything else that will interfere with my unfettered access to the Internet. I need to know that legitimate sources of information aren't being hidden by overactive filters. Plus, if it's easy to stumble over pornography accidentally, then as a journalist writing about the Net and in general opposing censorship, I should know that. I am better than average at constraining my searches so that they will retrieve only the information I really want, which is a definite bias in this minuscule sample of one. But I can safely say that the only time I encounter unwanted anything-like-porn is in display ads on some sites that assume their primary audience is young men.
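The over-blocking problem is easy to demonstrate. Here is a minimal sketch of a naive substring filter - the term list and the example phrases are hypothetical, not any ISP's actual system - showing how it catches legitimate content along with its targets, the classic "Scunthorpe problem":

```python
# A naive substring filter: block anything whose text contains a listed term.
# Illustrative sketch only - real filtering products are more elaborate,
# but the false-positive failure mode is the same.
BLOCKED_TERMS = ["sex", "porn"]

def is_blocked(text):
    """Return True if any blocked term appears anywhere in the text."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

# Over-blocking: place names and health information get caught.
assert is_blocked("Visit Essex and Middlesex this summer")  # false positive
assert is_blocked("NHS advice on sexual health")            # false positive
assert not is_blocked("Gardening tips for autumn")
```

The asymmetry matters: the filter's operator sees only what it blocks, never the legitimate sites it silently hides.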

Eli Pariser, whose The Filter Bubble: What the Internet is Hiding From You I reviewed recently for ZDNet UK, does not talk in his book about filtering systems intended to block "inappropriate" material. But surely porn filtering is a broad-brush subcase of exactly what he's talking about: automated systems that personalize the Net based on your known preferences by displaying content they already "think" you like at the expense of content they think you don't want. If the technology companies were as good at this as the filtering people would like us to think, this weekend's Singularity Summit would be celebrating the success of artificial intelligence instead of still looking 20 to 40 years out.

If I had kids now, would I want "parental controls"? No, for a variety of reasons. For one thing, I don't really believe the controls keep them safe. What keeps them safe is knowing they can ask their parents about material and people's behavior that upsets them so they can learn how to deal with it. The real world they will inhabit someday will not obligingly hide everything that might disturb their equanimity.

But more important, our children's survival in the future will depend on being able to find the choices and information that are hidden from view. Just as the children of 25 years ago should have been taught touch typing, today's children should be learning the intricacies of using search to find the unknown. If today's filters have any usefulness at all, it's as a way of testing kids' ability to think ingeniously about how to bypass them.

Because: although it's very hard to filter out only *exactly* the material that matches your individual definition of "inappropriate", it's very easy to block indiscriminately according to an agenda that cares only about what doesn't appear. Pariser worries about the control that can be exercised over us as consumers, citizens, voters, and taxpayers if the Internet is the main source of news and personalization removes the less popular but more important stories of the day from view. I worry that as people read and access only the material they already agree with our societies will grow more and more polarized with little agreement even on basic facts. Northern Ireland, where for a long time children went to Catholic or Protestant-owned schools and were taught that the other group was inevitably going to Hell, is a good example of the consequences of this kind of intellectual segregation. Or, sadly, today's American political debates, where the right and left have so little common basis for reasoning that the nation seems too polarized to solve any of its very real problems.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

September 9, 2011

The final countdown

The we-thought-it-was-dead specter of copyright term extension in sound recordings has done a Diabolique maneuver and been voted alive by the European Council. In a few days, the Council of Ministers could make it EU law because, as can happen under the inscrutable government structures of the EU, opposition has melted away.

At stake is the extension of copyright in sound recordings from 50 years to 70, something the Open Rights Group has been fighting since it was born. The push to extend it above 50 years has been with us for at least five years; originally the proposal was to take it to 95 years. An extension from 50 to 70 years is modest by comparison, but given the way these things have been going over the last 50 years, that would buy the recording industry 20 years in which to lobby for the 95 years they originally wanted, and then 25 years to lobby for the line to be moved further. Why now? A great tranche of commercially popular recordings is up for entry into the public domain: Elvis Presley's earliest recordings date to 1956, and The Beatles' first album came out in 1963; their first singles are 50 years old this year. It's not long after that to all the great rock records of the 1970s.
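The stakes are easy to put in numbers. Under a deliberately simplified model in which protection runs for the stated term from the year of recording (real EU rules run from release or performance and lapse at year end, so treat this as a rough sketch):

```python
def enters_public_domain(recording_year, term_years):
    """Simplified model: the recording leaves copyright term_years
    after the year it was made."""
    return recording_year + term_years

# Elvis's 1956 RCA recordings and the Beatles' 1963 debut album:
assert enters_public_domain(1956, 50) == 2006
assert enters_public_domain(1963, 50) == 2013  # imminent when this was written
assert enters_public_domain(1963, 70) == 2033  # under the proposed extension
```

Each 20-year extension pushes the most commercially valuable back catalogue a generation further from the public domain.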

My fellow Open Rights Group advisory council member Paul Sanders has posted a concise little analysis of what's wrong here. Basically, it's never jam today for the artists, but jam yesterday, today, and tomorrow for the recording companies. I have commented frequently on the fact that the more record companies are able to make nearly pure profit on their back catalogues, whose sunk costs were paid long ago, the more new, young artists are required to compete for their attention with an ever-expanding back catalogue. I like Sanders' language on this: "redistributive, from younger artists to older and dead ones".

In recent years, we've heard a lot of the mantra "evidence-based policy" from the UK government. So, in the interests of ensuring this evidence-based policy the UK government is so keen on, here is some. The good news is they commissioned it themselves, so it ought to carry a lot of weight with them. Right? Right.

There have been two major British government reports studying the future of copyright and intellectual property law generally in the last five years: the Gowers Review, published in 2006, and the Hargreaves report, commissioned in November 2010 and released in May 2011.

From Hargreaves:

Economic evidence is clear that the likely deadweight loss to the economy exceeds any additional incentivising effect which might result from the extension of copyright term beyond its present levels. This is doubly clear for retrospective extension to copyright term, given the impossibility of incentivising the creation of already existing works, or work from artists already dead.

Despite this, there are frequent proposals to increase term, such as the current proposal to extend protection for sound recordings in Europe from 50 to 70 or even 95 years. The UK Government assessment found it to be economically detrimental. An international study found term extension to have no impact on output.

And further:

Such an extension was opposed by the Gowers Review and by published studies commissioned by the European Commission.

Ah, yes, Gowers and its 54 recommendations, many or most of which have been largely ignored. (Government policy seems to have embraced "strengthening of IP rights, whether through clamping down on piracy" to the exclusion of things like "improving the balance and flexibility of IP rights to allow individuals, businesses, and institutions to use content in ways consistent with the digital age".)

To Gowers:

Recommendation 3: The European Commission should retain the length of protection on sound recordings and performers' rights at 50 years.

And:

Recommendation 4: Policy makers should adopt the principle that the term and scope of protection for IP rights should not be altered retrospectively.

I'd use the word "retroactive", myself, but the point is the same. Copyright is a contract with society: you get the right to exploit your intellectual property for some number of years, and in return after that number of years your work belongs to the society whose culture helped produce it. Trying to change an agreed contract retroactively usually requires you to show that the contract was not concluded in good faith, or that someone is in breach. Neither of those situations applies here, and I don't think these large companies with their in-house lawyers, many of whom participated in drafting prior copyright law, can realistically argue that they didn't understand the provisions. Of course, this recommendation cuts both ways: if we can't put Elvis's earliest recordings back into copyright, thereby robbing the public domain, we also can't shorten the copyright protection that applies to recordings created with the promise of 50 years' worth of protection.

This whole mess is a fine example of policy laundering: shopping the thing around until you either wear out the opposition or find sufficient champions. The EU, with its Hampton Court maze of interrelated institutions, could have been deliberately designed to facilitate this. You can write to your MP, or even your MEP - but the sad fact is that the shiny, new EU government is doing all this in old-style backroom deals.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

August 12, 2011

"Phony concerns about human rights"

Why can't you both condemn violent rioting and looting *and* care about civil liberties?

One comment of David Cameron's yesterday in the Commons hit a nerve: that "phony" (or "phoney", if you're British) human rights concerns would not get in the way of publishing CCTV images in the interests of bringing the looters and rioters to justice. Here's why it bothers me: even the most radical pro-privacy campaigner is not suggesting that using these images in this way is wrong. But in saying it, Cameron placed human rights on the side of lawlessness. One can oppose the privacy invasiveness of embedding crowdsourced facial recognition into Facebook and still support the use of the same techniques by law enforcement to identify criminals.

It may seem picky to focus on one phrase in a long speech in a crisis, but this kind of thinking is endemic - and, when it's coupled with bad things happening and a need for politicians to respond quickly and decisively, dangerous. Cameron shortly followed it with the suggestion that it might be appropriate to shut down access to social media sites when they are being used to plan "violence, disorder and criminality".

Consider the logic there: given the size of the population, there are probably people right now planning crimes over pints of beer in pubs, over the phone, and sitting in top-level corporate boardrooms. Fellow ORG advisory council member Kevin Marks blogs a neat comparison by Douglas Adams to cups of tea. But no, let's focus on social media.

Louise Mensch, MP and novelist, was impressive during the phone hacking hearings, aside from her big gaffe about Piers Morgan. But she's made another mistake here in suggesting that taking Twitter and/or Facebook down for an hour during an emergency is about like shutting down a road or a railway station.

First of all, shutting down the tube in the affected areas has costs: innocent bystanders were left with no means to escape their violent surroundings. (This is the same thinking that wanted to shut down the tube on New Year's Eve 1999 to keep people out of central London.)

But more important, the comparison is wrong. Shutting down social networks is the modern equivalent of shutting down radio, TV, and telephones, not transport. The comparison suggests that Mensch is someone who uses social media for self-promotion rather than, like many of us, as a real-time news source and connector to friends and family. This is someone for whom social media are a late add-on to an already-structured life; in 1992 an Internet outage was regarded as a non-issue, too. The ability to use social media in an emergency surely takes pressure off the telephone network by helping people reassure friends and family, avoid trouble areas, find ways home, and so on. Are there rumors and misinformation? Sure. That's why journalists check stuff out before publishing it (we hope). But those are vastly overshadowed by the amount of useful and timely updates.

Is barring access even possible? As Ben Rooney writes in the Wall Street Journal Europe, it's hard enough to ground one teenager these days, let alone a countryful. But let's say they decide to try. What approaches can they take?

One: The 95 percent approach. Shut down access to the biggest social media sites and hope that the crimes aren't being planned on the ones you haven't touched. Like the network the Guardian finds was really used - BlackBerry messaging.

Two: The Minority Report approach. Develop natural language processing and artificial intelligence technology to the point where it can interact on the social networks, spot prospective troublemakers, and turn them in before they commit crimes.

Three: The passive approach. Revive all the net.wars of the past two decades. Reinstate the real-world policing. One of the most important drawbacks to relying on mass surveillance technologies is that they encourage a reactive, almost passive, style of law enforcement. Knowing that the police can catch the crooks later is no comfort when your shop is being smashed up. It's a curious, schizophrenic mindset politicians have: blame social ills on new technology while imagining that other new technology can solve them.

The riots have ended, at least for now, but we will have to live for a long time with the decisions we make about what comes next. Let's not be hasty. Think of the PATRIOT Act, which will be ten years old soon.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 1, 2011

Equal access

It is very, very difficult to understand the reasoning behind the not-so-secret plan to institute Web blocking. In a letter to the Open Rights Group (http://www.openrightsgroup.org/blog/2011/minister-confirms-voluntary-site-blocking-discussions), Ed Vaizey, the minister for culture, communications, and creative industries, confirmed that such a proposal emerged from a workshop to discuss "developing new ways for people to access content online". (Orwell would be so proud.)

We fire up Yes, Minister once again to remind everyone of the four characteristics of proposals ministers like: quick, simple, popular, cheap. Providing the underpinnings of Web site blocking is not likely to be very quick, and it's debatable whether it will be cheap. But it certainly sounds simple, and although it's almost certainly not going to be popular among the 7 million people the government claims engage in illegal file-sharing - a number PC Pro has done a nice job of dissecting - it's likely to be popular with the people Vaizey seems to care most about, rights holders.

The four opposing kiss-of-death words are: lengthy, complicated, expensive, and either courageous or controversial, depending how soon the election is. How to convince Vaizey that it's these four words that apply and not the other four?

Well, for one thing, it's not going to be simple, it's going to be complicated. Web site blocking is essentially a security measure. You have decided that you don't want people to have access to a particular source of data, and so you block their access. Security is, as we know, not easy to implement and not easy to maintain. Security, as Bruce Schneier keeps saying, is a process, not a product. It takes a whole organization to implement the much more narrowly defined IWF system. What kind of infrastructure will be required to support the maintenance and implementation of a block list to cover copyright infringement? Self-regulatory, you say? Where will the block list, currently thought to include about 100 sites, come from? Who will maintain it? Who will oversee it to ensure that it doesn't include "innocent" sites? ISPs have other things to do, and other than limiting or charging for the bandwidth consumption of their heaviest users (who are not all file sharers by any stretch) they don't have a dog in this race. Who bears the legal liability for mistakes?

The list is most likely to originate with rights holders, who, because they have shown over most of the last 20 years that they care relatively little if they scoop innocent users and sites into the net alongside infringing ones, no one trusts to be accurate. Don't the courts have better things to do than adjudicate what percentage of a given site's traffic is copyright-infringing and whether it should be on a block list? Is this what we should be spending money on in a time of austerity? Mightn't it be...expensive?

Making the whole thing even more complicated is the obvious (to anyone who knows the Internet) fact that such a block list will start a new arms race - and, according to Torrentfreak, already has.
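The arms-race point is simple to illustrate. A sketch of an exact-match URL blocklist - the simplest possible scheme, with invented domain names, not the IWF's or any ISP's actual system - shows how trivially a listed site routes around it:

```python
# Exact-match URL blocklist: the crudest blocking scheme imaginable.
# Illustrative sketch only; the domains below are invented.
BLOCKLIST = {"http://example-torrents.invalid/search"}

def is_accessible(url):
    """A URL is blocked only if it matches a blocklist entry exactly."""
    return url not in BLOCKLIST

assert not is_accessible("http://example-torrents.invalid/search")

# Trivial evasions: a mirror domain, an extra path segment, or HTTPS.
assert is_accessible("http://example-torrents-mirror.invalid/search")
assert is_accessible("http://example-torrents.invalid/search/")
assert is_accessible("https://example-torrents.invalid/search")
```

Every refinement - wildcard matching, DNS blocking, deep packet inspection - invites a matching countermove (mirrors, alternative resolvers, encryption), which is exactly how arms races work.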

And yet another wrinkle: among blocking targets are cyberlockers. And yet this is a service that, like search, is going mainstream: Amazon.com has just launched such a service, which it calls Cloud Drive and for which it retains the right to police rather thoroughly. Encrypted files, here we come.

At least one ISP has already called the whole idea expensive, ineffective, and rife with unintended consequences.

There are other obvious arguments, of course. It opens the way to censorship. It penalizes innocent uses of technology as well as infringing ones; torrent search sites typically have a mass of varied material and there are legitimate reasons to use torrenting technology to distribute large files. It will tend to add to calls to spy on Internet users in more intrusive ways (as Web blocking fails to stop the next generation of file-sharing technologies). It will tend to favor large (often American) services and companies over smaller ones. Google, as IsoHunt told the US Court of Appeals two weeks ago, is the largest torrent search engine. (And, of course, Google has other copyright troubles of its own; last week the court rejected the Google Books settlement.)

But the sad fact is that although these arguments are important they're not a good fit if the main push behind Web blocking is an entrenched belief that the only way to secure economic growth is to extend and tighten copyright while restricting access to technologies and sites that might be used for infringement. Instead, we need to show that this entrenched belief is wrong.

We do not block the roads leading to car boot sales just because sometimes people sell things at them whose provenance is cloudy (at best). We do not place levies on the purchase of musical instruments because someone might play copyrighted music on them. We should not remake the Internet - a medium to benefit all of society - to serve the interests of one industrial group. It would make more sense to put the same energy and financial resources into supporting the games industry which, as Tom Watson (Lab - Bromwich) has pointed out, has great potential to lift the British economy.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 18, 2011

Block party

When last seen in net.wars, the Internet Watch Foundation was going through the most embarrassing moment of its relatively short life: the time it blocked a Wikipedia page. It survived, of course, and on Tuesday this week it handed out copies of its latest annual report (PDF) and its strategic plan for the years 2011 to 2014 (PDF) in the Strangers Dining Room at the House of Commons.

The event was, more or less, the IWF's birthday party: in August it will be 15 years since the suspicious, even hostile first presentation, in 1996, of the first outline of the IWF. It was an uneasy compromise between an industry accused of facilitating child abuse, law enforcement threatening technically inept action, and politicians anxious to be seen to be doing something, all heightened by some of the worst mainstream media reporting I've ever seen.

Suspicious or not, the IWF has achieved traction. It has kept government out of the direct censorship business and politicians and law enforcement reasonably satisfied. Without - as was pointed out - cost to the taxpayer, since the IWF is funded from a mix of grants, donations, and ISPs' subscription fees.

And to be fair, it has been arguably successful at doing what it set out to do, which is to disrupt the online distribution of illegal pornographic images of children within the UK. The IWF has reported for some years now that the percentage of such images hosted within the UK is near zero. On Tuesday, it said the time it takes to get foreign-hosted content taken down has halved. Its forward plan includes more of the same, plus pushing more into international work by promoting the use of its URL list abroad and developing partnerships.

Over at The Register Jane Fae Ozniek has done a good job of tallying up the numbers the IWF reported, and also of following up on remarks made by Culture Minister Ed Vaizey and Home Office Minister James Brokenshire that suggested the IWF or its methods might be expanded to cover other categories of material. So I won't rehash either topic here.

Instead, what struck me is the IWF's report that a significant percentage of its work now concerns sexual abuse images and videos that are commercially distributed. This news offered a brief glance into a shadowy world that is illegal for any of us to study since under UK law (and the laws of many other countries) it's illegal to access such material. If this is a correct assessment, it certainly follows the same pattern as the world of malware writing, which has progressed from the giggling, maladjusted teenager writing a bit of disruptive code in his bedroom to a highly organized, criminal, upside-down image of the commercial software world (complete, I'm told by experts from companies like Symantec and Sophos, with product trials, customer support, and update patches). Similarly, our, or at least my, image was always of like-minded amateurs exchanging copies of the things they managed to pick up rather like twisted stamp collectors.

The IWF report says it has identified 715 such commercial sources, 321 of which were active in 2010. At least 47.7 percent of the commercially branded material is produced by the top ten, and the most prolific of these brands used 862 URLs. The IWF has attempted to analyze these brands, and believes that they are operated in clusters by criminals. To quote the report:

Each of the webpages or websites is a gateway to hundreds or even thousands of individual images or videos of children being sexually abused, supported by layers of payment mechanisms, content stores, membership systems, and advertising frames. Payment systems may include pre-pay cards, credit cards, "virtual money" or e-payment systems, and may be carried out across secure webpages, text, or email.

This is not what people predicted when they warned at the original meeting that blocking access to content would drive it underground into locations that were harder to police. I don't recall anyone saying: it will be like Prohibition and create a new Mafia. How big a problem this is and how it relates to events like yesterday's shutdown of boylovers.net remains to be seen. But there's logic to it: anything that's scarce attracts a high price and anything high-priced and illegal attracts dedicated criminals. So we have to ask: would our children be safer if the IWF were less successful?

The IWF will, I think, always be a compromise. Civil libertarians will always be rightly suspicious of any organization that has the authority and power to shut down access to content, online or off. Still, the IWF's ten-person board now includes, alongside the representatives of ISPs, top content sites, and academics, a consumer representative, and seems to be less dominated by repressive law enforcement interests. There's an independent audit in the offing, and while the IWF publishes no details of its block list for researchers to examine, it advocates transparency in the form of a splash screen that tells users that a site is blocked and why. The IWF's departing head, Peter Robbins, said in conversation that they learned a lot from the Wikipedia incident.

My summary: the organization will know it has its balance exactly right when everyone on all sides has something to complain about.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 4, 2011

Tax returns

In 1994, when Jeff Bezos was looking for a place to put the online bookseller he intended to grow into the giant, multi-faceted online presence it is today, he began with a set of criteria that included, high up on the list, avoiding liability for sales tax as much as possible. That meant choosing a small state, so that the vast majority of the new site's customers would be elsewhere.

Bezos could make this choice because of the 1992 Supreme Court decision in Quill Corp v. North Dakota, blocking states from compelling distance sellers to collect sales tax from customers unless the seller had a substantial physical operation (a "nexus") in the customer's state. Why, the reasoning went, should a company be required to pay taxes in a state where it receives no benefit in the form of public services? The decision helped fuel the growth of first mail-order sales and then ecommerce.

And so throughout the growth of electronic commerce Americans have gone along taking advantage of the relief from sales tax afforded by online sales. This is true despite the fact that many states have laws requiring their residents to declare and pay the sales tax on purchases over a certain amount. Until the current online tax disputes blew up, few knew about these laws - I only learned of them from a reader email some years ago - and as far as I'm aware they aren't enforced. Doing so would require comprehensive surveillance of ecommerce sites.

But this is the thing when something is new: those setting up businesses can take advantage of loopholes created for very different markets and conditions. A similar situation applies in the UK with respect to DVD and CD sales. Fulfilled by subsidiaries or partners based in the Channel Islands, the DVD and CD sales of major retailers such as Amazon, Tesco, and others take advantage of tax relief rules intended to speed shipments of agricultural products. Basically, any package valued under £18 is exempt from VAT. For consumers, this represents substantial savings; for local shops, it represents a tough challenge.

Even before that, in the early 1990s, CompuServe and AOL, as US-based Internet service providers, were able to avoid charging VAT in the UK based on a rule making services taxable based on their point of origin. That gave those two companies a significant - 17.5 percent - advantage over native ISPs like Demon and Pipex. There were many objections to this situation, and eventually the loophole was closed and both CompuServe and AOL began charging VAT.

You can't really blame companies for taking advantage of the structures that are there. No one wants to pay more tax - or pay for more administration - than is required by law, and anyone running those companies would make the same decisions. But as the recession continues to bite and state, federal, and central governments are all scrambling to replace lost revenues from a shrinking tax base, the calls to level the playing field by closing off these tax-advantage workarounds are getting louder.

This type of argument is as old as mail order. But in the beginning there was a general view - implemented also in the US as a moratorium on taxing Internet services that was renewed as recently as 2007 - that exempting the Internet from as many taxes as possible would help the new medium take root and flourish. There was definitely some truth to the idea that this type of encouragement helped; an early FCC proposal to surcharge users for transmitting data was dropped after 10,000 users sent letters of complaint. Nonetheless, the FCC had to continue issuing denials for years as the dropped proposal continued to make the rounds as the "modem tax" hoax spam.

The arguments for requiring out-of-state sellers to collect and remit sales taxes (or VAT) are fairly obvious. Local retailers, especially small independents, are operating at a price disadvantage (even though customers must pay shipping and delivery charges when they buy online). Governments are losing one of their options for raising revenues to pay for public services. In addition, people buy online for many more reasons than saving money. Online shopping is convenient and offers greater choice. It is also true, though infrequently remembered, that the demographics of online shopping skew toward the wealthier members of our society - that is, the people who can best afford to pay the tax.

The arguments against largely boil down to the fact that collecting taxes in many jurisdictions is administratively burdensome. There are some 8,000 different tax rates across the US's 50 states, and although there are many fewer VAT rates across Europe, once your business in a country has reached a certain threshold, the rules and regulations governing each one can be byzantine and inconsistent. Creating a single, simple, and consistent tax rule to apply across the board to distance selling would answer these objections.

No one likes paying taxes (least of all us). But the fact that Amazon would apparently rather jettison the associates program that helped advertise and build its business than allow a state to claim those associates constitute a nexus exposing it to sales tax liability says volumes about how far we've come. And, therefore, how little the Net's biggest businesses now need the help.


February 25, 2011

Wartime economy

Everyone loves a good headline, and £27 billion always makes a *great* one. In this case, that is the sum a report written by the security consultancy Detica, now part of BAE Systems, and issued by the Office of Cyber Security and Information Assurance (PDF) estimates cybercrime is costing the UK economy annually. The claim was almost immediately questioned by ZDNet's Tom Espiner, who promptly checked it out with security experts. They complained that the report was full of "fake precision" (LSE professor Peter Sommer), "questionable calculations" (Harvard's Tyler Moore), and "nonsense" (Cambridge's Richard Clayton).

First, some comparisons.

Twenty-seven billion pounds (approximately $40 billion) is slightly larger than a year's worth of the International Federation of the Phonographic Industry's estimate of the cumulative retail revenue lost to piracy by the European creative industries from 2008 to 2015 (PDF) (total €240 billion, about £203 billion; over eight years, that's £25.4 billion a year). It is roughly the estimated cost of the BP oil spill, the amount some think Facebook will be worth at an IPO, and noticeably less than Apple's $51 billion cash hoard. But: lots smaller than the "£40 billion underworld" The Times attributed to British gangs in 2008.

Several things baffle about this report. The first is that so little information is given about the study's methodology. Who did the researchers talk to? What assumptions did they make and what statistical probabilities did they assign in creating the numbers and charts? How are they defining categories like "online scams" or "IP theft" (they're clear about one thing: they're not including file-sharing in that figure)? What is the "causal model" they developed?

We know one person they didn't talk to: Computer Weekly notes the omission of Detective Superintendent Charlie McMurdie, head of the Metropolitan Police's Central e-Crime Unit, who you'd think would be one of the first ports of call for understanding the on-the-ground experience.

One issue the report seems to gloss over is how very difficult it is to define and categorize cybercrime. Last year, the Oxford Internet Institute conducted a one-day forum on the subject, out of which came the report Mapping and Measuring Cybercrime (PDF), published in June 2010. Much of this report is given over to the difficulty of such definitions; Sommer, who participated in the forum, argued that we shouldn't worry about the means of commission - a crime is a crime. More recently - perhaps a month ago - Sommer teamed up with the OII's Ian Brown to publish a report for an OECD project on future global shocks, Reducing Systemic Cybersecurity Risk (PDF). The authors' conclusion: "very few single cyber-related events have the capacity to cause a global shock". This report also includes considerable discussion of cybercrime in assessing whether "cyberwarfare" is a genuine global threat. But the larger point about both these reports is that they disclose their methodology in detail.

And as a result, they make much more modest and measured claims, which is one reason that critics have looked at the source of the OCSIA/Detica report - BAE - and argued that the numbers are inflated and the focus largely limited to things that fit BAE's business interests (that is, IP theft and espionage; the usual demon, abuse of children, is left untouched).

The big risk here is that this report will be used in determining how policing resources are allocated.

"One of the most important things we can do is educate the public," says Sommer. "Not only about how to protect themselves but to ensure they don't leave their computers open to be formed into botnets. I am concerned that the effect of all these hugely military organizations lobbying for funding is that in the process things like Get Safe Online will suffer."

There's a broader point that begins with a personal nitpick. On page four, the report says this: "...the seeds of criminality planted by the first computer hackers 20 years ago." Leaving aside the even smaller nitpick that the *real*, original computer hackers, who built things and spent their enormous cleverness getting things to work, date to 40 and 50 years ago, it is utterly unfair to compare today's cybercrime to the (mostly) teenaged hackers of 1990, who spent their Saturday nights in their bedrooms war-dialling sites and trying out passwords. They were the computer equivalent of joy-riders, caused little harm, and were so disproportionately the targets of freaked-out, uncomprehending law enforcement that the Electronic Frontier Foundation was founded to bring some sanity to the situation. Today's cybercrime underground is composed of professional criminals who operate in an organized and methodical way. There is no more valid comparison between the two than there is between Duke Nukem and al-Qaeda.

One is not a gateway to the other - but the idea that criminals would learn computer techniques and organized crime would become active online was repeatedly used as justification for anti-society legislation from cryptographic key escrow to data retention and other surveillance. The biggest risk of a report like this is that it will be used as justification for those wrong-headed policies rather than as it might more rightfully be, as evidence of the failure of no less than five British governments to plan ahead on our behalf.



November 19, 2010

Power to the people

We talk often about the fact that ten years of effort - lawsuits, legislation, technology - on the part of the copyright industries has made barely a dent in the amount of material available online as unauthorized copies. We talk less about the similar situation that applies to privacy despite years of best efforts by Privacy International, Electronic Privacy Information Center, Center for Democracy and Technology, Electronic Frontier Foundation, Open Rights Group, No2ID, and newcomer Big Brother Watch. The last ten years have built Google, and Facebook, and every organization now craves large data stores of personal information that can be mined. Meanwhile, governments are complaisant, possibly because they have subpoena power. It's been a long decade.

"Information is the oil of the 1980s," wrote Thomas McPhail and Brenda McPhail in 1987 in an article discussing the politics of the International Telecommunications Union, and everyone seems to take this encomium seriously.

William Heath spent his early career founding and running Kable, a consultancy specializing in government IT, where the question he focused on most was how to create the ideal government for the digital era. He has been saying for many months now that there's a gathering wave of change. His idea is that the *new* new thing is technologies to give us back control and up-end the current situation in which everyone behaves as if they own all the information we give them. But it's their data only in exactly the same way that taxpayers' money belongs to the government. They call it customer relationship management; Heath calls the data we give them volunteered personal information and proposes instead vendor relationship management.

Always one to put his effort where his mouth is (Heath helped found the Open Rights Group, the Foundation for Information Policy Research, and the Dextrous Web, as well as Kable), Heath has set up not one but two companies. The first, Ctrl-Shift, is a research and advisory business to help organizations adjust and adapt to the power shift. The second, Mydex, is a platform now being prototyped in partnership with the Department of Work and Pensions and several UK councils (PDF). Set up as a community interest company, Mydex is asset-locked, to ensure that the company can't suddenly reverse course and betray its customers and their data.

The key element of Mydex is the personal data store, which is kept under each individual's own control. When you want to do something - renew a parking permit, change your address with a government agency, rent a car - you interact with the remote council, agency, or company via your PDS. Independent third parties verify the data you present. To rent a car, for example, you might present a token from the vehicle licensing bureau that authenticates your age and right to drive and another from your bank or credit card company verifying that you can pay for the rental. The rental company only sees the data you choose to give it.
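The token flow described above can be sketched in a few lines of code. This is a hypothetical illustration of the general pattern, not Mydex's actual design or API: all names here (issue_token, dvla_key, and so on) are invented, and the HMAC shared secret stands in for what a real system would more likely do with public-key signatures.

```python
import hmac
import hashlib
import json

def issue_token(verifier_key: bytes, claim: dict) -> dict:
    """A verifier (e.g. the licensing bureau) signs a narrow claim about you."""
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(verifier_key, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def check_token(verifier_key: bytes, token: dict) -> bool:
    """A relying party checks the token came from the trusted verifier."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(verifier_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])

# The personal data store holds the full record under the individual's
# control, but releases only the signed claims a transaction needs.
personal_data_store = {
    "name": "A. Citizen",
    "date_of_birth": "1970-01-01",  # never shown to the rental company
    "licence_token": None,
}

# In practice this would be the bureau's public key, not a shared secret.
dvla_key = b"licensing-bureau-secret"
personal_data_store["licence_token"] = issue_token(
    dvla_key, {"licensed_driver": True, "over_21": True})

# The rental company sees only the attested claim, not the raw data.
token = personal_data_store["licence_token"]
assert check_token(dvla_key, token)
assert "date_of_birth" not in token["claim"]
```

The point of the pattern is in the last two lines: the relying party gets cryptographic assurance of exactly the facts it needs (licensed, over 21) while the underlying personal data never leaves the individual's store.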

It's Heath's argument that such a setup would preserve individual privacy and increase transparency while simultaneously saving companies and governments enormous sums of money.

"At the moment there is a huge cost of trying to clean up personal data," he says. "There are 60 to 200 organisations all trying to keep a file on you and spending money on getting it right. If you chose, you could help them." The biggest cost, however, he says, is the lack of trust on both sides. People vanish off the electoral rolls or refuse to fill out the census forms rather than hand over information to government; governments treat us all as if we were suspected criminals when all we're trying to do is claim benefits we're entitled to.

You can certainly see the potential. Ten years ago, when they were talking about "joined-up government", MPs dealing with constituent complaints favored the notion of making it possible to change your address (for example) once and have the new information propagate automatically throughout the relevant agencies. Their idea, however, was a huge, central data store; the problem for individuals (and privacy advocates) was that centralized data stores tend to be difficult to keep accurate.

"There is an oft-repeated fallacy that existing large organizations meant to serve some different purpose would also be the ideal guardians of people's personal data," Heath says. "I think a purpose-created vehicle is a better way." Give everyone a PDS, and they can have the dream of changing their address only once - but maintain control over where it propagates.

There are, as always, key questions that can't be answered at the prototype stage. First and foremost is the question of whether and how the system can be subverted. Heath's intention is that we should be able to set our own terms and conditions for their use of our data - up-ending the present situation again. We can hope - but it's not clear that companies will see it as good business to differentiate themselves on the basis of how much data they demand from us when they don't now. At the same time, governments who feel deprived of "their" data can simply pass a law and require us to submit it.



October 29, 2010

Wanted: less Sir Humphrey, more shark


Seventeen MPs showed up for Thursday's Backbenchers' Committee debate on privacy and the Internet, requested by Robert Halfon (Con-Harlow). They tell me this is a sell-out crowd. The upshot: Google and every other Internet company may come to rue the day that Google sent its Street View cars around Britain. It crossed a line.

That line is this: "Either your home is your castle or it's not," said Halfon, talking about StreetView and an email he had received from a vastly upset woman in Cornwall whose home had been captured and posted on the Web. It's easy for Americans to forget how deep the "An Englishman's home is his castle" thing goes.

Halfon's central question: are we sleepwalking into a privatized surveillance society, and can we stop it? "If no one has any right to privacy, we will live in a Big Brother society run by private companies." StreetView, he said, "is brilliant - but they did it without permission." Of equal importance to Halfon is the curious incident of the silent Information Commissioner (unlike, apparently, his equivalents everywhere else in the world) and Google's sniffed wi-fi data. The recent announcement that the sniffed data includes contents of email messages, secure Web pages, and passwords has prompted the ICO to take another look.

The response of the ICO, Halfon said, "has been more like Sir Humphrey than a shark with teeth, which is what it should be."

Google is only one offender; Julian Huppert (LibDem-Cambridge) listed some of the other troubles, including this week's release of Firesheep, a Firefox add-on designed to demonstrate Facebook's security failings. Several speakers raised the issue of the secret BT/Phorm trials. A key issue: while half the UK's population choose to be Facebook users (!), and many more voluntarily use Google daily, no one chose to be included in StreetView; we did not ask to be its customers.

So Halfon wants two things. He wants an independent commission of inquiry convened that would include MPs with "expertise in civil liberties, the Internet, and commerce" to suggest a new legal framework that would provide a means of redress, perhaps through an Internet bill of rights. What he envisions is something that polices the behavior of Internet companies the way the British Medical Association or the Law Society provides voluntary self-regulation for their fields. In cases of infringement, fines, perhaps.

In the ensuing discussion many other issues were raised. Huppert mentioned "chilling" (Labour) government surveillance, and hoped that portions of the Digital Economy Act might be repealed. Huppert has also been asking Parliamentary Questions about the is-it-still-dead? Interception Modernization Programme; he is still checking on the careful language of the replies. (Asked about it this week, the Home Office told me they can't speculate in advance about the details, which will be provided "in due course"; that what is envisioned is a "program of work on our communications abilities"; that it will be communications service providers, probably as defined in RIPA Section 2(1), storing data, not a government database; and that the legislation to safeguard against misuse will probably, but not certainly, be a statutory instrument.)

David Davis (Con-Haltemprice and Howden) wasn't too happy even with the notion of decentralized data held by CSPs, saying these would become a "target for fraudsters, hackers and terrorists". Damien Hinds (Con-East Hampshire) dissected Google's business model (including £5.5 million of taxpayers' money the UK government spent on pay-per-click advertising in 2009).

Perhaps the most significant thing about this debate is the huge rise in the level of knowledge. Many took pains to say how much they value the Internet and love Google's services. This group know - and care - about the Internet because they use it, unlike 1995, when an MP was about as likely to read his own email as he was to shoot his own dog.

Not that I agreed with all of them. Don Foster (LibDem-Bath) and Mike Weatherley (Con-Hove) were exercised about illegal file-sharing (Foster and Huppert agreed to disagree about the DEA, and Damian Collins (Con-Folkestone and Hythe) complained that Google makes money from free access to unauthorized copies). Nadine Dorries (Con-Mid Bedfordshire) wanted regulation to protect young people from suicide sites.

But still. Until recently, Parliament's definition of privacy was celebrities' need for protection from intrusive journalists. This discussion of the privacy of individuals is an extraordinary change. Pressure groups like PI, Open Rights Group, and No2ID helped, but there's also a groundswell of constituents' complaints. Mark Lancaster (Con-Milton Keynes North) noted that a women's refuge at a secret location could not get Google to respond to its request for removal and that the town of Broughton formed a human chain to block the StreetView car. Even the attending opposition MP, Ian Lucas (Lab-Wrexham), favored the commission idea, though he still had hopes for self-regulation.

As for next steps, Ed Vaizey (Con-Wantage and Didcot), the Minister for Communication, Culture, and the Creative Industries, said he planned to convene a meeting with Google and other Internet companies. People should have a means of redress and somewhere to turn for mediation. For Halfon that's still not enough. People should have a choice in the first place.

To be continued...


October 23, 2010

An affair to remember

Politicians change; policies remain the same. Or if they don't, they return like the monsters in horror movies that end with the epigraph, "It's still out there..."

Cut to 1994, my first outing to the Computers, Freedom, and Privacy conference, where I saw passionate discussions about the right to strong cryptography. The counterargument from government, law enforcement, and security service types was that yes, strong cryptography was a fine and excellent thing for protecting communications from prying eyes, and for that very reason we needed key escrow to ensure that bad people couldn't say evil things to each other in perfect secrecy. The listing of organized crime, terrorists, drug dealers, and pedophiles as the reasons why it was vital to ensure access to cleartext became so routine that physicist Timothy May dubbed them "The Four Horsemen of the Infocalypse". Cypherpunks opposed restrictions on the use and distribution of strong crypto; government types wanted at the very least a requirement that copies of secret cryptographic keys be provided and held in escrow against the need to decrypt in case of an investigation. The US government went so far as to propose a technology of its own, complete with back door, called the Clipper chip.

Eventually, the Clipper chip was cracked by Matt Blaze, the needs of electronic commerce won out over the paranoia of the military, and the restrictions on the use and export of strong crypto were removed.

Cut to 2000 and the run-up to the passage of the UK's Regulation of Investigatory Powers Act. Same Four Horsemen, same arguments. Eventually RIPA passed with the requirement that individuals disclose their cryptographic keys - but without key escrow. Note that it's just in the last couple of months that someone - a teenager - has gone to jail in the UK for the first time for refusing to disclose their key.

It is not just hype by security services seeking to evade government budget cuts to say that we now have organized cybercrime. Stuxnet rightly has scared a lot of people into recognizing the vulnerabilities of our infrastructure. And clearly we've had terrorist attacks. What we haven't had is a clear demonstration by law enforcement that encrypted communications have impeded the investigation.

A second and related strand of argument holds that communications data - that is, traffic data such as email headers and Web addresses - must be retained and stored for some lengthy period of time, again to assist law enforcement in case an investigation is needed. As the Foundation for Information Policy Research and Privacy International have consistently argued for more than ten years, such traffic data is extremely revealing. Yes, that's why law enforcement wants it; but it's also why the American Library Association has consistently opposed handing over library records. Traffic data doesn't just reveal who we talk to and care about; it also reveals what we think about. And because such information is of necessity stored without context, it can also be misleading. If you already think I'm a suspicious person, the fact that I've been reading proof-of-concept papers about future malware attacks sounds like I might be a danger to cybersociety. If you know I'm a journalist specializing in technology matters, that doesn't sound like so much of a threat.

And so to this week. The former head of the Department of Homeland Security, Michael Chertoff, at the RSA Security Conference compared today's threat of cyberattack to nuclear proliferation. The US's Secure Flight program is coming into effect, requiring airline passengers to provide personal data for the US to check 72 hours in advance (where possible). Both the US and UK security services are proposing the installation of deep packet inspection equipment at ISPs. And language in the UK government's Strategic Defence and Security Review (PDF) review has led many to believe that what's planned is the revival of the we-thought-it-was-dead Interception Modernisation Programme.

Over at Light Blue Touchpaper, Ross Anderson links many of these trends and asks if we will see a resumption of the crypto wars of the mid-1990s. I hope not; I've listened to enough quivering passion over mathematics to last an Internet lifetime.

But as he says it's hard to see one without the other. On the face of it, because the data "they" want to retain is traffic data and not content, encryption might seem irrelevant. But a number of trends are pushing people toward greater use of encryption. First and foremost is the risk of interception; many people prefer (rightly) to use secured https, SSH, or VPN connections when they're working over public wi-fi networks. Others secure their connections precisely to keep their ISP from being able to analyze their traffic. If data retention and deep packet inspection become commonplace, so will encrypted connections.

And at that point, as Anderson points out, the focus will return to long-defeated ideas like key escrow and restrictions on the use of encryption. The thought of such a revival is depressing; implementing any of them would be such a regressive step. If we're going to spend billions of pounds on the Internet infrastructure - in the UK, in the US, anywhere else - it should be spent on enhancing robustness, reliability, security, and speed, not building the technological infrastructure to enable secret, warrantless wiretapping.



October 15, 2010

The elected dictatorship

I wish I had a nickel for every time I had the following conversation with some British interlocutor in the 1970s and 1980s:

BI: You should never have gotten rid of Nixon.

wg: He was a crook.

BI: They're all crooks. He was the best foreign policy president you ever had.

As if it were somehow touchingly naïve to expect that politicians should be held to standards of behaviour in office. (Look, I don't care if they have extramarital affairs; I care if they break the law.)

It is, however, arguable that the key element of my BIs' disapproval was that Americans had the poor judgment and bad taste to broadcast the Watergate hearings live on television. (Kids, this was 1972. There was no C-Span then.) If Watergate had happened in the UK, it's highly likely no one would ever have heard about it until 50 or however many years later the Public Records Office opened the archives.

Around the time I founded The Skeptic, I became aware of the significant cultural difference in how people behave in the UK versus the US when they are unhappy about something. Britons write to their MP. Americans...make trouble. They may write letters, but they are equally likely to found an organization and create a campaign. This do-it-yourself ethic is completely logical in a relatively young country where democracy is still taking shape.

Britain, as an older - let's be polite and call it mature - country, operates instead on a sort of "gentlemen's agreement" ethos (vestiges of which survive in the US Constitution, to be sure). You can get a surprising amount done - if you know the right people. That system works perfectly for the in-group, and so to effect change you either have to become one of them (which dissipates your original desire for change) or gate-crash the party. Sometimes, it takes an American...

This was Heather Brooke's introduction to English society. The daughter of British parents and the wife of a British citizen, burned out from years of investigative reporting on murders and other types of mayhem in the American South, she took up residence in Bethnal Green with her husband. And became bewildered when repeated complaints to the council and police about local crime produced no response. Stonewalled, she turned to writing her book Your Right to Know, which led her to make her first inquiries about viewing MPs' expenses. The rest is much-aired scandal.

In her latest book, The Silent State, Brooke examines the many ways that British institutions are structured to lock out the public. The most startling revelation: things are getting worse, particularly in the courts, where the newer buildings squeeze public and press into cramped, uncomfortable spaces in a way the older buildings never did. Certainly, the airport-style security that's now required for entry into Parliament buildings sends the message that the public are both unwelcome and not to be trusted (getting into Thursday's apComms meeting required standing outside in the chill and damp for 15 minutes while staff inspected and photographed one person at a time).

Brooke scrutinizes government, judiciary, police, and data-producing agencies such as the Ordnance Survey, and each time finds the same pattern: responsibility for actions cloaked by anonymity; limited access to information (either because the information isn't available or because it's too expensive to obtain); arrogant disregard for citizens' rights. And all aided by feel-good, ass-covering PR and the loss of an independent local press to challenge it. In a democracy, she argues, it should be taken for granted that citizens have the right to get an answer when they ask how many violent attacks are taking place on their local streets, to take notes during court proceedings or Parliamentary sessions, and to access and use data whose collection they paid for. That many MPs seem to think of themselves as members of a private club rather than public servants was clearly shown by the five years of stonewalling Brooke negotiated in trying to get a look at their expenses.

In reading the book, I had a sudden sense of why electronic voting appeals to these people. It is yet another mechanism for turning what was an open system that anyone could view and audit - it doesn't take an advanced degree to be able to count pieces of paper - into one whose inner workings can effectively be kept secret. That its inner workings are also not understandable to MPs themselves is apparently a price they're willing to pay in return for removing much of the public's ability to challenge counts and demand answers. Secrecy is a habit of mind that spreads like fungus.

We talk a lot about rolling back newer initiatives like the many databases of Blair's and Brown's governments, data retention, or the proliferation of CCTV cameras. But while we're trying to keep citizens from being run down by the surveillance state we should also be examining the way government organizes its operations and blocking the build-out of further secrecy. This is a harder and more subtle thing to do, but it could make the lives of the next generation of campaigners easier.

At least one thing has changed in the last 30 years, though: people's attitudes. In 2009, when the scandal over MPs' expenses broke, you didn't hear much about how other qualities meant we should forgive MPs. Britain wanted *blood*.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

July 30, 2010

Three-legged race

"If you are going to do this damn silly thing, don't do it in this damn silly way," Sir Humphrey Appleby tells Jim Hacker in a fit of unaccustomed straight talking.

We think of this often these days, largely because it seems as though lawmakers, having been belittled by impatient and malcontent geeks throughout the 1990s for being too slow to keep up with Internet time, are trying to speed through the process of creating legislation by eliminating thought, deliberation, and careful drafting. You can see why they'd want to get rid of so many civil servants, who might slow this process down.

In that particular episode of Yes, Minister, "The Writing on the Wall" (S1e05), Appleby and Hacker butt heads over who will get the final say over the wording of a draft proposal on phased Civil Service reductions (today's civil servants and ministers might want to watch episode S1e03, "The Economy Drive", for what their lives will soon be like). Hacker wins that part of the battle only to discover that his version, if implemented, will shut down his own department. Oops.

Much of the Digital Economy Act (2010) was like this: redrafted at the last minute in all sorts of unhelpful ways. But the devil is always in the details, and it was not unreasonable to hope that Ofcom, charged with defining and consulting on those details, would operate in a more measured fashion. But apparently not, and so we have a draft code of practice that's so incomplete it could be a teenager's homework.

Both Consumer Focus and the Open Rights Group have analyses of the code's non-compliance with the act and a helpful <a href="http://e-activist.com/ea-campaign/clientcampaign.do?ea.client.id=1422&ea.campaign.id=7268">online form</a> should you wish to submit your opinions. The consultation closes today, so run, do not walk, to add your comments.

What's more notable is when it opened: May 28, only three days after the State Opening of the post-election parliamentary session, three weeks after the election, and six weeks after the day that Gordon Brown called the election. Granted, civil servants do not down pencils while the election is proceeding. But given that the act went through last-second changes and then was nodded through the House of Commons in the frantic dash to get home to start campaigning, the most time Ofcom can have had to draft this mish-mash was about six weeks. Which may explain the holes and inadequacies, but then you have to ask: why didn't they take their time and do it properly?

The Freedom bill, which is to repeal so many of the items on our wish list, is mute on the subject of the Digital Economy Act, despite a number of appearances on the Freedom bill's ideas site. (Big Brother Watch has some additional wish list items.)

The big difficulty for anyone who hates the copyright protectionist provisions in the act - the threat to open wi-fi, the disconnection or speed-limitation of Internet access ("technical measures") to be applied to anyone who is accused of copyright infringement three times ("three-strikes", or HADOPI, after the failed French law attempting to do the same) - is that what you really want is for the act to go away. Preferably back where it came from, some copyright industry lobbyist's brain. A carefully drafted code of practice that pays attention to ensuring that the evidentiary burden on copyright holders is strong enough to deter the kind of abuse seen in the US since the passage of the Digital Millennium Copyright Act (1998) is still not a good scenario, merely a least-worst one.

Still, ORG and Consumer Focus are not alone in their unhappiness. BT and TalkTalk have expressed their opposition, though for different reasons. TalkTalk is largely opposed to the whole letter-writing and copyright infringement elements; but both ISPs are unhappy about Ofcom's decision to limit the code to fixed-line ISPs with more than 400,000 customers. In the entire UK, there are only seven: TalkTalk, BT, Post Office, Virgin, Sky, Orange, and O2. Yet it makes sense to exclude mobile ISPs for now: at today's prices it's safe to guess that no one spends a lot of time downloading music over them. As for the rest: since these ISPs can only benefit if unauthorised downloading on their services decreases, don't all ISPs want the heaviest downloaders to leech off someone else's service?

LINX, the largest membership organisation for UK Internet service providers, has also objected (PDF) to the Act's apportionment of costs. ISPs, LINX's Malcolm Hutty argues, are innocent third parties, so rather than sharing the costs of writing letters and retaining the data necessary to create copyright infringement reports, ISPs should be reimbursed not only for the entire cost of implementing the necessary systems but also for opportunity costs. It's unclear, LINX points out, how much change Ofcom has time to make to the draft code and still meet its statutory timetable.

So this is law on Internet time: drafted for, if not by, special interests, undemocratically rushed through Parliament, hastily written, poorly thought-out, unfairly and inequitably implemented in direct opposition to the country's longstanding commitment to digital inclusion. Surely we can do better.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

June 11, 2010

Bonfire of the last government's vanities

"We have no hesitation in making the national identity card scheme an unfortunate footnote in history. There it should remain - a reminder of a less happy time when the Government allowed hubris to trump civil liberties," the Home Secretary, Theresa May, told the House of Commons at the second reading of the Identity Documents Bill 2010, which will erase the 2006 act introducing ID cards and the National Identity Register. "This will not be a literal bonfire of the last Government's vanities, but it will none the less be deeply satisfying." Estimated saving: £86 million over the next four years.

But not so fast...

An "unfortunate footnote" sounds like the perfect scrapheap on which to drop the National Identity Register and its physical manifestation, ID cards, but if there's one thing we know about ID cards it's that, like the monster in horror movies, they're always "still out there".

In 2005, Lilian Edwards, then at the Centre for Research in Intellectual Property and Law at the University of Edinburgh, invited me to give a talk, Identifying Risks, on the history of ID cards, an idea inspired by a comment from Ross Anderson. The gist: after the wartime ID card was scrapped in 1952, attempts to bring it back were made, on average, about every two or three years. (Former cabinet minister Peter Lilley, speaking at Privacy International's 2002 conference, noted that every new IT minister put the same set of ID card proposals before the Cabinet.)

The most interesting thing about that history is that the justification for bringing in ID cards varied so much; typically, it drew on the latest horrifying public event. So, in 1974 it was the IRA bombings in Guildford and Birmingham. In 1988, football hooliganism and crime. In 1989, social security fraud. In 1993, illegal immigration, fraud, and terrorism.

Within the run of just the 2006 card, the point varied. The stated goals began with blocking benefit fraud, then moved on to include preventing terrorism and serious crime, stopping illegal immigration, and needing to comply with international standards that require biometric features in passports. It is this chameleon-like adaptation to the troubles of the day that makes ID cards so suspect as the solution to anything.

Immediately after the 9/11 attacks, Tony Blair rejected the idea of ID cards (which he had actively opposed in 1995, when John Major's government issued a green paper). But by mid-2002 a consultation paper had been published and by 2004 Blair was claiming that the civil liberties objections had vanished.

Once the 2006 ID card was introduced as a serious set of proposals in 2002, events unfolded much as Simon Davies predicted they would at that 2002 meeting. The government first clothed the ID card in user-friendly obfuscation: an entitlement card. The card's popularity in the polls, at first favourable (except, said David Blunkett, for a highly organised minority), slid inexorably as the gory details of its implementation and costs became public. Yet the (dear, departed) Labour government clung to the proposals despite admitting, from time to time, their utter irrelevance for preventing terrorism.

Part of the card's sliding popularity has been due to people's increased understanding of the costs and annoyance it would impose. Their apparent support for the card was for the goals of the card, not the card itself. Plus, since 2002 the climate has changed: the Iraq war is even less popular and even the 2005 "7/7" London attacks did not keep acceptance of the "we are at war" justification for increased surveillance from declining. And the economic climate since 2008 makes large expenditure on bureaucracy untenable.

Given the frequency with which the ID card has resurfaced in the past, it seems safe to say that the idea will reappear at some point, though likely not during this coalition government. The LibDems always opposed it; the Conservatives have been more inconsistent, but currently oppose large-scale public IT projects.

Depending how you look at it, ID cards either took 54 years to resurface (from their withdrawal in 1952 to the 2006 Identity Cards Act), or the much shorter time to the first proposals to reinstate them. Australia might be a better guide. In 1985, Bob Hawke made the "Australia card" a central plank of his government. He admitted defeat in 1987, after widespread opposition fueled by civil liberties groups. ID card proposals resurfaced in Australia in 2006, to be withdrawn again at the end of 2007. That's about 21 years - or a generation.

In 2010 Britain, it's just as important that much of the rest of the Labour government's IT edifice, such as the ContactPoint database, intended to track children throughout their school years, is also being scrapped. Left in place, it might have taught today's generation of children to perceive state tracking as normal. The other good news is that many of today's tireless campaigners against the 2006 ID card will continue to fight the encroachment of the database state. In 20 years - or sooner, if (God forbid) some catastrophe makes it politically acceptable - when or if an ID card comes back, they will still be young enough to fight it. And they will remember how.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series.

May 7, 2010

Wish list

It's 2am on election night, so of course no one can think about anything except the returns. Reported so far: 57 of 650 seats. Swing from Labour to Conservative: 4 percent.

The worst news of the night so far is that people have been turned away from polling stations because the queues couldn't be processed fast enough to get everyone through before the official closing time of 10pm. Creative poll workers locked the unvoted inside the station and let them vote. Uncreative ones sent them home, or tried to - I'm glad to see there were angry protests and, in some cases, sit-ins. Incredibly, some people couldn't vote because their stations ran out of ballot papers. In one area, hundreds of postal ballots are missing. It's an incredible shambles considering Britain's centuries of experience of running elections. Do not seize on this mess as an excuse to bring in electronic voting, something almost every IT security expert warns is a very bad idea. Print some more ballot papers, designate more polling stations, move election day to Saturday.

Reported: 69. Swing: 3.8 percent: Both Conservatives and LibDems have said they will scrap the ID card. Whether they'll follow through remains to be seen. My sense from interviews with Conservative spokespeople for articles in the last year is that they want to scrap large IT projects in favor of smaller, more manageable ones undertaken in partnership with private companies. That should spell death for the gigantic National Identity Register database and profound change for the future of NHS IT; hopefully smaller systems should give individuals more control. It does raise the question of handing over data to private companies in, most likely, other countries. The way LibDem peers suddenly switched sides on the Digital Economy Act last month dinged our image of the LibDems as the most sensible on net.wars issues of all the parties. Whoever gets in, yes, please, scrap the National Identity Register and stick to small, locally grown IT projects that serve their users. That means us, not the Whitehall civil service.

Reported: 82. Swing: 3.6 percent: Repeal the Digital Economy Act and take time out for a rethink and public debate. The copyright industries are not going to collapse without three-strikes and disconnection notices. Does the UK really want laws that France has rejected?

Reported: 104. Swing: 4.1 percent: Coincidentally, today I received a letter "inviting" me to join a study on mobile phones and brain cancer; I would be required to answer periodic surveys about my phone use. The explanatory leaflet notes: "Imperial College will review your health directly through routine medical and other health-related records" using my NHS number, name, address, and date of birth - for the next 20 to 30 years. Excuse me? Why not ask me to report relevant health issues, and request more detailed access only if I report something relevant? This Labour government has fostered this attitude of We Will Have It All. I'd participate in the study if I could choose what health information I give; I'm not handing over untrammeled right of access. New government: please cease to regard our health data as yours to hand over "for research purposes" to whomever you feel like. Do not insult our intelligence and knowledge by claiming that anonymizing data protects our privacy; such data can often be very easily reidentified.
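That last claim is easy to ground. Here is a minimal sketch of the classic linkage attack (all names, postcodes, and diagnoses below are invented): records stripped of names are re-identified simply by joining them to a public register on the quasi-identifiers - date of birth, postcode, and sex - that both datasets share.

```python
# Toy illustration (hypothetical data): re-identifying "anonymized" records
# by linking them to a public register on shared quasi-identifiers.

anonymized_health = [
    # (date_of_birth, postcode, sex, diagnosis) - names removed, so "anonymous"
    ("1961-07-31", "SW1A 1AA", "F", "hypertension"),
    ("1975-03-12", "E2 6HG",   "M", "asthma"),
]

public_register = [
    # (name, date_of_birth, postcode, sex) - e.g. an electoral roll
    ("Alice Example", "1961-07-31", "SW1A 1AA", "F"),
    ("Bob Example",   "1975-03-12", "E2 6HG",   "M"),
]

def reidentify(health_rows, register_rows):
    """Join the two datasets on (date_of_birth, postcode, sex)."""
    matches = []
    for dob, pc, sex, diagnosis in health_rows:
        for name, r_dob, r_pc, r_sex in register_rows:
            if (dob, pc, sex) == (r_dob, r_pc, r_sex):
                matches.append((name, diagnosis))
    return matches

# With these toy rows, every "anonymous" record links straight back to a name.
print(reidentify(anonymized_health, public_register))
```

The point is not that any particular study would leak this way, but that "anonymized" is a weak promise whenever a handful of ordinary attributes uniquely identifies most people.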

Reported: 120. Swing: 3.9 percent: Reform libel law. Create a public interest defense for scientific criticism, streamline the process, and lower costs for defendants. Re-allocate the burden of proof to the plaintiff. Stop hearing cases with little or no connection to the UK.

Reported: 149. Swing: 4.3 percent: While you're reforming legal matters, require small claims court to hear cases in which photographers (and other freelances) pursue publishers who have infringed their copyright. Photographers say these courts typically kick such "specialist" cases up to higher levels, making it impracticably expensive to get paid.

Reported: 231. Swing: 4.8 percent: Any government that's been in power as long as Labour currently has is going to seem tired and in need of new ideas. But none of the complaints above - the massive growth in surveillance, the lack of regard for personal privacy, the sheer cluelessness about IT - knocked Labour down. Even lying about the war didn't do it. It was, as Clinton's campaign posted on its office walls, the economy. Stupid.

Reported: 327. Swing: 5 percent: Scrap ContactPoint, the (expensive, complicated) giant database intended to track children through their school days to adulthood - and, by the time they get there, most likely beyond. Expert reports the government commissioned and paid for advised against taking the risk of data breaches. Along with it modernize data protection instead of data retention.

Reported: 626. Swing: 5.3 percent:
A hung Parliament (as opposed to hanging chad). Good. For the last 36 years Britain has been ruled by an uninterrupted elected dictatorship. It is about time the parties were forced to work together again. Is anyone seriously in doubt that the problems the country has are bigger than any one party's interests? Bring on proportional representation. Like they have in Scotland.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

April 30, 2010

Child's play

In the TV show The West Wing (Season 6, Episode 17, "A Good Day") young teens tackle the president: why shouldn't they have the right to vote? There's probably no chance, but they made their point: as a society we trust kids very little and often fail to take them or their interests seriously.

That's why it was so refreshing to read in 2008's <a href="http://www.dcsf.gov.uk/byronreview/actionplan/">Byron Review</a> the recommendation that we should consult and listen to children in devising programs to ensure their safety online. Byron made several thoughtful, intelligent analogies: we supervise as kids learn to cross streets; we post warning signs at swimming pools but also teach them to swim.

She also, more controversially, recommended that all computers sold for home use in the UK should have Kitemarked parental control software "which takes parents through clear prompts and explanations to help set it up and that ISPs offer and advertise this prominently when users set up their connection."

The general market has not adopted this recommendation; but it has been implemented with respect to the free laptops issued to low-income families under Becta's £300 million Home Access Laptop scheme, announced last year as part of efforts to bridge the digital divide. The recipients - 70,000 to 80,000 so far - have a choice of supplier, of ISP, and of hardware make and model. However, the laptops must meet a set of functional technical specifications, one of which is compliance with PAS 74:2008, the British Internet safety standard. That means anti-virus, access control, and filtering software: NetIntelligence.

Naturally, there are complaints; these fall precisely in line with the general problems with filtering software, which have changed little since 1996, when the passage of the Communications Decency Act inspired 17-year-old Bennett Haselton to start Peacefire to educate kids about the inner working of blocking software - and how to bypass it. Briefly:

1. Kids are often better at figuring out ways around the filters than their parents are, giving parents a false sense of security.

2. Filtering software can't block everything parents expect it to, adding to that false sense of security.

3. Filtering software is typically overbroad, becoming a vehicle for censorship.

4. There is little or no accountability about what is blocked or the criteria for inclusion.
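Point 3, overbreadth, is easy to demonstrate with a toy sketch (the blocklist and URLs here are invented, and no real product is quite this crude, but the underlying failure mode - naive substring matching, the so-called "Scunthorpe problem" - is well documented):

```python
# Toy illustration of overbroad filtering: a substring blocklist
# catches innocent pages along with the intended targets.
# The blocklist and URLs are invented for this example.

BLOCKED_SUBSTRINGS = ["sex", "breast"]

def is_blocked(url: str) -> bool:
    """Return True if any blocked substring appears anywhere in the URL."""
    url = url.lower()
    return any(term in url for term in BLOCKED_SUBSTRINGS)

print(is_blocked("http://example.org/essex-tourism"))        # True: overblocked
print(is_blocked("http://example.org/breast-cancer-advice")) # True: overblocked
print(is_blocked("http://example.org/plain-news"))           # False
```

Real filters layer categorized lists and manual review on top of this sort of matching, but the same trade-off persists: the broader the rule, the more legitimate material it silently censors.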

This case looks similar - at first. Various reports claim that as delivered NetIntelligence blocks social networking sites and even Google and Wikipedia, as well as Google's Chrome browser because the way Chrome installs allows the user to bypass the filters.

NetIntelligence says the Chrome issue is only temporary; the company expects a fix within three weeks. Marc Kelly, the company's channel manager, also notes that the laptops that were blocking sites like Google and Wikipedia were misconfigured by the supplier. "It was a manufacturer and delivery problem," he says; once the software has been reinstalled correctly, "The product does not block anything you do not want it to." Other technical support issues - trouble finding the password, for example - are arguably typical of new users struggling with unfamiliar software and inadequate technical support from their retailer.

Both Becta and NetIntelligence stress that parents can reconfigure or uninstall the software even if some are confused about how to do it. First, they must activate the software by typing in the code the vendor provides; that gets them password access to change the blocking list or uninstall the software.

The list of blocked sites, Kelly says, comes from several sources: the Internet Watch Foundation's list and similar lists from other countries; a manual assessment team also reviews sites. Sites that feel they are wrongly blocked should email NetIntelligence support. The company has, he adds, tried to make it easier for parents to implement the policies they want; originally social networks were not broken out into their own category. Now, they are easily unblocked by clicking one button.

The simple reaction is to denounce filtering software and all who sail in her - censorship! - but the Internet is arguably now more complicated than that. Research Becta conducted on the pilot group found that 70 percent of the parents surveyed felt that the built-in safety features were very important. Even the most technically advanced of parents struggle to balance their legitimate concerns in protecting their children with the complex reality of their children's lives.

For example: will what today's children post to social networks damage their chances of entry into a good university or a job? What will they find? Not just pornography and hate speech; some parents object to creationist sites, some to scary science fiction, others to Fox News. Yesterday's harmless flame wars are today's more serious cyber-bullying and online harassment. We must teach kids to be more resilient, Byron said; but even then kids vary widely in their grasp of social cues, common sense, emotional make-up, and technical aptitude. Even experts struggle with these issues.

"We are progressively adding more information for parents to help them," says Kelly. "We want the people to keep the product at the end. We don't want them to just uninstall it - we want them to understand it and set the policies up the way they want them." Like all of us, Kelly thinks the ideal is for parents to engage with their children on these issues, "But those are the rules that have come along, and we're doing the best we can."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

March 19, 2010

Digital exclusion: the bill

The workings of British politics are nearly as clear to foreigners as cricket; and unlike the US there's no user manual. (Although we can recommend Anthony Trollope's Palliser novels and the TV series Yes, Minister as good sources of enlightenment on the subject.) But what it all boils down to in the case of the Digital Economy Bill is that the rights of an entire nation of Internet users are about to get squeezed between a rock and an election unless something dramatic happens.

The deal is this: the bill has completed all the stages in the House of Lords, and is awaiting its second reading in the House of Commons. Best guesses are that this will happen on or about March 29 or 30. Everyone expects the election to be called around April 8, at which point Parliament disbands and everyone goes home to spend three weeks intensively disrupting the lives of their constituency's voters when they're just sitting down to dinner. Just before Parliament dissolves there's a mad dash to wind up whatever unfinished business there is, universally known as the "wash-up". The Digital Economy Bill is one of those pieces of unfinished business. The fun part: anyone who's actually standing for election is of course in a hurry to get home and start canvassing. So the people actually in the chamber during the wash-up, while the front benches are hastily agreeing to pass stuff through on the nod, are likely to be retiring MPs and others who don't have urgent election business.

"What we need," I was told last night, "is a huge, angry crowd." The Open Rights Group is trying to organize exactly that for this Wednesday, March 24.

The bill would enshrine three strikes and disconnection into law. Since the Lords' involvement, it provides Web censorship. It arguably up-ends at least 15 years of government policy promoting the Internet as an engine of economic growth to benefit one single economic sector. How would the disconnected vote, pay taxes, or engage in community politics? What happened to digital inclusion? More haste, less sense.

Last night's occasion was the 20th anniversary of Privacy International (Twitter: @privacyint), where most people were polite to speakers David Blunkett and Nick Clegg. Blunkett, who was such a front-runner for a second Lifetime Menace Big Brother Award that PI renamed the award after him, was an awfully good sport when razzed; you could tell that having his personal life hauled through the tabloid press in some detail has changed many of his views about privacy. Though the conversion is not quite complete: he's willing to dump the ID card, but only because it makes so much more sense just to make passports mandatory for everyone over 16.

But Blunkett's nearly deranged passion for the ID card was at least his own. The Digital Economy Bill, on the other hand, seems to be the result of expert lobbying by the entertainment industry, most especially the British Phonographic Industry. There's a new bit of it out this week in the form of the Building a Digital Economy report, which threatens the loss of 250,000 jobs in the UK alone (1.2 million in the EU, enough to scare any politician right before an election). Techdirt has a nice debunking summary.

A perennial problem, of course, is that bills are notoriously difficult to read. Anyone who's tried knows these days they're largely made up of amendments to previous bills, and therefore cannot be read on their own; and while they can be marked up in hypertext for intelligent Internet perusal this is not a service Parliament provides. You would almost think they don't really want us to read these things.

Speaking at the PI event, Clegg deplored the database state that has been built up over the last ten to 15 years, the resulting change in the relationship between citizen and state, and especially the omission that "No one ever asked people to vote on giant databases." Such a profound infrastructure change, he argued, should have been a matter for public debate and consideration - and wasn't. Even Blunkett, who attributed some of his change in views to his involvement in the movie Erasing David (opening on UK cinema screens April 29), while still mostly defending the DNA database, said that "We have to operate in a democratic framework and not believe we can do whatever we want."

And here we are again with the Digital Economy Bill. There is plenty of back and forth among industry representatives. ISPs estimate the cost of the DEB's Web censorship provisions at up to £500 million. The BPI disagrees. But where is the public discussion?

But the kind of thoughtful debate that's needed cannot take place in the present circumstances with everyone gunning their car engines hoping for a quick getaway. So if you think the DEB is just about Internet freedoms, think again; the way it's been handled is an abrogation of much older, much broader freedoms. Are you angry yet?


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

November 20, 2009

Thou shalt not steal

As we're so fond of saying, technology moves fast, and law moves slowly. What we say far less often is that law should move slowly. It is not a sign of weakness to deliberate carefully about laws that affect millions of people's lives and will stay on the books for a long, long time. It's always seemed to me that the Founding Fathers very deliberately devised the US system to slow things down - and to ensure that the further-reaching the change the more difficult it is to enact.

Cut to today's Britain. The Internet may perceive censorship as damage and route around it, but politicians seem increasingly to view due and accountable legal process as an unnecessary waste of time and try to avoid it. Preventing this is, of course, what we have constitutions for; democracy is a relatively mature technology.

Today's Digital Economy bill is loaded with provisions for enough statutory instruments to satisfy the most frustrated politician's desire to avoid all that fuss and bother of public debate and research. Where legislation requires draft bills, public consultations, and committee work, a statutory instrument can pass both houses of Parliament on the nod. For minor regulatory changes - such as, for example, the way money is paid to pensioners (1987) - limiting the process to expert discussion and a quick vote makes sense. But when it comes to allowing the Secretary of State to change something as profound and far-reaching in impact as copyright law with a minimum of public scrutiny, it's an outrageous hijack of the democratic process.

Here is the relevant quote from the bill, talking about the Copyright, Designs, and Patents Act 1988:

The Secretary of State may by order amend Part 1 or this Part for the purpose of preventing or reducing the infringement of copyright by means of the internet, if it appears to the Secretary of State appropriate to do so having regard to technological developments that have occurred or are likely to occur.

Lower down, the bill does add that:

Before making any order under this section the Secretary of State must consult such persons who the Secretary of State thinks likely to be affected by the order, or who represent any of those persons, as the Secretary of State thinks fit.

Does that say he (usually) has to consult the public? I don't think so; until very recently it was widely held that the only people affected by copyright law were creators and rights holders - these days rarely the same people even though rights holders like, for public consumption, to pretend otherwise (come contract time, it's a whole different story). We would say that everyone now has a stake in copyright law, given the enormously expanded access to the means to create and distribute all sorts of media, but it isn't at all clear that the Secretary of State would agree or what means would be available to force him to do so. What we do know is that the copyright policies being pushed in this bill come directly from the rights holders.

Stephen Timms, talking to the Guardian, attempted to defend this provision this way:

The way that this clause is formed there would be a clear requirement for full public consultation [before any change] followed by a vote in favour by both houses of Parliament.

This is, put politely, disingenuous: this government has, especially lately - see also ID cards - a terrible record of flatly ignoring what public consultations are telling them, even when the testimony submitted in response to such consultations comes from internationally recognized experts.

Timms' comments are a very bad joke to anyone who's followed the consultations on this particular bill's provisions on file-sharing and copyright, given that everyone from Gowers to Dutch economists is finding that loosening copyright restrictions has society-wide benefits, while Finland has made 1Mb broadband access a legal right and even France's courts see Internet access as a fundamental human right (especially ironic given that France was the first place three strikes actually made it into law).

In creating the Digital Economy bill, not only did this government ignore consultation testimony from everyone but rights holders, it even changed its own consultation mid-stream, bringing back such pernicious provisions as three-strikes-and-you're-disconnected even after agreeing they were gone. This government is, in fact, a perfect advertisement for the principle that laws that are enacted should be reviewed with an eye toward what their effect will be should a government hostile to its citizenry come to power.

Here is some relevant outrage from an appropriately native British lawyer specializing in Net issues, Lilian Edwards:

So clearly every time things happen fast and the law might struggle to keep up with them, in future, well we should just junk ordinary democratic safeguards before anyone notices, and bow instead to the partisan interests who pay lobbyists the most to shout the loudest?

Tell me to "go home if you don't like it here" because I wasn't born in the UK if you want to, but she's a native. And it's the natives who feel betrayed that you've got to watch out for.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

November 13, 2009

Cookie cutters

Sometimes laws sneak up on you while you're looking the other way. One of the best examples was the American Telecommunications Act of 1996: we were so busy obsessing about the speech-suppressing Communications Decency Act amendment that we failed to pay attention to the implications of the bill itself, which allowed the regional Baby Bells to enter the long distance market and changed a number of other rules regarding competition.

We now have a shiny, new example: we have spent so much time and electrons over the nasty three-strikes-and-you're-offline provisions that we, along with almost everyone else, utterly failed to notice that the package contains a cookie-killing provision last seen menacing online advertisers in 2001 (our very second net.wars).

The gist: Web sites cannot place cookies on users' computers unless said users have agreed to receive them or the cookies are strictly necessary - as, for example, when you select something to buy and then head for the shopping cart to check out.
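The rule is simple enough to state in a few lines of code, even if retrofitting it onto the whole Web is another matter. A minimal sketch of the logic the provision describes (all names here are hypothetical, invented for illustration):

```python
# Hypothetical sketch of the consent rule described above: a site may
# set a cookie only if the user has opted in, unless the cookie is
# strictly necessary (e.g. the session cookie behind a shopping cart).

def may_set_cookie(user_consented: bool, strictly_necessary: bool) -> bool:
    """True if the proposed rule would permit setting this cookie."""
    return strictly_necessary or user_consented

def set_cookie(headers: list, name: str, value: str,
               user_consented: bool, strictly_necessary: bool) -> bool:
    """Append a Set-Cookie header only when the rule allows it."""
    if may_set_cookie(user_consented, strictly_necessary):
        headers.append(f"Set-Cookie: {name}={value}")
        return True
    return False
```

Under this reading, a shopping-cart session cookie goes through without a pop-up, while an advertising or analytics cookie must wait for an explicit opt-in - which is exactly where the trouble starts.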

As the Out-Law blog points out, this proposal - now to become law unless the whole package is thrown out - is absurd. We said as much in 2001 - and made the stupid assumption that because nothing more had been heard about it the idea had been nixed by an outbreak of sanity at the EU level.

Apparently not. Apparently MEPs and others at EU level spend no more time on the Web than they did eight years ago. Apparently none of them have any idea what such a proposal would mean. Well, I've turned off cookies in my browser, and I know: without cookies, browsing the Web is as non-functional as a psychic being tested by James Randi.

But it's worse than that. Imagine browsing with every site asking you to opt in every - pop-up - time - pop-up - it - pop-up - wants - pop-up - to - pop-up - send - pop-up - you - a - cookie - pop-up. Now imagine the same thing, only you're blind and using the screen reader JAWS.

This soon-to-be-law is not just absurd, it's evil.

Here are some of the likely consequences.

As already noted, it will make Web use nearly impossible for the blind and visually impaired.

It will, because such is the human response to barriers, direct ever more traffic toward those sites - aggregators, ecommerce, Web bulletin boards, and social networks - that, like Facebook, can write a single privacy policy for the entire service, one that users consent to when they join (and again at scattered intervals when the policy changes) and that includes consent to accepting cookies.

According to Out-Law, the law will trap everyone who uses Google Analytics, visitor counters, and the like. I assume it will also kill AdSense at a stroke: how many small DIY Web site owners would have any idea how to implement an opt-in form? Both econsultancy.com and BigMouthMedia think affiliate networks generally will bear the brunt of this legislation. BigMouthMedia goes on to note a couple of efforts - HTTP ETags and Flash cookies - intended to give affiliate networks more reliable tracking that may also fall afoul of the legislation. These, as those sources note, are difficult or impossible for users to delete.

It will presumably also disproportionately catch EU businesses compared to non-EU sites. Most users probably won't understand why particular sites are so annoying; they will simply shift to sites that aren't annoying. The net effect will be to divert Web browsing to sites outside the EU - surely the exact opposite of what MEPs would like to see happen.

And, I suppose, inevitably, someone will write plug-ins for the popular browsers that can be set to respond automatically to cookie opt-in requests and that include provisions for users to include or exclude specific sites. Whether that will offer sites a safe harbour remains to be seen.

The people it will hurt most, of course, are the sites - like newspapers and other publications - that depend on online advertising to stay afloat. It's hard to understand how the publishers missed it; but one presumes they, too, were distracted by the need to defend music and video from evil pirates.

The sad thing is that the goal behind this masterfully stupid piece of legislation is a reasonably noble one: to protect Internet users from monitoring and behavioural targeting to which they have not consented. But regulating cookies is precisely the wrong way to go about achieving this goal, not just because it disables Web browsing but because technology is continuing to evolve. The EU would do better to regulate by specifying allowable actions and consequences rather than specifying particular technologies. Cookies are not inherently evil; what matters is how they're used.

Eight years ago, when the cookie proposals first surfaced, they, logically enough, formed part of a consumer privacy bill. That they're now part of the telecoms package suggests they've been banging around inside Parliament looking for something to attach themselves to ever since.

I probably exaggerate slightly, since Out-Law also notes that in fact the EU did pass a law regarding cookies that required sites to offer visitors a way to opt out. This law is little-known, largely ignored, and unenforced. At this point the Net's best hope looks to be that the new version is treated the same way.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

October 30, 2009

Kill switch

There's an old sort-of joke that goes, "What's the best way to kill the Internet?" The number seven answer, according to Simson Garfinkel, writing for HotWired in 1997: "Buy ten backhoes." Ba-boom.

The US Senate, never folks to avoid improving a joke, came up with a new suggestion: install a kill switch. They published this little gem (as S.773) on April 1. It got a flurry of attention and was then forgotten until the last week or two. (It's interesting to look back at Garfinkel's list of 50 ways to kill the Net and notice that only two are government actions, and neither is installing a "kill switch".)

To be fair, "kill switch" is an emotive phrase for what they have in mind, which is that the president:

may declare a cybersecurity emergency and order the limitation or shutdown of Internet traffic to and from any compromised Federal Government or United States critical infrastructure information system or network.

Now, there's a lot of wiggle room in a vague definition like "critical infrastructure system". That could be the Federal government's own servers. Or the electrical grid, the telephone network, the banking system, the water supply, or even, arguably, Google. (It has 64+ percent of US search queries, and if you can't find things the Internet might as well be dead.) But what this particular desire of the Senate's sounds most like is those confused users who think they can catch a biological virus from their computers.

Still, for the media, calling the Senate's idea a "kill switch" is attention-getting political genius. We don't call the president's power to order the planes out of the sky, as happened on 9/11, a "crash switch", but imagine the outcry against it if we did.

Technically, the idea that there's a single off switch waiting to be implemented somewhere is, of course, ridiculous.

The idea is also administrative silliness: Obama, we hope, is kind of busy. The key to retaining sanity when you're busy is to get other people to do all the things they can without your input. We would hope that the people running the various systems powering the federal government's critical infrastructure could make their own, informed decisions - faster than Obama can - about when they need to take down a compromised server.

Despite wishful thinking, John Gilmore's famous aphorism, "The Net perceives censorship as damage and routes around it", doesn't really apply here. For one thing, even a senator knows - probably - that you can't literally shut down the entire Internet from a single switch sitting in the President's briefcase (presumably next to the nuclear attack button). Much of the Internet is, after all, outside the US; much of it is in private ownership. (Perhaps the Third Amendment could be invoked here?)

For another, Gilmore's comment really didn't apply to individual Internet-linked computer networks; Google's various outages this year ought to prove that it's entirely possible for those to be down without affecting the network at large. No, the point was that if you try to censor the Net its people will stop you by putting up mirror servers and passing the censored information around until everyone has a copy. The British Chiropractic Association (quacklash!) and Trafigura are the latest organizations to find out what Gilmore knew in 1993. He also meant, I suppose, that the Internet protocols were designed for resilience and to keep trying by whatever alternate routes are available if data packets don't get through.

Earlier this week another old Net hand, Web inventor Tim Berners-Lee, gave some rather sage advice to the Web 2.0 conference. One key point: do not build your local laws into the global network. That principle would not, unfortunately, stop the US government from shutting off its own servers (to spite its face?), but it does nix the idea of, say, building the network infrastructure to the specification of any one particular group - the MPAA or the UK government, in defiance of the increasingly annoyed EU. In the same talk, Berners-Lee also noted (according to CNET): "I'm worried about anything large coming in to take control, whether it's large companies or government."

Threats like these were what he set up W3C to protect against. People talk with reverence of Berners-Lee's role as inventor, but many fewer understand that the really big effort has been the 20 years since the aha! moment of creation, during which Berners-Lee has spent his time and energy nurturing the Web and guiding its development. Without that, it could easily have been strangled by competing interests, both corporate and government. As, of course, it still could be, depending on the outcome of the debates over network neutrality rules.

Dozens of decisions like Berners-Lee's were made in creating the Internet. They have not made it impossible to kill - I'm not sure how many backhoes you'd need now, but I bet it's still a surprisingly finite number - but they have made it a resilient and robust network. A largely democratic medium, in fact, unlike TV and radio, at least so far. The Net was born free; the battles continue over whether it should be in chains.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

July 3, 2009

What's in an assigned name?

There's a lot I didn't know at the time about the founding of the Internet Corporation for Assigned Names and Numbers, but I do remember the spat that preceded it. Until 1998, the systems for assigning domain names (DNS) and assigning Internet numbers (IANA) were both managed by one guy, Jon Postel, who by all accounts and records was a thoughtful and careful steward and an important contributor to much of the engineering that underpins the Internet even now. Even before he died in October 1998, however, plans were underway to create a successor organization to take over the names and numbers functions.

The first proposal was to turn these bits of management over to the International Telecommunications Union, and a memorandum of understanding was drawn up that many, especially within the ITU, assumed would pass unquestioned. Instead, there was much resentment and many complaints that important stakeholders (consumers, most notably) had been excluded. Eventually, ICANN was created under the auspices of the US Department of Commerce, intended to become independent once it had fulfilled certain criteria. We're still waiting.

As you might expect, the US under Bush II wasn't all that interested in handing off control. The US government had some support in this, in part because many in the US seem to have difficulty accepting that the Internet was not actually built by the US alone. So alongside the US government's normal resistance to relinquishing control was an endemic sense that it would be "giving away" something the US had created.

All that aside, the biggest point of contention was not ICANN's connection to the US government, as much as those outside the US might like to see it severed. Nor was it the assignment of numbers, which, since numbers are the way the computers find each other, is actually arguably the most important bit of the whole thing. It wasn't even, or at least not completely, the money (PDF), as staggering as it is that ICANN expects to rake in $61 million in revenue this year as its cut of domain name registrations. No, of course it was the names that are meaningful to people: who should be allowed to have what?

All this background is important because on September 30 the joint project agreement with DoC under which ICANN operates expires, and all these debates are being revisited. Surprisingly little has changed in the arguments about ICANN since 1998. Michael Froomkin argued in 2000 (PDF) that ICANN bypassed democratic control and accountability. Many critics have argued in the intervening years that ICANN needs to be reined in: its mission kept to a narrow focus on the DNS, and its structure designed to be transparent and accountable, and kept free of not only US government interference but that of other governments as well.

Last month, the Center for Democracy and Technology published its comments to that effect. Last year, and in 2006, former elected ICANN board member Karl Auerbach argued similarly, with much more discussion of ICANN's finances, which he regards as a "tax". Perhaps even more than might have been obvious then: ICANN's new public dashboard has revealed that the company lost $4.6 million on the stock market last year, an amount reporter John Levine equates to the 20-cent fee from 23 million domain name registrations. As Levine asks, if they could afford to lose that amount then they didn't need the money - so why did they collect it from us? There seems to be no doubt that ICANN can keep growing in size and revenues by creating more top-level domains, especially as it expands into long-mooted non-ASCII names (IDNs).

Arguing about money aside, the fact is that we have not progressed much, if at all, since 1998. We are asking the same questions and having the same arguments. What is the DNS for? Should it be a directory, a handy set of mnemonics, a set of labels, a zoning mechanism, or a free-for-all? Do languages matter? Early discussions included the notion that there would be thousands, even tens of thousands of global top-level domains. Why shouldn't Microsoft, Google, or the Electronic Frontier Foundation operate their own registries? Is managing the core of the Internet an engineering, legal, or regulatory problem? And, latterly, given the success and central role of search engines, do we need DNS at all? Personally, I lean toward the view that the DNS has become less important than it was, as many services (Twitter, instant messaging, VOIP) do not require it. Even the Web needs it less than it did. But if what really matters about the DNS is giving people names they can remember, then from the user point of view it matters little how many top-level domains there are. The domain info.microsoft is no less memorable than microsoft.info or microsoft.com.

What matters is that the Internet continues to function and that anyone can reach any part of it. The unfortunate thing is that none of these discussions have solved the problems we really have. Four years after the secured version of DNS (DNSSEC) was developed to counteract security threats such as DNS cache poisoning that had been mooted for many more years than that, it's still barely deployed.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, follow on Twitter, or send email to netwars@skeptic.demon.co.uk.

June 26, 2009

Pass the policy

For quite a few years now, the Canadian law professor Michael Geist has been writing a column on technology law for the Toronto Star. (Brief pause to admire the Star for running such a thing.) This week, he settled down with a simple question: where does copyright policy come from?

The story is that about a month ago the Conference Board of Canada recalled three reports on intellectual property rights after Geist accused the Board of plagiarism and also of accepting funding for the reports from various copyright lobby groups. The source of the copied passages: the International Intellectual Property Alliance. According to its own Web site, the IIPA was "formed in 1984 to represent the US copyright-based industries." It includes: the Association of American Publishers, the Business Software Alliance, the Entertainment Software Association, the Independent Film and Television Alliance, the Motion Picture Association of America, the National Music Publishers' Association, and the Recording Industry Association of America. We know *those* guys, or most of them.

This week, Geist settled down to examine the sources more closely in a lovely bit of spaghetti-detangling. Basically, just two organizations - Canada's equivalents of the MPAA and RIAA - were the source of multiple reports as well as funding for further lobbying organizations. "The net effect," Geist writes, "has been a steady stream of reports that all say basically the same thing, cite to the same sources, make the same recommendations, and often rely on each other to substantiate the manufactured consensus on copyright reform." And of course, these guys don't mean "copyright reform" the way Geist - or the Electronic Frontier Foundation or the Open Rights Group would. We say reform, we mean liberalize and open up; they say reform, they mean tighten and extend. I'd call their way "business as usual".

What's interesting, of course, is to compare Geist's handy table of who recommended what and to whom to the various proposals that are flying around the UK and Europe at the moment. To wit:

Create an IP Council. The Digital Britain report, launched ten days ago, calls this the "Digital Rights Agency", and there's even an entire separate report (PDF) outlining what it might be good for. It would include industry representatives working in collaboration with government (but would not be a government agency), and it would, among other things, educate the public. Which leads us to...

Create public education awareness programs. Of course, I predicted something like this in 1997 - for 2002.

Create an Intellectual Property Crime Task Force. While I'm not aware of specific British proposals for this, I would note that Britain already has various law enforcement agencies that deal with physical forms of IP counterfeiting, and the Internet Watch Foundation has throughout its history mentioned the possibility of tackling online copyright infringement.

Tougher penalties. The Digital Britain report is relatively polite on this one. It says that for-profit counterfeiters will be pursued under criminal law, and calls file-sharing flatly "wrong", but also says that most people would prefer to remain within the law (true) and therefore it intends to encourage the development of legal downloading markets (good). However, it also proposes that ISPs should use methods such as bandwidth throttling to deter persistent file-sharers.

Implement the WIPO Internet treaties and anti-circumvention measures. Already done. Anti-circumvention was a clause in the 2001 European Union Copyright Directive and was enacted in the UK in 2003, with some exceptions for cryptographic research.

Increase funding and resources to tackle IP crime. Well. Where agencies come, funding will doubtless follow.

The Digital Britain report's proposed next steps include passing legislation to enact sanctions such as bandwidth throttling. There's also a consultation on "illicit peer-to-peer filesharing" (deadline September 15); the government's proposals would require ISPs to notify alleged infringers and keep records of how often they've been notified, and would allow rightsholders to use this information, anonymized, to decide when to initiate legal action. Approving the code will be Ofcom, for the time being. The consultation document helpfully reviews the state of legislative play in other countries.

It's extremely rare that we get a case where the origins of a particular set of policies can, as Geist has done here, be traced with such clarity and certainty. And it means that advocates of real copyright reform were right to distrust the claims in this area - the figures the industry says represent losses to rightsholders from file-sharing - no matter how neutral the apparent source.

I first heard the term "policy laundering" from Privacy International's Gus Hosein; it's used to describe the way today's unwanted policies are shopped around until their sponsors can find a taker. The game works like this, as Geist shows: you publish reports until a government agency - any government agency - adopts your point of view in an apparently neutral document. Then you cite that to other governments until someone passes the laws you want. Then you promote that legislation to other countries: Be the envy of other major governments.

The Digital Britain report sells these policies as aiding the British intellectual property industry. But that's not where they came from originally. Does anyone really think the MPAA and RIAA have Britain's best interests at heart?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and links to earlier columns in this series. Readers are welcome to post here, follow on Twitter or send email to netwars@skeptic.demon.co.uk (but please turn off HTML).

April 11, 2009

Statebook of the art

The bad thing about the Open Rights Group's new site, Statebook, is that it looks so perfectly simple to use that the government may decide it's actually a good idea to implement something very like it. And, unfortunately, that same simplicity may also create the illusion in the minds of the untutored who still populate the ranks of civil servants and politicians that the technology works and is perfectly accurate.

For those who shun social networks and all who sail in her: Statebook's interface is an almost identical copy of that of Facebook. True, on Facebook the applications you click on to add are much more clearly pointless wastes of time, like making lists of movies you've liked to share with your friends or playing Lexulous (the reinvention of the game formerly known as Scrabulous until Hasbro got all huffy and had it shut down).

Politicians need to resist the temptation to believe it's as easy as it looks. The interfaces of both the fictional Statebook and the real Facebook look deceptively simple. In fact, although friends tell me how much they like the convenience of being able to share photos with their friends in a convenient single location, and others tell me how much they prefer Facebook's private messaging to email, Facebook is unwieldy and clunky to use, requiring a lot of wait time for pages to load even over a fast broadband connection. Even if it weren't, though, one of the difficulties with systems attempting to put EZ-2-ewes front ends on large and complicated databases is that they deceive users into thinking the underlying tasks are also simple.

A good example would be airline reservations systems. The fact is that underneath the simple searching offered by Expedia or Travelocity lies some extremely complex software; it prices every itinerary rather precisely depending on a host of variables. These include not just the obvious things like the class of cabin, but the time of day, the day of the week, the time of year, the category of flyer, the routing, how far in advance the ticket is being purchased, and the number of available seats left. Only some of this is made explicit; frequent flyers trying to maximize their miles per dollar despair while trying to dig out arcane details like the class of fare.
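To see why the surface simplicity is deceptive, consider a toy fare pricer. Every rule and number here is invented for illustration - real reservation systems are vastly more intricate - but it mimics the way such variables stack up behind a one-box search form:

```python
# Toy fare pricer (all rules and numbers invented for illustration):
# the quoted price depends on cabin class, how far ahead you book,
# how many seats remain, and whether it's a peak day.

BASE_FARE = 200.0
CABIN_MULTIPLIER = {"economy": 1.0, "premium": 1.8, "business": 3.5}

def quote_fare(cabin: str, days_in_advance: int,
               seats_left: int, weekend: bool) -> float:
    fare = BASE_FARE * CABIN_MULTIPLIER[cabin]
    if days_in_advance < 7:      # last-minute premium
        fare *= 1.5
    if seats_left < 10:          # scarcity premium
        fare *= 1.25
    if weekend:                  # peak-day premium
        fare *= 1.1
    return round(fare, 2)
```

Even this toy has four interacting inputs; the real systems juggle dozens, most of them invisible to the person clicking "search" - which is exactly the gap between interface and task that matters here.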

In his 1988 book The Design of Everyday Things, Donald Norman wrote about the need to avoid confusing the simplicity or complexity of an interface with the characteristics of the underlying tasks. He also writes about the mental models people create as they attempt to understand the controls that operate a given device. His example is a refrigerator with two compartments and two thermostatic controls. An uninformed user naturally assumes each thermostat controls one compartment, but in his example, one control sets the thermostat and the other directs the proportion of cold air that's sent to each compartment. The user's mental model is wrong and, as a consequence, attempts that user makes to set the temperature will also, most likely, be wrong.

In focusing on the increasing quantity and breadth of data the government is collecting on all of us, we've neglected to think about how this data will be presented to its eventual users. We have warned about the errors that build up in very large databases that are compiled from multiple sources. We have expressed concern about surveillance and about its chilling impact on spontaneous behaviour. And we have pointed out that data is not knowledge; it is very easy to take even accurate data and build a completely false picture of a person's life. Perhaps instead we should be focusing on ensuring that the software used to query these giant databases-in-progress teaches users not to expect too much.

As an everyday example of what I mean, take the automatic line-calling system used in tennis since 2005, Hawk-Eye. Hawk-Eye is not perfectly accurate. Its judgements are based on reconstructions that put together the video images and timing data from four or more high-speed video cameras. The system uses the data to calculate the three-dimensional flight of the ball; it incorporates its knowledge of the laws of physics, its model of the tennis court, and its database of the rules of the game in order to judge whether the ball is in or out. Its official margin for error is 3.6mm.

A study by two researchers at Cardiff University disputed that number. But more relevant here, they pointed out that the animated graphics used to show the reconstructed flight of the ball and the circle indicating where it landed on the court surface are misleading because they look to viewers as though they are authoritative. The two researchers, Harry Collins and Robert Evans, proposed that in the interests of public education the graphic should be redesigned to display the margin for error and the level of confidence.
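Collins and Evans' proposal is easy to sketch in code. The 3.6mm figure is the system's officially stated margin for error; everything else below is an invented illustration of a line call that surfaces its own uncertainty instead of delivering an authoritative-looking verdict:

```python
# Illustrative line call that reports the system's margin of error,
# along the lines Collins and Evans propose, rather than presenting
# every verdict as certain. Positive distance_mm means the ball is
# reconstructed as landing beyond the line (out).

MARGIN_MM = 3.6  # the system's officially stated margin for error

def line_call(distance_mm: float) -> str:
    verdict = "out" if distance_mm > 0 else "in"
    if abs(distance_mm) <= MARGIN_MM:
        return f"{verdict} (within {MARGIN_MM}mm margin: uncertain)"
    return verdict
```

A ball 10mm out is simply "out"; a ball 1mm out gets flagged as inside the margin of error, which is precisely the information the confident-looking animated graphic hides.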

This would be a good approach for database matches, too, especially since the number of false matches and errors will grow with the size of the databases. A real-life Statebook that doesn't reflect the uncertainty factor of each search, each match, and each interpretation next to every hit would indeed be truly dangerous.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 13, 2009

Threat model

It's not about Phorm, it's about snooping. At Wednesday morning's Parliamentary roundtable, "The Internet Threat", the four unhappy representatives I counted from Phorm had a hard time with this. Weren't we there to trash them and not let them reply? What do you mean the conversation isn't all about them?

We were in a committee room many medieval steps up inside the House of Lords. The gathering was convened by Baroness Miller of Chilthorne Domer with the idea of helping Parliamentarians understand the issues raised not only by Phorm but also by the Interception Modernisation Programme, Google, Microsoft, and in fact any outfit that wants to collect huge amounts of our data for purposes that won't be entirely clear until later.

Most of the coverage of this event has focused on the comments of Sir Tim Berners-Lee, the indefatigable creator of the 20-year-old Web (not the Internet, folks!), who said categorically, "I came here to defend the integrity of the Internet as a medium." Using the Internet, he said, "is a fundamental human act, like the act of writing. You have to be able to do it without interference and/or snooping." People use the Internet when they're in crisis; even just a list of URLs you've visited is very revealing of sensitive information.

Other distinguished speakers included Professor Wendy Hall, Nicholas Bohm representing the Foundation for Information Policy Research, the Cambridge security research group's Richard Clayton, the Open Rights Group's new executive director, Jim Killock, and the vastly experienced networking and protocol consultant Robb Topolski.

The key moment, for me, was when one of the MPs the event was intended to educate asked this: "Why now?" Why, in other words, is deep packet inspection suddenly a problem?

The quick answer, as Topolski and Clayton explained, is "Moore's Law." It was not, until a couple-three years ago, possible to make a computer fast enough to sit in the middle of an Internet connection and not only sniff the packets but examine their contents before passing them on. Now it is. Plus, said Clayton, "Storage."

But for Kent Ertegrul, Phorm's managing director, it was all about Phorm. The company had tried to get on the panel and been rejected. His company's technology was being misrepresented. Its system makes it impossible for browsing habits to be tracked back to people. Tim Berners-Lee, of all people, if he understood their system, would appreciate the elegance of what they've actually done.

Berners-Lee was calm, but firm. "I have not at all criticized behavioral advertising," he pointed out. "What I'm saying is a mistake is snooping on the Internet."

Right on.

The Internet, Berners-Lee and Topolski explained, was built according to the single concept that all the processing happens at the ends, and that the middle is just a carrier medium. That design decision has had a number of consequences, most of them good. For example, it's why someone can create the new application of the week and deploy it without getting permission. It's why VOIP traffic flows across the lines of the telephone companies whose revenues it's eating. It is what network neutrality is all about.

Susan Kramer, saying she was "the most untechie person" (and who happens to be my MP), asked if anyone could provide some idea of what lawmakers can actually do. The public, she said, is "frightened about the ability to lose privacy through these mechanisms they don't understand".

Bohm offered the analogy of water fluoridation: it's controversial because we don't expect water flowing into our house to have been tampered with. In any event, he suggested that if the law needs to be made clearer it is in the area of laying down the purposes for which filtering, management, and interference can be done. It should, he said, be "strictly limited to what amounts to matters of the electronic equivalent of public health, and nothing else."

Fluoridation of water is a good analogy for another reason: authorities are transparent about it. You can, if you take the trouble, find out what is in your local water supply. But one of the difficulties about a black-box-in-the-middle is that while we may think we know what it does today - even if you trust, say, Richard Clayton's report on how Phorm works (PDF) - there's no guarantee of how the system will change in the future. Similarly, although today's government may have only good intentions in installing a black box in every ISP that collects all traffic data, the government of ten years hence may use the system in entirely different ways for which today's trusting administration never planned. Which is why it's not about Phorm and isn't even about behavioural advertising; Phorm was only a single messenger in a bigger problem.

So the point is this: do we want black boxes whose settings we don't know and whose workings we don't understand sitting at the heart of our ISPs' networks examining our traffic? This was the threat Baroness Miller had in mind - a threat *to* the Internet, not the threat *of* the Internet beloved of the more scaremongering members of the press. Answers on a postcard...


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

February 6, 2009

Forty-five years

This week the EU's legal affairs committee, JURI, may vote - again - on term extension in sound recordings. As of today, copyright is still listed on the agenda.

Opposing term extension was a lot simpler at the national level in the UK; the path from proposal to legislation is well-known, well-trodden, and well-watched by the national media. At the EU level, JURI is only one of four committees involved in proposing and amending term extension on behalf of the European Parliament - and then even after the Parliament votes it's the Commission who makes the final decision. The whole thing drags on for something close to forever, which pretty much guarantees that only the most obsessed stay in touch through the whole process. If you had designed a system to ensure apathy except among lobbyists who like good food, you'd have done exactly this.

There are many reasons to oppose term extension, most of which we've covered before. Unfortunately, these seem invisible to some politicians. As William Patry blogs, the harm done by term extension is diffuse and hard to quantify while easily calculable benefits accrue to a small but wealthy and vocal set of players.

What's noticeable is how many independent economic reviews agree with what NGOs like the Electronic Frontier Foundation and the Open Rights Group have said all along.

According to a joint report from several European intellectual property law centers (PDF), the Commission itself estimates that 45 extra years of copyright protection will hand the European music industry between €44 million and €843 million - uncertain by a factor of 20! The same report also notes that term extension will not net performers additional broadcast revenue; rather, the same pot will be spread among a larger pool of musicians, benefiting older musicians at the expense of young incomers. The report also notes that performers don't lose control over their music when the term of copyright ends; they lose it when they sign recording contracts (so true).

Other reports are even less favorable. In 2005, for example, the Dutch Institute for Information Law concluded that copyright in sound recordings has more in common with design rights and patents than with other areas of copyright, and it would be more consistent to reduce the term rather than extend it. More recently, an open letter from Bournemouth University's Centre for Intellectual Property Policy Management questioned exactly where those estimated revenues were going to come from, and pointed out the absurdity of the claim that extension would help performers.

And therein is the nub. Estimates are that the average session musician will benefit from term extension in the amount of €4 to €58 (there's that guess-the-number-within-a-factor-of-20 trick again). JURI's draft opinion puts the number of affected musicians at 7,000 per large EU member state, less in the rest. Call it 7,000 in all 27 and give each musician €20; that's €3.78 million, hardly enough for a banker's bonus. We could easily hand that out in cash, if handouts to aging performers are the purpose of the exercise.
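That back-of-the-envelope figure checks out, taking the column's deliberately generous simplification of 7,000 affected musicians in every one of the 27 member states:

```python
musicians_per_state = 7_000  # JURI's draft figure for a *large* member state
member_states = 27
benefit_each = 20            # euros, roughly mid-range of the €4-€58 estimate

total = musicians_per_state * member_states * benefit_each
print(f"€{total:,}")  # €3,780,000
```

Even rounding everything up, the entire performer-side benefit of 45 extra years of copyright is a rounding error by industry standards.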

Benefiting performers is a lobbyists' red herring that cynically plays on our affection for our favorite music and musicians; what term extension will do, as the Bournemouth letter points out, is benefit recording companies. Of that wackily wide range of estimated revenues in the last paragraph, 90 percent, or between €39 million and €758 million, will go to record producers, even according to the EU's own impact assessment (PDF), based on a study carried out by PricewaterhouseCoopers.

If you want to help musicians, the first and most important thing you should do is improve the industry's standard contracts and employment practices. We protect workers in other industries from exploitation; why should we make an exception for musicians? No one is saying - not even Courtney Love - that musicians deserve charity. But we could reform UK bankruptcy law so that companies acquiring defunct labels are required to shoulder ongoing royalty payment obligations as well as the exploitable assets of the back catalogue. We could put limits on what kind of clauses a recording company is allowed to impose on first-time recording artists. We could set minimums for what is owed to session musicians. And we could require the return of rights to the performers in the event of a recording's going out of print. Any or all of those things would make far more difference to the average musician's lifetime income than an extra 45 years of copyright.

Current proposals seem to focus on this last idea as a "use it or lose it" clause that somehow makes the rest of term extension all right. Don Foster, the conservative MP who is shadow minister for the Department of Culture, Media, and Sport, for example, has argued for it repeatedly. But by itself it's not enough of a concession to balance the effect of term extension and the freezing of the public domain.

If you want to try to stop term extension, this is a key moment. Lobby your MEP and the members of the relevant committees. Remind them of the evidence. And remind them that it's not just the record companies and the world's musicians who have an interest in copyright; it's the rest of us, too.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

January 30, 2009

Looking backward

Governments move slowly; technology moves fast. That's not a universal truth - witness Obama's first whirlwind week in office - but in the early days of the Net it was the kind of thing people said smugly when they wanted to claim that cyberspace was impervious to regulation. It worked well enough for, say, setting free strong cryptography over the objections of the State Department and ITAR.

This week had two perfect examples. First: Microsoft noted in its 10-Q that the EU may force it to do something about tying Internet Explorer to Windows - remove it, make it one of several browsers consumers can choose from at setup, or randomly provide different browsers. Still fighting the browser wars? How 1995.

Second: the release of the interim Digital Britain report by the Department for Culture, Media, and Sport. Still proposing Digital Rights Management as a way of protecting rightsholders' interest in content? How 2005.

It probably says something about technology cycles that the DRM of 2005 is currently more quaint and dated than the browser wars of 1995-1998. The advent of cloud computing and Google's release of Chrome last year have reinvigorated the browser "market". After years of apparent stagnation it suddenly matters again that we should have choices and standards to keep the Internet from turning into a series of walled gardens (instead of a series of tubes).

DRM, of course, turns content into a series of walled gardens and causes a load of other problems we've all written about extensively. But the most alarming problem about its inclusion in the government's list of action items is that even the music industry that most wanted it is abandoning it. What year was this written in? Why is a report that isn't even finished proposing to adopt a technological approach that's already a market failure? What's next, a set of taxation rules designed for CompuServe?

The one bit of good, forward-thinking news - which came as a separate announcement from Intellectual Property Minister David Lammy - is that apparently the UK government is ready to abandon the "three strikes" idea for punishing file-sharers - it's too complicated (Yes, Minister rules!) to legislate. And it's sort of icky arresting teenagers in their bedrooms, even if the EU doesn't see anything wrong with that and the Irish have decided to go ahead with it.

The interim report bundles together issues concerning digital networks (broadband, wireless, infrastructure), digital television and radio, and digital content. It's the latter that's most contentious: the report proposes creating a Rights Agency intended to encourage good use (buying content) and discourage bad use (whatever infringes copyright law). The report seems to turn a blind eye to the many discussions of how copyright law should change. And then there's a bunch of stuff about whether Britain should have a second public service broadcaster to compete "for quality" with the BBC. How all these things cohere is muddy.

For a really scathing review of the interim report, see The Guardian, where Charles Arthur attacks not only the report's inclusion of DRM and a "rights agency" to collaborate on developing it, but its dirt path approach to broadband speed and its proposed approach to network neutrality (which it calls "net neutrality", should you want to search the report to find out what it says).

The interim report favors allowing the kind of thing Virgin has talked about: making deals with content providers in which they're paid for guaranteed service levels. That turns the problem of who will pay for high-speed fiber into a game of pass-the-parcel. Most likely, consumers will end up paying, whether that money goes to content providers or ISPs. If the BBC pays for the iPlayer, so do we, through the TV license. If ISPs pay, we pay in higher bandwidth charges. If we're going to pay for it anyway, why shouldn't we have the freedom of the Internet in return?

This is especially true because we do not know what's going to come next or how people will use it. When YouTube became the Next Big Thing, oh, say, three or four years ago, it was logical to assume that all subsequent Next Big Things were going to be bandwidth hogs. The next NBT turned out to be Twitter, which is pretty much your diametrical opposite. Now, everything is social media - but if there's one thing we know about the party on the Internet it's that it keeps on moving on.

There's plenty that's left out of this interim report. There's a discussion of spectrum licensing that doesn't encompass newer ideas about spectrum allocation. It talks about finding new business models for rightsholders without supporting obsolete ones and the "sea of unlawful activity in which they have to swim" and mentions ISPs - but leaves out consumers except as "customers" or illegal copiers. It nods at the notion that almost anyone can be a creator and find distribution, but still persists in talking of customers and rightsholders as if they were never the same people.

No one ever said predicting the future was easy, least of all Niels Bohr, but it does help if you start by noticing the present.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

December 5, 2008

Saving seeds

The 17 judges of the European Court of Human Rights ruled unanimously yesterday that the UK's DNA database, which contains more than 3 million DNA samples, violates Article 8 of the European Convention on Human Rights. The key factor: retaining, indefinitely, the DNA samples of people who have committed no crime.

It's not a complete win for objectors to the database, since the ruling doesn't say the database shouldn't exist, merely that DNA samples should be removed once their owners have been acquitted in court or the charges have been dropped. England, the court said, should copy Scotland, which operates such a policy.

The UK comes in for particular censure, in the form of the note that "any State claiming a pioneer role in the development of new technologies bears special responsibility for striking the right balance..." In other words, before you decide to be the first on your block to use a new technology and show the rest of the world how it's done, you should think about the consequences.

Because it's true: this is the kind of technology that makes surveillance and control-happy governments the envy of other governments. For example: lacking clues to lead them to a serial killer, the Los Angeles Police Department wants to copy Britain and use California's DNA database to search for genetic profiles similar enough to belong to a close relative. The French DNA database, FNAEG, was proposed in 1996, created in 1998 for sex offenders, implemented in 2001, and broadened to other criminal offenses after 9/11 and again in 2003: a perfect example of function creep. But the French DNA database is a fiftieth the size of the UK's, and Austria's, the next on the list, is even smaller.

There are some wonderful statistics about the UK database. DNA samples from more than 4 million people are included on it. Probably 850,000 of them are innocent of any crime. Some 40,000 are children between the ages of 10 and 17. The government (according to the Telegraph) has spent £182 million on it between April 1995 and March 2004. And there have been suggestions that it's too small. When privacy and human rights campaigners pointed out that people of color are disproportionately represented in the database, one of England's most experienced appeals court judges, Lord Justice Sedley, argued that every UK resident and visitor should be included on it. Yes, that's definitely the way to bring the tourists in: demand a DNA sample. Just look how they're flocking to the US to give fingerprints, and how many more flooded in when they upped the number to ten earlier this year. (And how little we're getting for it: in the first two years of the program, fingerprinting 44 million visitors netted 1,000 people with criminal or immigration violations.)

At last week's A Fine Balance conference on privacy-enhancing technologies, there was a lot of discussion of the key technique of data minimization. That is the principle that you should not collect or share more data than is actually needed to do the job. Someone checking whether you have the right to drive, for example, doesn't need to know who you are or where you live; someone checking you have the right to borrow books from the local library needs to know where you live and who you are but not your age or your health records; someone checking you're the right age to enter a bar doesn't need to care if your driver's license has expired.
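The principle is easy to demonstrate even without cryptography. In this toy Python sketch (my own illustration - real systems like IBM's PRIME or Credentica's U-Prove use zero-knowledge-style cryptographic credentials so that even the verifier learns nothing linkable), the holder keeps the full record and the verifier receives only the single boolean it needs:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Credential:
    # The full identity record stays with the holder.
    name: str
    address: str
    date_of_birth: date
    licence_expiry: date

def is_over_18(cred: Credential, today: date) -> bool:
    """Disclose a single yes/no, not the birthdate behind it."""
    dob = cred.date_of_birth
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= 18

cred = Credential("A. Example", "1 Example St", date(1990, 5, 1), date(2005, 1, 1))
# The bar learns only True/False - not who you are, where you live, or
# that your licence has expired, none of which it needs to know.
print(is_over_18(cred, date(2009, 1, 30)))  # True
```

The hard part, as the slow progress since the mid-1990s shows, is not this logic but making the disclosure cryptographically verifiable without leaking the rest.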

This is an idea that's been around a long time - I think I heard my first presentation on it in about 1994 - but whose progress towards a usable product has been agonizingly slow. IBM's PRIME project, which Jan Camenisch presented, and Microsoft's purchase of Credentica (which wasn't shown at the conference) suggest that the mainstream technology products may finally be getting there. If only we can convince politicians that these principles are a necessary adjunct to storing all the data they're collecting.

What makes the DNA database more than just a high-tech fingerprint database is that over time the DNA stored in it will become increasingly revealing of intimate secrets. As Ray Kurzweil kept saying at the Singularity Summit, Moore's Law is hitting DNA sequencing right now; the cost is accordingly plummeting by factors of ten. When the database was set up, it was fair to characterize DNA as a high-tech version of fingerprints or iris scans. Five - or 15, or 25, we can't be sure - years from now, we will have learned far more about interpreting genetic sequences. The coded, unreadable messages we're storing now will be cleartext one day, and anyone allowed to consult the database will be privy to far more intimate information about our bodies, ourselves than we think we're giving them now.

Unfortunately, the people in charge of these things typically think it's not going to affect them. If the "little people" have no privacy, well, so what? It's only when the powers they've granted are turned on them that they begin to get it. If a conservative is a liberal who's been mugged, and a liberal is a conservative whose daughter has needed an abortion, and a civil liberties advocate is a politician who's been arrested...maybe we need to arrest more of them.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

July 4, 2008

The new normal

The (only) good thing about a war is you can tell when it's over.

The problem with the "War on Terror" is that terrorism is always with us, as Liberty's director, Shami Chakrabarti, said yesterday at the Homeland and Border Security 08 conference. "I do think the threat is very serious. But I don't think it can be addressed by a war." Because, "We, the people, will not be able to verify a discernible end."

The idea that "we are at war" has justified so much post-9/11 legislation, from the ID card (in the UK) and Real ID (US) to the continued expansion of police powers.

How long can you live in a state of emergency before emergency becomes the new normal? If there is no end, when do you withdraw the latitude wartime gives a government?

Several of yesterday's speakers talked about preserving "our way of life" while countering the threat with better security. But "our way of life" is a moving target.

For example, Baroness Pauline Neville-Jones, the shadow security minister, talked about the importance of controlling the UK's borders. "Perimeter security is absolutely basic." Her example: you can't go into a building without having your identity checked. But it's not so long ago - within the 18 years I've been living in London - that you could do exactly that, even sometimes in central London. In New York, of course, until 9/11, everything was wide open; these days midtown Manhattan makes you wait in front of barriers while you're photographed, checked, and treated with great suspicion if the person you're visiting doesn't answer the phone.

Only seven years ago, flying did not involve two hours of standing in line. And not until next January will tourists have to register three days before flying to the US for pre-screening.

It's not clear how much would change with a Conservative government. "There is a very great deal by this government we would continue," said Neville-Jones. But, she said, besides tackling threats, whether motivated (terrorists) or not (floods, earthquakes), "we are also at any given moment in the game of deciding what kind of society we want to have and what values we want to preserve." She wants "sustainable security, predicated on protecting people's freedom and ensuring they have more, not less, control over their lives." And, she said, "While we need protective mechanisms, the surveillance society is not the route down which we should go. It is absolutely fundamental that security and freedom lie together as an objective."

To be sure, Neville-Jones took issue with some of the present government's plans - the Conservatives would not, she said, go ahead with the National Identity Register, and they favour "a more coherent and wide-ranging border security force". The latter would mean bringing together many currently disparate agencies to create a single border strategy. The Conservatives also favour establishing a small "homeland command for the armed forces" within the UK because, "The qualities of the military and the resources they can bring to complex situations are important and useful." At the moment, she said, "We have to make do with whoever happens to be in the country."

OK. So take the four core elements of the national security strategy according to Admiral Lord Alan West, a Parliamentary under-secretary of state at the Home Office: pursue, protect, prepare, and prevent. "Prevent" is the one that all this is about. If we are in wartime, and we know that any measure that's brought in is only temporary, our tolerance for measures that violate the normal principles of democracy is higher.

Are the Olympics wartime? Security is already in the planning stages, although, as Tarique Ghaffur pointed out, the Games are one of several big events in 2012. And some events like sailing and Olympic football will be outside London, as will 600 training camps. Add in the torch relay, and it's national security.

And in that case, we should be watching very closely what gets brought in for the Olympics, because alongside the physical infrastructure that the Games always leave behind - the stadia and transport - may be a security infrastructure that we wouldn't necessarily have chosen for daily life.

As if the proposals in front of us aren't bad enough. Take for example, the clause of the counterterrorism bill (due for its second reading in the Lords next week) that would allow the authorities to detain suspects for up to 42 days without charge. Chakrabarti lamented the debate over this, which has turned into big media politics.

"The big frustration," she said, "is that alternatives created by sensible, proportionate means of early intervention are being ignored." Instead, she suggested, make the data legally collected by surveillance and interception admissible in fair criminal trials. Charge people with precursor terror offenses so they are properly remanded in custody and continue the investigation for the more serious plot. "That is a way of complying with ancient principles that you should know what you are accused of before being banged up, but it gives the police the time and powers they need."

Not being at war gives us the time to think. We should take it.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

June 27, 2008

Mistakes were made

This week we got the detail on what went wrong at Her Majesty's Revenue and Customs that led to the loss of those two CDs full of the personal details of 25 million British households last year with the release of the Poynter Review (PDF). We also got a hint of how and whether the future might be different with the publication yesterday of Data Handling Procedures in Government (PDF), written by Sir Gus O'Donnell and commissioned by the Prime Minister after the HMRC loss. The most obvious message of both reports: government needs to secure data better.

The nicest thing the Poynter review said was that HMRC has already made changes in response to its criticisms. Otherwise, it was pretty much a surgical demonstration of "institutional deficiencies".

The chief points:


- Security was not HMRC's top priority.

- HMRC in fact had the technical ability to send only the selection of data that NAO actually needed, but the staff involved didn't know it.

- There was no designated single point of contact between HMRC and NAO.

- HMRC used insecure methods for data storage and transfer.

- The decision to send the CDs to the NAO was taken by junior staff without consulting senior managers - which under HMRC's own rules they should have done.

- The reason HMRC's junior staff did not consult managers was that they believed (wrongly) that NAO had absolute authority to access any and all information HMRC had.

- The HMRC staffer who dispatched the discs incorrectly believed the TNT Post service was secure and traceable, as required by HMRC policy. A different TNT service that met those requirements was in fact available.

- HMRC policies regarding information security and the release of data were not communicated sufficiently through the organization and were not sufficiently detailed.

- HMRC failed on accountability, governance, information security...you name it.

The real problem, though, isn't any single one of these things. If junior staff had consulted senior staff, it might not have mattered that they didn't know what the policies were. If HMRC had used proper information security and secure methods for data storage (that is, encryption rather than simple password protection), the discs would have been unreadable to whoever found them. If staff had understood TNT's services correctly, the discs wouldn't have gotten lost - or would at least have been traceable if they had.

The real problem was the interlocking effect of all these factors. That, as Nassim Nicholas Taleb might say, was the black swan.
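One way to see why the interlocking matters: if each safeguard failed independently, a breach would require all of them to fail at once, which should be rare. The numbers below are made up for illustration (mine, not Poynter's); the point is that when every layer fails together, as at HMRC, the failures were almost certainly not independent at all.

```python
# Probability that a mistake slips past each safeguard, assumed independent.
# Illustrative figures only - not drawn from the Poynter Review.
failure_rates = {
    "junior staff fail to consult seniors": 0.1,
    "data sent unencrypted": 0.1,
    "untraceable courier service chosen": 0.1,
}

p_breach = 1.0
for layer, p_fail in failure_rates.items():
    p_breach *= p_fail

# Three independent 1-in-10 failures compound to roughly 1 in 1,000.
print(round(p_breach, 6))
```

Defense in depth only works if the layers really are independent; a culture in which nobody knows the policies makes them fail in lockstep.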

For those who haven't read Taleb's The Black Swan: The Impact of the Highly Improbable, the black swan stands for the event that is completely unpredictable - because, like black swans until one was spotted in Australia, no such thing has ever been seen - until it happens. Of course, data loss is pretty much a white swan; we've seen lots of data breaches. The black swan, really, is the perfectly secure system that is still sufficiently open for the people who need to use it.

That challenge is what O'Donnell's report on data handling is about and, as he notes, it's going to get harder rather than easier. He recommends a complete rearrangement of how departments manage information as well as improving the systems within individual departments. He also recommends greater openness about how the government secures data.

"No organisation can guarantee it will never lose data," he writes, "and the Government is no exception." O'Donnell goes on to consider how data should be protected and managed, not whether it should be collected or shared in the first place. That job is being left for yet another report in progress, due soon.

It's good to read that some good is coming out of the HMRC data loss: all departments are, according to the O'Donnell report, reviewing their data practices and beginning the process of cultural change. That can only be a good thing.

But the underlying problem is outside the scope of these reports, and it's this government's fondness for creating giant databases: the National Identity Register, ContactPoint, the DNA database, and so on. If the government really accepted the principle that it is impossible to guarantee complete data security, what would they do? Logically, they ought to start by cancelling the data behemoths on the understanding that it's a bad idea to base public policy on the idea that you can will a black swan into existence.

It would make more sense to create a design for government use of data that assumes there will be data breaches and attempts to limit the adverse consequences for the individuals whose data is lost. If my privacy is compromised alongside 50 million other people's and I am the victim of identity theft, does it help me that the government department that lost the data knows which staff member to blame?

As Agatha Christie said long ago in one of her 80-plus books, "I know to err is human, but human error is nothing compared to what a computer can do if it tries." The man-machine combination is even worse. We should stop trying to breed black swans and instead devise systems that don't create so many white ones.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

May 30, 2008

Ten

It's easy to found an organization; it's hard to keep one alive even for as long as ten years. This week, the Foundation for Information Policy Research celebrated its tenth birthday. Ten years is a long time in Internet terms, and even longer when you're trying to get government to pay attention to expertise in a subject as difficult as technology policy.

My notes from the launch contain this quote from FIPR's first director, Caspar Bowden, which shows you just how difficult FIPR's role was going to be: "An educational charity has a responsibility to speak the truth, whether it's pleasant or unpleasant." FIPR was intended to avoid the narrow product focus of corporate laboratory research and retain the traditional freedoms of an academic lab.

My notes also show the following list of topics FIPR intended to research: the regulation of electronic commerce; consumer protection; data protection and privacy; copyright; law enforcement; evidence and archiving; electronic interaction between government, businesses, and individuals; the risks of computer and communications systems; and the extent to which information technologies discriminate against the less advantaged in society. Its first concern was intended to be researching the underpinnings of electronic commerce, including the then recent directive launched for public consultation by the European Commission.

In fact, the biggest issue of FIPR's early years was the crypto wars leading up to and culminating in the passage of the Regulation of Investigatory Powers Act (2000). It's safe to say that RIPA would have been a lot worse without the time and energy Bowden spent listening to Parliamentary debates, decoding consultation papers, and explaining what it all meant to journalists, politicians, civil servants, and anyone else who would listen.

Not that RIPA is a fountain of democratic behavior even as things are. In the last couple of weeks we've seen the perfect example of the kind of creeping functionalism that FIPR and Privacy International warned about at the time: the Poole council using the access rules in RIPA to spy on families to determine whether or not they really lived in the right catchment area for the schools their children attend.

That use of the RIPA rules, Bowden said at FIPR's half-day anniversary conference last Wednesday, sets a precedent for accessing traffic data for much lower-level purposes than the government originally claimed it was collecting the data for. He went on to call the recent suggestion that the government may be considering a giant database, updated in real time, of the nation's communications data "a truly Orwellian nightmare of data mining, all in one place."

Ross Anderson, FIPR's founding and current chair and a well-known security engineer at Cambridge, noted that the same risks attach to the NHS database. A clinic that owns its own data will tell police asking for the names of all its patients under 16 to go away. "If," said Anderson, "it had all been in the NHS database and they'd gone in to see the manager of BT, would he have been told to go and jump in the river? The mistake engineers make too much is to think only technology matters."

That point was part of a larger one that Anderson made: that hopes that the giant databases under construction will collapse under their own weight are forlorn. Think of developing Hulk-Hogan databases and the algorithms for mining them as an arms race, just like spam and anti-spam. The same principle that holds that today's cryptography, no matter how strong, will eventually be routinely crackable means that today's overload of data will eventually, long after we can remember anything we actually said or did ourselves, be manageable.

The most interesting question is: what of the next ten years? Nigel Hickson, now with the Department of Business, Enterprise, and Regulatory Reform, gave some hints. On the European and international agenda, he listed the returning dominance of the large telephone companies on the excuse that they need to invest in fiber. We will be hearing about quality of service and network neutrality. Watch Brussels on spectrum rights. Watch for large debates on the liability of ISPs. Digital signatures, another battle of the late 1990s, are also back on the agenda, with draft EU proposals to mandate them for the public sector and other services. RFID, the "Internet of things", and the ubiquitous Internet will spark a new round of privacy arguments.

Most fundamentally, said Anderson, we need to think about what it means to live in a world that is ever more connected through evolving socio-technological systems. Government can help when markets fail; though governments themselves seem to fail most notoriously with large projects.

FIPR started by getting engineers, later engineers and economists, to talk through problems. "The next growth point may be engineers and psychologists," he said. "We have to progressively involve more and more people from more and more backgrounds in these discussions."

Probably few people feel that their single vote in any given election really makes a difference. Groups like FIPR, PI, No2ID, and ARCH remind us that even a small number of people can have a significant effect. Happy birthday.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).


May 23, 2008

The haystack conundrum

Early this week the news broke that the Home Office wants to create a giant database in which will be stored details of all communications sent in Britain. In other words, instead of data retention, in which ISPs, telephone companies, and other service providers would hang onto communications data for a year or seven in case the Home Office wanted it, everything would stream to a Home Office data center in real time. We'll call it data swallowing.

Those with long memories - who seem few and far between in the national media covering this sort of subject - will remember that in about 1999 or 2000 there was a similar rumor. Amid the resulting outraged media coverage it was more or less thoroughly denied, and nothing has been heard of it since, though privacy advocates continued to suspect that somewhere in the back of a drawer the scheme lurked, dormant, like one of those just-add-water Martians you find in the old Bugs Bunny cartoons. And now here it is again, in another leak that the suspicious veteran watcher of Yes, Minister might think was an attempt to test public opinion. The fact that it's been mooted before makes it seem so much more likely that they're actually serious.

This proposal is not only expensive, complicated, slow, and controversial/courageous (Yes, Minister's Fab Four deterrents), but risk-laden, badly conceived, disproportionate, and foolish. Such a database will not catch terrorists, because given the volume of data involved trying to use it to spot any one would-be evil-doer will be the rough equivalent of searching for an iron filing in a haystack the size of a planet. It will, however, make it possible for anyone trawling the database to make any given individual's life thoroughly miserable. That's so disproportionate it's a divide-by-zero error.
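The arithmetic behind the iron-filing metaphor is the classic base-rate problem, and it is worth spelling out. A back-of-envelope sketch (every number here is an illustrative assumption, not a figure from any proposal - and the detector is assumed far more accurate than anything real):

```python
# Base-rate sketch: why mining a whole population's communications
# for would-be attackers mostly flags innocent people.
# All numbers below are illustrative assumptions.

population = 60_000_000      # roughly the UK
actual_suspects = 100        # assumed genuine targets in the data
true_positive_rate = 0.99    # assume a near-perfect detector
false_positive_rate = 0.001  # wrongly flags 1 innocent in 1,000

flagged_guilty = actual_suspects * true_positive_rate
flagged_innocent = (population - actual_suspects) * false_positive_rate

# Probability that any given flagged person is actually a suspect
precision = flagged_guilty / (flagged_guilty + flagged_innocent)

print(f"innocent people flagged: {flagged_innocent:,.0f}")
print(f"chance a flagged person is a real suspect: {precision:.2%}")
```

Even with these generous assumptions, tens of thousands of innocents are flagged and well under one flagged person in a hundred is a genuine target; the rarer the needle, the worse the ratio gets.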

The risks ought to be obvious: this is a government that can't keep track of the personal details of 25 million households, which fit on a couple of CDs. Devise all the rules and processes you want, the bigger the database the harder it will be to secure. Besides personal information, the giant communications database would include businesses' communication information, much of it likely to be commercially sensitive. It's pretty good going to come up with a proposal that equally offends civil liberties activists and businesses.

In a short summary of the proposed legislation, we find this justification: "Unless the legislation is updated to reflect these changes, the ability of public authorities to carry out their crime prevention and public safety duties and to counter these threats will be undermined."

Sound familiar? It should. It's the exact same justification we heard in the late 1990s for requiring key escrow as part of the nascent Regulation of Investigatory Powers Act. The idea there was that if the use of strong cryptography to protect communications became widespread, law enforcement and security services would be unable to read the content of the messages and phone calls they intercepted. This argument was fiercely rejected at the time, and key escrow was eventually dropped in favor of requiring the subjects of investigation to hand over their keys under specified circumstances.

There is much, much less logic to claiming that police can't do their jobs without real-time copies of all communications. Here we have a real analogy: postal mail, which has been with us since 1660. Do we require copies of all letters that pass through the post office to be deposited with the security services? Do we require the Royal Mail's automated sorting equipment to log all address data?

Sanity has never intervened in this government's plans to create more and more tools for surveillance. Take CCTV. Recent studies show that despite the millions of pounds spent on deploying thousands of cameras all over the UK, they don't cut crime, and, more important, the images help solve crime in only 3 percent of cases. But you know the response to this news will not be to remove the cameras or stop adding to their number. No, the thinking will be like the scheme I once heard for selling harmless but ineffective alternative medical treatments, in which the answer to all outcomes is more treatment. (Patient gets better - treatment did it. Patient stays the same - treatment has halted the downward course of the disease. Patient gets worse - treatment came too late.)

This week at Computers, Freedom, and Privacy, I heard about the Electronic Privacy Information Center's work on fusion centers, relatively new US government efforts to mine many commercial and public sources of data. EPIC is trying to establish the role of federal agencies in funding and controlling these centers, but it's hard going.

What do these governments imagine they're going to be able to do with all this data? Is the fantasy that agents will be able to sit in a control room somewhere and survey it all on some kind of giant map on which criminals will pop up in red, ready to be caught? They had data before 9/11 and failed to collate and interpret it.

Iron filing; haystack; lack of a really good magnet.

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

May 9, 2008

Swings and roundabouts

There was a wonderful cartoon that cycled frequently around computer science departments in the pre-Internet 1970s - I still have my paper copy - that graphically illustrated the process by which IT systems get specified, designed, and built, and showed precisely why and how far they failed the user's inner image of what it was going to be. There is a scan here. The senior analyst wanted to make sure no one could possibly get hurt; the sponsor wanted a pretty design; the programmers, confused by contradictory input, wrote something that didn't work; and the installation was hideously broken.

Translate this into the UK's national ID card. Consumers, Sir James Crosby wrote in March (PDF), want identity assurance. That is, they - or rather, we - want to know that we're dealing with our real bank rather than a fraud. We want to know that the thief rooting through our garbage can't use any details he finds on discarded utility bills to impersonate us, change our address with our bank, clean out our accounts, and take out 23 new credit cards in our name before embarking on a wild spending spree leaving us to foot the bill. And we want to know that if all that ghastliness happens to us we will have an accessible and manageable way to fix it.

We want to swing lazily on the old tire and enjoy the view.

We are the users with the seemingly simple but in reality unobtainable fantasy.

The government, however - the project sponsor - wants the three-tiered design that barely works because of all the additional elements but looks incredibly impressive. ("Be the envy of other major governments," I feel sure the project brochure says.) In the government's view, they are the users and we are the database objects.

Crosby nails this gap when he draws the distinction between ID assurance and ID management:

The expression 'ID management' suggests data sharing and database consolidation, concepts which principally serve the interests of the owner of the database, for example, the Government or the banks. Whereas we think of "ID assurance" as a consumer-led concept, a process that meets an important consumer need without necessarily providing any spin-off benefits to the owner of any database.

This distinction is fundamental. An ID system built primarily to deliver high levels of assurance for consumers and to command their trust has little in common with one inspired mainly by the ambitions of its owner. In the case of the former, consumers will extend use both across the population and in terms of applications such as travel and banking. While almost inevitably the opposite is true for systems principally designed to save costs and to transfer or share data.

As writer and software engineer Ellen Ullman wrote in her book Close to the Machine, databases infect their owners, who may start with good intentions but are ineluctably drawn to surveillance.

So far, the government pushing the ID card seems to believe that it can impose anything it likes and if it means the tree collapses with the user on the swing, well, that's something that can be ironed out later. Crosby, however, points out that for the scheme to achieve any of the government's national security goals it must get mass take-up. "Thus," he writes, "even the achievement of security objectives relies on consumers' active participation."

This week, a similarly damning assessment of the scheme was released by the Independent Scheme Assurance Panel (PDF) (you may find it easier to read this clean translation - scroll down to policywatcher's May 8 posting). The gist: the government is completely incompetent at handling data, and creating massive databases will, as a result, destroy public trust in it and all its systems.

Of course, the government is in a position to compel registration, as it's begun doing with groups who can't argue back, like foreigners, and proposes doing for employees in "sensitive roles or locations, such as airports". But one of the key indicators of how little its scheme has to do with the actual needs and desires of the public is the list of questions it's asking in the current consultation on ID cards, which focus almost entirely on how to get people to love, or at least apply for, the card. To be sure, the consultation document pays lip service to accepting comments on any ID card-related topic, but the consultation is specifically about the "delivery scheme".

This is the kind of consultation where we're damned if we do and damned if we don't. Submit comments on, for example, how best to "encourage" young people to sign up ("Views are invited particularly from young people on the best way of rolling out identity cards to them") without objecting to the government marketing its unloved policy to vulnerable groups, and when the responses are eventually released the government can claim the scheme now has no objectors. Submit comments to the effect that the whole National Identity scheme is poorly conceived and inappropriate, and anything else you say is likely to be ignored on the grounds that they've heard all that and it's irrelevant to the present consultation. Comments are due by June 30.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 14, 2008

Uninformed consent

Apparently the US Congress is now being scripted by Jon Stewart of the Daily Show. In a twist of perfect irony, the House of Representatives has decided to hold its first closed session in 25 years to debate - surveillance.

But it's obvious why they want closed doors: they want to talk about the AT&T case. To recap: AT&T is being sued for its complicity in the Bush administration's warrantless surveillance of US citizens after its technician Mark Klein blew the whistle by taking documents to the Electronic Frontier Foundation (which a couple of weeks ago gave him a Pioneer Award for his trouble).

Bush has, of course, resisted any effort to peer into the innards of his surveillance program by claiming it's all a state secret, and that's part of the point of this Congressional move: the Democrats have fielded a bill that would give the whole program some more oversight and, significantly, reject the idea of giving telecommunications companies - that is, AT&T - immunity from prosecution for breaking the law by participating in warrantless wiretapping. 'Snot fair that they should deprive us of the fun of watching the horse-trading. It can't, surely, be that they think we'll be upset by watching them slag each other off. In an election year?

But it's been a week for irony, as Wikipedia founder Jimmy Wales has had his sex life exposed when he dumped his girlfriend and been accused of - let's call it sloppiness - in his expense accounts. Worse, he stands accused of trading favorable page edits for cash. There's always been a strong element of Schadenpedia around, but the edit-for-cash thing really goes to the heart of what Wikipedia is supposed to be about.

I suspect that nonetheless Wikipedia will survive it: if the foundation has the sense it seems to have, it will display zero tolerance. But the incident has raised valid questions about how Wikipedia can possibly sustain itself financially going forward. The site is big and has enviable masses of traffic; but it sells no advertising, choosing instead to live on hand-outs and the work of volunteers. The idea, I suppose, is that accepting advertising might taint the site's neutral viewpoint, but donations can do the same thing if they're not properly walled off: just ask the US Congress. It seems to me that an automated advertising system they did not control would be, if anything, safer. And then maybe they could pay some of those volunteers, even though it would be a pity to lose some of the site's best entertainment.

With respect to advertising, it's worth noting that Phorm is under increasing pressure. Earlier this week, we had an opportunity to talk to Kent Ertegrul, CEO of Phorm, who continues to maintain that Phorm's system, because it does not store data, is more protective of privacy than today's cookie-driven Web. This may in fact be true.

Less certain is Ertegrul's belief that the system does not contravene the Regulation of Investigatory Powers Act, which lays down rules about interception. Ertegrul has some support from an informal letter from the Home Office whose reasoning seems to be that if users have consented and have been told how they can opt out, it's legal. Well, we'll see; there's a lot of debate going on about this claim and it will be interesting to hear the Information Commissioner's view. If the Home Office's interpretation is correct, it could open a lot of scope for abusive behavior that could be imposed upon users simply by adding it to the terms of service to which they theoretically consent when they sign up, and a UK equivalent of AT&T wanting to assist the government with wholesale warrantless wiretapping would have only to add it to the terms of service.

The real problem is that no one really knows how Phorm's system works. Phorm doesn't retain your IP address, but the ad servers surely have to know it when they're sending you ads. If you opt out but can still opt back in (as Ertegrul said you can), doesn't that mean you still have a cookie on your system and that your data is still passed to Phorm's system, which discards it instead of sending you ads? If that's the case, doesn't that mean you can not opt out of having your data shared? If that isn't how it works, then how does it work? I thought I understood it after talking to Ertegrul, I really did - and then someone asked me to explain how Phorm's cookie's usefulness persisted between sessions, and I wasn't sure any more. I think the Open Rights Group has it right: Phorm should publish details of how its system works for experts to scrutinize. Until Phorm does that, the misinformation Ertegrul is so upset about will continue. (More disclosure: I am on ORG's Advisory Council.)

But maybe the Home Office is on to something. Bush could solve his whole problem by getting everyone to give consent to being surveilled at the moment they take US citizenship. Surely a newborn baby's footprint is sufficient agreement?

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

March 7, 2008

Techitics

This year, 2008, may go down in history as the year geeks got politics. At etech this week I caught a few disparaging references to hippies' efforts to change politics. Which, you know, seemed kind of unfair, for two reasons. First: the 1960s generation did change an awful lot of things, though not nearly as many as they hoped. Second: a lot of those hippies are geeks now.

But still. Give a geek something that's broken and he'll itch to fix it. And one thing leads to another. Which is why on Wednesday night Lawrence Lessig explained in an hour-long keynote that got a standing ovation how he plans to fix what's wrong with Congress.

No, he's not going to run. Some 4,500 people on Facebook were trying to push him into it, and he thought about it, but preliminary research showed that his chances of beating popular Silicon Valley favorite, Jackie Speier, were approximately zero.

"I wasn't afraid of losing," he said, noting ruefully that in ten years of copyfighting he's gotten good at it. Instead, the problem was that Silicon Valley insiders would have known that no one was going to beat Jackie Speier. But outsiders would have pointed, laughed, and said, "See? The idea of Congressional reform has no legs." And on to business as usual. So, he said, counterproductive to run.

Instead, he's launching Change Congress. "Obama has taught us that it's possible to imagine many people contributing to real change."

The point, he said, will be to provide a "signalling function". Like Creative Commons, Change Congress will give candidates an easy way to show what level of reform they're willing to commit to. The system will start with three options: 1) refuse money from lobbyists and political action committees (private funding groups); 2) ban earmarks (money allocated to special projects in politicians' home states); 3) commit to public financing for campaigns. Candidates can then display the badge generated from those choices on their campaign materials.

From there, said Lessig, layer something like Emily's List on top, to help people identify candidates they're willing to support with monthly donations, thereby subsidizing reform.

Money, he admitted, isn't the entire problem. But, like drinking for an alcoholic, it's the first problem you must solve to be able to tackle any of the others with any hope of success.

In a related but not entirely similar vein, the guys who brought us They Work For You nearly four years ago are back with UNdemocracy, an attempt to provide a signalling function to the United Nations by making it easy to find out how your national representatives are voting in UN meetings. The driving force behind UNdemocracy.com is Liverpool's Julian Todd, who took the UN's URL obscurantism as a personal challenge. Since Todd doesn't fly, the new service was presented by Tom Loosemore, Stefan Mogdalinski, and Danny O'Brien, who pointed out that when you start looking at the decisions and debates you start to see strange patterns: what do the US and Israel have in common with Palau and Micronesia?

The US Congress and the British Parliament are all, they said, now well accustomed to being televised, and their behaviour has adapted to the cameras. At the UN, "They don't think they're being watched at all, so you see horse trading in a fairly raw form."

The meta-version, they believe, can be usefully and widely applied: 1) identify a broken civic institution; 2) liberate the data from said institution. There were three more ingredients, but the slide vanished too quickly. Mogdalinski noted that where in the past they have said "Ask forgiveness, not permission" - most institutions, if approached, will behave as though they own the data - he's less inclined to apologise now. After all, isn't it *our* data that's being released in the public interest?

Data isn't everything. But the Net community has come a long way since the early days, when the prevailing attitude was that technological superiority would wash away politics-as-usual by simply making an end run around any laws governments tried to pass. Yes, technology can change the equation a whole lot. For example, once PGP escaped, laws limiting the availability of strong encryption were pretty much doomed to fail (though not without a lot of back-and-forth before it became official). Similarly, in the copyright wars it's clear that copyrighted material will continue to leak out no matter how hard they try to protect it.

But those are pretty limited bits of politics. Technology can't make such an easy end run around laws that keep shrinking the public domain. Nor can it by itself solve policies that deny the reality of global climate change or that, in one of Lessig's examples, watered down government recommendations on daily caloric intake from a 10 percent limit on sugar to 25 percent. Or that, in another of his examples, kept then-Vice-President Al Gore from succeeding with a seventh part to the 1996 Communications Act deregulating ADSL and cable - because without anything to regulate, what would Congressmen do without the funds those lobbyists were sending their way? Hence, the new approach.

"Technology," Lessig said, "doesn't solve any problems. But it is the only tool we have to leverage power to effect change."

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

February 22, 2008

Strikeout

There is a certain kind of mentality that is actually proud of not understanding computers, as if there were something honorable about saying grandly, "Oh, I leave all that to my children."

Outside of computing, only television gets so many people boasting of their ignorance. Do we boast how few books we read? Do we trumpet our ignorance of other practical skills, like balancing a cheque book, cooking, or choosing wine? When someone suggests we get dressed in the morning do we say proudly, "I don't know how"?

There is so much insanity coming out of the British government on the Internet/computing front at the moment that the only possible conclusion is that the government is made up entirely of people who are engaged in a sort of reverse pissing contest with each other: I can compute less than you can, and see? here's a really dumb proposal to prove it.

How else can we explain yesterday's news that the government is determined to proceed with ContactPoint even though the report it commissioned and paid for from Deloitte warns that the risk of storing the personal details of every British child under 16 can only be managed, not eliminated? Lately, it seems that there's news of a major data breach every week. But the present government is like a batch of 20-year-olds who think that mortality can't happen to them.

Or today's news that the Department of Culture, Media, and Sport has launched its proposals for "Creative Britain", and among them is a very clear diktat to ISPs: deal with file-sharing voluntarily or we'll make you do it. By April 2009. This bit of extortion nestles in the middle of a bunch of other stuff about educating schoolchildren about the value of intellectual property. Dare we say: if there were one thing you could possibly do to ensure that kids sneer at IP, it would be to teach them about it in school.

The proposals are vague in the extreme about what kind of regulation the DCMS would accept as sufficient. Despite the leaks of last week, culture secretary Andy Burnham has told the Financial Times that the "three strikes" idea was never in the paper. As outlined by Open Rights Group executive director Becky Hogge in New Statesman, "three strikes" would mean that all Internet users would be tracked by IP address and warned by letter if they are caught uploading copyrighted content. After three letters, they would be disconnected. As Hogge says (disclosure: I am on the ORG advisory board), the punishment will fall equally on innocent bystanders who happen to share the same house. Worse, it turns ISPs into a squad of private police for a historically rapacious industry.

Charles Arthur, writing in yesterday's Guardian, presented the British Phonographic Industry's case about why the three strikes idea isn't necessarily completely awful: it's better than being sued. (These are our choices?) ISPs, of course, hate the idea: this is an industry with nanoscale margins. Who bears the liability if someone is disconnected and starts to complain? What if they sue?

We'll say it again: if the entertainment industries really want to stop file-sharing, they need to negotiate changed business models and create a legitimate market. Many people would be willing to pay a reasonable price to download TV shows and music if they could get in return reliable, fast, advertising-free, DRM-free downloads at or soon after the time of the initial release. The longer the present situation continues the more entrenched the habit of unauthorized file-sharing will become and the harder it will be to divert people to the legitimate market that eventually must be established.

But the key damning bit in Arthur's article (disclosure: he is my editor at the paper) is the BPI's admission that they cannot actually say that ending file-sharing would make sales grow. The best the BPI spokesman could come up with is, "It would send out the message that copyright is to be respected, that creative industries are to be respected and paid for."

Actually, what would really do that is a more balanced copyright law. Right now, the law is so far from what most people expect it to be - or rationally think it should be - that it is breeding contempt for itself. And it is about to get worse: term extension is back on the agenda. The 2006 Gowers Review recommended against it, but on February 14, Irish EU Commissioner Charlie McCreevy (previously: champion of software patents) announced his intention to propose extending performers' copyright in sound recordings from the current 50-year term to 95 years. The plan seems to go something like this: whisk it past the Commission in the next two months. Then the French presidency starts and whee! new law! The UK can then say its hands are tied.

That change makes no difference to British ISPs, however, who are now under the gun to come up with some scheme to keep the government from clomping all over them. Or to the kids who are going to be tracked from cradle to alcopop by unique identity number. Maybe the first target of the government computing literacy programs should be...the government.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

February 8, 2008

If you have ID cards, drink alcohol


One of the key identifiers of an addiction is that indulgence in it persists long after all the reasons for doing it have turned from good to bad.

A sobered-up Scottish alcoholic once told me the following exemplar of alcoholic thinking. A professor is lecturing to a class of alcoholics on the evils of drinking. To make his point, he takes two glasses, one filled with water, the other with alcohol. Into each glass he drops a live worm. The worm in the glass of water lives; the worm in the glass of alcohol dies.

"What," the professor asks, "can we learn from this?"

One of the alcoholics raises his hand. "If you have worms, drink alcohol."

In alcoholic thinking, of course, there is no circumstance in which the answer isn't "Drink alcohol."

So, too, with the ID card. The purpose as mooted between 2001 and 2004 was preventing benefit fraud and making life more convenient for UK citizens and residents. The plan promised perfect identification via the combination of a clean database (the National Identity Register) and biometrics (fingerprints and iris scans). The consultation document made a show of suggesting the cheaper alternative of a paper card with minimal data collection, but it was clear what they really wanted: the big, fancy stuff that would make them the envy of other major governments.

Opponents warned of the UK's poor track record with large IT projects, the privacy-invasiveness, and the huge amount such a system was likely to cost. Government estimates, now at £5.4 billion, have been slowly rising to meet Privacy International's original estimate of £6 billion.

By 2006, when the necessary legislation was passed, the government had abandoned the friendly "entitlement card" language and was calling it a national ID card. By then, also, the case had changed: less entitlement, more crime prevention.

It's 2008, and the wheels seem to be coming off. The government's original contention that the population really wanted ID cards has been shredded by the leaked documents of the last few weeks. In these, it's clear that the government knows the only way it will get people to adopt the ID card is by coercion, starting with the groups who are least able to protest by refusal: young people and foreigners.

Almost every element deemed important in the original proposal is now gone - the clean database populated through interviews and careful documentation (now the repurposed Department of Work and Pensions database); the iris scans (discarded); probably the fingerprints (too expensive except for foreigners). The one element that for sure remains is the one the government denied from the start: compulsion.

The government was always open about its intention for non-registration to become increasingly uncomfortable and eventually to make registration compulsory. But if the card is coming at least two years later than they intended, compulsion is ahead of schedule.

Of course, we've always maintained that the key to the project is the database, not the card. It's an indicator of just how much of a mess the project is that the Register, the heart of the system, was first to be scaled back because of its infeasibility. (I mean, really, guys. Interview and background-check the documentation of every one of 60 million people in any sort of reasonable time scale?)

The project is even fading in popularity with the very vendors who want to make money supplying the IT for it. How can you specify a system whose stated goals keep changing?

The late humorist and playwright Jean Kerr (probably now best known for her collection of pieces about raising five boys with her drama critic husband in a wacky old house in Larchmont, NY, Please Don't Eat the Daisies) once wrote a piece about the trials and tribulations of slogging through the out-of-town openings of one of her plays. In these pre-Broadway trial runs, lines get cut and revised; performances get reshaped and tightened. If the play is in trouble, the playwright gets no sleep for weeks. And then, she wrote, one day you look up at the stage, and, yes, the play is much better, and the performances are much better, and the audience seems to be having a good time. And yet - the play you're seeing on the stage isn't the play you had in mind at all.

It's one thing to reach that point in a project and retain enough perspective to be honest about it. It may be bad - but it isn't insane - to say, "Well, this play isn't what I had in mind, but you know, the audience is having a good time, and it will pay me enough to go away and try again."

But if you reach the point where the project you're pushing ahead clearly isn't any more the project you had in mind and sold hard, and yet you continue to pretend to yourself and everyone else that it is - then you have the kind of insanity problem where you're eating worms in order to prove you're not an alcoholic.

The honorable thing for the British government to do now is say, "Well, folks, we were wrong. Our opponents were right: the system we had in mind is too complicated, too expensive, and too unpopular because of its privacy-invasiveness. We will think again." Apparently they're so far gone that eating worms looks more sensible.

January 18, 2008

Harmony, where is thy sting?

On the Net, John Perry Barlow observed long ago, everything is local and everything is global, but nothing is national. It's one of those pat summations that sometimes is actually right. The EU, in the interests of competing successfully with the very large market that is the US, wants to harmonize the national laws that apply to content online.

They have a point. Today's market practices were created while the intangible products of human ingenuity still had to be fixed in a physical medium. It was logical for the publishers and distributors of said media to carve up the world into national territories. But today anyone trying to, say, put a song in an online store, or create a legal TV download service has to deal with a thicket of national collection societies and licensing authorities.

Where there's a problem there's a consultation document, and so there is in this case: the EU is giving us until February 29 (leap year!) to tell them what we think (PDF).

The biggest flaw in the consultation document is that the authors (who needed a good copy editor) seem to have bought wholesale the 2005 thinking of rightsholders (whom they call "right holders"). Fully a third of the consultation is on digital rights management: should it be interoperable, should there be a dispute resolution process, should SMEs have non-discriminatory access to these systems, should EULAs be easier to read?

Well, sure. But the consultation seems to assume that DRM is a) desirable and b) an endemic practice. We have long argued that it's not desirable; DRM is profoundly anti-consumer. Meanwhile, the industry is clearly fulfilling Naxos founder Klaus Heymann's April 2007 prophecy that DRM would be gone from online music within two years. DRM is far less of an issue now than it was in 2006, when the original consultation was launched. In fact, though, these questions seem to have been written less to aid consumers than to limit the monopoly power of iTunes.

That said, DRM will continue to be embedded in some hardware devices, most especially in the form of HDCP, a form of copy protection being built, invisibly to consumers until it gets in their way, into TV sets and other home video equipment. Unfortunately, because the consultation is focused on "Creative Content Online", such broader uses of DRM aren't included.

However, because of this and because some live streaming services similarly use DRM to prevent consumers from keeping copies of their broadcasts (and probably more will in future as Internet broadcasting becomes more widespread), public interest limitations on how DRM can be used seem like a wise idea. The problem with both DRM and EULAs is that the user has no ability to negotiate terms. The consultation leaves out an important consumer consideration: what should happen to content a consumer pays for and downloads that's protected with DRM if the service that sold it closes down? So far, subscribers lose it all; this is clearly unfair to consumers.

The questions regarding multi-territory licensing are far more complicated, and I suspect answers to those depend largely on whether you're someone trying to clear rights for reuse, someone trying to protect your control over your latest blockbuster's markets, or someone trying to make a living as a creative person. The first of those clearly wants to buy one license rather than dozens. The second wants to sell dozens of licenses rather than one (unless it's for a really BIG sum of money). The third, who is probably part of the "Long Tail" mentioned in the question, may be very suspicious of any regime that turns everything he created before 2005 into "back catalogue works" that are subject to a single multi-territory license. Science fiction authors, for example, have long made significant parts of their income by selling their out-of-print back titles for reprint. An old shot in a photographer's long tail may be of no value for 30 years – until suddenly the subject emerges as a Presidential candidate. Any regime that is adopted must be flexible enough to recognize that copyrighted works have values that fluctuate unpredictably over time.

The final set of questions has to do with the law and piracy. Should we all follow France's lead and require ISPs to throw users offline if they're caught file-sharing more than three times? We have said all along that the best antidote to unauthorized copying is to make it easy for people to engage in authorized copying. If you knew, for example, that you could reliably watch the latest episode of The Big Bang Theory (if there ever is one) 24 hours after the US broadcast, would you bother chasing around torrent sites looking for a download that might or might not be complete? Technically, it's nonsense to think that ISPs can reliably distinguish an unauthorized download of copyrighted material from an authorized one; filtering cannot be the answer, no matter how much AT&T wants to kill itself trying. We would also remind the EU of the famed comment of another Old Netizen, John Gilmore: "The Internet perceives censorship as damage, and routes around it."

But of course no consultation can address the real problem, which isn't how to protect copyright online: it's how to encourage creators.

January 4, 2008

If God had meant us to vote...

It seems like a couple of years now that people in the UK have been asking me, "Do you think Hillary or Giuliani is going to win?" Sometimes they mention Obama. But it's like the meme of a few years ago about Arnold Schwarzenegger becoming president: the famous name dominates the coverage beyond all reason.

When the Schwarzenegger thing came up, I tried patiently to explain about the Constitution: to be elected president, you must have been born a US citizen. I assume the Founding Fathers, even without the benefit of having seen George Bernard Shaw's The Apple Cart, were worried that some English king would come over to the US, get himself naturalized, win the president's job in one of those democratic elections, and then push the country back to colonialism.

"They'll amend the Constitution," people said.

Well, not quite. It takes an incredible amount of effort to amend the Constitution: the prospective amendment has to pass both legislative houses by a two-thirds majority, and then three-quarters of the states. Often, there's a time limit of seven years, which is what eventually scuppered the Equal Rights Amendment. (Apparently the fastest-ever passage of an amendment, 107 days, was not getting Prohibition repealed but lowering the voting age to 18 during the Vietnam War.) While there was, apparently, an attempt in 2004 to introduce an amendment allowing foreign-born, naturalized citizens to become president, it's hard for me to believe even Schwarzenegger thinks he has a chance in his lifetime; he's 60 this year. Certainly, it's not a possibility people inside the US seem to take seriously.

That so many people outside the US think of the chief presidential candidates as Hillary, Obama, Giuliani, and Schwarzenegger tells you how little of the US's real politics seeps out to other countries. Fantasy politics all you want, sure: as many British friends as American ones bought into The West Wing's fictional White House.

Very fictional: Josiah Bartlet might have managed to get elected president despite being a Catholic (Kennedy) and having multiple sclerosis, but he'd never be able to overcome the twin disadvantages of being a Nobel Laureate in economics and, above all, being SHORT. Martin Sheen is 5 foot 7. You have to go all the way back to 1900 and William McKinley to find a president that small, and even then that was short by historical standards. In 1988, Michael Dukakis lost the election when moving around the debate podiums to shake hands with George H.W. Bush revealed that he barely came up to Bush's shoulder. Over and out.

Most reports guesstimate Obama at six feet, but Clinton reportedly clocks in at 5 foot 8 and a half – tallish for a woman, maybe, but not for a presidential candidate. Giuliani claims 5 foot 10 (though some observers claim he's shorter). And John Edwards comes in at 5 foot 10.

John Edwards? Things look very different from inside the US. Here, although Clinton, Obama, and Giuliani are still getting most of the headlines, there are plenty of other candidates to pick from even just within the Democratic party, all of whom look more like a US president usually looks: white, male, and middle-aged. Giuliani's best moment may have been when, as New Yorkers gleefully keep saying, his every sentence was summed up by Democratic hopeful Senator Joseph R. Biden as "a noun, a verb, and 9/11".

Yesterday's Iowa caucus – the first contest of the 2008 presidential election – was the first real data we've had. And reality started to hit: Giuliani polled 4 percent; the Republican front runner is former Arkansas governor Mike Huckabee, who came out of nowhere in the last few weeks. Clinton polled 30 percent, which sounds respectable until you find out she came third, narrowly behind Edwards. Obama led with 37 percent. Lots more to go there.

Some more notes for the coming weeks:

- Giuliani was more or less hated in New York while he was mayor.

- Clinton, like her husband, was politically hated when she was First Lady, despite her exceptional star-name fundraising ability.

- Huckabee crossed WGA picket lines to appear on Jay Leno's Tonight Show on January 3 (without a deal with striking union writers, Huckabee was the best Leno could do for a guest.) When asked, he said he thought Leno had a deal with the WGA. No. "Oh." Oops.

- Vice-president Dick Cheney has vehemently denied all possibility that he will run. "And if elected I will not serve."

- Lots of press speculation that New York's current mayor, the megawealthy Michael Bloomberg, will enter as an independent.

- No matter who runs, from the primaries onwards the technology of voting is going to be an unholy mess and doubtless, in some districts, a deciding factor.

I can't guess November's nominees, but I don't expect to see Clinton among them unless it's as someone's vice-president (and who's going to want Bill hanging around kibitzing?). Clinton's trailing Edwards, even if it's only the first state, suggests the big show will feature dismal, "safe" choices.

Cue Utah Phillips: "If God had meant us to vote, he'd have given us candidates."

November 23, 2007

Road block

There are many ways for a computer system to fail. This week's disclosure that Her Majesty's Revenue and Customs has played lost-in-the-post with two CDs holding the nation's Child Benefit data is one of the stranger ones. The Child Benefit database holds names, addresses, identifying numbers, and often bank details for 25 million people - every UK family with a child under 16. The National Audit Office requested a subset for its routine audit; the HMRC sent the entire database off by TNT post.

There are so many things wrong with this picture that it would take a village of late-night talk show hosts to make fun of them all. But the bottom line is this: when the system was developed no one included privacy or security in the specification or thought about the fundamental change in the nature of information when paper-based records are transmogrified into electronic data. The access limitations inherent in physical storage media must be painstakingly recreated in computer systems or they do not exist. The problem with security is it tends to be inconvenient.

With paper records, the more data you provide the more expensive and time-consuming it is. With computer records, the more data you provide the cheaper and quicker it is. The NAO's file of email relating to the incident (PDF) makes this clear. What the NAO wanted (so it could check that the right people got the right benefit payments): national insurance numbers, names, and benefit numbers. What it got: everything. If the discs hadn't gotten lost, we would never have known.
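
The NAO email file makes the asymmetry concrete: with electronic records, handing over only the requested fields is a trivial projection, not an expensive extra step. A minimal sketch of the idea - the record layout and field names here are invented for illustration, not taken from any real HMRC system:

```python
# Purely illustrative records: a mix of fields the auditor asked for
# and sensitive fields (address, bank details) it did not.
records = [
    {"ni_number": "QQ123456C", "name": "A. Parent", "benefit_number": "CB-001",
     "address": "1 High St", "bank_sort_code": "00-00-00", "bank_account": "12345678"},
    {"ni_number": "QQ654321C", "name": "B. Parent", "benefit_number": "CB-002",
     "address": "2 Low Rd", "bank_sort_code": "11-11-11", "bank_account": "87654321"},
]

# The three fields the auditor requested.
REQUESTED = ("ni_number", "name", "benefit_number")

def redact_for_audit(rows, fields=REQUESTED):
    """Project each record down to only the requested fields."""
    return [{k: row[k] for k in fields} for row in rows]

subset = redact_for_audit(records)
print(subset[0])  # no address or bank details survive the projection
```

Unlike photocopying a subset of paper files, stripping every record down to three fields before export costs essentially nothing; the failure was one of specification and process, not of technical difficulty.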

Ironically enough, this week in London also saw at least three conferences on various aspects of managing digital identity: Digital Identity Forum, A Fine Balance, and Identity Matters. All these events featured the kinds of experts the UK government has been ignoring in its mad rush to create and collect more and more data. The workshop on road pricing and transport systems at the second of them, however, was particularly instructive. Led by science advisor Brian Collins, the most notable thing about this workshop is that the 15 or 20 participants couldn't agree on a single aspect of such a system.

Would it run on GPS or GSM/GPRS? Who or what is charged, the car or the driver? Do all roads cost the same or do we use differential pricing to push traffic onto less crowded routes? Most important, is the goal to raise revenue, reduce congestion, protect the environment, or rebalance the cost of motoring so the people who drive the most pay the most? The more purposes the system is intended to serve, the more complicated and expensive it will become, and the less likely it is to answer any of those goals successfully. This point has of course also been made about the National ID card by the same sort of people who have warned about the security issues inherent in large databases such as the Child Benefit database. But it's clearer when you start talking about something as limited as road charging.

For example: if you want to tag the car you would probably choose a dashboard-top box that uses GPS data to track the car's location. It will have to store and communicate location data to some kind of central server, which will use it to create a bill. The data will have to be stored for at least a few billing cycles in case of disputes. Security services and insurers alike would love to have copies. On the other hand, if you want to tag the driver it might be simpler just to tie the whole thing to a mobile phone. The phone networks are already set up to do hand-off between nodes, and tracking the driver might also let you charge passengers, or might let you give full cars a discount.
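
To see why either design implies a central journey database, consider a minimal sketch of per-segment billing; the segments, tariffs, and record layout are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class SegmentRecord:
    """One location-derived record: which road segment a vehicle used."""
    vehicle_id: str
    road_segment: str   # derived from GPS fixes by the dashboard box
    miles: float

# Differential pricing: congested segments cost more per mile (invented figures).
TARIFF_PENCE_PER_MILE = {"M25-J10-J11": 15.0, "A3-rural": 2.0}

def bill(records):
    """Sum per-segment charges for one billing cycle, in pence."""
    return sum(TARIFF_PENCE_PER_MILE[r.road_segment] * r.miles for r in records)

journey = [
    SegmentRecord("AB12CDE", "M25-J10-J11", 4.0),
    SegmentRecord("AB12CDE", "A3-rural", 10.0),
]
print(f"{bill(journey):.1f}p")  # 4*15 + 10*2 = 80.0p
```

Whatever the tariff structure, the charge can only be computed - and later disputed - if those per-segment location records are retained somewhere, which is exactly the kind of central data store that worries privacy advocates.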

The problem is that the discussion is coming from the wrong angle. We should not be saying, "Here is a clever technological idea. Oh, look, it makes data! What shall we do with it?" We should be defining the problem and considering alternative solutions. The people who drive most already pay most via the fuel pump. If we want people to drive less, maybe we should improve public transport instead. If we're trying to reduce congestion, getting employers to be more flexible about working hours and telecommuting would be cheaper, provide greater returns, and, crucially for this discussion, not create a large database system that can be used to track the population's movements.

(Besides, said one of the workshop's participants: "We live with the congestion and are hugely productive. So why tamper with it?")

It is characteristic of our age that the favored solution is the one that creates the most data and the biggest privacy risk. No one in the cluster of organisations opposing the ID card - No2ID, Privacy International, Foundation for Information Policy Research, or Open Rights Group - wanted an incident like this week's to happen. But it is exactly what they have been warning about: large data stores carry large risks that are poorly understood, and it is not enough for politicians to wave their hands and say we can trust them. Information may want to be free, but data want to leak.

August 10, 2007

Wall of sheep

Last week at Defcon, my IM ID - and just enough of the password to show they knew what it was - appeared on the Wall of Sheep. This screen projection of the user IDs, partial passwords, and activities captured by the installed sniffer inevitably runs throughout the conference.

It's not that I forgot the sniffer was there, or that there's a risk in logging onto an IM client unencrypted over a Wi-Fi hot spot (at a hacker conference!); it's that I had forgotten the client was set to log in automatically whenever it could. Easily done.

It's strange to remember now that once upon a time this crowd – or at least, type of crowd – was considered the last word in electronic evil. In 1995 the capture of Kevin Mitnick made headlines everywhere because he was supposed to be the baddest hacker ever. Yet other than gaining online access and free phone calls, Mitnick is not known to have ever profited from his crimes – he didn't sell copied source code to its owners' competitors, and he didn't rob bank accounts. We would be grateful – really grateful – if Mitnick were the worst thing we had to deal with online now.

Last night, the House of Lords Science and Technology Committee released its report on Personal Internet Security. It makes grim reading even for someone who's just been to Defcon and Black Hat. The various figures the report quotes, assembled after what seems to have been an excellent information-gathering process (that means, they name-check a lot of people I know and would have picked for them to talk to), are pretty depressing. Phishing has cost US banks around $2 billion, and although the UK lags well behind – £33.5 million in bank fraud in 2006 – here, too, it's on the rise. Team Cymru found (PDF) that on IRC channels dedicated to the underground you could buy credit card account information for between $1 (basic information on a US account) and $50 (full information for a UK account); $1,599,335.80 worth of accounts was for sale on a single IRC channel in one day. Those are among the few things that can be accurately measured: the police don't keep figures breaking out crimes committed electronically; there are no good figures on the scale of identity theft (interesting, since this is one of the things the government has claimed the ID card will guard against); and no one's really sure how many personal computers are infected with some form of botnet software – and available for control at four cents each.

The House of Lords recommendations could be summed up as "the government needs to do more". Most of them are unexceptional: fund more research into IT security, keep better statistics. Some measures will be welcomed by a lot of us: make banks responsible for losses resulting from electronic fraud (instead of allowing them to shift the liability onto consumers and merchants); criminalize the sale or purchase of botnet "services" and require notification of data breaches. (Now I know someone is going to want to say, "If you outlaw botnets, only outlaws will have botnets", but honestly, what legitimate uses are there for botnets? The trick is in defining them to include zombie PCs generating spam and exclude PCs intentionally joined to grids folding proteins.)

Streamlined Web-based reporting for "e-crime" could only be a good thing. Since the National High-Tech Crime Unit was folded into the Serious Organised Crime Agency there is no easy way for a member of the public to report online crime. Bringing in a central police e-crime unit would also help. The various kite mark schemes – for secure Internet services and so on – seem harmless but irrelevant.

The more contentious recommendations revolve around the idea that we the people need to be protected, and that it's no longer realistic to lay the burden of Internet security on individual computer users. I've said for years that ISPs should do more to stop spam (or "bad traffic") from exiting their systems; this report agrees with that idea. There will likely be a lot of industry ink spilled over the idea of making hardware and software vendors liable if "negligence can be demonstrated". What does "vendor" mean in the context of the Internet, where people decide to download software on a whim? What does it mean for open source? If I buy a copy of Red Hat Linux with a year's software updates, that company's position as a vendor is clear enough. But if I download Ubuntu and install it myself?

Finally, you have to twitch a bit when you read, "This may well require reduced adherence to the 'end-to-end' principle." That is the principle that holds that the network should carry only traffic, and that services and applications sit at the end points. The Internet's many experiments and innovations are due to that principle.

The report's basic claim is this: criminals are increasingly rampant and increasingly rapacious on the Internet. If this continues, people will catastrophically lose confidence in the Internet. So we must restore that confidence by making the Internet safer. Couldn't we just make it safer by telling people to stop using it? That's what people tell you to do when you're going to Defcon.

July 27, 2007

There ain't no such thing as a free Benidorm

This has been the week for reminders that the border between real life and cyberspace is a permeable blood-brain barrier.

On Wednesday, Linden Lab announced that it was banning gambling in Second Life. The resentment expressed by some SL residents is understandable but naive. We're not at the beginning of the online world any more; Second Life is going through the same reformation to take account of national laws as Usenet and the Web did before it.

Also this week, MySpace deleted the profiles of 29,000 American users identified as sex offenders. That sounds like a lot, but it's a tiny percentage of MySpace's 180 million profiles. None of them, be it noted, are Canadian.

There's no question that gambling in Second Life spills over into the real world. Linden dollars, the currency used in-world, have active exchange rates, like any other currency, currently running about L$270 to the US dollar. (When I was writing about a virtual technology show, one of my interviewees was horrified that my avatar didn't have any distinctive clothing; she was and is dressed in the free outfit you are issued when you join. He insisted on giving me L$1,000 to take her shopping. I solemnly reported the incident to my commissioning editor, who felt this wasn't sufficiently corrupt to worry about: US$3.75! In-world, however, that could buy her several cars.) Therefore: the fact that the wagering takes place online in a simulated casino with pretty animated decorations changes nothing. There is no meaningful difference between craps on an island in Second Life and poker on an official Web-based betting site. If both sites offer betting on real-life sporting events, there's even less difference.

But the Web site will, these days, have invested considerable time and money in setting up its business. Gaming, even outside the US, is quite difficult to get into: licenses are hard to get, and without one banks won't touch you. Compared to that, the $3,800 and 12 to 14 hours a day Brighton's Anthony Smith told Information Week he'd invested in building his SL Casino World is risibly small. You have to conclude that there are only two possibilities. Either Smith knew nothing about the gaming business - if he had, he'd have known that the US has repeatedly cracked down on online gambling over the last ten years and that ultimately US companies will be forced to live within US law, and he'd also have known how hard and how expensive it is to set up an online gambling operation even in Europe. Or he did know all those things and thought he'd found a loophole he could exploit to avoid all the red tape and regulation and build a gaming business on the cheap.

I have no personal interest in gaming; risking real money on the chance draw of a card or throw of dice seems to me a ridiculous waste of the time it took to earn it. But any time a service involves real money - whether it sells an experience (gaming), a service, or a retail product - governments are going to take an interest once the sums reach a certain size. Not only that: people want them involved; people want protection from rip-off artists.

The MySpace decision, however, is completely different. Child abuse is, rightly, illegal everywhere. Child pornography is, more controversially, illegal just about everywhere. But I am not aware of any laws that ban sex offenders from using Web sites, even if those Web sites are social networks. Of course, in the moral panic following the MySpace announcement, someone is proposing such a law. The MySpace announcement sounds more like corporate fear (since the site is now owned by News Corporation) than rational response. There is a legitimate subject for public and legislative debate here: how much do we want to cut convicted sex offenders out of normal social interaction? And a question for scientists: will greater isolation and alienation be effective strategies to keep them from reoffending? And, I suppose, a question for database experts: how likely is it that those 29,000 profiles all belonged to correctly identified, previously convicted sex offenders? But those questions have not been discussed. Still, this problem, at least in regards to MySpace, may solve itself: if parents become better able to track their kids' MySpace activities, all but the youngest kids will surely abandon it in favour of sites that afford them greater latitude and privacy.

A dozen years ago, John Perry Barlow (in)famously argued that national governments had no place in cyberspace. It was the most hyperbolic demonstration of what I call the "Benidorm syndrome": every summer thousands of holidaymakers descend on Benidorm, in Spain, and behave in outrageous and sometimes lawless ways that they would never dare indulge in at home in the belief that since they are far away from their normal lives there are no consequences. (Rinse and repeat for many other tourist locations worldwide, I'm sure.) It seems to me only logical that existing laws apply to behaviour in cyberspace. What we have to guard against is deforming cyberspace to conform to laws that don't exist.


Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

July 6, 2007

Born digital

Under one of my bookcases there is a box containing 40 or 50 5.25-inch floppy disks next to an old floppy drive of the same size. The disks were created in SuperScripsit in the early 1980s, and require an emulator that pretends my Core 2 Duo is a TRS-80 Model III.

If, like me, you have had a computer for any length of time, you, too, have stowed somewhere a batch of old files that you save because they are or were important to you but that you're not sure you could actually read, though you keep meaning to plug that old drive in and find out. But the Domesday Book, drafted in 1085, is still perfectly readable. In fact, it's more readable than the 1980s digital Domesday Book, which was unreadable only 15 years after its creation because the technology it was stored on had become outmoded.

The average life of an electronic document before it becomes obsolete is seven years. And that's if it survives that long. Paper can last centuries – and the National Archives, which holds 900 years of Britain's records, has to think in centuries.

This week, the National Archives announced it was teaming up with Microsoft to ensure that the last decade or two of government archives do not become a black hole in history.

The problem of preserving access to today's digital documents is not newly discovered. Digital preservation and archiving were on the list of topics of interest in 1997, when the Foundation for Information Policy Research was founded. Even before that, NASA had discovered the problem, in connection with the vast amounts of data collected at taxpayer expense by the various space missions. Librarians have known all along that the many format changes of the digital age posed far greater problems than deciphering an unfamiliar language chiseled into a chunk of stone.

But it takes a while for non-technical people to understand how complex a problem it really is. Most people, Natalie Ceeney, chief executive of the National Archives, said on Tuesday, think all you have to do is make back-ups. But for an archivist this isn't true, even for the simple case of, say, a departmental letter written in the early 1980s in WordStar. The National Archives wants not only to preserve the actual text of the letter but its look, feel, and functionality. To do that, you need to be able to open the document in the software in which it was originally created – which means having a machine you can run that software on. Lather, rinse, and repeat for any number of formerly common but now obsolete systems. The National Archives estimates it has 580TB of data in obsolete formats. And more new formats are being invented every day: email, Web pages, instant messages, telephone text messages, databases, ministers' blogs, internal wikis… and as these begin to interact without human intervention, that will be a whole new level of complication.
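Before an archivist can open a document in its original software, the archive first has to work out what format the file is in at all. A minimal sketch of that first step, assuming nothing beyond a handful of well-known "magic number" signatures (real tools, such as the National Archives' own PRONOM registry and its DROID identifier, use far larger signature databases; the entries below are illustrative only):

```python
# Hypothetical mini-registry mapping a file's leading bytes to a format name.
SIGNATURES = {
    b"%PDF": "PDF document",
    b"PK\x03\x04": "ZIP container (e.g. Office Open XML)",
    b"\xd0\xcf\x11\xe0": "OLE2 container (e.g. legacy Word/Excel)",
    b"GIF8": "GIF image",
}

def identify(data):
    """Return a best-guess format name for the first bytes of a file."""
    for magic, name in SIGNATURES.items():
        if data.startswith(magic):
            return name
    return "unknown format"  # the archivist's nightmare case

if __name__ == "__main__":
    print(identify(b"%PDF-1.4 ..."))         # a recognizable format
    print(identify(b"\x00\x01oldwordproc"))  # an orphaned one
```

An "unknown format" result is exactly the black hole Ceeney describes: the bits survive, but nothing can be said about how to render them.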

"We knew in the paper world what to keep," Ceeney said. "In the digital world, it's harder to know. But if we tried to keep everything we'd be spending the entire government budget on servers."

So for once Microsoft is looking like a good guy in providing the National Archives with Virtual PC 2007, which (it says here) combines earlier versions of Windows and Office in order to make sure that all government documents that were created using Microsoft products can be opened and read. Naturally, that isn't everything; but it's a good start. Gordon Frazer, Microsoft's UK managing director, promised open formats (or at least, Open XML) for the future. The whole mess is part of a four-year Europe-wide project called Planets.

Digital storage is surprisingly expensive compared to, say, books or film. A study reported by the head of preservation for the Swedish national archives shows that digital can cost up to eight times as much (PDF, see p4) as the same text on paper. But there is a valuable trade-off: the digital version can be easily accessed and searched by far more people. The National Archives' Web site had 66 million downloads in 2006, compared to the 250,000 visitors to its physical premises in Kew.

Listening to this discussion live, you longed to say, "Well, just print it all out, then." But even if you decided to waive the requirements for original look, feel, and functionality, not everything could be printed out anyway. (Plus, the National Archives casually mentions that its current collection of government papers is 175 kilometres long already.) The most obvious case in point is video evidence, now being kept by police in huge amounts – and, in cases of unsolved crimes or people who have been sentenced for serious crimes, for long periods. Can't be printed. But even text-based government documents: when these were created on paper you saved the paper. The documents of the last 20 years were born digital. Paper is no longer the original but the copy. The National Archives is in the business of preserving originals.

Nor, of course, does it work to say, "Let the Internet Archive take care of it": too much of the information is not published on the Web but held in internal government systems, from where it will be due to emerge in a few decades under Britain's 30-year rule. Hopefully we'll know before then that this initiative has been successful.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

January 26, 2007

Vote early, vote often...

It is a truth that ought to be universally acknowledged that the more you know about computer security the less you are in favor of electronic voting. We thought – optimists that we are – that the UK had abandoned the idea after all the reports of glitches from the US and the rather indeterminate results of a couple of small pilots a few years ago. But no: there are plans for further trials for the local elections in May.

It's good news, therefore, that London is to play host to two upcoming events to point out all the reasons why we should be cautious. The first, February 6, is a screening of the HBO movie Hacking Democracy, a sort of documentary thriller. The second, February 8, is a conference bringing together experts from several countries, most prominently Rebecca Mercuri, who was practically the first person to get seriously interested in the security problems surrounding electronic voting. Both events are being sponsored by the Open Rights Group and the Foundation for Information Policy Research, and will be held at University College London. Here is further information and links to reserve seats. Go, if you can. It's free.

Hacking Democracy (a popular download) tells the story of <a href="http://www.blackboxvoting.org">Bev Harris</a> and Andy Stephenson. Harris was minding her own business in Seattle in 2000 when the hanging chad hit the Supreme Court. She began to get interested in researching voting troubles, and then one day found online a copy of the software that runs the voting machines provided by Diebold, one of the two leading manufacturers of such things. (And, by the way, the company whose CEO vowed to deliver Ohio to Bush.) The movie follows this story and beyond, as Harris and Stephenson dumpster-dive, query election officials, and document a steady stream of glitches that all add up to the same point: electronic voting is not secure enough to protect democracy against fraud.

Harris and Stephenson are not, of course, the only people working in this area. Among computer experts such as Mercuri, David Chaum, David Dill, Deirdre Mulligan, Avi Rubin, and Peter Neumann, there's never been any question that there is a giant issue here. Much ink has been spilled over the question of how votes are recorded; less over the technology used by the voter to choose preferences. One faction – primarily but not solely vendors of electronic voting equipment – sees nothing wrong with Direct Recording Electronic, machines that accept voter input all day and then just spit out tallies. The other group argues that you can't trust a computer to keep accurate counts, and that you have to have some way for voters to check that the vote they thought they cast is the vote that was actually recorded. A number of different schemes have been proposed for this, but the idea that's catching on across the US (and was originally promoted by Mercuri) is adding a printer that spits out a printed ballot the voter can see for verification. That way, if an audit is necessary there is a way to actually conduct one. Otherwise all you get is the machine telling you the same number over again, like a kid who has the correct answer to his math homework but mysteriously can't show you how he worked the problem.
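The audit argument boils down to one comparison: the machine's reported tally is trustworthy only insofar as an independent hand recount of the voter-verified paper produces the same numbers. A toy sketch of that check (candidate names and counts invented for illustration):

```python
from collections import Counter

def audit(electronic_tally, paper_ballots):
    """Compare a machine-reported tally against a hand recount of the
    voter-verified paper ballots. Without the paper there is nothing
    independent to compare against: the machine can only repeat its
    own number."""
    recount = Counter(paper_ballots)
    return dict(recount) == electronic_tally

# Invented example: the machine reports one count...
machine_says = {"Smith": 3, "Jones": 2}
# ...and the ballots the voters actually verified say another.
ballots = ["Smith", "Jones", "Jones", "Jones", "Smith"]
print(audit(machine_says, ballots))  # False: the audit catches the discrepancy
```

With a Direct Recording Electronic machine, `paper_ballots` simply doesn't exist; the audit function has nothing to run on.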

This is where it's difficult to understand the appeal of such systems in the UK. Americans may be incredulous – I was – but a British voter goes to the polls and votes on a small square of paper with a stubby little pencil. Everything is counted by hand. The UK can do this because all elections are very, very simple. There is only one election – local council, Parliament – at a time, and you vote for one of only a few candidates. In the US, where a lemon is the size of an orange, an orange is the size of a grapefruit, and a grapefruit is the size of a soccer ball, elections are complicated and on any given polling day there are a lot of them. The famous California governor's recall that elected Arnold Schwarzenegger, for example, had more than a hundred candidates; even a more average election in a less referendum-happy state than California may have a dozen races, each with six to ten candidates. And you know Americans: they want results NOW. Like staying up for two or three days watching the election returns is a bad thing.

It is of course true that election fraud has existed in all eras; you can "lose" a box of marked paper ballots off the back of a truck, or redraw districts according to political allegiance, or "clean" people off the electoral rolls. But those types of fraud are harder to cover up entirely. A flawed count in an electronic machine run by software the vendor allows no one to inspect just vanishes down George Orwell's memory hole.

What I still can't figure out is why politicians are so enthusiastic about all this. Yes, secure machines with well-designed user interfaces might get rid of the problem of "spoiled" and therefore often uncounted ballots. But they can't really believe – can they? – that fancy voting technology will mean we're more likely to elect them? Can it?

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 24, 2006

The Great Firewall of Britain

We may joke about the "Great Firewall of China", but by the end of 2007 content blocking will be a fact of Internet life in the UK. In June, Vernon Coaker, Parliamentary Under-Secretary for the Home Department, told Parliament, "I have recently set the UK Internet industry a target to ensure that by the end of 2007 all Internet service providers offering broadband Internet connectivity to the UK public prevent their customers from accessing those Web sites." By "those", he means Web sites carrying pornographic images of children.

Coaker went on to say that by the end of 2006 he expects 90 percent of ISPs to have blocked "access to sites abroad", and that, "We believe that working with the industry offers us the best way forward, but we will keep that under review if it looks likely that the targets will not be met."

The two logical next questions: How? And How much?

Like a lot of places, the UK has two major kinds of broadband access: cable and DSL. DSL is predominantly provided by BT, either retail directly to customers or wholesale to smaller ISPs. Since 2004, BT's retail service has been filtered by its Cleanfeed system, which last February the company reported was blocking about 35,000 attempts per day to access child pornography sites. The list of sites to block comes from the Internet Watch Foundation (IWF), and is compiled from reports submitted by the public. ISPs pay the IWF £5,000 a year to be supplied with the list – insignificant to a company like BT but not necessarily to a smaller one. But the raw cost of the IWF list is insignificant compared to the cost of reengineering a network to do content blocking.
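Cleanfeed is usually described as a two-stage hybrid: a cheap first stage redirects only traffic bound for suspect IP addresses through a proxy, and the proxy then matches the full URL against the IWF list, so the vast majority of traffic is never inspected at all. A hedged sketch of that logic, with all list entries invented:

```python
SUSPECT_IPS = {"192.0.2.7"}                   # invented example entries
BLOCKED_URLS = {"http://192.0.2.7/bad/page"}  # stand-in for the IWF list

def route(dest_ip):
    """Stage one: a cheap IP-level test applied to all traffic."""
    return "proxy" if dest_ip in SUSPECT_IPS else "direct"

def proxy_filter(url):
    """Stage two: the proxy checks the exact URL against the list."""
    return "blocked" if url in BLOCKED_URLS else "fetched"

# Only requests to suspect IPs pay the cost of URL inspection:
print(route("198.51.100.1"))                      # direct
print(route("192.0.2.7"))                         # proxy
print(proxy_filter("http://192.0.2.7/ok/page"))   # fetched
print(proxy_filter("http://192.0.2.7/bad/page"))  # blocked
```

The design choice explains the economics: stage one can run on ordinary routing equipment, which is why a network built one way can add it cheaply while a network built another way has to be reengineered.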

How much will it cost for the entire industry?

Malcolm Hutty, head of public affairs at LINX, the London Internet Exchange, says he can't even begin to come up with a number. BT, he thinks, spent something like £1 million creating and deploying Cleanfeed – half on original research and development, half on deployment. Most of the first half would not now be necessary for an ISP trying to decide how to proceed, since a lot more is known now than was back in 2003.

Although it might seem logical that Cleanfeed would be available to any DSL provider reselling BT's wholesale product, that's not the case.

"You can be buying all sorts of different products to be able to provide DSL service," he says. A DSL provider might simply rebrand BT's own service – or it might only be paying BT to use the line from your home to the exchange. "You have to be pretty close to the first extreme before BT Cleanfeed can work for you." So adopting Cleanfeed might mean reengineering your entire product.

In the cable business, things are a bit different. There, an operator like ntl or Telewest owns the entire network, including the fibre to each home. If you're a cable company that implemented proxy caching in the days when bandwidth was expensive and caching was fashionable, the technology you built then will make it cheap to do content blocking. According to Hutty, ntl is in this category – but its Telewest and DSL businesses are not.

So the expense to a particular operator varies for all sorts of reasons: the complexity of the network, how it was built, what technologies it's built on. The mandate, in other words, was issued with no information about how much compliance might cost, or about its impact on an industry that other sectors of government regard as vital to Britain's economic future.

The How question is just as complicated.

Cleanfeed itself is insecure (PDF), as Cambridge researcher Richard Clayton has recently discovered. Cleanfeed was intended to improve on previous blocking technologies by being both accurate and inexpensive. However, Clayton has found that not only can the system be circumvented but it also can be used as an "oracle to efficiently locate illegal websites".

Content blocking is going to be like every other security system: it must be constantly monitored and updated as new information and attacks become known or are developed. You cannot, as Clayton says, "fit and forget".

The other problem in all this is the role of the IWF. It was set up in 1996 as a way for the industry to regulate itself; the meeting where it was proposed came after threats of external regulation. If all ISPs are required to implement content blocking, and all content blocking is based on the IWF's list, the IWF will have considerable power to decide what content should be blocked. So far, the IWF has done a respectable job of sticking to clearly illegal pornography involving children. But its ten years have been marked by occasional suggestions that it should broaden its remit to include hate speech and even copyright infringement. Proposals are circulating now that the organisation should become an independent regulator rather than an industry-owned self-regulator. If the IWF is not accountable to the industry it regulates; if it's not governed by Parliamentary legislation; if it's not elected… then we will have handed control of the British Internet over to a small group of people with no accountability and no transparency. That sounds almost Chinese, doesn't it?

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

June 30, 2006

Technical enough for government work

Wednesday night was a rare moment of irrelevant glamor in my life, when I played on the Guardian team in a quiz challenge grudge match.

In March, Richard Sarson (intriguingly absent, by the way) accused MPs of not knowing which end was up, technically speaking, and BT funded a test. All good fun.

But Sarson had a serious point: MPs are spending billions and trillions of public funds without the technical knowledge to assess what they're buying. His particular focus was the ID card, which net.wars has written about so often. Who benefits from these very large IT contracts besides, of course, the suppliers and contractors? It must come down to Yes, Minister again: commissioning a huge, new IT system gives the Civil Service a lot of new budget and bureaucracy to play with, especially if the ministers don't understand the new system. Expanded budgets are expanded power, we know this, and if the system doesn't work right the first time you need an even bigger budget to fix it with.

And at that point, the issue collided in my mind with this week's other effort, a discussion of Vernor Vinge's ideas of where our computer-ridden world might be going. Because the strangest thing about the world Vernor Vinge proposes in his new book, Rainbows End, is that all the technology pretty much works as long as no one interferes with it. For example: this is a world filled with localizer sensors and wearable computing; it's almost impossible to get out of view of a network node. People decide to go somewhere and snap! a car rolls up and pops open its doors.

I'm wondering if Vinge has ever tried to catch a cab when it was raining in Manhattan.

There are two keys to this world. First: it is awash in so many computer chips that IPv6 might not have enough addresses (yeah, yeah, I know, no electron left behind and all that). Second: each of these chips has a little blocked-off area called the Secure Hardware Environment (SHE), which is reserved for government regulation. SHE enables all sorts of things: detailed surveillance, audit trails, the blocking of undesirable behavior. One of my favorite of Vinge's ideas about this is that the whole system inverts Lawrence Lessig's idea that "code is law" into "law is code". When you make new law, instead of having to wait five or ten years until all the computers have been replaced so they conform to the new law, you can just install the new laws as a flash regulatory update. Kind of like Microsoft does now with Windows Genuine Advantage. Or like what I call "idiot stamps" – today's denominationless stamps, intended for people who can never remember how much postage is.

There are a lot of reasons why we don't want this future, despite the convenience of all those magically arriving cars, and despite the fact that Vinge himself says he thinks frictional costs will mean that SHE doesn't work very well. "But it will be attempted, both by the state and by civil special interest petitioners." For example, he said, take the reaction of a representative he met from a British writers' group who thought it was a nightmare scenario – but loved the bit where microroyalties were automatically and immediately transmitted up the chain. "If we could get that, but not the monstrous rest of it…"

For another, "You really need a significant number of people who are willing to be Amish to the extent that they don't allow embedded microprocessors in their lifestyle." Because, "You're getting into a situation where that becomes a single failure point. If all the microprocessors in London went out, it's hard to imagine anything short of a nuclear attack that would be a deadlier disaster."

Still, one of the things that makes this future so plausible is that you don't have to posit the vast, centralized expenditure of these huge public IT projects. It relies instead on a series of developments coming together. There are examples all around us. Manufacturers and retailers are leaping gleefully onto RFID in everything. More and more desktop and laptop computers are beginning to include the Trusted Platform Module, which is intended to provide better security by blocking all unsigned programs from running but as a by-product could also allow the widescale, hardware-level deployment of DRM. The business of keeping software updated has become so complex that most people are greatly relieved to be able to make it automatic. People and municipalities all over the place are installing wireless Internet for their own use and sharing it. To make Vinge's world, you wait until people have voluntarily bought or installed much of the necessary infrastructure and then do a Project Lite to hook it up to the functions you want.

What governments would love about the automatic regulatory upgrade is the same thing that the Post Office loves about idiot stamps: you can change the laws (or prices) without anyone's really being aware of what you're doing. And there, maybe, finally, is some real value for those huge, failed IT projects: no one in power can pretend they aren't there. Just, you know, God help us if they ever start being successful.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at net.wars home, at her personal blog, or by email to netwars@skeptic.demon.co.uk (but please turn off HTML).

November 9, 2001

Save the cookie

You would think that by this time in the Internet's history we would have reached the point where the politicians making laws would have learned a thing or two about how it works, and would therefore not be proposing (and passing) quite such stupid laws as they used to. Apparently not.

Somehow, tacked onto an otherwise sensible bill aimed at protecting consumer privacy are provisions requiring Web sites to use cookies only on an opt-in basis. Consultation to remove this bit of idiocy closes in mid-November.

The offending bit appears in the second report on the "proposal for a European Parliament and Council directive concerning the processing of personal data and the protection of privacy in the electronic communications sector" (PDF), and is labelled "amendment 26 to article 5, paragraph 2a". What seems to be upsetting the EC is that cookies may enter a user's computer without that user's specific permission.

Well, that's true. On the other hand, it's pretty easy to set any browser to alert you whenever a site wants to send you a cookie - and have fun browsing like that, because you'll be interrupted about every two and a half seconds. Microsoft's Internet Explorer 6 lets you opt out of cookies entirely.

A lot of people are oddly paranoid about cookies, which are, like the Earth in the Hitchhiker's Guide to the Galaxy, mostly harmless. At heart, what cookies do is give Web sites persistent memory. Contrary to what many people think, a connection to a Web site is not continuous: you request a page, and then you request another page, and without cookies the Web site does not connect the two transactions.

Cookies are what make it possible to build up an order in a shopping cart or personalize a site so it remembers your ID and password or knows you're interested in technology news and not farming. These uses do not invade privacy.
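The mechanism is simple enough to sketch: the server hands the browser a label on one response ("Set-Cookie"), the browser hands it back on later requests ("Cookie"), and the server uses it to tie the requests together. A toy shopping-cart server, with no real networking and all names invented:

```python
import secrets

class ShopServer:
    """Toy server showing how a cookie links otherwise unrelated
    requests into one shopping session."""
    def __init__(self):
        self.carts = {}  # session id -> list of items

    def handle(self, cookie, add_item=None):
        if cookie not in self.carts:
            cookie = secrets.token_hex(8)  # "Set-Cookie": start a new session
            self.carts[cookie] = []
        if add_item:
            self.carts[cookie].append(add_item)
        return cookie, list(self.carts[cookie])

shop = ShopServer()
sid, cart = shop.handle(None, add_item="teapot")  # first visit: cookie issued
sid, cart = shop.handle(sid, add_item="kettle")   # browser sends cookie back
print(cart)                                       # ['teapot', 'kettle']
sid2, cart2 = shop.handle(None)                   # no cookie: a stranger, empty cart
print(cart2)                                      # []
```

Nothing in this exchange identifies the person; the cookie only says "same browser as before", which is all a shopping cart needs.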

There are, of course, plenty of things you can do with cookies that are not harmless. Take Web bugs. These hidden graphics, usually 1x1 pixels, enable third parties to track what you do on the Web and harvest all sorts of information about you, your computer, and what browser you use. Privacy-protecting sites like the Anonymizer depend on cookies.
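The Web-bug trick works because the hidden image is served from the tracker's own domain: every page that embeds it makes the browser send the tracker its cookie plus the referring page's address, so one party can log a person's path across many unrelated sites. A hedged sketch, with the site names invented:

```python
from collections import defaultdict

class Tracker:
    """Toy third-party 'web bug' server. Each embedding page causes the
    browser to request a 1x1 image from this server; that request carries
    the tracker's own cookie plus the referring page's URL."""
    def __init__(self):
        self.profiles = defaultdict(list)  # tracker cookie -> pages seen
        self.next_id = 0

    def serve_pixel(self, cookie, referer):
        if cookie is None:                 # first sighting: issue a cookie
            cookie = f"uid-{self.next_id}"
            self.next_id += 1
        self.profiles[cookie].append(referer)
        return cookie                      # browser stores and resends this

bug = Tracker()
uid = bug.serve_pixel(None, "http://news.example/story")
uid = bug.serve_pixel(uid, "http://shop.example/checkout")
print(bug.profiles[uid])  # both visits, across two unrelated sites
```

Note that the cookie itself is doing nothing it doesn't do for a shopping cart; it's the third-party embedding that turns it into a surveillance tool, which is exactly why regulating the behaviour rather than the technology makes sense.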

Similarly, the advertising agency DoubleClick has been under severe fire for the way it tracks users from site to site, even though it says that the data are anonymized and the purpose is merely to ensure that the ads you see are targeted to your interests rather than random.

MEPs who want to protect consumer privacy, therefore, should not be looking at the technology itself but at how the technology is used. They should be attempting to regulate behavior that invades privacy, not the technology itself. To be fair, the report mentions all these abuses. The problem is simply that the clause is overbroad, and needs some revision. Something along the lines of requiring sites to explain in their privacy policies how they use cookies and a prohibition on actually spying on users would do nicely.

The point is to get at what people do with technology, not outlaw the technology itself.

We've had similar problems in the US, most recently and notably with the Digital Millennium Copyright Act, which also tends to criminalize technology rather than behaviour. This is the crevasse that Dmitry Sklyarov fell into. For those who haven't been following the story, Sklyarov, on behalf of his Russian software company, Elcomsoft, wrote a routine that takes Adobe eBooks and converts them into standard PDFs. Yes, that makes them copiable. But it also makes it possible for people who have bought eBooks to back them up, run them through text-to-speech software (indispensable for the blind), or read them on a laptop or PDA after downloading them onto their desktop machine.

In the world of physical books, we would consider these perfectly reasonable things to do. But in the world of digital media these actions are what rightsholders most fear. Accordingly, the DMCA criminalizes creating and distributing circumvention tools. As opponents to the act pointed out at the time, this could include anything from scissors and a bottle of Tippex to sophisticated encryption-cracking software. The fuss over DeCSS, which decrypts the CSS copy protection on DVDs, is another case in point. While the movie studios argue that DeCSS is wholly intended to enable people to illegally copy DVDs, the original purpose was to let Linux users play the DVDs they'd paid for on their computers, for which no one provides a working commercial software player.

The Internet Advertising Bureau has of course gone all out to save the cookie. It is certainly true, as they say, that it would impair electronic commerce in Europe, the more so because it would be impossible to impose the same restrictions on non-EU businesses.

If MEPs really want to protect consumer privacy, here's what they should do. First of all, learn something about what they are doing. Second of all, focus on behaviour.